06-15-2018 01:59 AM - edited 03-08-2019 03:22 PM
We currently have a pair of PA firewalls that connect to our internet edge and have two inside interfaces. One connects to an older 4948 switch (our server zone), which connects back to our core on VLAN 200; the other is a trunk to our core 6506E, which hosts the SVIs for all VLANs except VLAN 200. Our Nexus 9K equipment currently hangs off trunk ports on the core.
We are attempting to move the 9K inline between the core and the PA firewalls, but when we do, communication to the two firewall interfaces (10.1.1.1 and 10.200.1.1) fails, which prevents any communication to the internet or over to our server farm. Devices physically hanging off the 9K are still reachable on every VLAN except 200. I'll post some design pictures and the routing that's configured thus far as well. I suspect the network works at the moment only because most things are directly connected, and that breaks post-cutover.
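To narrow down where traffic dies after the cutover, it may help to test both firewall interfaces from the core and compare the paths. A minimal sketch of the checks (run from the 6506E; addresses taken from the post above):

```
! From the 6506E core (IOS exec)
ping 10.1.1.1            ! inside interface toward the core VLANs
ping 10.200.1.1          ! inside interface toward the server zone
traceroute 10.1.1.1      ! see which hop stops responding
show arp | include 10.1.1.1   ! is the FW even resolving at L2?
```

If the ARP entry is missing, the problem is a layer-2 path (VLAN/trunking) issue rather than routing.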
Show ip route for all devices:
Core
Gateway of last resort is 10.1.1.1 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 10.1.1.1
10.0.0.0/8 is variably subnetted, 227 subnets, 9 masks
C 10.1.0.0/16 is directly connected, Vlan1
L 10.1.0.2/32 is directly connected, Vlan1
L 10.1.2.1/32 is directly connected, Vlan1
C 10.2.0.0/16 is directly connected, Vlan2
L 10.2.0.2/32 is directly connected, Vlan2
C 10.3.0.0/16 is directly connected, Vlan3
L 10.3.0.2/32 is directly connected, Vlan3
C 10.4.0.0/16 is directly connected, Vlan4
L 10.4.0.2/32 is directly connected, Vlan4
C 10.5.0.0/16 is directly connected, Vlan5
L 10.5.0.2/32 is directly connected, Vlan5
C 10.6.0.0/16 is directly connected, Vlan6
L 10.6.0.2/32 is directly connected, Vlan6
C 10.7.0.0/16 is directly connected, Vlan7
L 10.7.0.2/32 is directly connected, Vlan7
PA Firewall
VIRTUAL ROUTER: default (id 1)
==========
destination       nexthop      metric   flags   age   interface   next-AS
10.0.0.0/8        10.1.0.1     10       A S           ae1
total routes shown: 1
9k (used essentially as a switch currently; its management IP's gateway lives on the core)
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
0.0.0.0/0, ubest/mbest: 1/0
*via 10.63.0.1, [1/0], 1d00h, static
10.63.0.0/22, ubest/mbest: 1/0, attached
*via 10.63.1.231, Vlan63, [0/0], 1d00h, direct
10.63.1.231/32, ubest/mbest: 1/0, attached
*via 10.63.1.231, Vlan63, [0/0], 1d00h, local
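Since the 9K is acting purely as a layer-2 device here, one early check after the cutover is whether VLAN 200 is actually defined on it and forwarding on its trunks. A hedged sketch of the NX-OS verification commands (nothing here is from the original post):

```
! NX-OS exec (9K)
show vlan brief               ! is VLAN 200 defined on the 9K at all?
show interface trunk          ! is 200 in the allowed and forwarding VLAN lists?
show spanning-tree vlan 200   ! is STP forwarding toward both the core and the FW?
```

If VLAN 200 is missing from any of these, frames for the server zone are being dropped at the 9K rather than routed anywhere.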
4948 off PA and Core
Gateway of last resort is 10.1.0.1 to network 0.0.0.0
10.0.0.0/16 is subnetted, 1 subnets
C 10.1.0.0 is directly connected, Vlan1
S* 0.0.0.0/0 [1/0] via 10.1.0.1
4948 hosting servers
Gateway of last resort is 10.1.0.1 to network 0.0.0.0
10.0.0.0/16 is subnetted, 1 subnets
C 10.1.0.0 is directly connected, Vlan1
S* 0.0.0.0/0 [1/0] via 10.1.0.1
06-15-2018 05:08 AM
In the first diagram it looks as if there are two inside FW interfaces. Once the 9K is inserted, there appears to be only one connection. Could you provide more details and identify the IP addresses in both scenarios?
Aside from that, if there is only one inside FW interface once the 9K is inserted, it would seem the FW is responsible for redirecting traffic in and out the same interface for the users to get to the servers on VLAN 200. That may be the issue. FW logs may help identify what is actually happening.
Regards
06-15-2018 05:41 AM - edited 06-15-2018 05:46 AM
Hello
Chrhussey has a point -- prior to the migration it looks like the 9Ks reached the routed VLAN 200 interface on the PA FW via the 4948 switch.
Now the 9Ks don't have that L2 path. Have you tried adding one?
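If a missing L2 path for VLAN 200 is indeed the cause, a minimal sketch of restoring it on the 9K might look like this (the port-channel numbers and roles are assumptions, not from the original post):

```
! NX-OS config sketch (9K) -- interface numbers are placeholders
vlan 200
!
interface port-channel10
  description uplink toward the PA firewall (assumed)
  switchport mode trunk
  switchport trunk allowed vlan add 200
!
interface port-channel20
  description link toward the server-zone 4948 (assumed)
  switchport mode trunk
  switchport trunk allowed vlan add 200
```

Using `allowed vlan add` rather than a bare `allowed vlan` list avoids accidentally pruning the VLANs that are already working.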
res
Paul
06-15-2018 10:35 AM
Ah, yes. I should have explained that a bit more. It is one pipe, but it has subinterfaces carrying each of the routed addresses. That's the AE2.1 and AE2.3 bit on the diagram there.
06-15-2018 11:33 AM
Then that may be the issue. The FW may not be able to route in and out the same interface. As stated earlier, the FW logs may identify whether that is the issue; PA support could help as well.
Instead of the single link with subinterfaces, perhaps run two links from the FW to the 9K: one for the core, the other for the server farm. That way packets would be truly routed through the FW.
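On the 9K side, splitting the single subinterfaced link into two could look roughly like this (a sketch only; the Ethernet ports, and which PA aggregate each one faces, are assumptions):

```
! NX-OS config sketch (9K) -- port numbers and peer names are placeholders
interface Ethernet1/1
  description to PA inside link for core VLANs (assumed ae2)
  switchport mode trunk
  switchport trunk allowed vlan 1-7        ! the core SVIs, everything except 200
!
interface Ethernet1/2
  description to PA inside link for server zone (assumed ae3)
  switchport mode access
  switchport access vlan 200
```

With two physical links, user-to-server traffic enters the FW on one interface and leaves on the other, so it no longer depends on hairpin routing out the same interface.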
Regards
06-15-2018 11:41 AM
I have done this configuration at another location and it worked, but the environment and routing were very different. I'll give this a shot and see if it works as a solution, but I don't think it's the issue.