12-12-2022 07:04 PM
Hi All,
We worked with a customer to update their 2110s from 6.6.4 to 7.0.4 (both FMC and FTD). They have two 2110s in the rack, independent of each other (they were going to be an HA pair, but that never eventuated).
Pre Upgrade:
Steps Taken:
Post upgrade:
The 2110-A config was never changed or modified, yet the VPN failed to work when we rolled the cables back. There was also an issue when we swapped back from 2110-B to 2110-A where the interfaces came up but no traffic passed. We discovered this was mostly due to a stale ARP cache on the downstream router; clearing it had resolved the problem earlier, but for some reason at this point it would no longer work.
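For anyone who hits the same symptom after swapping units: the downstream router keeps the old appliance's MAC address against the gateway IP until the ARP entry ages out. As a rough sketch (assuming a Cisco IOS downstream router; the address is a placeholder for the firewall's inside interface IP):

show ip arp <firewall-inside-ip>
clear ip arp <firewall-inside-ip>

The show command confirms the stale MAC is still cached, and the clear forces the router to re-ARP and learn the new unit's MAC. On the FTD itself you can check its own view with "show arp" from the diagnostic CLI.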
Due to needing a production unit, we decided the best option was to re-deploy the original PRODUCTION policies and licences to 2110-B.
Prior to our emergency break/fix of redeploying the 2110-A policies to 2110-B, we checked over all the policies, objects, NAT rules, etc., and everything looked exactly the same on both units.
Current state:
Tracert
PS C:\Users\Jason> tracert <FQDN>
Tracing route to <FQDN> [W.X.Y.Z]
over a maximum of 30 hops:
1 1 ms 2 ms 2 ms 10.227.1.2
2 15 ms 13 ms 14 ms 10.22.0.149
3 14 ms 17 ms 14 ms 10.22.0.148
4 20 ms 15 ms 25 ms 103.3.217.157
5 22 ms 15 ms 15 ms 103.3.217.164
6 18 ms 15 ms 16 ms 45.115.20.0
7 * 16 ms * ip-137.83.254.124.vocus.net.au [124.254.83.137]
8 15 ms 15 ms 15 ms be117.cor01.syd11.nsw.vocus.network [114.31.192.68]
9 15 ms 16 ms 16 ms be100.bdr01.syd11.nsw.vocus.network [114.31.192.81]
10 15 ms 16 ms 16 ms bundle-ether12.ken-edge903.sydney.telstra.net [203.27.185.57]
11 17 ms 18 ms 16 ms bundle-ether2.chw-edge903.sydney.telstra.net [203.50.11.175]
12 16 ms 26 ms 16 ms bundle-ether12.stl-core30.sydney.telstra.net [203.50.11.176]
13 21 ms 18 ms 17 ms bundle-ether19.chw-core10.sydney.telstra.net [203.50.13.144]
14 27 ms 27 ms 29 ms bundle-ether20.woo-core10.brisbane.telstra.net [203.50.11.181]
15 27 ms 26 ms 26 ms bundle-ether1.woo-edge902.brisbane.telstra.net [203.50.11.139]
16 * * * Request timed out.
Thanks all.
Jason
12-12-2022 11:23 PM
Hi @jbates5873,
Based on what you've described, more or less everything was done properly (I won't comment on a setup where the customer doesn't want to implement proper HA because they are saving on an additional license).
Kind regards,
Milos
12-13-2022 02:15 AM
Ok, issue resolved.
It was due to an incorrectly configured static NAT rule.
There was a NO NAT rule at the top that was allowing the traffic, but there was another NAT rule down at the bottom that probably shouldn't have existed, and the traffic was hitting it. From there we determined the cause: the static NAT rule was not set to unidirectional, so it was also translating in the reverse direction and catching traffic it was never meant to match.
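For reference, the same distinction shows up in the underlying twice-NAT syntax (the object and interface names here are made up, just to show the shape of the rule):

nat (inside,outside) source static REAL-HOST MAPPED-HOST
nat (inside,outside) source static REAL-HOST MAPPED-HOST unidirectional

The first form is the default and translates in both directions, so traffic toward the mapped address also matches it; the second only applies to connections initiated by the real host, which is the behaviour we needed here.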
After a lot of debugging, and performing a capture with trace from the Devices tab on the FTD, we were able to work out that it was this specific NAT rule.
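If anyone wants to reproduce this from the FTD diagnostic CLI rather than the FMC GUI, something like the following works (the interface name and addresses are placeholders):

packet-tracer input inside tcp <client-ip> 12345 <vpn-peer-ip> 443 detailed
capture capin interface inside trace match ip host <client-ip> host <vpn-peer-ip>
show capture capin trace

The NAT phases in the packet-tracer output show exactly which rule the flow matched, which is how a rogue rule further down the table gives itself away.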
It has been a crazy 24 hours, that's for sure.