FTDv in Azure

zi
Level 1

Did anyone get FTDv working in Azure? The FTDv is not passing external traffic to the VM.

32 Replies

That sounds right... just to summarize (and add one shortcut):

 

- FTDv would need a route to 0.0.0.0/0 over its "outside" interface with next hop ".1" on the outside subnet.

- The Azure outside subnet already has a default route to the internet for 0.0.0.0/0 (all subnets do), so you shouldn't have to add any outside-subnet UDR. You only need to add a UDR when you want to override the default routing behavior; a sketch of such an override is below.
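For completeness, overriding the default behavior with a UDR from the Azure CLI looks roughly like this. This is only a sketch: the resource group, route table, VNet, and subnet names are placeholders, and <ftdv-inside-ip> stands in for whichever FTDv interface you want traffic steered through.

# route table with a default route that hands traffic to the FTDv
az network route-table create --resource-group MyRG --name inside-rt
az network route-table route create --resource-group MyRG --route-table-name inside-rt \
  --name default-via-ftdv --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address <ftdv-inside-ip>

# associate the route table with the subnet whose traffic should be steered
az network vnet subnet update --resource-group MyRG --vnet-name MyVnet \
  --name inside-subnet --route-table inside-rt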

 

Some general info:

The effective routing table on any subnet is a combination of automatically created system routes, UDRs, and routes from other sources. The most specific route wins (regardless of the source of the route), but a UDR takes precedence in the case of a tie.

 

You can check the effective routes on a subnet by looking at a NIC on that subnet. There's an "Effective routes" option where you can see all the routes in the table and where they came from.
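The same view is available from the Azure CLI if that's easier; the resource group and NIC name below are placeholders:

# effective routes as seen by a NIC on the subnet, including where each route came from
az network nic show-effective-route-table --resource-group MyRG --name my-vm-nic --output table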

We are connecting to Azure using ExpressRoute, so if I don't put a default route in the outside VNet it will take the one we are injecting via MPLS and ExpressRoute.

AH!  ok.

 

In that case, it would be good to check the effective routes on the "inside" subnets. Make sure the route pointing to the FTDv's "inside" interface wins in the effective route table. You may need a UDR that is more specific than the routes learned from ExpressRoute (via Azure BGP), or you can turn off BGP route propagation on the "inside" route table (under "Configuration").
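Turning propagation off can also be done from the CLI; again just a sketch, with placeholder resource group and route table names:

# stop ExpressRoute/BGP-learned routes from being propagated into the inside route table
az network route-table update --resource-group MyRG --name inside-rt --disable-bgp-route-propagation true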

So I have more info on this issue now. Topology is as follows:

 

inside host (10.15.10.5) --> FTD inside int (10.15.10.4) --> FTD outside int (10.15.50.4) -- Azure public IP on the FTD outside interface.

 

The inside FTD route table has BGP ExpressRoute routes (including a default), so I have configured a UDR of 0.0.0.0/0 pointing to 10.15.10.4.

FTD has a default route to 10.15.50.1 (Azure router IP)

The outside FTD route table is not receiving BGP routes from ExpressRoute, so the effective 0.0.0.0/0 route is coming from Azure and pointing to the Internet.

FTD has a NAT policy configured as:

 NAT Rule: Auto NAT Rule

 Type: Dynamic

 Source Interface Object - Inside

 Destination Interface Object - Outside

 Translation - Original Source: any-ip

 Translation - Translated Source: Destination Interface IP

I tried to ping 8.8.8.8 and turned on debug icmp trace on the FTD CLI, and I am seeing that the source and destination interfaces are both the outside. How is this possible?

 

debug icmp trace enabled at level 255
firepower# ICMP echo request from Outside:10.15.10.5 to Outside:8.8.8.8 ID=1 seq=210 len=32
ICMP echo request from Outside:10.15.10.5 to Outside:8.8.8.8 ID=1 seq=211 len=32

My zones appear to be configured correctly and the subnets in Azure are assigned correctly.

The 10.15.10.0/24 (inside) subnet is assigned to vnic3, which is Gi0/1, and the 10.15.50.0/24 (outside) subnet is vnic2, which is Gi0/0.

 

Thoughts?

The setup and route config you describe all sounds right to me.

 

But the debug showing the ping packet coming in on the outside interface makes it seem like something is swapped somewhere. 

For myself, my next step would be to double check:

- The effective route table on the inside host's NIC

- That the Azure route table for Inside is associated with the Inside subnet, and that the Azure route table for Outside is attached to the Outside subnet (a CLI check for this is sketched after this list)

- The interface names, zones, and IPs in FTDv

- Make sure I can ping the FTDv inside IP from an Inside host and that capture shows it entering and exiting on the inside interface.

- Make sure I can ping the FTDv outside IP from an Outside host and that capture shows it entering and exiting on the outside interface.   
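For the route-table association check above, the Azure CLI can show it directly; the resource group, VNet, and subnet names here are placeholders:

# which route table (if any) is associated with the Inside subnet
az network vnet subnet show --resource-group MyRG --vnet-name MyVnet --name inside-subnet --query routeTable.id --output tsv

The show-effective-route-table command mentioned earlier, run against the inside host's NIC, covers the first check.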

 

Azure routing will determine which FTDv interface the packet enters on. If the ping from the inside host is truly coming in on the outside interface, then I would suspect a route entry pointing to the wrong next-hop IP, or maybe a routing table attached to the wrong subnet.

 

In the ping debug you posted, it appears to be two sequential ICMP requests from the inside host (rather than a packet coming in and going out of the outside interface).

Here's what a debug icmp trace looks like in my environment (12.10.4.4 is the inside host, 12.10.0.51 is the FTDv outside IP):

ICMP echo request from inside:12.10.4.4 to outside:8.8.8.8 ID=15388 seq=114 len=56
ICMP echo request translating inside:12.10.4.4 to outside:12.10.0.51
ICMP echo reply from outside:8.8.8.8 to inside:12.10.0.51 ID=15388 seq=114 len=56
ICMP echo reply untranslating outside:12.10.0.51 to inside:12.10.4.4

 

You could also try a capture (ping only), maybe on a single ping packet:

cap in int inside match icmp any any

cap out int outside match icmp any any

 

And make sure the packets are entering and exiting on the right interfaces.
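If the captures do show something odd, packet-tracer is another quick way to see which egress interface and NAT rule the FTDv itself would pick for that flow. This is just a sketch: the interface name has to match your configured nameif, and the IPs are the ones from your post.

> packet-tracer input Inside icmp 10.15.10.5 8 0 8.8.8.8 detailed

The output walks through the route lookup, the NAT rule that is hit, and the interface the packet would leave on.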

 

Gents a quick question please, 

 

My VMs behind the FTDv are seeing high latency; for example, a ping to 8.8.8.8 is 30-50 ms, whereas from a VM in Azure that is not behind the FTDv the ping result is 1-2 ms to 8.8.8.8.

 

Has anyone seen this issue?

Using pings alone to assess latency is not recommended. Can you try pinging while passing traffic? Here's an example of a result from our testing:


PING x.x.x.59 (x.x.x.59) 56(84) bytes of data.
64 bytes from x.x.x.59: icmp_seq=1 ttl=64 time=20.9 ms
64 bytes from x.x.x.59: icmp_seq=2 ttl=64 time=20.9 ms
64 bytes from x.x.x.59: icmp_seq=3 ttl=64 time=20.8 ms
64 bytes from x.x.x.59: icmp_seq=4 ttl=64 time=20.9 ms
64 bytes from x.x.x.59: icmp_seq=5 ttl=64 time=20.9 ms
64 bytes from x.x.x.59: icmp_seq=6 ttl=64 time=0.323 ms <---------- start scp
64 bytes from x.x.x.59: icmp_seq=7 ttl=64 time=0.493 ms
64 bytes from x.x.x.59: icmp_seq=8 ttl=64 time=1.04 ms
64 bytes from x.x.x.59: icmp_seq=9 ttl=64 time=0.818 ms
64 bytes from x.x.x.59: icmp_seq=10 ttl=64 time=0.942 ms
64 bytes from x.x.x.59: icmp_seq=11 ttl=64 time=0.895 ms
64 bytes from x.x.x.59: icmp_seq=12 ttl=64 time=0.673 ms
64 bytes from x.x.x.59: icmp_seq=13 ttl=64 time=0.511 ms
64 bytes from x.x.x.59: icmp_seq=14 ttl=64 time=0.982 ms
64 bytes from x.x.x.59: icmp_seq=15 ttl=64 time=0.631 ms
64 bytes from x.x.x.59: icmp_seq=16 ttl=64 time=0.686 ms
64 bytes from x.x.x.59: icmp_seq=17 ttl=64 time=0.532 ms
64 bytes from x.x.x.59: icmp_seq=18 ttl=64 time=0.977 ms
64 bytes from x.x.x.59: icmp_seq=19 ttl=64 time=0.756 ms
64 bytes from x.x.x.59: icmp_seq=20 ttl=64 time=1.31 ms <---------- end scp
64 bytes from x.x.x.59: icmp_seq=21 ttl=64 time=20.8 ms
64 bytes from x.x.x.59: icmp_seq=22 ttl=64 time=20.7 ms
64 bytes from x.x.x.59: icmp_seq=23 ttl=64 time=21.0 ms
64 bytes from x.x.x.59: icmp_seq=24 ttl=64 time=20.8 ms
64 bytes from x.x.x.59: icmp_seq=25 ttl=64 time=20.9 ms
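If you want to reproduce that kind of test yourself, a rough approach (the host, user, and file path below are placeholders) is to keep a ping running in one session and push traffic along the same path in another:

# terminal 1: steady ping through the FTDv
ping x.x.x.59

# terminal 2: generate load on the same path while the ping is running
scp /path/to/largefile user@x.x.x.59:/tmp/

The point is to compare the ping times while the link is idle versus while real traffic is flowing, as in the output above.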




I tried the suggestion, but the ping results are still the same. Is this normal behavior, that ping latency is higher behind the FTDv?

We’re the traffic and pings hitting the same policies?

Yes, the same policy.

We're seeing the same thing. Can't explain it. We need to investigate.


OK, I FINALLY figured this out. When I originally deployed this FTD template in Azure I had a typo in the VNet subnet, and, not knowing how to properly change this in Azure (being brand new to Azure and all), I fumbled around quite a bit before fixing it. Part of what I did was to disassociate vnic3 from the FTD VM, change the IP, and reassociate it. When I did this, the vNIC essentially switched places with vnic2 (i.e., making what I thought was the inside interface become the outside interface). I had to disassociate vnic3 and associate it again, which brought it back to the correct vNIC "position" on the VM and ultimately corrected the problem. I can now browse out from my VM on the inside VNet through the FTD.
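For anyone hitting the same thing, the NICs attached to the VM (and which one is primary) can be listed from the Azure CLI; the resource group and VM names here are placeholders, and the order shown should correspond to the vnic0/vnic1/vnic2/vnic3 positions the template expects:

# list the NICs attached to the FTDv VM, in the order they appear in the VM's network profile
az vm nic list --resource-group MyRG --vm-name my-ftdv --output table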

I didn't test fully the other day beyond going to whatismyip.com and verifying that I was getting the public IP of my Azure FTD device. I did more testing yesterday and my connection is intermittent. I'm having some issues figuring out why. Pings work, then fail. Web pages load, but sporadically and not with all content. I'm not sure how to check this out on the FTD. On an ASA I would use ASDM debug logging to see all traffic for that host passing through the firewall and whether anything was being blocked. The logging on FTD is very different. Ideas?


C:\Windows\System32>ping 8.8.8.8 -t

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=5ms TTL=51
Reply from 8.8.8.8: bytes=32 time=4ms TTL=51
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Reply from 8.8.8.8: bytes=32 time=5ms TTL=51
Request timed out.

I'm thinking that if it were a policy, it would block them all. But you could check Intrusion Events on FMC to see if packets are being dropped for some reason. Is there any chance the routes are changing (due to ExpressRoute or some other cause)?

 

If you want to debug from the FTDv side you could 

 

> capture in interface inside trace detail

> capture out interface outside trace detail

 

Let the pings run while they're being intermittent, then:

> show capture in

> show capture out

Check to see which system along the chain isn't receiving or sending the expected packets

 

If a ping packet enters but doesn't exit then you can look at what FTDv decided to do with the packet with this command:

> show capture in packet-number <# from the capture> trace

 

On FTDv you can go into diagnostic mode and use ASA-style captures and show commands if you're more familiar with that mode.

> system support diagnostic-cli
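As a rough sketch, a diagnostic session looks like this; what show logging actually displays depends on the logging settings pushed from FMC platform settings:

> system support diagnostic-cli
firepower> enable
firepower# show logging
firepower# show capture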

 

 

 

 

It must have been an Azure issue, as it went away on its own this morning.
