Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to get an update on Dynamic Multipoint VPN with Mike Sullenberger. Mike has been working with TCP/IP networking for 19 years and has been with Cisco for 14+ years, where he is a Distinguished Support Engineer (DSE) in Customer Advocacy. His technical expertise is in the areas of TCP/IP, IPsec VPNs, routing protocols, NAT and HSRP. He is the principal architect of the Dynamic Multipoint VPN (DMVPN) solution, where he works on DMVPN network designs, troubleshooting and the design of new DMVPN features. Mike has a Bachelor of Science degree in Mathematics and has been a CCIE in Routing & Switching since 1997.
Remember to use the rating system to let Mike know if you have received an adequate response.
Mike might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through October 1, 2010. Visit this forum often to view responses to your questions and the questions of other community members.
We have been facing one strange issue for more than 3 months. One of our customers is running DMVPN between hub and spokes. Every spoke is running fine except one, which differs from the others: it is a 7600 router with an IPsec SPA module (Crypto Connect mode), while the rest of the spokes are ISR routers.
There is no production impact on this spoke, but the logging buffer is getting populated with the following message appearing every two seconds.
%CRYPTO-4-RECVD_PKT_NOT_IPSEC: Rec'd packet not an IPSEC packet.
(ip) vrf/dest_addr= /10.190.103.142, src_addr= 10.191.8.69, prot= 47
We are unable to find anything causing this.
As per Cisco's 7600 IPsec SPA documentation, point-to-point GRE + tunnel protection is not supported in any of the images. My question is whether we should change the design from point-to-point GRE to multipoint GRE, which we don't otherwise need, since the customer is not running spoke-to-spoke tunnels.
Have you faced similar issues anywhere else? Do you think this design constraint could be causing these messages?
Spoke(7600/IPSEC SPA) Tunnel Config:
description **spoke tunnel**
ip address 10.x.x.x 255.255.255.0
ip mtu 1400
ip nhrp authentication DM
ip nhrp map multicast 10.y.y.y
ip nhrp map 10.x.x.x 10.y.y.y
ip nhrp network-id 6
ip nhrp holdtime 300
ip nhrp nhs 10.x.x.x
ip ospf network point-to-point
ip ospf hello-interval 30
tunnel source Loopback3
tunnel destination 10.x.x.x
tunnel protection ipsec profile ALL
crypto engine subslot 2/0
HUB Tunnel Config:
description ***** DMVPN-HUB ***** Cloud for 25 spokes***
ip address 10.x.x.x 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp authentication DM
ip nhrp map multicast dynamic
ip nhrp network-id 6
ip nhrp holdtime 300
ip ospf network point-to-multipoint
tunnel source Loopback6
tunnel mode gre multipoint
tunnel protection ipsec profile ALL
crypto engine slot 1/0
Would appreciate it if you could guide us to a way out.
That error message is saying that the Spoke router is receiving GRE packets that according
to IPsec should have been encrypted (I suspect that you already knew this). What we need
to do is to figure out where these packets are coming from. The destination IP address,
10.190.103.142, of the packet must be the tunnel source on the Spoke. Presumably the
Source IP address, 10.191.8.69 is the tunnel source on the hub, but I can't tell for certain.
In DMVPN we don't have anything that sends a packet every two seconds, so it would be
good to look around on the source router (10.191.8.69) to see if there is some application
that would send at that rate. Hopefully once we know what the packets are we can then
figure out why they are getting GRE encapsulated, but not IPsec encrypted. You can
switch to using an mGRE tunnel interface on the Spoke, which should work fine, but I am
not sure that it would affect the problem you are facing. I think the main thing is to figure
out what these packets are that are triggering the error.
One tricky thing about the VPN-SPA and DMVPN is that for DMVPN the GRE processing
must be done on the EARL7 ASIC and not on the VPN-SPA. With mGRE tunnels this is
not an issue since the VPN-SPA won't take over processing for mGRE tunnels, but the
VPN-SPA can take over processing of p-pGRE tunnels.
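If you do decide to try mGRE on the spoke, the change to the spoke tunnel is small. A sketch, reusing the interface names from the sanitized spoke config earlier in the thread (the tunnel number is illustrative):

```
interface Tunnel0
 ! keep the existing ip address, ip nhrp ... and ip ospf ... commands
 tunnel source Loopback3
 no tunnel destination
 tunnel mode gre multipoint
 tunnel protection ipsec profile ALL
 crypto engine subslot 2/0
```

With mGRE the NHRP mappings already in the config ('ip nhrp map ...' and 'ip nhrp nhs ...') tell the spoke how to reach the hub, so 'tunnel destination' is no longer needed.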
Hope this helps,
Here are some questions that remained unanswered during the live event because we didn't have enough time.
1. How does GETVPN enhance DMVPN?
2. In a Hub and Spoke deployment, does the DMVPN provide better redundancy and load sharing of the hub VPN boxes?
3. Is multicast traffic always replicated to the Hub ? How about the spoke that doesn't request the multicast group ?
4. Is it recommended to use multi-VRF within a DMVPN tunnel and what are the benefits?
5. We've been solving problems with one spoke, which had unreliable internet connection. After repairing the connection it was not able to reconnect back to DMVPN, until I have changed the tunnel interface to another number. Then it immediately reconnected to DMVPN. Is there (in NHRP, IPsec) some "ban" mechanism, knowing tunnel number of spoke?
6. Why would you run MPLS over the top of DMVPN?
7. What type of design allows for two ISPs at our remote offices to carry separate traffic (one ISP for data, one ISP for voice), each spoke connecting one tunnel back to two different hubs, and allows for redundancy and failover?
#1: How does GETVPN enhance DMVPN?
DMVPN can use GET VPN to do GDOI (group encryption) rather than the standard peer-wise encryption of IPsec. For a DMVPN hub that has, say, 2000 spokes, peer-wise IPsec needs 4000 SAs (2 per spoke), but with GDOI you only need 2 SAs total, so you can save memory. You can also save some processing time, because with peer-wise SAs you need to match each packet to be encrypted to the correct outbound SA for the peer it is going to, whereas with GDOI you don't need to do this, since you use the same outbound SA for all peers. For DMVPN spoke-spoke tunnels with GDOI you don't need to wait for the IPsec SAs to be negotiated between the two spokes before sending traffic over the GRE tunnel between them; this saves a bit of resources at the hub, since with peer-wise IPsec the spoke-to-spoke data traffic travels via the hub until the spoke-spoke IPsec SAs are up between the spokes.
Note, GET VPN preserves the "inner" IP header addresses for use in the IP header of the encrypted packet. In a regular GET VPN network (no GRE) this means that host IP addresses are visible in the network between the GET VPN routers. If the host addresses are private addresses (RFC 1918) then the resulting encrypted packet cannot be routed over the Internet; otherwise the packet is routable, but the host IP addresses are visible, which may not be good. Sometimes it is good enough to know who is talking to whom even if you can't tell what they are saying. BUT with DMVPN using GDOI encryption this is not an issue, since the "inner" IP addresses that are preserved are the tunnel IP addresses from the GRE tunnel. The host addresses are completely hidden (encrypted) inside the GRE encapsulated packet that GDOI is encrypting.
On the downside, with GET VPN encryption for DMVPN you need to set up a separate router (or routers) as the GET VPN key server(s), which I am pretty sure cannot be co-located on a group member, so it can't be the DMVPN hub router. Also, currently, to configure DMVPN with GET VPN you have to configure the GET VPN crypto map on the outbound physical interface (and not configure 'tunnel protection ...' on the tunnel). We are looking at adding the capability for the 'tunnel protection ...' command to refer to a GDOI policy, so that you could use 'tunnel protection ...' on the tunnel and wouldn't have to configure the separate GDOI crypto map on the outbound interface. Another point is that with GDOI encryption the same encryption key is used for all traffic, so if the encryption key is ever "broken" all traffic between all routers in the DMVPN network can be read. With peer-wise encryption keys, if a key is "broken" then only the traffic between those two peers can be read; all other traffic is still unreadable.
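As a rough sketch of the current method, with the GDOI crypto map on the physical interface rather than 'tunnel protection' on the tunnel (the group name, identity number, key server address and interface names here are all placeholders):

```
crypto gdoi group GETVPN-GROUP
 identity number 1234
 server address ipv4 192.0.2.10
!
crypto map GDOI-MAP 10 gdoi
 set group GETVPN-GROUP
!
interface GigabitEthernet0/0
 crypto map GDOI-MAP
!
interface Tunnel0
 tunnel mode gre multipoint
 ! note: no 'tunnel protection ...' here; GDOI encryption is applied
 ! on the outbound physical interface instead
```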
All of the above assumes you are GDOI encrypting the GRE tunnel packet. If, on the other hand, you want to GDOI encrypt the data packet and then GRE encapsulate it, you can, but this is not recommended, because the DMVPN control traffic (NHRP) would not be encrypted and would therefore be vulnerable between DMVPN nodes. Also, because GDOI preserves the "inner" IP addresses (the host addresses in this case), these would again be visible (you have to dig through the GRE/IP header) to anyone in the network between the DMVPN nodes. Though in this case the hosts can use private addresses, since it is the GRE tunnel addresses that are on the outer IP header of the packet.
So that is more or less the interaction between GET VPN and DMVPN.
Hope this helps,
#2. In a Hub and Spoke deployment, does the DMVPN provide better redundancy and load sharing of the hub VPN boxes?
The simple answer is yes. With regular IPsec you can setup two peers to use for encryption of hosts/subnets that are behind those peers, but you can only have one peer active at a time. So they are redundant, but cannot do any load-balancing. When regular IPsec switches over to the secondary peer or back to the primary peer there is packet loss during the time that it takes ISAKMP keepalives to recognize the loss of the peer and the time it takes to build the IPsec SA with the backup peer (switchover) or with the primary peer (switchback).
With DMVPN, because we are encrypting the GRE tunnel between the peers rather than the hosts/subnet traffic, we can have encrypted tunnels up with both peers at the same time. We then use a routing protocol to direct traffic to use one tunnel (primary/secondary) or both (load-balancing). In either case, if one tunnel (peer) goes down, all the traffic automatically goes via the other tunnel (peer). The switchover happens as fast as the routing protocol recognizes that one peer is down (usually about 15 seconds); note, there is packet loss during this time. The switchback (when the primary router comes back up) is done such that there will be no packet loss, because the backup peer is used until the primary peer is completely up and ready.
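For example, with OSPF (as in the configs earlier in the thread) a primary/secondary preference across the two tunnels can be expressed with interface cost. A sketch with illustrative tunnel numbers and cost values:

```
interface Tunnel0
 description tunnel to primary hub
 ip ospf cost 100
!
interface Tunnel1
 description tunnel to secondary hub
 ip ospf cost 200
```

Setting equal costs on both tunnels would give load-balancing over the two hubs instead of primary/secondary.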
#3. Is multicast traffic always replicated to the Hub ? How about the spoke that doesn't request the multicast group?
With DMVPN, IP multicast traffic only flows over the spoke-hub and hub-hub tunnels, not over the spoke-spoke tunnels. When the hub gets an IP multicast packet that is going out the DMVPN tunnel interface, the hub replicates it, one copy for each spoke that wants that IP multicast stream. 'ip pim nbma-mode' and 'ip pim sparse-mode' are configured on the tunnel interfaces, so that PIM will keep track of each spoke that wants an IP multicast stream. In this way we only replicate and send a copy of the IP multicast packet to those spokes that want it and not to those that don't.
Note, If the IP multicast source is behind a DMVPN spoke then the IP multicast traffic will be sent to the hub (multicast RP is configured at or behind the hub, and 'ip pim spt-threshold infinity' is configured), and the hub will then replicate and send the IP multicast packet to any other spokes that want that IP multicast stream.
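Collecting the commands mentioned above into one place, a hub-side sketch (the RP address is a placeholder):

```
interface Tunnel0
 ip pim sparse-mode
 ip pim nbma-mode
!
! RP at or behind the hub; address is illustrative
ip pim rp-address 10.0.0.1
ip pim spt-threshold infinity
```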
#4. Is it recommended to use multi-VRF within a DMVPN tunnel and what are the benefits?
#6. Why would you run MPLS over the top of DMVPN?
I am not sure what you mean by multi-VRF within a DMVPN tunnel. If you are referring to 2547oDMVPN (running MPLS VPNs over DMVPN), then this is supported and used to do traffic separation (or network virtualization). This is the case where you have different traffic on your network that you want to keep completely separated, like different groups (Sales, Marketing, HR, Development, ...) or internal versus guest traffic. In a LAN environment you can use VLANs with VRFs to do this. To keep this separation over DMVPN to remote sites you can do this in two ways.
Use 2547oDMVPN (MPLS VPNs over DMVPN). In this case you MPLS tag-switch over a single DMVPN network. The MPLS VPN tag on the packet is used to separate the packets back out to the correct VRF on the remote side. In this case you do need to configure MPLS, LDP and BGP with redistribution of the LAN routing protocol in/out, etc. This is fully supported for a DMVPN hub-and-spoke only network. It can be setup for DMVPN with spoke-spoke tunnels, BUT all spoke to spoke traffic will be dropped until the direct spoke-spoke tunnel comes up. Most of the time it only takes a second or two, but if for some reason the spoke-spoke tunnel doesn't come up then all the spoke to spoke traffic is lost (dropped).
Note, only with 2547oDMVPN can you have multiple VRFs within a single DMVPN tunnel, otherwise a DMVPN tunnel can only be in one VRF (or global).
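The building blocks of 2547oDMVPN look roughly like this on a DMVPN node (the VRF name, AS number and BGP neighbor address are placeholders; redistribution with the LAN routing protocol is omitted):

```
ip vrf SALES
 rd 65000:1
 route-target both 65000:1
!
interface Tunnel0
 ! label switching (LDP) runs over the DMVPN tunnel
 mpls ip
!
router bgp 65000
 neighbor 10.0.0.1 remote-as 65000
 address-family vpnv4
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 send-community extended
```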
I am thinking of this design for our campus network. I am glad to hear that it's supported. :-)
We plan to have about 30 tunnels for Remote Office/Branch Office type networks; most of them will just use the Comcast business service gateway standard with 5Mbps/1Mbps down/up, and some will use 16Mbps/2Mbps. We would also want to have another 5 or 6 vrf-lite mGRE for PCI ROBO devices like vending machines, car gates, laundry, etc. Their requirement is far less than 5Mbps/1Mbps. What kind of headend would you recommend for the DMVPN hub? ISR 2800, ISR 3800, ISR 2900 or ISR 3900?
We would also like to explore the remote-access vpn within vrf-lite, is it supported? For example, we would like to have one VPN group on the DMVPN hub in each vrf-lite to support the remote-access to these devices, is it possible?
So given your numbers let's assume the following: 25 tunnels at 5/1 Mbps and 5 tunnels at
16/2 Mbps, which would give us a total of about 6*25 + 18*5 = 150 + 90 = 240 Mbps. This pretty
much moves you into either a 3945E or 7200. At this point I would recommend a 3945E,
which will do encryption at around 400 Mbps. This would give you plenty of room to handle
the PCI ROBO connections and remote-access connections as well.
For remote-access I am assuming that you want an IPsec connection directly from a PC
(non-IOS router) to the DMVPN hub. You can also have IPsec do iVRF (data
packets in VRF and encrypted packet in global). Note, it is a bit complex, so it will
probably take a little bit of playing around with the configuration to get things to work.
#5. We've been solving problems with one spoke, which had an unreliable internet connection. After repairing the connection it was not able to reconnect back to DMVPN, until I changed the tunnel interface to another number. Then it immediately reconnected to DMVPN. Is there (in NHRP, IPsec) some "ban" mechanism, keyed to the tunnel number of the spoke?
When a spoke sends an NHRP registration to its hub, it by default marks the mapping information as unique, which means that only this spoke tunnel IP address to NBMA address mapping is allowed. Effectively, once the spoke has registered, the hub (while it holds the NHRP mapping entry) will not accept another mapping entry using the same tunnel IP address but a different NBMA address. When the mapping entry expires, which normally doesn't happen since the spoke refreshes the mapping on a regular basis (every 1/3 of the NHRP hold time), the hub could then accept a mapping of this tunnel IP address to a different NBMA address. So this is likely what you ran into; if you had waited for the NHRP mapping for this spoke to expire on the hub, the spoke could have re-registered. I recommend an NHRP hold time between 300 and 600 seconds, though the default NHRP hold time is 7200 seconds (2 hours).
If you have a spoke that gets its NBMA address dynamically (DHCP, PPP) or is NATted, then this waiting for the old mapping to expire before the hub will accept a new mapping is not going to work. In that case you configure 'ip nhrp registration no-unique' on the spoke to clear the unique flag on the NHRP registrations, so that the hub will accept a mapping of the same tunnel IP to a new NBMA IP address immediately.
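On such a spoke the relevant tunnel-interface lines would be (the tunnel number is illustrative; the hold time follows the 300-600 second recommendation above):

```
interface Tunnel0
 ip nhrp holdtime 600
 ip nhrp registration no-unique
```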
#7. What type of design allows for two ISPs at our remote offices to carry separate traffic (one ISP for data, one ISP for voice), each spoke connecting one tunnel back to two different hubs, and allows for redundancy and failover?
In this case you would use two DMVPN networks (clouds), one for each ISP. This is a Dual DMVPN Dual Hub type scenario. You use either a 'vrf-lite' or 'tunnel route-via' type configuration to "lock" a tunnel to a particular outbound interface for a particular ISP on the hubs and on those spokes that are attached to two ISPs. On the spokes that are attached to only one ISP, you can either have the spoke be a member of only one DMVPN cloud or of both, where both tunnels would use the same outbound physical interface over the single ISP. In either case, whether a spoke has a single tunnel interface or two tunnel interfaces, the routing protocol running over the tunnels is used to decide which tunnel to use for forwarding packets.
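A sketch of the 'tunnel route-via' variant on a dual-ISP spoke (interface names and tunnel numbers are placeholders):

```
interface Tunnel0
 tunnel source GigabitEthernet0/0
 ! force this tunnel's packets out the ISP 1 interface only
 tunnel route-via GigabitEthernet0/0 mandatory
!
interface Tunnel1
 tunnel source GigabitEthernet0/1
 ! force this tunnel's packets out the ISP 2 interface only
 tunnel route-via GigabitEthernet0/1 mandatory
```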
One thing to note is that you can only build spoke-spoke tunnels within a DMVPN network (cloud), not between DMVPN networks (clouds). If two spokes end up routing packets for each other out different DMVPN networks (clouds), then they will not be able to build a spoke-spoke tunnel to each other. These nodes will still be able to reach each other using a spoke-hub-spoke path. This situation could arise either because the routing happens to be set up that way or because network failures (access to one ISP or the other) have caused the spokes to be on different DMVPN networks (clouds).
If you want to load-balance your use of the two ISPs, then this is the only way to get reasonable statistical load-balancing of the host traffic, since the host traffic will have two routes, one for each tunnel interface, where each tunnel goes over a different ISP. In this way we are able to statistically balance many host flows over the two ISPs. This can play havoc with trying to bring up and use spoke-spoke tunnels: depending on how the two directions of a flow are forwarded out a tunnel, you may or may not get a spoke-spoke tunnel, you may or may not use the spoke-spoke tunnel in one direction, and this can be different for each host flow. Load-balancing over the two DMVPN networks (ISPs) is fairly easy to set up, but as noted, spoke-spoke tunnel usage may be fairly erratic. Often for this type of network it is best to have a primary/backup ISP setup rather than trying for a load-balanced ISP setup.
In your case the forwarding of packets is a little more complicated, since you are trying to route by application type. If your voice endpoints are in a different (sub)network from the data endpoints, then this can be fairly easy to do, since you can separate them by destination IP (sub)net. You can have Hub1 be preferred (better metric) for the voice subnets and Hub2 be preferred for the data subnets. If you cannot separate out the destinations by IP addresses, then you will need to use PBR (Policy Based Routing), which can forward traffic based on layer 4 headers rather than just layer 3 IP destination addresses. The problem with PBR is that it is more static and tricky to set up for redundancy and failover. To give PBR failover capability we often combine it with IP SLA, which probes the primary path and, if that fails, causes PBR to switch to the secondary path. Another possibility would be to use PfR, which I have heard is trying to do more on application-level forwarding, but at this time I don't know if it can help here.
Can you please reply to these questions? These are the last questions which remained unanswered during the live webcast event due to time constraints.
1. When you configure a hub farm for DMVPN, do you assign the same network ID to every hub, or use different IDs for each hub and create a separate spoke tunnel at each remote?
2. Can you talk about designing QoS on top of DMVPN?
3. Can we have multihoming spoke router for dmvpn setup?
4. If I would like to sell DMVPN service to multiple customers by dedicating a hub router at my WAN cloud, do we need to run VRF or MPLS on the spoke and hub routers?
5. Is there any plan to support GRE tunnels on the ASAs?
6. This is a request: please DMVPN on ASA
7. Can I have a spoke connecting back to 2 different hubs using 2 different ISPs and separate the traffic over two tunnels, one tunnel going over each ISP?
8. What is the best and easiest way to route the Internet traffic from a spoke through the tunnel back to the hub and then routed to our corporate Internet gateway which is in a different location than the hub ?
9. I have a dual cloud design terminated on the same hub router connected to different ISPs. I use PBR to forward the router's traffic to the proper ISP, with no luck. A bug has been reported that PBR doesn't handle GRE traffic for route manipulation, so I stuck with static routes to the spokes. Is there any workaround for it?
10. If we have a lot of hub link connections, will the spoke-to-spoke connections continue to work? (NHRP issue)
11. This is a continuation about the 802.1x...What I meant is to authenticate the computers that connect to the spoke router once the tunnel is built to the hub router. Thanks!
12. If you're running DMVPN over the Internet . . . and there is a problem with a path between two spokes - does DMVPN deal with this issue by sending traffic?
13. I want to use DMVPN as a backup with dual hubs and MPLS as the primary connection. If a spoke has neither MPLS connectivity nor DMVPN to the primary hub, but has MPLS or DMVPN connectivity to the backup hub, and the backup hub still has connectivity to the primary hub, can we reroute the traffic to the hub?
14. Can we use all the kinds of encryption that we use in VPN?
15. Is there any limitation for spoke router? How can we use spoke in single dmvpn layout?
16. I would like to see the troubleshooting parameters for DMVPN. If my NHRP is not stable, then what should I do?
17. As per Cisco documentation for DMVPN, we should use EIGRP. Why is that? Is it that we can tweak the cost parameters used by EIGRP, i.e. K1 through K5?
#1. When you configure a hub farm for DMVPN, do you assign the same network ID to every
hub, or use different IDs for each hub, and create a separate spoke tunnel at each remote?
When doing DMVPN SLB the hubs in the hub farm are configured the same so that
the spoke with a single tunnel interface can be connected to any of the hubs and not
be able to tell the difference. To do this the hubs are configured with a loopback interface
that has the same IP address that the hub uses as its 'tunnel source ...' and the IP address,
NHRP network-id, tunnel key (optional), etc. are identical on all the hubs. If you want
to support spoke-spoke tunnels (DMVPN Phase 3) then the hubs are also configured with
another DMVPN network (separate mGRE tunnel) with unique IP address, but the same
NHRP network-id. This gives a communication path between the hubs, since the hubs
cannot use the mGRE tunnel interface with the spokes to talk with each other.
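Sketching the hub-farm idea (every address here is illustrative; the point is that these lines are identical on each hub in the farm):

```
interface Loopback0
 ! same address on every hub; used as the shared tunnel source behind SLB
 ip address 192.0.2.1 255.255.255.255
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp network-id 6
 ip nhrp map multicast dynamic
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel protection ipsec profile ALL
```

For DMVPN Phase 3 each hub would additionally get a second mGRE tunnel with a unique IP address but the same NHRP network-id, as described above, for hub-to-hub communication.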
#2. Can you talk about designing QoS on top of DMVPN?
Hubs - With the DMVPN per-tunnel QoS enhancements (not available on 6500/7600 or ASR), introduced in 12.4(22)T:
This uses NHRP to configure a spoke into an NHRP group and then on the hub to map an NHRP group to a QoS policy. More than one spoke may be in the same NHRP group. When a spoke connects to the hub it will send its NHRP group name to the hub and the hub will match that to a QoS policy. A separate instance of that QoS policy will be instantiated for that spoke even when more than one spoke is in the same NHRP group. This means that each spoke's traffic will be measured separately against the shaping bandwidth (and/or policing) defined by the policy.
Note: If a spoke isn't configured into an NHRP group or an NHRP group is not mapped to a QoS policy then that spoke's traffic will not be subject to QoS.
Note: If you have configured per-tunnel QoS then you cannot also configure a separate QoS policy on the physical interface where the tunnel packets leave the router. The opposite of this is also true. Development is currently working to remove this restriction.
Note: The QoS shapers can take up significant CPU, even if there isn't sufficient traffic to a spoke to force queuing (traffic rate > shape rate) and therefore when you apply per-tunnel QoS on a hub the number of spokes that can be supported by that hub will be significantly reduced. In most cases you will need to reduce the number of supported spokes by half as a rule of thumb. You can test this by watching the CPU utilization as you add more spokes using per-tunnel QoS. Be sure to test the case when the routing protocol needs to reconverge (usually tested by reloading a hub).
It is required that a hierarchical QoS (HQF) policy is used, where the parent policy shapes the tunnel traffic to the inbound rate of the spoke and the child policy does any policing within that shape rate. The QoS policy doesn't need to specify information for matching the tunnel traffic for shaping in the parent policy; this is taken care of automatically when an instance of the QoS policy is mapped to a spoke when it connects. Also, you don't need to use 'qos pre-classify' on the mGRE tunnel. The QoS policing function (child policy) is done on the IP data packet before it is GRE encapsulated and IPsec encrypted.
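A minimal hierarchical-policy sketch for per-tunnel QoS (the NHRP group name, shape rate and policy contents are illustrative): the hub maps the NHRP group to the parent shaper, and each spoke in that group advertises its group name.

```
! hub side
policy-map CHILD-5MB
 class class-default
  fair-queue
!
policy-map PARENT-5MB
 class class-default
  ! shape to the spoke's inbound (download) rate
  shape average 5000000
  service-policy CHILD-5MB
!
interface Tunnel0
 ip nhrp map group SPOKES-5MB service-policy output PARENT-5MB
!
! spoke side
interface Tunnel0
 ip nhrp group SPOKES-5MB
```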