We have two hubs and two spokes at each site; the two hubs are for redundancy.
Spoke 1 has Tunnel 1 and Spoke 2 has Tunnel 2, and both spokes are connected to both hubs.
Tunnel 1 carries internal application traffic hosted at HQ, and Tunnel 2 carries the branch-office IP phone traffic to the
Call Manager at HQ; the phones register to Call Manager through that tunnel.
We use offset lists to prefer traffic over one tunnel versus the other.
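For anyone following along, an offset list like the one described might look roughly like this; the ACL name, prefix, offset value, and interface are all hypothetical, not taken from the poster's config:

```
! Hypothetical sketch: prefer Tunnel 1 for HQ app traffic by inflating
! the EIGRP metric of those prefixes when learned over Tunnel 2.
ip access-list standard HQ-APP-SUBNETS
 permit 10.10.0.0 0.0.255.255
!
router eigrp 100
 ! Add 10000 to the metric of matching prefixes received on Tunnel2,
 ! so the path via Tunnel1 wins for that traffic.
 offset-list HQ-APP-SUBNETS in 10000 Tunnel2
```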
The real problem is on the branch side: spokes talk to each other via Tunnel 1 and Tunnel 2,
so Site A talks to Site B directly over Tunnel 1, and Tunnel 2 talks to Tunnel 2.
The problem is when Tunnel 1 goes down on either side due to an ISP issue. Say Tunnel 1 is down at
Site A: Site A can still reach Site B over Tunnel 2, but the traffic goes to HQ first and then to Site B, which causes
high latency. Is there any way to fix that in DMVPN so that Tunnel 1 on one spoke side can talk directly
to Tunnel 2 on the other spoke side without going through the hub?
If we cannot achieve that in DMVPN, is there any other solution to overcome this problem? Thanks.
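For context, each spoke router's tunnel in a design like the one described (one mGRE tunnel per spoke, registered to both hubs as NHSs) might look roughly like this; every address, interface name, and tunnel key here is a made-up placeholder, not the poster's actual config:

```
! Hypothetical DMVPN spoke sketch: one mGRE tunnel, two NHS entries
! (one per hub) for redundancy.
interface Tunnel1
 ip address 172.16.1.10 255.255.255.0
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
 tunnel key 1
 ip nhrp network-id 1
 ! Register with both hubs so either can serve as NHS.
 ip nhrp nhs 172.16.1.1 nbma 203.0.113.1 multicast
 ip nhrp nhs 172.16.1.2 nbma 203.0.113.2 multicast
```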
The Configuration Guide and CVD have a lot of great info on designing and scaling DMVPN. Here's a good place to get started. http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_conn_dmvpn/configuration/xe-16/sec-conn-dmvpn-xe-16-book/sec-conn-dmvpn-backup-nhs.html
I think it really depends on the DMVPN phase in use and the routing design.
Phase 3 could be a problem, as the shortcut would be created in response to the hub's redirect.
I think Phase 2 should work fine: if Tunnel 1 goes down, Spoke B would see the more-specific route on Tunnel 2 only.
Which phase are you currently running? I would recommend trying a Phase 2 or Phase 3 configuration if you haven't yet.
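As a starting point, a Phase 3 setup is usually just a pair of NHRP commands on top of the existing tunnels; the interface names are assumptions. Note that NHRP builds spoke-to-spoke shortcuts within a single DMVPN cloud, so a shortcut from Tunnel 1 at one spoke to Tunnel 2 at another would only form if both tunnels share the same underlying transport:

```
! Hypothetical Phase 3 sketch.
! On the hub: tell spokes to look for a shorter path.
interface Tunnel1
 ip nhrp redirect
!
! On each spoke: install the shortcut route when the redirect arrives.
interface Tunnel1
 ip nhrp shortcut
```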
If you still experience issues with these configurations, the only other options I can think of would be to implement some type of QoS on your WAN or increase your WAN link size.