
Connectivity between HQ and Branch offices

quietylady
Level 1

Hello,

I have several branch offices that communicate with HQ via MPLS VPN connectivity. I have configured DMVPN for LAN routing with the EIGRP protocol. What I have been noticing is that connectivity between HQ and a branch office on the WAN interface is OK, but on the LAN interface it fluctuates a lot, even when there is no device connected behind the LAN interface. The fluctuation is so severe that it affects the overall performance of the branch office, considering that all applications are centralized at HQ.

 

Please advise on what could be causing such behavior.

13 Replies

Richard Burts
Hall of Fame

I am not clear what you mean when you say it fluctuates a lot. Can you clarify this? Can you also provide some details about the branch environment? What kind of network equipment is it? How is it configured? At times when there is a problem, have you checked the network equipment to see if it is reporting any errors or issues?
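
For example, on the branch equipment you could start with something like this (the interface names here are placeholders for whatever is actually in use at the branch):

show interfaces GigabitEthernet0/0
show interfaces GigabitEthernet0/0 | include errors|drops|CRC
show ip eigrp neighbors
show logging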

 

HTH

Rick

Hello,

By fluctuating I mean the link becomes extremely slow even if the bandwidth utilization is only 50%. I have Cisco devices at both sites (900 series routers and 2960 series switches).

Yes, I have checked the devices and no errors were found. In fact, the misbehavior is continuous and is present even when there is no traffic passing from LAN to WAN.

I have DMVPN configured between HQ and all branch offices with the EIGRP routing protocol.

I hope your questions have all been answered.




Hello,

 

Could be an MTU problem. Can you post the configs of the hub and one of the affected branches?

 

What is the MTU setting of your tunnel? 1416 is usually a good value. Try that together with path MTU discovery (PMTUD):

 

interface Tunnel0
ip address 10.1.1.2 255.255.255.0
no ip redirects
ip mtu 1416
ip hold-time eigrp 1 35
no ip next-hop-self eigrp 1
no ip split-horizon eigrp 1
ip nhrp network-id 1
ip nhrp nhs 10.1.1.1 nbma 192.168.0.2 multicast
ip nhrp shortcut
tunnel path-mtu-discovery
tunnel source 192.168.11.2
tunnel mode gre multipoint
tunnel protection ipsec profile DMVPN
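
With that in place, you can verify the effective tunnel MTU end to end with an extended ping that sets the DF bit (the address and size here are taken from the example above):

ping 10.1.1.1 size 1416 df-bit

If pings of that size fail while smaller ones succeed, something along the path is still fragmenting or dropping large packets.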

Please find the branch configuration below:

 

interface Tunnel0
bandwidth 100000
ip address 192.168.54.44 255.255.255.0
no ip redirects
ip mtu 1400
no ip split-horizon eigrp 100
ip nhrp authentication xxxxx
ip nhrp map multicast dynamic
ip nhrp map multicast x.x.x.x
ip nhrp map 192.168.54.1 x.x.x.x
ip nhrp network-id 1000
ip nhrp holdtime 3600
ip nhrp nhs 192.168.54.1
delay 10000
tunnel source GigabitEthernet0/0
tunnel mode gre multipoint
tunnel key 10
tunnel path-mtu-discovery
tunnel protection ipsec profile xxxxxx

Does the MTU value need to be the same for both the branch and HQ?

 

Please find the hub configuration below:

 

interface Tunnel0
bandwidth 100000
ip address 192.168.54.1 255.255.255.0
no ip redirects
ip mtu 1500
no ip split-horizon eigrp 100
ip nhrp authentication xxxxx
ip nhrp map multicast dynamic
ip nhrp network-id 1000
ip nhrp holdtime 3600
delay 5000
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
tunnel key 10
tunnel path-mtu-discovery
tunnel protection ipsec profile xxxx
end

Hello,

 

Change the MTU on the hub to 1400; the values need to match. Also, what is the purpose of the delay value?
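
On the hub, that change would look like this (a minimal sketch; the ip tcp adjust-mss line is a common companion setting for DMVPN and an assumption on my part, it is not in your posted config):

interface Tunnel0
ip mtu 1400
ip tcp adjust-mss 1360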

The delay value is used to prefer one of the primary and secondary links that we have.

So all links have tun0 with the lower delay configuration and tun1 with the higher delay configuration.

Whenever the primary link goes down, the secondary takes over.

Hope that is clear.
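
For reference, that preference scheme would look roughly like this (a sketch; the Tunnel1 delay value is an assumption based on the description, since the posted configs only show Tunnel0):

interface Tunnel0
delay 10000
! lower delay = lower EIGRP metric, so this is the preferred path
interface Tunnel1
delay 20000
! higher delay makes this the backup; EIGRP falls back to it when Tunnel0 goes down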

OK, understood. But make sure the MTU values match, that is really important...

That seems to have solved the problem.

So what is the concept behind this change? I had a consultant who did the initial configuration. Is it possible to share a few details?

Hello,

 

Basically, GRE and IPsec add overhead bytes to each packet, and that is what you need to account for. If one side sends larger packets than the other side expects, those packets need to be fragmented, which then causes delay...
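
As a rough worked example (the exact ESP overhead depends on the cipher and mode, so these numbers are approximate):

1500 bytes physical interface MTU
- 28 bytes GRE encapsulation (20-byte outer IP header + 8-byte GRE header including the tunnel key)
- up to ~72 bytes IPsec ESP overhead (SPI, sequence number, IV, padding, pad length, next header, ICV)
= roughly 1400 bytes left for the inner packet, which is why ip mtu 1400 on both tunnel interfaces is a safe value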

Hello,

 

Oh okay, thank you for the clarification.

However, the frequent timeouts still seem to exist while pinging the LAN:

 

PING FROM BRANCH TO HUB LAN IP

Sending 500, 100-byte ICMP Echos to 192.168.54.1, timeout is 2 seconds:
!!!!!!!!!.!!!!!!!!!!!!.!!!!!!!!!!!!!..!!!!!!!.!!!!!!!.!!!!!!!!.!.!!!!!
!!!......!!.!.!!!!!!!.!...!!!!!!!.!!!!!!!!!!...!.......!!!!!!!..!.....
..!!!!!!!!!!!!!.!!.!.!!!!!!.!..!...!.!!..!!!!!!.!!!!!!!!!!!.......!.!!
.!!!!!!.!.!.!..!.!.!!!!!!..!!...!..!!!!!!!....!!...!.!!!!!!!.!.!!!!!!!
!..!!!!!!!!!!!!!.!.!..!!!!!!!.!!!!!!..!..!..!!!!!!!!!!..!!!!!!!..!...!
!!!!!!.!!!....!....!!!!!!!!.!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!.!!!!
!!!!!!!!!!!!!!!!!!!.!!!!!!.!.!....!....!!!.!.!.!!.....!!!!!..!........
!!!!!!!!!.
Success rate is 70 percent (352/500), round-trip min/avg/max = 24/27/132 ms


PING FROM TUNNEL TO HUB WAN IP

Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to x.x.x.x, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (100/100), round-trip min/avg/max = 24/24/28 ms

 

Any idea?

Though I would like to keep them under monitoring for at least 24 hours to be sure the problem has been resolved.
