GRE Tunnel Routing Problem

pmarchant1
Level 1

I have created a GRE tunnel between my data center and my remote VPS server. The data center router is a Cisco 2800, and the pfSense router is an ESXi VM. The data center uses a "router on a stick" implementation. The tunnel is up successfully, and from the data center I can ping all the endpoints on the VPS side, so far the router and the Win10 box.

However, if I try to ping in the reverse direction, I can only get as far as the Cisco 2800.

Strangely, I can RDP from the data center to the Win10 box AND see the web interface of the pfSense box too.

 

Data Center LAN      Meraki MX Router                                pfSense        VPS LAN

192.168.0.0/24 ------|------ 144.144.34.2 --------- 80.0.34.2 ------|------ 192.168.3.0/24

                     |         Tunnel 10.10.10.1    Tunnel 10.10.10.2

                     |

                Cisco 2800

                Eth0/0 192.168.0.166

 

I cannot work out whether I need a route in place on the pfSense box or on the Cisco. My understanding is that if my pings/traceroutes only get as far as 192.168.0.166, then I need something extra on the Cisco to forward the traffic on? As I say, everything is fine from the DC to the VPS; it is just the other way round that fails!

 

Thanks all for looking at my post. All help appreciated. 

                                                      

6 Replies

Richard Burts
Hall of Fame

There are several things that are not clear to me about your situation. You describe the data center router as router on a stick, but your drawing shows only a single subnet. You describe the 2800 router and its inside interface as 192.168.0.166. Is the outside interface of the router 144.144.34.2? How does the Meraki router fit into this? And how are you routing traffic through the tunnel?

 

When you ping from the data center, is this from the router or from devices inside the data center LAN? Could I suggest a step-by-step approach to testing from the remote device:

1) ping the VPS end of the tunnel 10.10.10.2

2) ping your end of the tunnel 10.10.10.1

3) ping the router inside interface 192.168.0.166

4) ping some device inside your network

 

Can you post the configuration of your router?
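Alongside the configuration, output from a few standard IOS show commands on the 2800 would also help narrow this down (these are generic IOS diagnostics, not anything specific to this setup):

```
show ip interface brief
show ip route
show interfaces Tunnel0
```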

 

HTH

 

Rick


Hi Rick, thank you so much for the reply to my post. I had been working on this for many weeks before posting, as I wanted to work the problem. :)

For clarification: yes, there is only a single subnet on the DC side at this time, and the 2800 plugs into the MX. The MX is the main router for the network, and the 2800 serves only the GRE part.

 

 

Tests as follows:

<Testing from the Cisco 2800>

1) ping the VPS end of the tunnel 10.10.10.2

     Sending 5, 100-byte ICMP Echos to 10.10.10.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

2) ping your end of the tunnel 10.10.10.1

    Sending 5, 100-byte ICMP Echos to 10.10.10.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 32/37/40 ms

3) ping the router inside interface 192.168.0.166

    Sending 5, 100-byte ICMP Echos to 192.168.0.166, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

4) ping some device inside your network

     Sending 5, 100-byte ICMP Echos to 192.168.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

 

<Testing from the VPS Side>

1) ping the VPS end of the tunnel 10.10.10.2

    Pinging 10.10.10.2 with 32 bytes of data:
Reply from 10.10.10.2: bytes=32 time=37ms TTL=254
Reply from 10.10.10.2: bytes=32 time=37ms TTL=254
Reply from 10.10.10.2: bytes=32 time=37ms TTL=254
Reply from 10.10.10.2: bytes=32 time=35ms TTL=254

2) ping your end of the tunnel 10.10.10.1

    Pinging 10.10.10.1 with 32 bytes of data:
Reply from 10.10.10.1: bytes=32 time<1ms TTL=64
Reply from 10.10.10.1: bytes=32 time<1ms TTL=64
Reply from 10.10.10.1: bytes=32 time<1ms TTL=64
Reply from 10.10.10.1: bytes=32 time<1ms TTL=64

3) ping the router inside interface 192.168.0.166

    Pinging 192.168.0.166 with 32 bytes of data:
Reply from 192.168.0.166: bytes=32 time=33ms TTL=254
Reply from 192.168.0.166: bytes=32 time=34ms TTL=254
Reply from 192.168.0.166: bytes=32 time=38ms TTL=254
Reply from 192.168.0.166: bytes=32 time=51ms TTL=254

4) ping some device inside your network

    Pinging 192.168.0.6 with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

 

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Current configuration : 1354 bytes
!
! Last configuration change at 18:02:50 UTC Sat Jan 5 2019
!
version 15.1
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname router
!
boot-start-marker
boot-end-marker
!
!
no logging console
!
no aaa new-model
!
dot11 syslog
ip source-route
!
!
!
!
!
ip cef
no ipv6 cef
!
multilink bundle-name authenticated
!
!
!
!
!
!
!
!
!
!
!
voice-card 0
!
crypto pki token default removal timeout 0
!
!
!
!
license udi pid CISCO2801 sn FCZ124712CT
!
redundancy
!
!
!
!
!
!
!
!
!
!
interface Tunnel0
description Tunnel to VPS
ip address 10.10.10.2 255.255.255.252
ip mtu 1400
ip tcp adjust-mss 1360
tunnel source FastEthernet0/0
tunnel destination 144.*.*.* (IP Obscured)
tunnel path-mtu-discovery
!
interface FastEthernet0/0
ip address 192.168.0.166 255.255.255.0
no ip route-cache cef
no ip route-cache
duplex auto
speed auto
!
interface FastEthernet0/1
no ip address
shutdown
duplex auto
speed auto
!
ip forward-protocol nd
no ip http server
no ip http secure-server
!
!
no ip route static inter-vrf
ip route 0.0.0.0 0.0.0.0 192.168.0.1
ip route 192.168.3.0 255.255.255.0 Tunnel0
!
logging esm config
!
!
!
!
!
!
control-plane
!
!
!
!
mgcp profile default
!
!
!
!
!
!
line con 0
speed 115200
line aux 0
line vty 0 4
login
transport input all
!
scheduler allocate 20000 1000
end

 

Thank you. Paul

 

Thank you for the information. I believe the problem is a routing issue. Traffic from the VPS comes through the GRE tunnel and reaches the device in your network. When your device responds, the response packet does not go back to the 2800; it goes to the device's default gateway, which is the MX. My guess is that the MX does not know the GRE tunnel exists and should be the way back to the VPS. Ordinarily I would look at something like Policy Based Routing to solve this. Do you know whether the MX supports anything like that?
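The asymmetric-path theory can be illustrated with a minimal longest-prefix-match sketch, using the addresses from this thread. The assumption here is that the DC hosts only carry a default route pointing at the MX (192.168.0.1, the same gateway the 2800's own default route uses), so replies toward 192.168.3.0/24 never reach the 2800 or the tunnel:

```python
import ipaddress

def next_hop(routes, dst):
    """Return the gateway of the most specific route matching dst."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, gw in routes.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, gw)
    return best[1] if best else None

# Typical DC host table (assumption): just a default route via the MX.
host_routes = {"0.0.0.0/0": "192.168.0.1"}
print(next_hop(host_routes, "192.168.3.10"))   # reply goes to the MX

# With a specific route for the VPS subnet via the 2800, the reply
# takes the GRE path instead.
host_routes["192.168.3.0/24"] = "192.168.0.166"
print(next_hop(host_routes, "192.168.3.10"))   # reply goes to the 2800
```

This is only a model of the forwarding decision, not a fix in itself, but it shows why the echo request arrives and the echo reply is lost.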

 

It would be a bit of a kludge, but it occurs to me that it might work if you configure the 2800 as the default gateway for the devices in your network. If all your devices sent their outbound traffic to the 2800, it could send the appropriate traffic through the GRE tunnel and forward everything else to the MX.
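A lighter-weight variant of the same idea, for testing from a single DC host before changing anything network-wide, would be a per-host static route for just the VPS subnet via the 2800. A hypothetical example on the Win10 box (run from an elevated prompt; remove afterwards with `route delete 192.168.3.0`):

```
:: Send only 192.168.3.0/24 via the 2800; the default gateway stays on the MX.
route -p add 192.168.3.0 mask 255.255.255.0 192.168.0.166
```

If pings from the VPS side to that one host then succeed, that would confirm the return-path diagnosis.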

 

HTH

 

Rick


Hi Rick. Thank you again for your valued support.



I do have a static route in place for the traffic to the VPS, which I assume is why that direction works as it should.



<192.168.3.0/24>



This is the documentation I have found on PBR: https://documentation.meraki.com/MX/Firewall_and_Traffic_Shaping/MX_Load_Balancing_and_Flow_Preferences



If this is a bust, then I will just have to look for a different option.



Thank you again, Rick!



Paul



 

I do not claim expertise with Meraki. But reading the reference that you sent, it certainly sounds like you would be able to configure the MX with a second "WAN" interface and send traffic headed for the VPS subnet over it.

 

HTH

 

Rick


Hi Rick!

Yes, I see that and thought the same too.
OK, that gives me something to work on. Thank you again for your valued help and support on this; I really appreciate it.

Take care,
All the best, Paul