2694 Views · 15 Helpful · 11 Replies

Very slow CIFS transfer from GRE tunnel

Hello,

 

For several months, we have noticed slow CIFS transfers between two sites over the DMVPN.
I have tried everything without success: changing the operator link, changing the switch; the link is not overloaded.
I also changed the MTU (1450).

Any suggestions, please?

Thank you

1 Accepted Solution

Accepted Solutions

Hello,

 

What OS are the endpoint clients running? If it is Windows 10, CIFS/SMB 3.1.1 is used, which has a mandatory pre-authentication integrity check using a SHA-512 hash. This could explain the slowness.

 

Have a look at the link below:

 

https://www.mirazon.com/issues-with-smb-file-transfer-performance-over-vpn/


11 Replies

marce1000
VIP

 

 - For the tunnel, try ip mtu 1400 and ip tcp adjust-mss 1360, and check if this helps.
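As a sketch, assuming the tunnel interface is Tunnel124 as shown in a later post, the commands would be applied like this:

```
interface Tunnel124
 ! Leave generous headroom for GRE plus encryption overhead
 ip mtu 1400
 ! MSS = tunnel IP MTU minus 40 bytes (20 IP + 20 TCP)
 ip tcp adjust-mss 1360
```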

 M.



-- Each morning when I wake up and look into the mirror I always say 'Why am I so brilliant?'
    When the mirror will then always respond to me with 'The only thing that exceeds your brilliance is your beauty!'

Hello Marce,

Thank you for your help, I will try those MTU/MSS values.

Everything is OK: the IP SLA TCP connect is correct (21 ms), and there are no errors on the interfaces.
The only issue is the slow CIFS transfer:

PC (test) <-> Switch <-> Router <-> Mistral (encryption device) <-> Provider router <-> MPLS <-> Provider router <-> Mistral <-> Router <-> Switch <-> PC (test).

Provider bandwidth is 200 Mb, but the CIFS transfer is still very slow (600 KB/s).

We have shaping (burst) for 200 Mb on the router interface towards the Mistral.

Currently, the GRE tunnel is configured with ip mtu 1500 and ip tcp adjust-mss 1430.

Capturing packets with Wireshark, we saw several retransmissions and dropped packets.


Hello

As stated by @marce1000, check the MTU on the DMVPN tunnels.
I would also suggest checking for any issues on the switch/router ports on either side of the file transfer, starting as close to the sources as possible and working your way upstream towards the WAN edge (look for interface errors).
Is this occurring just on file transfers or on all traffic?

Do you have any congestion on the network? Are you performing any traffic shaping/policing?
Have you performed any traffic capture to/from the source/destination (Wireshark), or an EPC on the routers?

You should be able to obtain an RTT measurement between the routers with a basic IP SLA echo operation at one end and an SLA responder at the other end, to see whether you have any delay.
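A minimal sketch of that measurement, using a UDP echo probe against the tunnel addresses from a later post (a plain ICMP echo operation would not need the responder):

```
! Far-end router: enable the SLA responder
ip sla responder
!
! Near-end router: probe the far end every 60 seconds
ip sla 10
 udp-echo 10.2.158.3 5000
 frequency 60
ip sla schedule 10 life forever start-time now
!
! Then check the measured round-trip times with:
!   show ip sla statistics 10
```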

 


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

Hello Paul,

Thank you for your help.

Everything is OK: the IP SLA TCP connect is correct (21 ms), and there are no errors on the interfaces.
The only issue is the slow CIFS transfer:

PC (test) <-> Switch <-> Router <-> Mistral (encryption device) <-> Provider router <-> MPLS <-> Provider router <-> Mistral <-> Router <-> Switch <-> PC (test).

Provider bandwidth is 200 Mb, but the CIFS transfer is still very slow (600 KB/s).

We have shaping (burst) for 200 Mb on the router interface towards the Mistral.

Currently, the GRE tunnel is configured with ip mtu 1500 and ip tcp adjust-mss 1430.

Capturing packets with Wireshark, we saw several retransmissions and dropped packets.

 

Regards

Amine

 

 

Hi,
Many thanks for your suggestion.
FYI, we have already tried changing the MTU value on the tunnel interface:
ip mtu 1500
ip mtu 1400
We also changed the MTU of the test laptop (Windows) to 1450, without improvement.

Joseph W. Doherty
Hall of Fame

For just a "pure" GRE tunnel (24 bytes of overhead) running over Ethernet, you want 3 commands on the tunnel interfaces.

ip mtu 1476
ip tcp adjust-mss 1436
tunnel path-mtu-discovery
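The arithmetic behind those values, as a sketch (20 bytes outer IP + 4 bytes GRE, then 40 bytes TCP/IP inside the tunnel):

```
! Ethernet IP MTU ...................... 1500
! - outer IP header (20) - GRE (4) ..... 1476  -> ip mtu 1476
! - inner IP (20) - TCP (20) ........... 1436  -> ip tcp adjust-mss 1436
```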

However, first you mention using DMVPN, which I thought normally uses IPsec. In a later post you then mention a Mistral encryption device, so it's unclear which device is actually tunneling, which is doing the encryption, and how they interact.

To give you some idea of the issues you can run into, you might want to review: Resolve IPv4 Fragmentation

Also in that same later post, you mention shaping with bursts set to 200 Mbps. What is the actual policy on the router interface to the Mistral device?

BTW, your hosts do have PMTUD enabled, correct?
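One way to sanity-check the usable path MTU from one of the routers, as a sketch (destination address assumed from a later post):

```
! Send a 1476-byte ping with the DF bit set; if it fails,
! step the size down until it succeeds to find the real path MTU
ping 10.2.158.3 size 1476 df-bit
```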

Hello,

We have tried changing the MSS and MTU values without success.

The slow transfer is still present.

Below is the tunnel configuration:

 

interface Tunnel124
vrf forwarding VRF124
ip address 10.2.158.12 255.255.255.248
no ip redirects
ip mtu 1500
ip nhrp authentication tnet78
ip nhrp map multicast 10.130.63.124
ip nhrp map 10.2.158.10 10.130.63.124
ip nhrp map multicast 10.130.62.124
ip nhrp map 10.2.158.9 10.130.62.124
ip nhrp network-id 124
ip nhrp holdtime 18000
ip nhrp nhs 10.2.158.9
ip nhrp nhs 10.2.158.10
ip nhrp registration timeout 300
ip tcp adjust-mss 1360
ip ospf network broadcast
ip ospf priority 0
ip ospf 124 area 0
ip ospf cost 10
tunnel source Loopback124
tunnel mode gre multipoint

 

Any suggestions, please?

Also, we saw many retransmissions and dropped packets (Wireshark capture).

 

Thanks

I see:

interface Tunnel124
vrf forwarding VRF124
.
.
ip mtu 1500
.
.
ip tcp adjust-mss 1360
.
.

And don't see:

tunnel path-mtu-discovery

So, don't know what you've tried.

What about the other side of the tunnel?  Its config?

Further, it's still unknown what your encryption device is doing.  Have you also tried the tunnel w/o the encryption device in-line?  Or have you tried a data transfer w/o the tunnel?

We also don't know your bandwidths, on both tunnel end-points and whether there's any in-path congestion.

There are other variables that can very much impact WAN data transfer performance, i.e. this might not be a "tunnel" issue.

In my experience, properly configured Cisco GRE tunnels don't much impair data transfer performance.

Hello Joseph,

We bypassed the encryption device and tried several CIFS transfers with the tunnel configuration below.

BW site A: 200M

BW site B: 400M

There was no congestion or saturation during the test transfers.

 Site A

interface Tunnel123
description A-B
vrf forwarding VRF123
ip address 10.2.158.4 255.255.255.248
no ip redirects
ip mtu 1476
ip nhrp authentication tnet78
ip nhrp map multicast 10.130.63.123
ip nhrp map 10.2.158.2 10.130.63.123
ip nhrp map multicast 10.130.62.123
ip nhrp map 10.2.158.1 10.130.62.123
ip nhrp network-id 123
ip nhrp holdtime 18000
ip nhrp nhs 10.2.158.1
ip nhrp nhs 10.2.158.2
ip nhrp registration timeout 300
ip tcp adjust-mss 1436
ip ospf network broadcast
ip ospf priority 0
ip ospf 123 area 0
ip ospf cost 10
tunnel source Loopback123
tunnel mode gre multipoint
tunnel path-mtu-discovery

 

 Site B :

interface Tunnel123
description A-B
vrf forwarding VRF123
ip address 10.2.158.3  255.255.255.248
no ip redirects
ip mtu 1476
ip nhrp authentication tnet78
ip nhrp map multicast 10.130.63.123
ip nhrp map 10.2.158.2 10.130.63.123
ip nhrp map multicast 10.130.62.123
ip nhrp map 10.2.158.1 10.130.62.123
ip nhrp network-id 123
ip nhrp holdtime 18000
ip nhrp nhs 10.2.158.1
ip nhrp nhs 10.2.158.2
ip nhrp registration timeout 300
ip tcp adjust-mss 1436
ip ospf network broadcast
ip ospf priority 0
ip ospf 123 area 0
ip ospf cost 10
tunnel source Loopback123
tunnel mode gre multipoint
tunnel path-mtu-discovery

Here are the results:

 Copy from A to B: 14 MB/s (still weak)

 Copy from B to A: 1 MB/s (very weak)

 

I have tried everything without real results.
Maybe we'll run some tests directly on the operators' routers.

It's a fairly complex subject, which impacts a lot of users.
Thank you again for your help and your comments.

 

AM

Thank you for trying that!

Have you done any packet captures with that configuration, and if so, are you still seeing retransmissions and/or packet drops?
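If captures from the routers themselves would help, here's an Embedded Packet Capture sketch (IOS-XE syntax, interface name assumed from your earlier post):

```
monitor capture CAP interface Tunnel123 both
monitor capture CAP match ipv4 any any
monitor capture CAP start
! ... reproduce the slow transfer ...
monitor capture CAP stop
monitor capture CAP export flash:cifs-slow.pcap
```

The exported pcap can then be opened in Wireshark to compare against the host-side captures.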

In an earlier post, you also mentioned you're shaping.  Where exactly is that configured, and if on a Cisco device, what does the configuration look like?

Since this is a multi-point configuration, is it possible, multiple DMVPN tunnel interfaces might be sending to the same termination tunnel interface?

What you might also try is a UDP packet generator.  Starting on one end, send fixed-rate streams at different rates to the other, and see what the received stream's rate is.  One difference from SLA and ping tests is that bulk data transfers can, in theory, saturate the path, so it's possible a constant heavy data stream will reveal an issue.  For example, if you send 300 Mbps across a path that should be able to support 200 Mbps, you should receive 200 Mbps (the other 100 Mbps being lost - but this would confirm the path supports 200 Mbps, as expected).  Also keep in mind that this kind of test, unless you have a great QoS configuration, will generally "crush" other concurrent traffic.  So if you're w/o QoS, I suggest you run such a test during off-hours (although an off-hours test doesn't confirm prime-time bandwidth).
