
Poor throughput of the GRE tunnel between ASRs

Argamak
Level 1

Hello everyone. Two sites are connected by a provider circuit. On each side there is an ASR-902 router (Cisco IOS XE Software, Version 16.09.04). A GRE tunnel is configured between the routers for OSPF. The data transfer rate through this tunnel is about 1 Mbps. When we connect laptops in place of the routers and measure the speed between them, it is slightly over 20 Mbps. What could cause the poor speed in the GRE tunnel between the two ASR-902s? Could this be some kind of firmware bug in the ASR?

 

#show running-config interface tunnel 0
Building configuration...

Current configuration : 178 bytes
!
interface Tunnel0
 description Tunnel_to_B
 bandwidth 10000
 ip address x.x.x.x/30
 tunnel source y.y.y.y
 tunnel destination z.z.z.z
end

#show interface tunnel 0
Tunnel0 is up, line protocol is up
  Hardware is Tunnel
  Description: Tunnel_to_B
  Internet address is x.x.x.x/30
  MTU 17916 bytes, BW 10000 Kbit/sec, DLY 50000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation TUNNEL, loopback not set
  Keepalive not set
  Tunnel linestate evaluation up
  Tunnel source y.y.y.y, destination z.z.z.z
  Tunnel protocol/transport GRE/IP
    Key disabled, sequencing disabled
    Checksumming of packets disabled
  Tunnel TTL 255, Fast tunneling enabled
  Tunnel transport MTU 1476 bytes
  Tunnel transmit bandwidth 8000 (kbps)
  Tunnel receive bandwidth 8000 (kbps)
  Last input 00:00:01, output 00:00:01, output hang never
  Last clearing of "show interface" counters 1y33w
  Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 2786
  Queueing strategy: fifo
  Output queue: 0/0 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     133985756 packets input, 74757220179 bytes, 0 no buffer
     Received 0 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     99270058 packets output, 13988905457 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 output buffer failures, 0 output buffers swapped out

#show ip interface tunnel 0
Tunnel0 is up, line protocol is up
  Internet address is x.x.x.x/30
  Broadcast address is 255.255.255.255
  Address determined by setup command
  MTU is 1476 bytes
  Helper address is not set
  Directed broadcast forwarding is disabled
  Multicast reserved groups joined: 224.0.0.5 224.0.0.2
  Outgoing Common access list is not set
  Outgoing access list is not set
  Inbound Common access list is not set
  Inbound  access list is not set
  Proxy ARP is enabled
  Local Proxy ARP is disabled
  Security level is default
  Split horizon is enabled
  ICMP redirects are always sent
  ICMP unreachables are always sent
  ICMP mask replies are never sent
  IP fast switching is enabled
  IP Flow switching is disabled
  IP CEF switching is enabled
  IP CEF switching turbo vector
  IP Null turbo vector
  Associated unicast routing topologies:
        Topology "base", operation state is UP
  IP multicast fast switching is enabled
  IP multicast distributed fast switching is disabled
  IP route-cache flags are Fast, CEF
  Router Discovery is disabled
  IP output packet accounting is disabled
  IP access violation accounting is disabled
  TCP/IP header compression is disabled
  RTP/IP header compression is disabled
  Probe proxy name replies are disabled
  Policy routing is disabled
  Network address translation is disabled
  BGP Policy Mapping is disabled
  Input features: MCI Check

#show interface BDI 555
BDI555 is up, line protocol is up
Hardware is BDI, address is 848a.8d4b.2abf (bia 848a.8d4b.2abf)
Description: B
Internet address is y.y.y.y/30
MTU 1500 bytes, BW 10000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not supported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:02, output 00:00:02, output hang never
Last clearing of "show interface" counters never
Input queue: 0/375/0/1406 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
134716628 packets input, 78189079604 bytes, 0 no buffer
Received 0 broadcasts (1236 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
161215711 packets output, 79038873019 bytes, 0 underruns
0 output errors, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
8 Replies

marce1000
VIP

 

 - Check this document :

             https://community.cisco.com/t5/networking-documents/gre-tunnel-mtu-interface-mtu-and-fragmentation/ta-p/3673508

 M.



-- Each morning when I wake up and look into the mirror I always say ' Why am I so brilliant ? '
    When the mirror will then always respond to me with ' The only thing that exceeds your brilliance is your beauty! '


Georg Pauwen

Hello,

 

try and set the values below on both tunnel interfaces:

 

ip mtu 1400

ip tcp adjust-mss 1360
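
In context, applied on the tunnel interface of both routers, the change would look roughly like this (a sketch based on the configuration shown earlier in the thread; the sanitized addresses are placeholders):

```
interface Tunnel0
 description Tunnel_to_B
 bandwidth 10000
 ip address x.x.x.x/30
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source y.y.y.y
 tunnel destination z.z.z.z
```

The `ip mtu` lowers the payload size so that encapsulated packets fit the underlay without fragmentation, and `ip tcp adjust-mss` rewrites the MSS in TCP SYNs so endpoints never send segments that large in the first place.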

Argamak
Level 1

Changing the MTU value does not improve the speed in this case.

pman
Spotlight

Hi,

 

I have a number of questions that may help point you toward a solution:

1. Can you elaborate a bit on the "provider channel", perhaps with a topology diagram?
2. Did you make sure there is as little IP fragmentation as possible?
3. Can you elaborate on how you test the speed (what software do you use, and what kind of traffic goes between the routers)?
4. Is the OS the same on both sides?
5. What software do you use to test the speed?

1. GRE over a leased provider circuit:

msk_volh.jpg

2. We have many GRE tunnels on other routers; the speed problem occurs only on the ASRs. We tried changing the MSS parameter, but it didn't get any better.

3-5. iperf on laptops connected in place of the routers
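
Regarding question 2, one quick way to check for fragmentation is an extended ping from the router across the tunnel with the DF bit set (a sketch; "far-end tunnel address" stands in for the sanitized peer address). With the tunnel's transport MTU of 1476, a 1476-byte DF ping should succeed and a 1477-byte one should fail:

```
Router# ping <far-end tunnel address> size 1476 df-bit
Router# ping <far-end tunnel address> size 1477 df-bit
```

If the larger ping succeeds anyway, something along the path is fragmenting or the effective MTU differs from what the tunnel reports.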

Joseph W. Doherty
Hall of Fame

The usual cause of poorer performance across any kind of tunnel is that the tunnel's effective MTU is smaller than the MTU of the data being encapsulated, and the issues this creates.

Cisco has a nice TechNote on the subject, found here: https://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html

The best option is using "TCP adjust-MSS" (on the tunnel interface), as already suggested by @Georg Pauwen.  Since your tunnel lists its MTU as

Tunnel transport MTU 1476 bytes

, you would normally set the adjust-MSS parameter 40 bytes less, that is, rather than what Georg suggests, to 1436.

Do understand, this only works for TCP sessions that are "opened" across the interface while this command is active.  It doesn't work for non-TCP traffic, and a value 40 bytes less can also be wrong, since both IP and TCP options can grow either header beyond its usual 20 bytes.
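
The arithmetic behind these numbers can be sketched as follows (a minimal illustration; it assumes a basic 4-byte GRE header with no key/checksum/sequence, and IP/TCP headers without options):

```python
# GRE tunnel MTU/MSS arithmetic (sketch; assumes minimal headers).
UNDERLAY_MTU = 1500   # MTU of the physical/BDI transport interface
OUTER_IP = 20         # outer IPv4 header added by the tunnel
GRE_HDR = 4           # basic GRE header (no key, checksum, or sequencing)
INNER_IP = 20         # inner IPv4 header, no options
TCP_HDR = 20          # TCP header, no options

# Largest inner packet that fits without fragmenting the outer packet:
tunnel_mtu = UNDERLAY_MTU - OUTER_IP - GRE_HDR   # matches "Tunnel transport MTU 1476 bytes"

# Largest TCP payload per tunneled packet, i.e. the adjust-MSS value:
adjust_mss = tunnel_mtu - INNER_IP - TCP_HDR     # 1436

print(tunnel_mtu, adjust_mss)
```

If GRE checksumming or keys were enabled, the GRE header would grow and both values would shrink accordingly.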

Even with TCP, there are other issues you can run into; again, the TechNote discusses those.

Also, again, non-TCP traffic can be problematic, because it often doesn't use PMTUD.  So @pman's 3rd question is significant (as is what kind of "normal/expected" data traffic will be using this tunnel).

Argamak
Level 1

Could it be that GRE is being handled by the ASR's CPU while the rest of the traffic is handled differently, causing the poor speed in GRE? Perhaps, for GRE to work correctly, the ASR needs to be switched to a different operating mode, for example by changing the SDM template?

Argamak
Level 1

"Prior to Cisco IOS XE Gibraltar Release 16.12.x, GRE tunnels could provide a speed of 520 kbps for unidirectional and 250 kbps for bidirectional traffic. Starting with Cisco IOS XE Gibraltar Release 16.12.x, GRE tunnels enable the traffic to pass at a speed according to the size of the interface".

 

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_gre/configuration/xe-3s/asr903/16-12-1/b-gre-xe-16-12-asr900.html 

 

What prevented them from fixing this problem earlier?! WTF???
