05-20-2017 04:45 PM - edited 03-05-2019 08:34 AM
I am building a WAN and I am having low throughput across it (see diagram for more information).
- I have 10 sites [5 sites in Europe and 5 sites in the US] connected via a provider VPLS.
- Three routers are acting as route reflectors with the same cluster ID [one route reflector in Europe and one in the US].
- I am using L3VPN Encapsulation [GRE] and MPLS [to import and export routes].
- The connections between the core routers and the PEs are via BGP.
- The config on all the routers is similar.
- The sites with the route reflectors have 100 Mb circuits and the other sites have 50 Mb circuits.
- The PEs are connected via iBGP.
- Each PE is connected to its core via eBGP.
Normal ping and traceroute show < 20 millisecond delay; however, when I try to transfer a 2 GB file across the sites, I am getting < 2 MB/s throughput.
Can someone help me to fix this problem?
05-21-2017 12:02 AM
Hello,
the usual suspects are MTU and MSS size. Make sure your tunnels are configured like this:
interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel path-mtu-discovery
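A quick way to confirm whether path MTU is the problem is a ping sweep with the DF bit set. GRE with a key adds 28 bytes of overhead (20-byte outer IP header + 8-byte GRE header including the key), so a 1472-byte ping should fit a 1500-byte path while a 1473-byte one should not. The VRF name and target address below are placeholders for a remote-site host, not values from your network:

```
ping vrf SITE-1 10.99.0.1 size 1472 df-bit repeat 5
ping vrf SITE-1 10.99.0.1 size 1473 df-bit repeat 5
```

If the larger ping fails (or reports that the packet needs fragmenting with DF set) while the smaller one succeeds, the tunnel is fragmenting and the MTU/MSS settings above are the right place to look.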
Also, can you post the configuration of one of the routers?
05-21-2017 01:36 PM
Here is the config:
- I added the "ip tcp adjust-mss" command on the interface between the PE and the core.
- I am using "L3VPN Encapsulation", which automatically creates the Tunnel0 interface [I cannot make any changes under the tunnel interface]:
SITE-1#sh run int tu0
Building configuration...
Current configuration : 5 bytes
end
- I tried to change the MTU size under loopback 6.
ip vrf SITE-1
rd 211:1
route-target export 211:1
route-target import 211:1
route-target import 322:1
route-target import 197:1
route-target import 411:1
route-target import 193:1
route-target import 511:1
route-target import 411:1
route-target import 192:1
route-target import 100:1
route-target import 611:1
interface Loopback100
description LINK TO CORE
ip vrf forwarding SITE-1
ip address 10.0.0.1 255.255.255.255
interface Loopback65200
description IBGP PEERING
ip address 172.16.0.1 255.255.255.255
interface GigabitEthernet0/0/0
description VPLS LINK
ip address 192.168.0.1 255.255.255.0
!
interface GigabitEthernet0/0/1.100
description TRANSIT TO CORE
encapsulation dot1Q 100
ip vrf forwarding SITE-1
ip address 10.0.0.1 255.255.255.248
ip tcp adjust-mss 1340
l3vpn encapsulation ip L3VPN-ENC
transport ipv4 source Loopback65200
protocol gre key 2017
router ospf 65200
router-id 10.0.0.1
network 172.16.0.1 0.0.0.0 area 0
network 192.168.0.1 0.0.0.0 area 0
router bgp 65200
template peer-policy RR-CLIENT-POLICY
soft-reconfiguration inbound
send-community extended
exit-peer-policy
!
template peer-policy EBGP-PEER-POLICY
send-community both
exit-peer-policy
!
template peer-session RR-CLIENT-SESSION
remote-as 65200
update-source Loopback65200
inherit peer-session GLOBAL-BGP-TEMPLATE
exit-peer-session
!
template peer-session GLOBAL-BGP-TEMPLATE
password 7 xxxxxxxxxxxxx
timers 5 15
exit-peer-session
!
template peer-session EBGP-PEER-SESSION
remote-as 65400
password 7 xxxx
ebgp-multihop 2
update-source Loopback100
timers 5 15
exit-peer-session
!
bgp router-id 172.16.0.1
bgp log-neighbor-changes
bgp graceful-restart restart-time 120
bgp graceful-restart stalepath-time 360
bgp graceful-restart
neighbor 172.16.0.200 inherit peer-session RR-CLIENT-SESSION
neighbor 172.16.0.200 description ROUTE-REFLECTOR-1
neighbor 172.16.0.201 inherit peer-session RR-CLIENT-SESSION
neighbor 172.16.0.201 description ROUTE-REFLECTOR-2
neighbor 172.16.0.202 inherit peer-session RR-CLIENT-SESSION
neighbor 172.16.0.202 description ROUTE-REFLECTOR-3
!
address-family vpnv4
neighbor 172.16.0.200 activate
neighbor 172.16.0.200 send-community extended
neighbor 172.16.0.200 inherit peer-policy RR-CLIENT-POLICY
neighbor 172.16.0.200 route-map L3VPN-ENCAPSULATION in
neighbor 172.16.0.201 activate
neighbor 172.16.0.201 send-community extended
neighbor 172.16.0.201 inherit peer-policy RR-CLIENT-POLICY
neighbor 172.16.0.201 route-map L3VPN-ENCAPSULATION in
neighbor 172.16.0.202 activate
neighbor 172.16.0.202 send-community extended
neighbor 172.16.0.202 inherit peer-policy RR-CLIENT-POLICY
neighbor 172.16.0.202 route-map L3VPN-ENCAPSULATION in
exit-address-family
!
address-family ipv4 vrf SITE-1
neighbor 10.0.0.2 inherit peer-session EBGP-PEER-SESSION
neighbor 10.0.0.2 description CORE-ROUTER-1
neighbor 10.0.0.2 activate
neighbor 10.0.0.2 inherit peer-policy EBGP-PEER-POLICY
neighbor 10.0.0.2 route-map CORE_INBOUND in
neighbor 10.0.0.2 route-map CORE_OUTBOUND out
exit-address-family
route-map L3VPN-ENCAPSULATION permit 10
set ip next-hop encapsulate l3vpn L3VPN-ENC
05-21-2017 02:33 PM
Hello,
my bad, I wasn't aware of the details of your setup. Either way, try and add:
l3vpn encapsulation ip L3VPN-ENC
transport ipv4 source Loopback65200
protocol gre key 2017
mpls mtu max
to your l3vpn profiles...
05-21-2017 04:33 PM
I added "mpls mtu max" on 2 sites, but I am still seeing the low throughput - less than 1 MB/s. I performed an extended ping with a high packet size and set the timeout to 1 second; I did not see any drops.
05-21-2017 11:46 PM
Hello,
are the 'tcp adjust-mss' settings part of your original configuration, or did you add them after my suggestion? Either way, remove that line from your interface configuration:
interface GigabitEthernet0/0/1.100
description TRANSIT TO CORE
encapsulation dot1Q 100
ip vrf forwarding SITE-1
ip address 10.0.0.1 255.255.255.248
ip tcp adjust-mss 1340
Also, can you post the output of:
show l3vpn encapsulation ip L3VPN-ENC
and
show interfaces GigabitEthernet0/0/1.100
You might want to try and add a shaper to your interfaces. For the 100MB circuit it would look like this:
policy-map 100MB_OUT
class class-default
shape average 100000000 10000000
and for the 50MB circuit:
policy-map 50MB_OUT
class class-default
shape average 50000000 5000000
Apply the service policy outbound, e.g.:
interface GigabitEthernet0/0/1.100
description TRANSIT TO CORE
encapsulation dot1Q 100
ip vrf forwarding SITE-1
ip address 10.0.0.1 255.255.255.248
service-policy output 50MB_OUT
05-22-2017 05:04 AM
Besides the possible MTU/MSS issue you're working through with Georg, LFNs (long fat networks) are also prone to slow throughput if your receiving hosts do not advertise a receive window (RWIN) sufficient for the bandwidth-delay product (BDP), or if there are any drops causing congestion avoidance (CA).
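To put rough numbers on that, using the 50 Mb circuits and the sub-20 ms RTT mentioned in this thread (the 64 KB window below is the classic pre-window-scaling TCP maximum, an assumption rather than a measured value from your hosts):

```python
# Bandwidth-delay product sketch using figures from this thread:
# 50 Mbit/s circuit, ~20 ms RTT. The 64 KB receive window is an
# assumption (the TCP maximum without window scaling), not a measurement.
link_bps = 50_000_000            # 50 Mb circuit
rtt_s = 0.020                    # < 20 ms from ping/traceroute

# Window needed to keep the pipe full:
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes:.0f} bytes")                    # 125000

# Throughput ceiling if the receiver only ever offers 64 KB:
rwin_bytes = 65535
ceiling_MBps = rwin_bytes / rtt_s / 1e6
print(f"64 KB RWIN ceiling: {ceiling_MBps:.2f} MB/s")   # 3.28
```

A ~3 MB/s ceiling from the window alone, reduced further by any loss-triggered congestion avoidance, lands right around the 1-2 MB/s being observed, so checking that window scaling is enabled on the end hosts is worth doing alongside the MTU work.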