6to4 IPv6 tunnelling doesn't work in MPLS network

Hello expert,

When we configure IPv6 6to4 tunneling on our WAN router, which connects through our service provider's MPLS network (the interface runs BGP) to our remote sites, it does not work at all. However, a manually configured point-to-point IPv6 tunnel works without any problem.

I wonder why it doesn't work with 6to4 IPv6 tunneling on a WAN router connected to the service provider's MPLS network. Can anyone explain this to me in detail?

Thank you very much for your explanation.

Regards,

Alex


7 Replies

James Leinweber
Level 4

Unless your WAN router is sending the anycast 6to4 traffic for 192.88.99.0/24 to your own 6to4 relay, you are depending on the upstream backbone having a nearby, functioning third-party anycast 6to4 relay that will accept your traffic.  Probably there is no such device; most folks who were experimenting with RFC 3068-style 6to4 relays gave up on them several years ago and decommissioned their equipment; certainly my own UW-Madison did.  Also note that 6to4 is incompatible with NAT44, since the special 2002::/16 v6 prefix has to be suffixed with a public v4 address so that the destination's reply relay can re-encapsulate the v6 payload in a routable v4 envelope.  Recall that 6to4 relays have to be dual-stack anycast for both 192.88.99.0/24 on v4 and 2002::/16 on v6.
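
For anyone who does want to run their own relay, here is a minimal sketch of 6to4 relay routing in IOS; the interface names and relay loopback are illustrative assumptions, not anything from this thread:

--------------------------------------------------------

! Hypothetical 6to4 relay sketch; names and numbering are assumptions.
! 2002:C058:6301:: is derived from the anycast address 192.88.99.1
! (hex C0.58.63.01), per the RFC 3056 prefix arithmetic.
interface Loopback0
 ip address 192.88.99.1 255.255.255.0
!
interface Tunnel2002
 description 6to4 relay tunnel
 no ip address
 ipv6 address 2002:C058:6301::1/128
 tunnel source Loopback0
 tunnel mode ipv6ip 6to4
!
! Route all 6to4-destined v6 traffic into the tunnel, and advertise the
! anycast 192.88.99.0/24 prefix into your IGP/BGP as appropriate.
ipv6 route 2002::/16 Tunnel2002

--------------------------------------------------------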

The performance characteristics reported out of APNIC by Geoff Huston showed that you were lucky to get TCP connection success as high as 85% (lots of firewalls block return traffic on protocol 41, the internet had blackhole relays whose anycast routing reach was wider than their forwarding ACLs, ...), and even the successful connections had nasty latency and jitter issues compared to native v6.  Only Teredo tunnels were worse.   The upshot is that 6to4 was deprecated to historic "don't use this in production networks" status by RFC 7526 (2015).
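
To make the firewall point concrete: protocol 41 (IPv6-in-IPv4) has to be permitted in both directions at the v4 edge. A hedged IOS sketch, with a made-up ACL name and placeholder addresses:

--------------------------------------------------------

! Hypothetical inbound edge ACL fragment; 203.0.113.1 stands in for
! your tunnel endpoint. Without the protocol 41 line, return traffic
! from the relay is silently dropped.
ip access-list extended WAN-IN
 permit 41 any host 203.0.113.1
 remark rest of the edge policy goes here

--------------------------------------------------------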

The replacement for 6to4 relays is RFC 5569 "6rd" (rapid deployment), where the upstream ISP runs its own protocol 41 relay with native v6 routing back to its own AS.  This is primarily used for home deployments when the ISP last hop is still v4-only at the DSLAM or cable head end, even though the customer premises modem and the ISP backbone are already dual-stack.  The French ISP "Free" dual-stacked 1.5 million customers with v6 using 6rd in five weeks with only two engineers back in 2007; the "rapid" in the title isn't a joke.  The CPE router has to be provisioned with the address of the 6rd relay, of course.  6rd relays are stateless, like 6to4 relays, but don't require anycast, allow private v4 sources, and only add about 3 ms of latency with low jitter.  You do still lose 20 bytes off your IPv6 MTU for the protocol 41 encapsulation with a v4 header.
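
For comparison, a 6rd customer-edge tunnel in IOS looks roughly like this; the prefix, prefix lengths, and border-relay address are placeholder values your ISP would provision, and exact keywords vary by IOS release:

--------------------------------------------------------

! Hypothetical 6rd CE sketch; all values are ISP-provisioned placeholders.
interface Tunnel1
 description 6rd to ISP border relay
 ipv6 enable
 tunnel source GigabitEthernet0/0
 tunnel mode ipv6ip 6rd
 tunnel 6rd ipv4 prefix-len 8
 tunnel 6rd prefix 2001:DB8::/32
 tunnel 6rd br 203.0.113.1
!
ipv6 route ::/0 Tunnel1

--------------------------------------------------------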

Business users should insist that their ISP provide native IPv6 instead.

Hello James,

Thank you for your detailed explanation. FYI, we are not using an anycast 6to4 address on our IPv6 tunnel. Is that still considered okay to use? Our service provider hasn't given us a timeline for their own IPv6 migration, so we (as a client) still rely on a 6to4 tunnel for our network. Below is a snippet of our IPv6 tunnel configuration:

--------------------------------------------------------

interface Tunnel0
 no ip address
 no ip redirects
 load-interval 30
 ipv6 address 2002:ABA:F912::1/64
 ipv6 enable
 tunnel source 10.20.249.2
 tunnel mode ipv6ip 6to4
 tunnel path-mtu-discovery

----------------------------------------------------------
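
As a cross-check, a 6to4 prefix mechanically embeds the v4 tunnel source (RFC 3056). A worked sketch of that arithmetic for the snippet above:

--------------------------------------------------------

! 6to4 arithmetic: prefix 2002:AABB:CCDD::/48 embeds v4 source A.B.C.D.
!   tunnel source 10.20.249.2 -> hex 0A14:F902 -> 2002:A14:F902::/48
!   address 2002:ABA:F912::1  -> hex 0ABA:F912 -> source 10.186.249.18
! Both are in private 10.0.0.0/8 space, which no public 6to4 relay can
! return traffic to, per the NAT44 point above.

--------------------------------------------------------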

Thank you very much and regards,

Alex

Using a point-to-point tunnel while you wait for your ISP to go fully end-to-end native is exactly the recommended strategy.  Actually working with v6 puts you way ahead of the average organization.  A more official name for this is "6in4", Cisco's keywords notwithstanding.
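
For the record, a minimal 6in4 (manual point-to-point) sketch in IOS; all addresses are illustrative placeholders. The remote end mirrors this with the source and destination swapped:

--------------------------------------------------------

! Hypothetical 6in4 point-to-point tunnel; addresses are placeholders.
interface Tunnel1
 description 6in4 to remote site
 no ip address
 ipv6 address 2001:DB8:FFFF::1/64
 tunnel source 203.0.113.1
 tunnel destination 198.51.100.2
 tunnel mode ipv6ip

--------------------------------------------------------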

There are a lot of tunnel types that use protocol 41 encapsulation (throw a v4 header directed at an upstream gateway onto the front of a v6 payload), including 6in4, 6to4, 6over4, 6rd, etc. It's the automatic tunnels, now deprecated, that are the most problematic (ISATAP ignores your VLAN structure, 6to4 we discussed, 6over4 used completely insane amounts of multicast, ...).  The only survivors from that lot are 6in4 (point-to-point) and 6rd (within a single AS, often anycast).

Stuff people are actually using: ISPs may use 6PE (over a v4-only MPLS backbone), 6rd (last mile, anycast, mostly consumer), or 6in4 (last mile, point-to-point, mostly business) while they finish their end-to-end rollout.  Cellular providers with 4G LTE data networks in the US are trending toward 464XLAT (native v6 transport, with an extra v6 routing prefix for v4 traffic tunneled to a NAT64 CGN).
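
As a rough idea of what 6PE looks like on a provider-edge router (the AS number and neighbor address are placeholders, and MPLS/LDP is assumed to be already running):

--------------------------------------------------------

! Hypothetical 6PE sketch: v6 prefixes exchanged over an iBGP session,
! labeled and carried across the v4-only MPLS core.
router bgp 65000
 neighbor 192.0.2.2 remote-as 65000
 neighbor 192.0.2.2 update-source Loopback0
 !
 address-family ipv6
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 send-label

--------------------------------------------------------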

The problem with point-to-point tunnels, as you have discovered, is that you and your ISP have to manually configure the two endpoints so they know where each other lives; there is nothing automatic about them. That is not something a home user could do, nor something that scales to billions of users.  And as with any tunnel, you lose 20 bytes off the MTU and add a few milliseconds of latency going through gateway processing.

Somewhere around 2022 the v6 rollout is likely to be far enough along that the tier-1 backbone providers go v6-only, cutting router TCAM usage by about 75% in the process (v6 routes are 2x the size of v4 routes, but there are 7x fewer of them due to better aggregation).  At that point it will be the remnant legacy v4 traffic that has to worry about tunneling. v4 probably won't die until the mid-2030s.

Hi James,

Thank you for your explanation. Just to share what I did with IPv6 tunneling in my company:

Instead of configuring a long list of individual IPv6 tunnels for each remote site (since IPv6 6to4 mode obviously doesn't work) on our WAN router connected to our service provider's MPLS network, I configured a 'one-arm' router just for the IPv6 6to4 tunnel, connected directly to our WAN router. This method works well in our dual-stack environment, where all our remote sites are 'connected' via the IPv6 6to4 tunnel on our 'one-arm' router. The configuration snippet in our previous discussion is actually taken from our 'one-arm' router. A similar configuration also applies to each of our remote sites' routers.

Speaking of MTU issues, I did encounter one recently, and it was a pain to figure out what actually caused it until our team found that the IPv6 option was not enabled in our web proxy. Our web proxy filters web browsing from all remote sites on IPv4 but not on IPv6.

So far it is working quite well, and we are ironing out a few small technical issues before we gradually start our IPv6 transition for all remote sites.

MTU issues are more severe for IPv6 than for IPv4, because v6 routers don't fragment, so overlarge packets are simply dropped.  You tend to see that in v4 only when someone doing PMTU discovery deliberately sets the "don't fragment" flag in the IPv4 header.

Reliability of ICMPv6 "packet too big" feedback is mediocre, as some folks overlook the need to allow it in through their firewalls, tunneled v6 in v4 probably can't generate it in the first place (e.g. if the ISP is using 6PE over MPLS), and overloaded ISP and internet exchange routers tend to preferentially rate-limit and/or drop ICMP.
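
If you filter ICMPv6 at the edge, at minimum let packet-too-big through; a hedged IOS sketch with a made-up ACL name:

--------------------------------------------------------

! Hypothetical v6 edge ACL fragment: always allow PMTUD feedback.
ipv6 access-list V6-EDGE-IN
 permit icmp any any packet-too-big
 remark remaining edge policy goes here

--------------------------------------------------------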

One controversial recommendation is to configure dual stack servers with an MTU of 1280 (the v6 minimum) to help avoid the packet-too-big issue for unfortunately located clients.  Once the world's IP backbone is full end-to-end native v6 this will become moot.
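
On a server that MTU is an OS setting; the analogous clamp on an IOS interface or tunnel would be something like this (interface name illustrative):

--------------------------------------------------------

! Hypothetical clamp to the IPv6 minimum MTU.
interface Tunnel0
 ipv6 mtu 1280

--------------------------------------------------------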

Thank you very much for your helpful and informative replies. I appreciate it very much.

regards,

Alex
