IPv6 not working over dot1q-tunnel ?

ralfw
Level 1

Hello,

 

we have a dot1q tunnel to our co-location
and can ping the servers in the co-location
over IPv4, but not over IPv6. The servers
get an IPv6 address through the dot1q tunnel
from the router interface via stateless
address autoconfiguration, but I can't reach
the servers over IPv6. IPv6 works between
servers on the same side of the tunnel.

 

I was thinking a dot1q tunnel is layer 2,
so whether the payload is IPv4 or IPv6
shouldn't matter?

 

Is there something special to configure for
IPv6 through a dot1q tunnel?


Regards
Ralf

 

32 Replies

Traceroute stops immediately, since the router interface and
the server are in the same subnet / VLAN.

 

root@p7920b:~# traceroute6 -n 2001:638:XXX:14D1:21C:B1FF:FEAC:BC00
traceroute to 2001:638:XXX:14D1:21C:B1FF:FEAC:BC00 (2001:638:XXX:14d1:21c:b1ff:feac:bc00) from 2001:638:XXX:14d5:7aac:44ff:fe36:5a40, 30 hops max, 24 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 * * *
7 * * *
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * *

 

root@p7920b:~# ifconfig eno3
eno3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.22.144.42 netmask 255.255.240.0 broadcast 172.22.159.255
inet6 fe80::7aac:44ff:fe36:5a40 prefixlen 64 scopeid 0x20<link>
inet6 2001:638:XXX:14d5:7aac:44ff:fe36:5a40 prefixlen 64 scopeid 0x0<global>
ether 78:ac:44:36:5a:40 txqueuelen 1000 (Ethernet)
RX packets 21155291 bytes 2167918365 (2.1 GB)
RX errors 0 dropped 399743 overruns 0 frame 0
TX packets 22372880 bytes 9335174214 (9.3 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x92a00000-92afffff


root@p7920b:~# netstat -6 -rn
Kernel IPv6 routing table
Destination Next Hop Flag Met Ref Use If
2001:638:XXX:14d5::/64 :: Ue 1024 29 0 eno3
fe80::/64 :: U 256 1 0 ovsbr0
fe80::/64 :: U 256 1 0 eno3
::/0 fe80::21c:b1ff:feac:bc00 UG 1024 88 0 eno3
2001:638:XXX:14d5:7aac:44ff:fe36:5a40/128 :: Un 0 25 0 eno3
fe80::68e7:9dff:fee6:ad41/128 :: Un 0 3 0 ovsbr0
fe80::7aac:44ff:fe36:5a40/128 :: Un 0 12 0 eno3
ff00::/8 :: U 256 90 0 ovsbr0
ff00::/8 :: U 256 74 0 eno3
::/0 :: !n -1 1 0 lo
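
Since the router interface and the server are in the same subnet, reachability here comes down to IPv6 neighbor discovery. A quick check worth running on the server (a sketch, reusing the gateway's link-local address from the routing table above):

ip -6 neigh show dev eno3
ping -c 3 fe80::21c:b1ff:feac:bc00%eno3

The first command lists the neighbor cache; the gateway should appear as REACHABLE or STALE, not FAILED or INCOMPLETE. The second forces neighbor resolution by pinging the gateway's link-local address directly.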

 

Hello,

 

on both routers, configure a default route:

 

ipv6 route ::/0 ipv6-addr

 

I know it shouldn't be necessary, but since the normal routing process does not work...
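
For example, with a placeholder next-hop address (note also that ipv6 unicast-routing must be enabled before IOS routes any IPv6 at all):

ipv6 unicast-routing
ipv6 route ::/0 2001:DB8::1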

There is only one router on the head office side, and that router has a default IPv6 route. We only extend our VLANs over the dot1q tunnel. On both customer switches, the ports connected to the ports of the tunnel switches are configured as trunk ports. From the customer's point of view, the head office sees the switch at the co-location directly. For IPv4 everything works fine; I can connect to servers in different VLANs over the tunnel. For IPv6 it seems only autoconfiguration is working, since the servers get IPv6 addresses over the tunnel from the router interfaces in those VLANs.
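
For reference, the tunnel-side ports in a QinQ setup like this are typically configured along these lines (a sketch; the interface and VLAN are placeholders):

interface GigabitEthernet0/1
 switchport access vlan 100
 switchport mode dot1q-tunnel
 l2protocol-tunnel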

 

Regards

Ralf

 

 

Hello,

 

can you post a brief schematic drawing of your topology, showing how everything is connected? I have a feeling we are missing something basic.

 

Are there any ISPs involved at all in the dot1q tunnel link ?

Yes, we have a rented dedicated 10 Gb link between the two locations. I don't know the details of that link, whether it is a purely passive fiber link or whether there is something active in the path; allegedly there is. The distance between the two locations is a few kilometers within the same city. The links between CSw1/TSw1 and TSw2/CSw2 are passive fiber links.


[ CSw1 ]---[ TSw1 ] +++ [ TSw2 ]---[ CSw2 ]

--- passive fiber link
+++ rented dedicated link

 

Hello,

 

although it is not very probable that the ISP (or whoever you rent these lines from) causes this specific issue, I would ask them anyway, just to be sure.

I guess the ISP will say it should work. As an alternative we could try to tunnel IPv6 in IPv4, or use a dedicated VLAN and test whether IPv6 works in that VLAN. But if the dot1q tunnel worked for IPv6 too, that would be the best solution.
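
Should it come to the IPv6-in-IPv4 fallback, a manual 6in4 tunnel on IOS would look roughly like this (all addresses are placeholders from the documentation ranges, assuming the two routers can reach each other over IPv4):

interface Tunnel0
 ipv6 address 2001:DB8:FFFF::1/64
 tunnel source 192.0.2.1
 tunnel destination 192.0.2.2
 tunnel mode ipv6ip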

 

Regards

Ralf

 

Hello,

 

I have done some further research, and it looks like the dual-stack sdm template may be necessary for IPv6 support. I am not sure if that has already been mentioned, but are any IPv6 commands at all available on the dot1q switches?
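
On platforms that support SDM templates (e.g. Catalyst 3560/3750; the template names differ per platform), checking and switching the template would look roughly like this; a reload is required before the new template takes effect:

show sdm prefer
configure terminal
sdm prefer dual-ipv4-and-ipv6 default
end
reload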

All switches know IPv6 commands, but one of the tunnel switches does not know sdm at all, and the other one has only the access template. If the sdm dual-stack template is necessary for IPv6 support over the dot1q tunnel, then that's the problem. I guess sdm support is a hardware feature, so I can't work around it with a different IOS image?

 

Regards

Ralf

 

Hello,

 

I cannot find any consistent information on whether the sdm template is really necessary...

 

I did find an interesting article about MTU size in IPv6 (see the link attached); maybe it is related to your issue.

 

--> All IPv6 networks must support an MTU size of 1,280 bytes or greater (RFC 2460). This is because IPv6 routers do not fragment IPv6 packets on behalf of the source. IPv6 routers drop the packet and send back an ICMPv6 Type 2 packet (Packet Too Big) to the source indicating the proper MTU size. It then falls on the shoulders of the source to perform the fragmentation itself and cache the new reduced MTU size for that destination so future packets use the correct MTU size.

 

https://www.networkworld.com/article/2224654/mtu-size-issues.html

 

QinQ adds 8 bytes to the Ethernet frame (two 802.1Q tags at 4 bytes each). Can you do an (IPv4) ping between both endpoints to find out the maximum packet size supported on this link? You can try from a DOS prompt and increase the packet size (-l) until the message 'Packet needs to be fragmented but DF set' appears...
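
On Linux the same probe works with ping's -M do flag, which forbids fragmentation. For IPv6, the data size plus the 40-byte IPv6 header and the 8-byte ICMPv6 header gives the packet size, so 1452 data bytes correspond to a 1500-byte packet (server address reused from the traceroute above):

ping -6 -M do -c 3 -s 1452 2001:638:XXX:14D1:21C:B1FF:FEAC:BC00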

 

 

Looks like 1472 bytes is the maximum ICMP payload before packets get fragmented (1472 data bytes plus 28 bytes of IP and ICMP headers make a 1500-byte packet).

 

root@op7040:~# ping -M do -c 3 -s 1472 172.22.144.41
PING 172.22.144.41 (172.22.144.41) 1472(1500) bytes of data.
1480 bytes from 172.22.144.41: icmp_seq=1 ttl=63 time=0.575 ms
1480 bytes from 172.22.144.41: icmp_seq=2 ttl=63 time=0.627 ms
1480 bytes from 172.22.144.41: icmp_seq=3 ttl=63 time=0.629 ms

--- 172.22.144.41 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2036ms
rtt min/avg/max/mdev = 0.575/0.610/0.629/0.025 ms

 

root@op7040:~# ping -M do -c 3 -s 1473 172.22.144.41
PING 172.22.144.41 (172.22.144.41) 1473(1501) bytes of data.
ping: local error: message too long, mtu=1500
ping: local error: message too long, mtu=1500
ping: local error: message too long, mtu=1500

--- 172.22.144.41 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2044ms

 

 

Hello,

 

it might be worth trying to set the MTU to 1508 (the standard Ethernet payload size plus the 8 bytes required for QinQ's two tags). I don't see how this setting would impact only IPv6 traffic (since IPv4 works without any problems), but give it a try anyway.
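
On many Catalyst models the relevant knob is the global system mtu command (a sketch; the exact syntax and the supported maximum differ by platform, and a reload is typically required before the new value takes effect):

configure terminal
system mtu 1508
end
reload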

I think we have an MTU of 1504 on one of the tunnel endpoints; I thought the dot1q tunnel adds only 4 bytes. I will ask my colleague to increase the MTU on that tunnel endpoint to 1508. The other tunnel endpoint is already set to 9216.

 

Regards

Ralf

 

Maybe that is the problem: the customer ports are configured as trunk ports and not as normal uplink ports, so the frames arrive already 802.1Q-tagged and the tunnel tag adds another 4 bytes on top (1500 + 4 + 4 = 1508). I will report back.

 

Regards

Ralf

 

The tunnel interfaces have MTU sizes of 9216 and 9198, the customer interfaces 1500. Or should the customer interfaces' MTU be greater too?

 

Regards

Ralf

 
