01-02-2018 03:28 PM - edited 03-05-2019 09:42 AM
Hello,
I am encountering a strange issue during a setup of a new ASR9K router.
Simplified illustration of the nodes:
(PE-A) ------ (PE-B) ------ (PE-C)
1.1.1.1       2.2.2.2       3.3.3.3
I have established RSVP tunnels from A to B, and from B to A.
Traffic to C goes from A to B, and then from B to C over LDP.
Traffic flowing in the direction C to B to A works well.
Traffic from A to B also works well, but traffic from A to C does not work when the TE tunnels between A and B are up.
From my checks, it's clear that LDP labels are not passing through the tunnels, and I'm missing the outgoing labels.
Before establishing the TE tunnels, traffic was passing between all 3 nodes.
Meaning, LDP labels are in place.
I have added the tunnels at the head-end LSR to the "mpls ldp" hierarchy.
I'm not seeing what I'm missing here.
Kindly suggest further checks that I could do.
I replaced the real IPs in Notepad for privacy reasons, but other than that, the output is real.
The following output is all from PE-A alone and contains the LDP and RSVP sessions and settings towards PE-B.
mpls ldp
 router-id 1.1.1.1
 !
 interface tunnel-te5247
 !
 interface tunnel-te5747
 !
 interface TenGigE0/0/2/1.2552
 !
 interface TenGigE0/0/2/2.2557
Tunnel example:
======================
interface tunnel-te5247
 description TO_PE-B
 ipv4 unnumbered Loopback0
 load-interval 30
 signalled-bandwidth 0
 load-share 750
 autoroute announce
 !
 destination 2.2.2.2
 path-option 1 explicit name EP_TO_PE-B
With LDP Alone (tunnels are shutdown):
=====================================
PE-A#sh mpls forwarding prefix 3.3.3.3/32 det
Tue Jan 2 22:50:19.660 UTC
Local Outgoing Prefix Outgoing Next Hop Bytes
Label Label or ID Interface Switched
------ ----------- ------------------ ------------ --------------- ------------
24057 24000 3.3.3.3/32 Te0/0/2/1.2552 10.46.0.213 7443889704
Updated: Dec 30 15:34:55.590
Version: 4045, Priority: 3
Label Stack (Top -> Bottom): { 24000 }
NHID: 0x0, Encap-ID: N/A, Path idx: 0, Backup path idx: 0, Weight: 0
MAC/Encaps: 18/22, MTU: 8986
Packets Switched: 7131894
24000 3.3.3.3/32 Te0/0/2/2.2557 10.46.0.217 3516632056
Updated: Dec 31 17:14:05.727
Version: 4045, Priority: 3
Label Stack (Top -> Bottom): { 24000 }
NHID: 0x0, Encap-ID: N/A, Path idx: 1, Backup path idx: 0, Weight: 0
MAC/Encaps: 18/22, MTU: 8986
Packets Switched: 4441161
With TE up (tunnels unshut):
=============================
RP/0/RSP0/CPU0:PE-A#sh mpls forwarding prefix 3.3.3.3/32 det
Tue Jan 2 22:55:52.704 UTC
Local Outgoing Prefix Outgoing Next Hop Bytes
Label Label or ID Interface Switched
------ ----------- ------------------ ------------ --------------- ------------
24057 Unlabelled 3.3.3.3/32 tt5247 2.2.2.2 0
Updated: Jan 2 22:54:45.001
Version: 4349, Priority: 3
Label Stack (Top -> Bottom): { Imp-Null Unlabelled }
NHID: 0x0, Encap-ID: N/A, Path idx: 0, Backup path idx: 0, Weight: 0
MAC/Encaps: 18/18, MTU: 8986
Packets Switched: 0
Unlabelled 3.3.3.3/32 tt5747 2.2.2.2 7108
Updated: Jan 2 22:54:45.001
Version: 4349, Priority: 3
Label Stack (Top -> Bottom): { Imp-Null Unlabelled }
NHID: 0x0, Encap-ID: N/A, Path idx: 1, Backup path idx: 0, Weight: 0
MAC/Encaps: 18/18, MTU: 8986
Packets Switched: 172
RP/0/RSP0/CPU0:PE-A#sh mpls forwarding prefix 2.2.2.2/32 det
Tue Jan 2 22:56:47.603 UTC
Local Outgoing Prefix Outgoing Next Hop Bytes
Label Label or ID Interface Switched
------ ----------- ------------------ ------------ --------------- ------------
24008 Pop 2.2.2.2/32 tt5247 2.2.2.2 414
Updated: Jan 2 22:54:45.001
Version: 4347, Priority: 3
Label Stack (Top -> Bottom): { Imp-Null Imp-Null }
NHID: 0x0, Encap-ID: N/A, Path idx: 0, Backup path idx: 0, Weight: 0
MAC/Encaps: 18/18, MTU: 8986
Packets Switched: 5
Pop 2.2.2.2/32 tt5747 2.2.2.2 0
Updated: Jan 2 22:54:45.001
Version: 4347, Priority: 3
Label Stack (Top -> Bottom): { Imp-Null Imp-Null }
NHID: 0x0, Encap-ID: N/A, Path idx: 1, Backup path idx: 0, Weight: 0
MAC/Encaps: 18/18, MTU: 8986
Packets Switched: 0
RP/0/RSP0/CPU0:PE-A#sh mpls interfaces
Tue Jan 2 22:57:12.589 UTC
Interface LDP Tunnel Static Enabled
-------------------------- -------- -------- -------- --------
tunnel-te5247 Yes No No Yes
tunnel-te5747 Yes No No Yes
TenGigE0/0/2/1.2552 Yes Yes No Yes
TenGigE0/0/2/2.2557 Yes Yes No Yes
RP/0/RSP0/CPU0:PE-A#sh mpls ldp bindings 3.3.3.3/32
Tue Jan 2 23:04:06.598 UTC
3.3.3.3/32, rev 171
Local binding: label: 24057
Remote bindings: (2 peers)
Peer Label
----------------- ---------
2.2.2.2:0 24000
4.4.4.4:0 24003
Under "sh mpls ldp forwarding detail" all ldp sessions going through the TE tunnels are down.
01-03-2018 08:56 PM
Hi,
What version of XR are you running? Could you provide the output of "show mpls ldp discovery" from PE-A and PE-B? Could you try configuring the following on both PE-A and PE-B and see if it fixes the issue:
mpls ldp discovery targeted-hello accept
Regards,
01-04-2018 01:42 AM
Hello,
I appreciate the answers of all of you, guys.
PE-A and PE-B are ASR9000 routers running Version 6.1.4, and PE-C is an ASR1K running 03.13.03.S.
I also tried the command "mpls ldp discovery targeted-hello accept", but I don't have the "accept" part.
I have:
PE-A(config-ldp)#discovery hello ?
holdtime Hello holdtime
interval Hello interval
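It's possible that on this XR release the accept option lives under the address-family instead of directly under mpls ldp. Something like the following, though I haven't verified it on 6.1.4:
mpls ldp
 address-family ipv4
  discovery targeted-hello accept
 !
!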
However, I did manage to solve it by "accident".
It appears that not all TE tunnels from router PE-B facing PE-A were also under mpls ldp.
Adding all of them resolved the original issue (labels of PE-C not appearing on PE-A when TE facing PE-B was up).
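For reference, the relevant part of PE-B's config now looks roughly like this, with every TE tunnel facing PE-A listed (the tunnel numbers below are placeholders, sanitized like the IPs):
mpls ldp
 router-id 2.2.2.2
 !
 interface tunnel-te6247
 !
 interface tunnel-te6747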
I'm not sure why. Could it be that the label signaling from B to A was passing via RSVP tunnels which were not carrying them?
Could you explain the logic?
01-04-2018 06:16 AM
Hello yevgenl1,
thanks for your kind remarks
>> I appears that not all TE tunnels from router PE-B facing PE-A were also under mpls ldp.
this was the root cause
>>
I'm not sure why. Could it be that the label signaling from B to A was passing via RSVP tunnels which were not carrying them?
Could you explain the logic?
The logic is that, when the LSR node performs LDP tunneling, it might choose a tunnel that is not properly configured, because MPLS operations are quite "brute force".
For example, connectivity in an MPLS L3 VPN can be broken if two IGP equal-cost paths exist between PE nodes, one with MPLS LDP enabled and the other with MPLS disabled.
You are running a quite new IOS XR (6.1.4) that might not need the command with the accept option anymore.
However, for correct LDP tunneling operation, all the tunnels have to be listed within the mpls ldp config section, because the choice of the exit tunnel is based on the LDP label value of the MPLS frame to be tunneled. Traffic carried inside the LDP-signaled label switched path might not be inspected more deeply.
Again, to make a comparison: Juniper LDP tunneling requires the tunnel interface to be configured under both the [edit protocols ldp] and [edit mpls] hierarchies.
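For instance, a minimal Junos sketch of LDP tunneling over an RSVP-signaled LSP might look like this (LSP name and addresses are illustrative, not from a real device):
protocols {
    mpls {
        label-switched-path LSP-to-PE-B {
            to 2.2.2.2;
            /* carry LDP-signaled LSPs inside this RSVP-TE LSP */
            ldp-tunneling;
        }
    }
    ldp {
        /* run LDP on the loopback so a targeted session forms over the LSP */
        interface lo0.0;
    }
}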
This helps the backbone nodes save memory and allows the LDP-signaled LSPs to travel over an inner core made of MPLS traffic-engineered, RSVP-TE-signaled LSPs.
This is a scalability technique.
Hope to help
Giuseppe
01-04-2018 12:41 PM
Thank you, Giuseppe.
Your detailed answer helps a lot with my understanding of how this works.