
Segment Routing + ECMP

Dear all,

I hope you are all doing well.

I have a network topology of 8 routers in GNS3, using IOS XRv 6.1.3.

I configured OSPF and then Segment Routing. When testing link performance I found a large increase in delay and latency after enabling Segment Routing.

I suspect that ECMP may not be working.

Can anyone help with this problem?

Your support is much appreciated.



Giuseppe Larosa
Hall of Fame

Hello Muhammad,

first of all, you are using IOS XRv in GNS3 and not real routers.

You have enabled Segment Routing and you see an increase in round-trip times when you issue a ping.

This can be caused by limitations of the simulation environment or by some configuration issue.

Can you post the router configurations as text file attachments, together with a picture of the network diagram?

Be aware that there are some limitations in IOS XRv: for example, for L2VPN and VPLS services only the control plane is supported, and there is no connectivity at all between L2 CE nodes.

In your case you still have connectivity, just with higher delay, and this may be acceptable in an emulated environment.

About ECMP, you can verify whether it is still happening using the appropriate show commands, but it shouldn't be the root cause of the increased delay in your ping tests.
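As a quick sketch of those checks on IOS XR (10.0.0.8/32 is just a placeholder for one of your remote loopbacks; use a prefix reachable over the parallel links):

    RP/0/0/CPU0:R1# show route 10.0.0.8/32
      (should list more than one next hop if OSPF ECMP is in place)
    RP/0/0/CPU0:R1# show cef 10.0.0.8/32
      (shows the load-sharing paths actually programmed into forwarding)
    RP/0/0/CPU0:R1# show mpls forwarding prefix 10.0.0.8/32
      (confirms the prefix SID has more than one outgoing label/interface)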

 

Edit:

You can find training videos on Segment Routing on the Cisco Learning Network; see the link below.

Note: you need to register with CLN to access the content; CLN uses your Cisco CCO account.

https://learningnetwork.cisco.com/community/learning_center/ccie-sp-training-videos

 

 

 

Hope to help

Giuseppe

 

Hi Giuseppe,

Many thanks for your response.
I am also running iPerf tests between two hosts.

If the cause were a limitation of GNS3, it should also appear without Segment Routing.

I am also using a traffic engineering tunnel to steer traffic. I think the cause may be in the TE tunnel, but all links are similar and have the same bandwidth.

Hello Muhammad,

performing iperf tests over a GNS3 simulated network can be a nice exercise, but it tells you nothing about the performance of real devices.

With the introduction of Segment Routing you are likely to use a bigger MPLS stack (more labels) compared with operation where plain MPLS is used with OSPF.

The MPLS TE tunnel also uses its own label (or more than one if it is performing Fast ReRoute over a detour path).

So your user packets are sent with greater overhead, and this can impact the performance test results: if the iperf hosts can detect the reduced MTU on the path they can use a reduced TCP MSS; otherwise fragmentation can occur on the first-hop router before sending, and this can explain the increased delay and reduced throughput.
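As a rough worked example (assuming a 3-label stack, e.g. two SR SIDs plus the TE tunnel label; your actual stack depth may differ):

    default untagged Ethernet frame:  1500 bytes IP MTU + 14 bytes header = 1514 bytes
    3 MPLS labels:                    3 * 4 = 12 bytes
    a full 1500-byte IP packet now needs 1514 + 12 = 1526 bytes on the wire,
    which exceeds the default 1514-byte frame size, so either the TCP MSS must drop
    to about 1500 - 12 - 40 = 1448 bytes, or the first-hop router must fragment.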

But without seeing your configuration and how you have defined the SR paths using SID lists, it is difficult to say more.

In any case you can build a demo, but you cannot take performance results on GNS3 as indicative of what real routers running IOS XR will do with a similar configuration.

 

Hope to help

Giuseppe

 

Hi Giuseppe,
Many thanks again for your support.

Yes, of course GNS3 will behave differently from real routers,

but I can see the poor performance when comparing with and without SR, so I am testing under the same conditions.

And yes, exactly what you said is what I notice: with SR I can see the larger packets, and consequently the problems you described.

What I am trying to build is a proof of concept that shows the benefits of SR, but with this scenario I get negative results rather than positive ones :)

So what do you advise, and how can I solve this?

Kindly find the topology and configuration attached. Your support is much appreciated.


Hello Muhammad,

>> And yes, exactly what you said is what I notice: with SR I can see the larger packets, and consequently the problems you described.

To avoid MTU issues you need the following:

increase the MTU on all links from 1514 bytes (the default for untagged Ethernet) to 1514 + 8*4 = 1546 bytes

(this can accommodate up to 8 MPLS labels in the stack)

if available in IOS XR, also increase the MPLS MTU to 1546 bytes.

If the MPLS TE tunnel uses more than one label, use higher values if necessary; 1550 or 1554 bytes are probably better choices.
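A minimal configuration sketch of this on IOS XRv (the interface name is just an example; repeat it on every core-facing link in the SR domain, and note that on IOS XR the interface mtu includes the layer-2 header, so 1546 already leaves room for the label stack):

    RP/0/0/CPU0:R1(config)# interface GigabitEthernet0/0/0/0
    RP/0/0/CPU0:R1(config-if)# mtu 1546
    RP/0/0/CPU0:R1(config-if)# commit

and then verify that the new value took effect:

    RP/0/0/CPU0:R1# show interfaces GigabitEthernet0/0/0/0 | include MTU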

 

Note that on real routers some linecards may have issues with so many MPLS labels in the stack, since before the introduction of SR most implementations used a maximum of 5 labels.

 

Hope to help

Giuseppe

 

Hi Giuseppe,

Many thanks!

Increasing the MTU solved the problem.
I increased it to 2000 bytes.

Now I have the model working and can show the difference in jitter and losses during a link congestion scenario.

The results show a great benefit of SR.

I'd like to discuss further which other benefits of SR I can demonstrate on the same network.
That would help me, as I am writing my master's thesis on SR.

Thanks again, much appreciated.

 

Hi Giuseppe,

Another question, regarding the iPerf test:

When testing bandwidth up to 512 Kbps everything is fine and I can see the transferred packets are about 64K, but when I increase the bandwidth under test I notice the packet size increases, and that affects the test.

Any help with that?

Kindly find the examples below.

 

Iperf 512k.jpg

IPERF 8 Mb.jpg

Hello Muhammad,

from the screen captures we see that the result is 976 KB transferred in 60 seconds.

Note that a TCP segment greater than the MTU is sent in multiple IP packets by the TCP sender.

976 KB = 976 * 1024 = 999424 bytes sent in 122 TCP segments means:

TCP segment size = 999424 / 122 = 8192 bytes, but at layer 3 these are sent in multiple IP packets.

This is not IP fragmentation.

However, you should look at the iperf documentation:

https://iperf.fr/iperf-doc.php

There should be a flag to set the TCP MSS to use.

https://www.openmaniak.com/iperf.php

It should be -M <value>; see the links above.
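For example, a minimal sketch (the server address 10.1.1.2, the duration and the MSS value are placeholders; adjust them to your setup):

    # on the receiving host
    iperf -s

    # on the sending host: TCP test with an explicit MSS that leaves room for the label stack
    iperf -c 10.1.1.2 -t 60 -M 1400

Note that -M applies to TCP only; for UDP tests the datagram size is controlled with -l instead (e.g. iperf -c 10.1.1.2 -u -b 8M -t 60 -l 1400).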

 

Hope to help

Giuseppe

 

Thanks, Giuseppe, for your continuous support.

Yes, I am trying to set the MSS, but I have not been able to find the right value.

Also, does that apply to TCP only? I am also using UDP in the test (-u).

I am testing before and after Segment Routing, and I am using 2 labels for Segment Routing.

I'd appreciate your support.

Thanks
