324 Views · 2 Helpful · 6 Replies

CEF load sharing between two VTI tunnels on Catalyst Edge 8300

Flang3r
Level 1

Hello.

I have set up two IPsec VTI tunnels to AWS with equal-cost routing and need to somehow utilize both links for egress traffic. Since CEF inserts tunnel interfaces as point-to-point entries in its adjacency table and then forwards out the physical interface the tunnel is sourced from, I'm stuck with one tunnel fully saturated (AWS's 1.25 Gbps per-tunnel limit) while the other sits almost idle, due to the nature of CEF's universal load-sharing algorithm that is currently in use.
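
To illustrate why this happens, here is a toy Python sketch of per-destination hashing. This is not Cisco's actual implementation; the fixed ID value, the XOR formula, and the bucket layout are simplified assumptions, but it shows the key property: every packet of a given source/destination pair hashes to the same bucket, so one heavy flow pins to one tunnel.

```python
# Toy model of CEF per-destination (universal) load sharing.
# NOT Cisco's real algorithm -- a simplified SRC/DST/UID XOR for illustration.
import ipaddress

UNIVERSAL_ID = 0x1234ABCD  # per-router fixed ID (assumed value)
# 16 hash buckets alternating over 2 equal-cost paths, as in the path walk output
BUCKETS = ["Tunnel30", "Tunnel31"] * 8

def bucket_for(src: str, dst: str) -> str:
    """Pick the egress tunnel for a src/dst pair, per-destination style."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    h = (s ^ d ^ UNIVERSAL_ID) & 0xFFFFFFFF
    return BUCKETS[h % len(BUCKETS)]

# The same src/dst pair always selects the same bucket, hence the same tunnel;
# a single high-volume flow can therefore saturate one VTI while the other idles.
first = bucket_for("10.0.0.5", "172.31.10.9")
second = bucket_for("10.0.0.5", "172.31.10.9")
assert first == second
```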

The problem is that I can't enable per-packet load balancing on tunnel interfaces: it turns out it's not supported, at least on the C8300 series that I use (in autonomous mode), and only the per-destination command is present under the interface, which is enabled by default anyway.

I found a global CEF command with a tunnel keyword:

RTR(config)#ip cef load-sharing algorithm ?      
  dpi            Deep Packet Inspection
  include-ports  Algorithm that includes layer 4 ports
  original       Original algorithm
  src-only       Algorithm that uses Src Addr only
  tunnel         Algorithm for use in tunnel only environments
  universal      Algorithm for use in most environments

RTR(config)#ip cef load-sharing algorithm tunnel ?
  <1-FFFFFFFF>  Fixed ID
  <cr>          <cr>

The universal algorithm is the default and is currently in use. Does anyone know what exactly changes from the SRC/DST/UID XOR when it's switched to the tunnel algorithm? Will this affect CEF behavior globally for both tunneled and non-tunneled traffic, or only tunneled? If this is not the way, what other means of manipulation would you advise, apart from destination prefix splitting?
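
For reference, switching algorithms is a single global command; the fixed ID argument is optional (syntax as shown in the CLI help above). Treat this as a sketch to test in a maintenance window, since it changes load-sharing behavior router-wide:

```
RTR(config)# ip cef load-sharing algorithm tunnel
! or with an explicit fixed ID:
RTR(config)# ip cef load-sharing algorithm tunnel 1
! revert to the default:
RTR(config)# ip cef load-sharing algorithm universal
```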

1 Accepted Solution


Flang3r
Level 1

After assessing this router's network environment, I decided to give the tunnel algorithm a try, and it solved my problem.

With default universal algorithm:
https://i.imgur.com/ZTBqEV7.png

With tunnel algorithm enabled:
https://i.imgur.com/LwcZ1qy.png


Interestingly, the CEF path walk output still indicates per-destination with 16 hash buckets, but I guess it works in a per-packet fashion for next hops that are marked as tunnel interfaces in the CEF adjacency.

path list 7CF399872788, 6 locks, per-destination, flags 0x269 [shble, rif, rcrsv, hwcn, bgp]
ifnums:
Tunnel30(30)
Tunnel31(31)
2 paths
path 7CF39C0E3D30, share 1/1, type recursive, for IPv4
recursive via 169.254.211.33[IPv4:Default], fib 7CF39B037E08, 1 terminal fib, v4:Default:169.254.211.33/32
path list 7CF3998726D0, 2 locks, per-destination, flags 0x69 [shble, rif, rcrsv, hwcn]
path 7CF39C0E3C60, share 1/1, type recursive, for IPv4, flags [dsnt-src-via, cef-intnl]
recursive via 169.254.211.32/30<nh:169.254.211.33>[IPv4:Default], fib 7CF39BCAA228, 1 terminal fib, v4:Default:169.254.211.32/30
path list 7CF399872C90, 3 locks, per-destination, flags 0x49 [shble, rif, hwcn]
path 7CF39C0E4620, share 1/1, type connected prefix, for IPv4
connected to Tunnel30, IP midchain out of Tunnel30 7CF396CC6438
path 7CF39C0E3E00, share 1/1, type recursive, for IPv4
recursive via 169.254.252.121[IPv4:Default], fib 7CF39B02CDA8, 1 terminal fib, v4:Default:169.254.252.121/32
path list 7CF399872618, 2 locks, per-destination, flags 0x69 [shble, rif, rcrsv, hwcn]
path 7CF39C0E3B90, share 1/1, type recursive, for IPv4, flags [dsnt-src-via, cef-intnl]
recursive via 169.254.252.120/30<nh:169.254.252.121>[IPv4:Default], fib 7CF39B409B38, 1 terminal fib, v4:Default:169.254.252.120/30
path list 7CF3998728F8, 3 locks, per-destination, flags 0x49 [shble, rif, hwcn]
path 7CF39C0E43B0, share 1/1, type connected prefix, for IPv4
connected to Tunnel31, IP midchain out of Tunnel31 7CF396CC6208
1 output chain
chain[0]: loadinfo 80007CF39980A2D8, per-session, 2 choices, flags 0003, 6 locks
flags [Per-session, for-rx-IPv4]
16 hash buckets
< 0 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 1 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 2 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 3 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 4 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 5 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 6 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 7 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 8 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
< 9 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
<10 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
<11 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
<12 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
<13 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
<14 > IP midchain out of Tunnel30 7CF396CC6438
Platform adj-id: 0xF80001E6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
<15 > IP midchain out of Tunnel31 7CF396CC6208
Platform adj-id: 0xF80001F6, 0x0, tun_qos_dpidx:0

IP adj out of BDI320, addr 203.0.113.2 7CF396CC6F28
Subblocks:
None


6 Replies

Torbjørn
Spotlight

The documentation states the following: "Tunnel algorithm - The tunnel algorithm is designed to balance the per-packet load when only a few source and destination pairs are involved." Your situation sounds like a scenario where enabling this could make sense.

Changing this will alter global CEF behavior. I can't find specifics about exactly what is different about the tunnel algorithm, but it seems to take source/destination IP out of the equation.

Does the router handle other traffic over multiple paths/where load-sharing algorithm is important? If in doubt, I would probably schedule a maintenance window with time to both make the change and observe how the traffic pattern changes.

Happy to help! Please mark as helpful/solution if applicable.
Get in touch: https://torbjorn.dev

Joseph W. Doherty
Hall of Fame

"If this is not the way, what other means of manipulation would you advise, . . ."

Possibly PfR (Performance Routing). Unlike CEF, it does dynamic load balancing.

Joseph W. Doherty
Hall of Fame

"The problem is that I can't enable per packet load balance on tunnel interfaces, . . ."

Possibly that's actually a good thing, as per-packet load balancing often comes with a bunch of "gotchas".

Flang3r
Level 1

This router is essentially a border BGP box that unfortunately also has to handle several high-bandwidth cloud VPN tunnels. Since multipath is globally enabled in the BGP process, regular L3 traffic also traverses ECMP in some critical segments, so changing global CEF behavior without a clear understanding of the algorithm seems risky to me.

I'd understand the "gotchas" of per-packet load balancing if the paths differed in quality, with out-of-order TCP degradation etc., but both tunnels share exactly the same underlay; it's just the AWS bandwidth limitation of 1.25 Gbps per tunnel that I'm trying to work around. I really don't understand why they would remove support for a functioning part of CEF that could be controlled on a per-interface basis...

Thanks for the PfR tip! I hadn't considered it at all. Can I still use it if I have no control over the routing entities on the AWS side? What you can do in the AWS console is quite limited, certainly nothing like PfR configuration.

If we're considering just egress, I suspect PfR and AWS would be "blind" to each other.

What PfR does for egress is monitor interface loading and shuffle individual flows between egress ports, by inserting/removing destination routes, to attempt to meet load-balancing ratios (i.e. it's not limited to just ECMP).

I don't know for a fact that PfR is available on an 8300. If it is, besides the two VTIs it could also manage your other ECMP paths, and would do so better, because again, it actually does dynamic load balancing, not just static round-robin flow load balancing.

Also, if it is available, I believe it may now require its own feature license.
