Multipod IPN Configuration with GRE over IPSec

john.dejesus
Level 1

I'd like to verify whether GRE over IPsec can be set up between the IPN devices. My concern is fragmentation. The ACI Multi-Pod White Paper states:

 

"Increased MTU support: Since VXLAN data-plane traffic is exchanged between Pods, the IPN must ensure to be able to support an increased MTU on its physical connections, in order to avoid the need for fragmentation and reassembly. Before ACI release 2.2, the spine nodes were hard-coded to generate 9150B full-size frames for exchanging MP-BGP control plane traffic with spines in remote Pods. This mandated support for that 9150B MTU size across all IPN devices. From ACI release 2.2, a global configuration knob has been added on APIC to allow proper tuning of the MTU size of all the control plane packets generated by ACI nodes (leaf and spines), including inter-Pod MP-BGP control plane traffic. This essentially implies that the MTU support required in the IPN becomes solely dependent on the maximum size of frames generated by the endpoints connected to the ACI leaf nodes (that is, it must be 50B higher than that value)."

 

My APIC version is 4.2, so based on the statement above, can I tune down the default Control Plane MTU in the APIC from 9000 bytes to a lower value to accommodate the overhead of the VXLAN, MP-BGP EVPN, GRE, and IPsec headers? I will be using the ASR-1001-HX as the IPN device. I configured the tunnel MTU to 9000, and considering the overhead, 9000 is not enough, so if I tune down the APIC's control plane MTU this might work. Also, the maximum value I can set for tcp adjust-mss is only 1460; I'm not sure whether this is a hardware or software limitation.
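For context, this is roughly the kind of GRE-over-IPsec tunnel I am testing between the two IPN routers (the addresses, interface names, and the IPsec profile/transform-set names are just placeholders, and the IKE/keying configuration is omitted):

crypto ipsec transform-set IPN-TSET esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile IPN-IPSEC
 set transform-set IPN-TSET
!
interface Tunnel100
 ip address 192.168.100.1 255.255.255.252
 ip mtu 9000
 ip tcp adjust-mss 1460
 tunnel source TenGigabitEthernet0/0/0
 tunnel destination 203.0.113.2
 tunnel mode gre ip
 tunnel protection ipsec profile IPN-IPSEC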

 

Does anyone have a similar setup, or has anyone tested this with actual production traffic?

 

Thank you in advance.

2 Replies

joezersk
Cisco Employee

You have two challenges to overcome. 

First, you can, in fact, reduce the Control Plane MTU of ACI as a whole under System > System Settings > Control Plane MTU. The default in 4.2 is 9000 bytes. Pick something smaller that works for you. Your overhead will come from GRE (roughly 24 bytes) plus IPsec (roughly 50-60 bytes). There is no encap overhead for BGP itself: the ACI control plane is not encapsulated in VXLAN between pods, it is just standard IP routing here.
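To put rough numbers on it (the IPsec figure varies with the transform set, tunnel vs. transport mode, and padding, so treat this as an estimate):

 usable inner packet size ≈ path MTU between the IPN routers - GRE (~24 B) - IPsec ESP (~50-60 B)
 e.g. with a 9000-byte path: 9000 - 24 - 60 ≈ 8916 bytes

So a Control Plane MTU of, say, 8900 would keep the spines' inter-pod MP-BGP packets under that limit with a little margin to spare.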

You might have a second problem to overcome. You definitely never want to fragment VXLAN packets in the data plane. If any host in your fabric uses jumbo frames and tries to send them across your IPN, it either won't work or will work very poorly because of fragmentation. There is not much you can usefully do here; these are the rules of TCP/IP and VXLAN as industry standards. For TCP you could look into Path MTU Discovery to adjust the MSS automatically. For hosts that don't support that, it would be a manual exercise, and doing anything manual for this sort of thing is a losing game. You also don't have a good solution for UDP traffic in that situation.
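On your adjust-mss question: ip tcp adjust-mss only rewrites the MSS option in TCP SYNs passing through the router, and since (as you found) the configurable value tops out at 1460, it is really aimed at standard 1500-byte paths rather than at clamping jumbo-frame TCP sessions. As a rough illustration of what you would clamp to on a 1500-byte path (overhead figures are estimates):

 MSS ≈ path MTU - IP (20) - TCP (20) - GRE (~24) - IPsec (~50-60)
 e.g. 1500 - 20 - 20 - 24 - 60 ≈ 1376

And it does nothing for UDP, which is why it doesn't help you with the VXLAN data-plane traffic.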

I'll close by saying you might even have a third problem. Does the device doing GRE do it in hardware or in software? GRE is not what I would call an efficient encapsulation, and when done in software it can push the router's CPU to 100% with only a minimum of traffic. Even if it is done in hardware, remember that this device will also be carrying the data-plane traffic (with its added 50 bytes of VXLAN overhead). So the GRE tunnel becomes the weak point in the design, because both the control plane and the data plane depend on it.
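If you do go ahead with GRE, it is worth checking on your platform whether the encapsulation actually stays in the forwarding engine. On the ASR1k family, something along these lines gives a first indication (verify the exact syntax against your software release):

 show platform hardware qfp active datapath utilization
 show processes cpu sorted

Load the tunnel with test traffic: if the QFP utilization climbs while the RP CPU stays flat, the encap is being handled in hardware; if the RP CPU spikes instead, you are punting.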

My conclusion (unofficially as your friend here) is to avoid GRE if you can.  IPSEC could still be useful on the WAN facing interfaces of your IPN (again assuming this is done in HW).
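If you do decide to drop GRE but keep the encryption, one option (just a sketch, not a recommendation for your exact design) is a plain IPsec VTI between the IPN routers, which removes the GRE header from every packet. The names and addresses below are placeholders, and you would still want to confirm that the multicast the IPN relies on (PIM Bidir) behaves correctly over whatever you choose:

 interface Tunnel200
  ip address 192.168.200.1 255.255.255.252
  tunnel source TenGigabitEthernet0/0/0
  tunnel destination 203.0.113.2
  tunnel mode ipsec ipv4
  tunnel protection ipsec profile IPN-IPSEC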

