Multipod IPN Configuration with GRE over IPSec

john.dejesus
Level 1

I'd like to verify whether GRE over IPsec can be set up between the IPN devices. My main concern is fragmentation. The ACI Multipod White Paper states:

 

"Increased MTU support: Since VXLAN data-plane traffic is exchanged between Pods, the IPN must ensure to be able to support an increased MTU on its physical connections, in order to avoid the need for fragmentation and reassembly. Before ACI release 2.2, the spine nodes were hard-coded to generate 9150B full-size frames for exchanging MP-BGP control plane traffic with spines in remote Pods. This mandated support for that 9150B MTU size across all IPN devices. From ACI release 2.2, a global configuration knob has been added on APIC to allow proper tuning of the MTU size of all the control plane packets generated by ACI nodes (leaf and spines), including inter-Pod MP-BGP control plane traffic. This essentially implies that the MTU support required in the IPN becomes solely dependent on the maximum size of frames generated by the endpoints connected to the ACI leaf nodes (that is, it must be 50B higher than that value)."

 

My APIC version is 4.2, so given the statement above, can I tune the control-plane MTU on the APIC down from its default of 9000 bytes to a lower value, to leave room for the VXLAN, MP-BGP EVPN, GRE, and IPsec headers? I will be using an ASR-1001-HX as the IPN device. I configured the tunnel MTU to 9000, but considering the overhead, 9000 is not enough, so tuning down the APIC's control-plane MTU might make this work. Also, the maximum value I can set with ip tcp adjust-mss is only 1460, and I am not sure whether this is a hardware or a software limitation.
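For reference, this is approximately how the tunnel is set up on the ASR. The peer addresses, interface names, and pre-shared key below are placeholders rather than my actual values, and the IPN routing/PIM configuration is omitted:

crypto ikev2 keyring IPN-KEYRING
 peer IPN-PEER
  address 203.0.113.2
  pre-shared-key PLACEHOLDER-PSK
!
crypto ikev2 profile IPN-IKEV2
 match identity remote address 203.0.113.2 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local IPN-KEYRING
!
! Transport mode avoids the extra 20 B outer IP header of tunnel mode
crypto ipsec transform-set IPN-TS esp-gcm 256
 mode transport
!
crypto ipsec profile IPN-IPSEC
 set transform-set IPN-TS
 set ikev2-profile IPN-IKEV2
!
! Physical MTU must cover the tunnel MTU plus the GRE/IPsec overhead
interface GigabitEthernet0/0/0
 mtu 9216
 ip address 203.0.113.1 255.255.255.252
!
interface Tunnel100
 ip address 10.255.0.1 255.255.255.252
 ip mtu 9000
 ! 1460 is the highest value the CLI accepts here
 ip tcp adjust-mss 1460
 tunnel source GigabitEthernet0/0/0
 tunnel destination 203.0.113.2
 tunnel protection ipsec profile IPN-IPSEC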

 

Has anyone built a similar setup or tested this with actual production traffic?

 

Thank you in advance.
