VXLAN UNDERLAY - MTU considerations

MambaRod16
Level 1

Hello, 

 

In VXLAN, which interfaces must be configured with an MTU value of 9216?

Is it the physical interfaces that interconnect the leafs with the spines?

Must the SVI interfaces for the servers (e.g. in vrf Tenant-1) have an MTU of 9216?

Must the SVI interface for the L3VNI have an MTU of 9216?

2 Accepted Solutions

horia.gunica
Level 1

Hello Carlosperez!

 

The following interfaces MUST have MTU 9216:

 - L3VNI interfaces (e.g. the L3VNI SVI for Tenant-1)

 - IGP (IS-IS or OSPF) links - this includes:

      - Leaf-Spine interfaces

      - Leaf-Leaf SVI interfaces for the vPC Peer-Link, in case of vPC

 

For the L2VNI SVI interfaces (i.e. the service interfaces, with subnets in Tenant-1 for example), set the MTU to whatever you want the service MTU to be. For example, in our production environment we set this to 9100, to leave room for additional encapsulation if needed, while still giving servers jumbo MTU (9000) where required. This has worked fine for us so far.
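Put together, a minimal NX-OS sketch of the above (the interface numbers, VLAN IDs, and IP addresses are placeholders for illustration, not taken from any actual deployment):

```
! Underlay leaf-spine link (IGP link) - jumbo MTU
interface Ethernet1/49
  description To-Spine-1
  mtu 9216
  ip address 10.0.0.1/31

! L3VNI SVI for the tenant VRF - jumbo MTU
interface Vlan3001
  description L3-VNI-For-Tenant-1
  mtu 9216
  vrf member Tenant-1
  ip forward

! L2VNI (service) SVI - set to the desired service MTU, e.g. 9100
interface Vlan100
  description Tenant-1-Servers
  mtu 9100
  vrf member Tenant-1
  ip address 192.168.100.1/24
```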

 

Best regards!


Sergiu.Daniluk
VIP Alumni

Hi @MambaRod16 

Just wanted to add something on top of what @horia.gunica already mentioned.

Technically speaking, the requirement is that the underlay MTU be at least 50 bytes larger than the overlay MTU (the service traffic MTU). Ergo, if the overlay uses a 1500-byte MTU, the underlay needs to be configured with a minimum 1550-byte MTU. This requirement comes from the MAC-in-UDP encapsulation, where VXLAN introduces 50 bytes of overhead.

If the servers are sending traffic with an MTU higher than 1500, jumbo MTU must be enabled on both the overlay and the underlay.

Overlay:

+ SVI interface for the servers (servers gateway)

Underlay:

+ L3VNI (the SVI associated to the tenant VRF)

+ L3 underlay network (on all L3 interfaces on the path between the VTEPs)
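For reference, the 50 bytes of overhead mentioned above break down as follows (this is the standard VXLAN MAC-in-UDP encapsulation arithmetic, not figures from this thread):

```
Outer Ethernet header : 14 bytes
Outer IP header       : 20 bytes
Outer UDP header      :  8 bytes
VXLAN header          :  8 bytes
-------------------------------
Total overhead        : 50 bytes  (54 bytes with an outer 802.1Q tag)
```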

 

Stay safe,

Sergiu

 

 



Hi Sergiu,

Just to confirm: for the L3VNI with symmetric IRB, must MTU 9216 be configured as well? I ask because I faced a weird issue where a ping test from the VXLAN fabric to an outside network with 9000-byte packets was successful; however, I am encountering a vMotion issue between two data center pods. When Cisco TAC ran the ELAM, it reported an "ip mtu check failure"; the destination BD is the L3VNI.

Here is the L3VNI configuration I currently have.

interface Vlan3001
description L3-VNI-For-TN-1
no shutdown
vrf member TN-1
no ip redirects
ip forward
no ipv6 redirects

Yes, the MTU for the L3VNI SVI should also be configured for jumbo frames (9216, or whatever value is consistent across the fabric).

 

Otherwise, large frames will be punted to the CPU (which can be confirmed with ethanalyzer) and dropped by CoPP (which can be confirmed with show policy-map interface control-plane).
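Assuming the mtu command is simply missing from the Vlan3001 configuration shown earlier, a sketch of the corrected SVI (interface and VRF names copied from that post) along with the verification commands mentioned would look like this:

```
interface Vlan3001
  description L3-VNI-For-TN-1
  no shutdown
  mtu 9216
  vrf member TN-1
  no ip redirects
  ip forward
  no ipv6 redirects

! Check whether oversized frames are being punted to the CPU
ethanalyzer local interface inband limit-captured-frames 50

! Check whether CoPP is dropping the punted frames
show policy-map interface control-plane
```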

Hi Pavel,

So any MTU-mismatched frame will be punted to the CPU and handled by CoPP? Thanks.
