Why does lowering the MTU below default fix the issue when 1500-byte traffic passes without fragmentation?
I have a layer 2 circuit (gigabit fiber) from a provider connecting a datacenter and our main office.
At the main office, the handoff is plugged into a 3560X; the SFP is a GLC-ZX-SM=. The port is trunked, with no errors on the interface.
At the datacenter, the handoff is plugged into a 3750G; the SFP is an SFP-GE-L=. This port is also trunked, with no errors on the interface.
Jumbo MTU is 9000, system MTU is 1998, and routing MTU is 1998 on both switches.
Inter-VLAN routing is done on each switch at its own location, and the MTU on the SVIs is 9000.
The ISP layer 2 VLAN is 918. There is also another VLAN on the main office handoff that carries internet traffic; we do not see issues on that VLAN.
When passing traffic across this circuit, users experience timeouts and slowness, yet all captures indicate that no packets are being lost end to end. Pinging across the link with the DF bit set works fine at sizes up to 1998 bytes.
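For reference, the DF-bit test described above can be run straight from IOS exec mode on either switch; the address below is a placeholder:

```
! Hypothetical destination address; sweep sizes with the DF bit set
Switch# ping 10.0.0.2 size 1998 df-bit
Switch# ping 10.0.0.2 size 1500 df-bit repeat 5
```

If a given size fails with DF set while smaller sizes succeed, the path MTU is below that size somewhere along the circuit.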
One reproducible example is an application that runs against an MS SQL database. With a client machine plugged into the 3560 on VLAN 918, connections to the SQL server at the datacenter produce timeout errors. Captures on the SFP ports of both switches do not show the frames being modified across the path.
Plugging the same client into a port on the 3750 at the datacenter, on the same ISP VLAN, the application works as expected.
The servers are on a VLAN that is routed off the 3750.
Over a L2L VPN connection between the two sites across the internet (both endpoints ASA 5510s), all applications run normally, though the MSS is clamped to 1380 across the VPN.
Likewise, setting the MTU on a main office client machine to 1460 or below lets it connect properly to the SQL server at the datacenter.
We've configured ip tcp adjust-mss on the SVIs the traffic passes through on both switches, but based on captures, the MSS value in the SYN packets is not being adjusted.
The ISP does not see anything out of the ordinary on their end and has confirmed that all devices and ports across their network have an MTU of 9100.
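For context, this is the shape of the configuration in question (interface name and MSS value here are illustrative, not taken from the actual switches). Note that on hardware-forwarding Catalyst platforms such as the 3560/3750, MSS clamping may only be honored for software-switched packets, which would be consistent with SYNs transiting unmodified:

```
! Sketch of the intended config; Vlan918 and the MSS value are assumptions
interface Vlan918
 ip tcp adjust-mss 1400
```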
This turned out to be a bug in the provider's Juniper equipment that caused a buffer overflow on packets between 302 and 320 bytes (invalidating them) when the headers of those packets exceeded a certain size. The provider was able to make a change on their end to reduce the overall header size and prevent the bug from triggering.
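In hindsight, a systematic size sweep from either switch would have exposed the drop window directly. A minimal sketch (Python, with a stubbed probe function standing in for a real ping, so the sizes and probe behavior below are hypothetical) of how such a sweep can isolate a contiguous range of dropped sizes:

```python
def find_drop_ranges(probe, lo, hi):
    """Sweep probe sizes from lo to hi (inclusive) and return the
    contiguous ranges of sizes for which probe(size) reports a failure."""
    ranges = []
    start = None  # start of the failing range currently being tracked
    for size in range(lo, hi + 1):
        ok = probe(size)
        if not ok and start is None:
            start = size                     # entering a failing range
        elif ok and start is not None:
            ranges.append((start, size - 1)) # leaving a failing range
            start = None
    if start is not None:
        ranges.append((start, hi))           # range runs to the sweep's end
    return ranges

# Simulate the provider bug, which invalidated packets of 302-320 bytes.
buggy_path = lambda size: not (302 <= size <= 320)
print(find_drop_ranges(buggy_path, 100, 500))  # → [(302, 320)]
```

In practice the probe would wrap an actual ping of the given size with the DF bit set; the sweep then turns vague "timeouts and slowness" into a concrete, reportable size window for the provider.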