02-07-2012 08:46 AM - edited 03-07-2019 04:47 AM
I have a layer 2 circuit (gigabit fiber) from a provider connecting a datacenter to our main office.
At the main office, the handoff is plugged into a 3560X using a GLC-ZX-SM= SFP. The port is trunked, with no errors on the interface.
At the datacenter, the handoff is plugged into a 3750G using an SFP-GE-L= SFP. This port is also trunked, with no errors on the interface.
On both switches, the jumbo MTU is 9000 and the system and routing MTUs are 1998.
Inter-VLAN routing is done on each switch at its own site; the MTU on the SVIs is 9000.
The ISP's layer 2 VLAN is 918. There is also another VLAN on the main office handoff that carries internet traffic; we see no issues on that one.
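For reference, the MTU settings described above correspond to these global commands on the 3560/3750 platforms (a sketch only; on these switches the global MTU values require a reload to take effect, and the routed MTU is governed by the `system mtu routing` value):

```
! Global MTU settings on the 3560X / 3750G (reload required)
system mtu 1998
system mtu jumbo 9000
system mtu routing 1998
```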
When passing traffic across this line, users experience timeouts and slowness, yet all captures indicate that no packets are being lost end to end. Pings of up to 1998 bytes with the DF bit set work fine.
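The DF-bit ping test above is effectively a manual path MTU probe. A binary search automates it; a minimal sketch in Python, with the actual probe stubbed out (in practice `probe` would shell out to `ping` with the DF bit set and the given payload size):

```python
def find_path_mtu(probe, lo=576, hi=9000):
    """Binary-search the largest packet size that passes with DF set.

    `probe(size)` returns True if a DF-marked packet of `size` bytes
    gets through, False if it is dropped. `lo` is assumed to work.
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid        # mid fits; search upward
        else:
            hi = mid - 1    # mid is dropped; search downward
    return lo

# Simulated path that silently drops anything over 1998 bytes,
# matching the largest DF-bit ping that succeeds on this circuit.
print(find_path_mtu(lambda size: size <= 1998))  # -> 1998
```

Note the binary search assumes losses are monotonic in packet size (one cutoff); it cannot find a fault that affects only a narrow band of sizes.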
One reproducible issue involves an application that runs against an MS SQL database. With a client machine plugged into the 3560 on VLAN 918, connections to the SQL server at the datacenter produce timeout errors. Captures on the SFP ports of both switches do not show the frames being modified across the path.
When the same client is plugged into a port on the 3750 at the datacenter, on the same ISP VLAN, the application works as expected.
The servers are on a VLAN that is routed off the 3750.
Over an L2L VPN connection across the internet between the two sites (both endpoints ASA 5510s), all applications run normally, though the MSS is 1380 across the VPN.
Also, if the MTU on the client machine at the main office is set to 1460 or below, it connects properly to the SQL server at the datacenter.
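The 1460 and 1380 figures are consistent with the usual MSS arithmetic: MSS = MTU minus 20 bytes of IPv4 header minus 20 bytes of TCP header (assuming no IP or TCP options). A quick check:

```python
def mss_from_mtu(mtu, ip_hdr=20, tcp_hdr=20):
    """Maximum TCP segment size for a given link MTU (IPv4, no options)."""
    return mtu - ip_hdr - tcp_hdr

print(mss_from_mtu(1460))  # client MTU 1460 -> MSS 1420
print(mss_from_mtu(1420))  # an MSS of 1380 implies an effective MTU of 1420
```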
We have configured ip tcp adjust-mss on the SVIs that the traffic passes through on both switches, but captures show that the MSS value in SYN packets is not being adjusted.
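For reference, `ip tcp adjust-mss` is applied per interface; a sketch using this thread's VLAN, with the MSS value chosen for illustration:

```
interface Vlan918
 ip tcp adjust-mss 1380
```

One caveat worth noting: the command only rewrites SYNs in traffic that is routed through the interface. A client sitting directly on VLAN 918 is bridged at layer 2 across the circuit, so its SYNs never pass through the SVI's routing path and the clamp never touches them, which would be consistent with the captures showing unmodified MSS values.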
The ISP does not see anything out of the ordinary on their end, and has checked and confirmed that all devices and ports across their network have an MTU of 9100.
05-06-2012 07:47 AM
This turned out to be a bug in the provider's Juniper equipment: a buffer overflow corrupted (invalidated) packets between 302 and 320 bytes in size when the packet header exceeded a certain size. The provider was able to make a modification on their end to reduce the overall header size, which prevents the bug from triggering.
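A size-dependent fault like this defeats DF-bit ping tests aimed at finding a single MTU cutoff, because losses are not monotonic in packet size; a linear sweep across sizes is what exposes such a window. A sketch, with the probe stubbed to mimic the reported 302-320 byte failure range:

```python
def find_bad_windows(probe, lo, hi):
    """Sweep packet sizes [lo, hi] and return (start, end) ranges that fail.

    `probe(size)` returns True if a packet of `size` bytes passes cleanly.
    """
    windows, start = [], None
    for size in range(lo, hi + 1):
        if not probe(size):
            if start is None:
                start = size          # entering a failing window
        elif start is not None:
            windows.append((start, size - 1))  # leaving a failing window
            start = None
    if start is not None:
        windows.append((start, hi))   # window runs to the end of the sweep
    return windows

# Simulated fault matching the provider's bug: sizes 302-320 are corrupted.
print(find_bad_windows(lambda s: not (302 <= s <= 320), 64, 1500))
# -> [(302, 320)]
```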