07-05-2012 10:40 PM - edited 03-07-2019 07:37 AM
Hi,
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre:
system jumbomtu 9216, plus mtu 9216 on the interfaces. I can also see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them.
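Roughly, what I've configured looks like this (abridged; the interface and port-channel numbers are just for illustration):
system jumbomtu 9216
interface Ethernet4/26
  mtu 9216
  channel-group 10 mode active
interface port-channel10
  mtu 9216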
However, when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header to yes, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
Any ideas would be greatly appreciated.
thanks
phil
07-06-2012 01:09 AM
Hi Phil
If you have configured the MTU on the N7K and send jumbo pings through it or to it, you need to be sure that all devices in the path to your destination support jumbo frames; otherwise one of them will drop your packets when the DF bit is set.
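A quick way to check is a DF-bit ping at close to the full size; I think the syntax is roughly (using the address from your example, and leaving some room for IP/ICMP header overhead):
ping 10.200.12.2 packet-size 9000 df-bit count 20
If that fails but a 1400-byte ping with the DF bit succeeds, something in the path is not passing jumbo frames.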
HTH,
Alex
07-06-2012 01:42 AM
Thanks Alex
I believe it's a dark-fibre layer 2 link, so there shouldn't be any other hops/devices between the two Nexus switches.
Are there any show commands that could help? show interface shows that the MTU is 9216 and there are no drops or errors, yet I still see 30% loss. Smaller packets have zero loss.
I'm assuming it's as simple as setting the MTU on the interface and the system jumbomtu?
thanks
phil
07-06-2012 02:58 AM
Phil
Start from these:
show int eth 1/10 counters detailed all
show queuing interface eth x/y
show policy-map interface control-plane
show hardware internal errors mod x | diff (to check for errors on the module whose port isn't receiving the traffic)
HTH,
Alex
07-12-2012 07:00 PM
hi Alex
I can see the MTU size in the show commands, though I'm not sure what I should be looking for in show queuing interface eth x/y.
I get
DC1SW01-Primary# sho queuing interface ethernet 4/26
Interface Ethernet4/26 TX Queuing strategy: Weighted Round-Robin
Port QoS is enabled
Queuing Mode in TX direction: mode-cos
Transmit queues [type = 1p7q4t]
Queue Id Scheduling Num of thresholds
_____________________________________________________________
1p7q4t-out-q-default WRR 04
1p7q4t-out-q2 WRR 04
1p7q4t-out-q3 WRR 04
1p7q4t-out-q4 WRR 04
1p7q4t-out-q5 WRR 04
1p7q4t-out-q6 WRR 04
1p7q4t-out-q7 WRR 04
1p7q4t-out-pq1 Priority 04
Yet I can see the MTU is correct on the interfaces:
DC1SW01-Primary# sho queuing interface ethernet 4/26 | i MTU
DC1SW01-Primary# sho queuing interface ethernet 4/26 | i mtu
DC1SW01-Primary# sho int e4/26 | i MTU
MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC1SW01-Primary# sho int port-c 10 | i MTU
MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec,
thanks
phil
07-12-2012 08:31 PM
philbe wrote:
I believe I've enabled jumbo frames on our Nexus 7010s... when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header to yes, 100% loss.
Is this a layer 3 (routed) link?
If so, you need to enable jumbo frames in the SVI configuration as well as on the physical ports, e.g.:
int vlan 40
ip address x.x.x.x y.y.y.y
mtu 9216
Exactly the same thing bit me in the backside when I implemented N7Ks. I set the system jumbo MTU, the interface MTU and the port-channel MTU, but forgot the SVI MTU, and had the same problem.
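To double-check, something like this should confirm it at both the config and interface level (using VLAN 40 from my example above; substitute your own SVIs):
show run interface vlan 40 | include mtu
show interface vlan 40 | include MTU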
Cheers.
07-12-2012 09:29 PM
Thanks Darren, unfortunately I've already added that:
DC2SW01-Primary# sho run int vla 352 | i mtu
mtu 9216
DC2SW01-Primary# sho run int vla 353 | i mtu
mtu 9216
DC1SW01-Primary# sho run int vla 352 | i mtu
mtu 9216
DC1SW01-Primary# sho run int vla 353 | i mtu
mtu 9216
Though even if I try to ping across layer 2, it still drops packets.
With HSRP holding the .1 address, pinging from the .2 address in DC1 to the .3 address in DC2, I still get drops:
DC1SW01-Primary# ping
Vrf context to use [default] :
No user input: using default context
Target IP address or Hostname: 10.200.74.3
Repeat count [5] : 50
Datagram size [56] : 4000
Timeout in seconds [2] :
Sending interval in seconds [0] :
Extended commands [no] : y
Source address or interface : 10.200.74.2
Data pattern [0xabcd] :
Type of service [0] :
Set DF bit in IP header [no] : y
Time to live [255] :
Loose, Strict, Record, Timestamp, Verbose [None] :
Sweep range of sizes [no] :
Sending 50, 4000-bytes ICMP Echos to 10.200.74.3
Timeout is 2 seconds, data pattern is 0xABCD
4008 bytes from 10.200.74.3: icmp_seq=0 ttl=254 time=2.28 ms
4008 bytes from 10.200.74.3: icmp_seq=1 ttl=254 time=3.066 ms
--- 10.200.74.3 ping statistics ---
50 packets transmitted, 43 packets received, 14.00% packet loss
round-trip min/avg/max = 1.597/3.742/13.524 ms
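For what it's worth, I believe the same test as a one-liner is roughly:
ping 10.200.74.3 source 10.200.74.2 count 50 packet-size 4000 df-bit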
07-12-2012 10:29 PM
philbe wrote:
Thanks Darren, unfortunately I've already added that:
Sorry mate, I'm stumped then.
All I can say is that you should double-check that *every* interface in the physical/logical connection chain has the MTU set to 9216.
I notice you have two interfaces in your port-channel 10 (it's reporting 20 Gbit of bandwidth), but you only show the configuration for one interface (e4/26). What about the other interface bound into the port-channel? Are you 100% sure you have the MTU set on that interface as well?
Cheers.
07-15-2012 03:57 PM
Double-checked, but they all show MTU 9216. Thanks though.
DC1SW01-Primary# sho port-c sum | i 10
10 Po10(SU) Eth LACP Eth4/26(P) Eth7/26(P)
DC2SW01-Primary# sho int e4/26 | i MTU
MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC2SW01-Primary# sho int e7/26 | i MTU
MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC2SW01-Primary# sho int port-c 10 | i MTU
MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec,
DC1SW01-Primary# sho int e4/26 | i MTU
MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC1SW01-Primary# sho int e7/26 | i MTU
MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC1SW01-Primary# sho int port-c 10 | i MTU
MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec,
07-15-2012 10:59 PM
The reason the jumbo packets are being dropped is CoPP.
To confirm this, please get the output of "show policy-map interface control-plane".
Look for any drops on the class copp-system-class-monitoring.
You will see that your CoPP configuration is policing this class to 130 kbps, which I think is too low when you are sending jumbo packets.
Another way to verify this is to send the pings between machines in each data centre instead of generating the traffic from the Nexus itself.
At the same time the server team updated the drivers on the server NICs and all was well.
So the correct answer was: update your server NICs first, and check the CoPP on the Nexus.
Many thanks
07-15-2012 11:04 PM
Phil,
I think we are in contact already; however, I thought it was important to share this information here.
Nexus applies CoPP by default and gives you the option to select between three different levels of protection.
I'm not sure if you have modified the default attributes of CoPP, but it is possible that ICMP traffic is being dropped due to an overly strict CoPP policy.
Verify whether there are any drops on the monitoring class by issuing the show policy-map interface control-plane command.
If so, you may need either to increase the bandwidth of this particular class or remove CoPP. It is also important to check whether the packet loss is seen when pings traverse the devices, rather than being generated and terminated on the devices themselves.
More information can be found here: http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_1/nx-os/security/configuration/guide/sec_cppolicing.html#wp1072128
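If the monitoring class is indeed dropping, a rough sketch of what to look at and change is below. Class and policy names, default rates, and whether you can edit the default policy directly all vary by NX-OS release, so treat this as illustrative and check the guide above for your version:
show policy-map interface control-plane class copp-system-class-monitoring
! watch the "violated" counters while the jumbo pings are running
configure terminal
 policy-map type control-plane copp-system-policy
  class copp-system-class-monitoring
   police cir 1024 kbps bc 1000 ms conform transmit violate drop
! 1024 kbps is only an example value
! on newer releases you may need to clone the profile first, e.g.
! copp copy profile strict prefix TEST, and then edit the cloned policy-map instead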