Jumbo frames dropped on Nexus7000

philbe
Level 1

Hi,

I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre:

system jumbomtu 9216 globally, plus mtu 9216 on the interfaces. I can see MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec on the port-channel between them.
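For reference, the configuration described above looks roughly like this (a sketch based on the port-channel mentioned later in this thread; adjust numbers to your own links):

system jumbomtu 9216

interface port-channel 10
  mtu 9216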

But when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header, the loss is 100%.

8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms

--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss

Any ideas would be greatly appreciated.

thanks

phil

11 Replies

Oleksandr Nesterov
Cisco Employee

Hi Phil

If you have configured jumbo MTU on the N7K and are sending jumbo pings through it or to it, you need to be sure that every device in the path to your destination also supports jumbo frames. Otherwise one of them will drop your packets when the DF bit is set.
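A quick end-to-end check is a DF-bit ping from a host on one side (a Linux example, assuming your servers run Linux; 8972 = 9000-byte host MTU minus 20 bytes of IP header and 8 bytes of ICMP header):

ping -M do -s 8972 -c 20 10.200.12.2

If any device in the path can't carry the frame, the ping fails outright instead of silently fragmenting.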

HTH,

Alex

Thanks Alex

I believe it's a dark-fibre layer 2 link, so there shouldn't be any other hops/devices between the two Nexus switches.

Are there any show commands that could help? Show interface shows that the MTU is 9216 and there are no drops or errors, yet there is 30% loss. Smaller packets have zero loss.

I'm assuming it's as simple as setting the MTU on the interfaces and the system jumbomtu?

thanks

phil

Phil

Start from these:

show int eth 1/10 counters detailed all

show queuing interface eth x/y

show policy-map interface control-plane

show hardware internal errors mod x | diff  - to check for errors on the module whose port isn't receiving the traffic

HTH,

Alex

Hi Alex,

I can see the MTU size in the other show commands, though I'm not sure what to look for in show queuing interface eth x/y.

I get

DC1SW01-Primary# sho queuing interface ethernet 4/26

  Interface Ethernet4/26 TX Queuing strategy: Weighted Round-Robin

  Port QoS is enabled

    Queuing Mode in TX direction: mode-cos

    Transmit queues [type = 1p7q4t]

    Queue Id                       Scheduling   Num of thresholds

    _____________________________________________________________

       1p7q4t-out-q-default        WRR             04

       1p7q4t-out-q2               WRR             04

       1p7q4t-out-q3               WRR             04

       1p7q4t-out-q4               WRR             04

       1p7q4t-out-q5               WRR             04

       1p7q4t-out-q6               WRR             04

       1p7q4t-out-q7               WRR             04

       1p7q4t-out-pq1              Priority        04

Yet I can see the MTU is correct on the interfaces:

DC1SW01-Primary# sho queuing interface ethernet 4/26 | i MTU

DC1SW01-Primary# sho queuing interface ethernet 4/26 | i mtu

DC1SW01-Primary# sho int e4/26 | i MTU

  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,

DC1SW01-Primary# sho int port-c 10  | i MTU

  MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec,

thanks

phil

darren.g
Level 5

philbe wrote:

I believe I've enabled jumbo frames on our Nexus 7010s... when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header, 100% loss.

Is this a layer 3 (routed) link?

If so, you need to enable jumbo frames in the SVI configuration as well as on the physical ports, e.g.:

int vlan 40

ip address x.x.x.x y.y.y.y

mtu 9216

Exactly the same thing bit me in the backside when I implemented N7Ks. I set the system jumbo MTU, the interface MTU, and the port-channel MTU, but forgot the SVI MTU, and had the same problem.

Cheers.

Thanks Darren. Unfortunately I've already added that:

DC2SW01-Primary# sho run int vla 352 | i mtu

  mtu 9216

DC2SW01-Primary# sho run int vla 353 | i mtu

  mtu 9216

DC1SW01-Primary# sho run int vla 352 | i mtu

  mtu 9216

DC1SW01-Primary# sho run int vla 353 | i mtu

  mtu 9216

Even if I ping across layer 2, it still drops packets.

With HSRP on the .1 address, pinging from the .2 address in DC1 to the .3 address in DC2, I still get drops:

DC1SW01-Primary# ping
Vrf context to use [default] :
No user input: using default context
Target IP address or Hostname: 10.200.74.3
Repeat count [5] : 50
Datagram size [56] : 4000
Timeout in seconds [2] :
Sending interval in seconds [0] :
Extended commands [no] : y
Source address or interface : 10.200.74.2
Data pattern [0xabcd] :
Type of service [0] :
Set DF bit in IP header [no] : y
Time to live [255] :
Loose, Strict, Record, Timestamp, Verbose [None] :
Sweep range of sizes [no] :
Sending 50, 4000-bytes ICMP Echos to 10.200.74.3
Timeout is 2 seconds, data pattern is 0xABCD

4008 bytes from 10.200.74.3: icmp_seq=0 ttl=254 time=2.28 ms
4008 bytes from 10.200.74.3: icmp_seq=1 ttl=254 time=3.066 ms

--- 10.200.74.3 ping statistics ---
50 packets transmitted, 43 packets received, 14.00% packet loss
round-trip min/avg/max = 1.597/3.742/13.524 ms
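For what it's worth, the same test can be run as a one-liner instead of through the extended-ping dialogue (a sketch; keyword names as offered in the NX-OS ping options):

ping 10.200.74.3 count 50 packet-size 4000 df-bit source 10.200.74.2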

philbe wrote:

Thanks Darren. Unfortunately I've already added that:

Sorry mate, I'm stumped then.

All I can say is that you should double check that *every* interface in the physical/logical connection chain has the MTU set to 9216.

I notice you have two interfaces in your port channel 10 (it's reporting 20 gig of bandwidth), but you only show the configuration for one of them (e4/26). What about the other interface bound into the port channel? Are you 100% sure you have the MTU set on that interface as well?

Cheers.

Double checked, but they're all MTU 9216. Thanks though.

DC1SW01-Primary# sho port-c sum | i 10
10    Po10(SU)    Eth      LACP      Eth4/26(P)   Eth7/26(P)

DC2SW01-Primary# sho int e4/26 | i MTU
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC2SW01-Primary# sho int e7/26 | i MTU
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC2SW01-Primary# sho int port-c 10 | i MTU
  MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec,

DC1SW01-Primary# sho int e4/26 | i MTU
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,

DC1SW01-Primary# sho int e7/26 | i MTU
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
DC1SW01-Primary# sho int port-c 10 | i MTU
  MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec,

The reason the jumbo packets are being dropped is CoPP.

To confirm this, please get the output of "show policy-map interface control-plane".

Look for any drops in the class copp-system-class-monitoring.

You will see that your CoPP configuration polices that class to 130 kbps, which I think is too low when you are sending jumbo packets.

Another way to verify this is to send the pings between machines in each datacentre instead of generating the traffic from the Nexus itself: CoPP only polices traffic punted to the supervisor, so switch-sourced and switch-terminated pings are affected while transit traffic is not.
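To spot the policer drops quickly, the command output can be filtered (a sketch; the exact counter wording varies by NX-OS release):

show policy-map interface control-plane | include violate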

At the same time, the server team updated the drivers on the server NICs and all was well.

So the correct answer was: update your server NICs first, and check the CoPP on the Nexus.

Many thanks
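As an aside, on a typical Linux server the matching NIC change is just the interface MTU (a sketch; the interface name and the 9000-byte host MTU are assumptions, so match whatever your NICs support):

ip link set dev eth0 mtu 9000
ip link show eth0 | grep mtu    # confirm the new MTU took effect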

Accepted Solution

jdcardenas
Level 1

Phil,

I think we are in contact already; however, I thought it important to share this information here:

Nexus applies CoPP by default and gives you the option to select between three different levels of protection.

I'm not sure if you have modified the default attributes of CoPP, but it is possible that ICMP traffic is being dropped due to an overly strict CoPP policy.

Verify whether there are any drops in the monitoring class by issuing the show policy-map interface control-plane command.

If so, you may need either to increase that class's bandwidth or remove CoPP. It is also important to check whether the packet loss is seen when pings traverse the devices, rather than being generated and terminated on the devices themselves.

More information can be found at: http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_1/nx-os/security/configuration/guide/sec_cppolicing.html#wp1072128
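If the monitoring class does turn out to be the policer dropping the pings, raising its rate looks roughly like this (a sketch only; the class and policy names vary by NX-OS version and CoPP profile, newer releases require cloning the default policy with copp copy profile before editing, and the 1024 kbps figure is purely illustrative):

policy-map type control-plane copp-system-policy
  class copp-system-class-monitoring
    police cir 1024 kbps bc 1500 ms conform transmit violate drop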

Hi all,
I have the same issue. What is a suitable value?
class-map copp-system-p-class-monitoring (match-any)
  match access-group name copp-system-p-acl-icmp
  match access-group name copp-system-p-acl-icmp6
  match access-group name copp-system-p-acl-mpls-oam
  match access-group name copp-system-p-acl-traceroute
  match access-group name copp-system-p-acl-http-response
  match access-group name copp-system-p-acl-smtp-response
  match access-group name copp-system-p-acl-http6-response
  match access-group name copp-system-p-acl-smtp6-response
  match access-group name copp-system-p-acl-rise-nam-response
  match access-group name copp-system-p-acl-rise6-nam-response
  match protocol mpls
  set cos 1
  police cir 130 kbps bc 1000 ms
    conform action: transmit
    violate action: drop
Thanks