Cannot ping between 2 N5Ks with jumbo frames

eric.loiseau
Level 1

Hi all,

I have a UCS architecture from ESXi to the N5K (with M81KR adapters and FI 6248).

I would like to verify a ping or vmkping between 2 hosts, but it fails.

So I started by checking jumbo frames between the 2 N5Ks on the same VLAN, and I have the same problem: the ping fails.

ping 10.203.136.2 use-vrf My-VRF  packet-size 9216 df-bit count 10

The maximum packet size that gets a reply is 1472.
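(If I understand correctly, packet-size counts only the ICMP payload, so 1472 corresponds exactly to the default 1500-byte MTU:

1472 bytes payload + 8 bytes ICMP header + 20 bytes IP header = 1500 bytes

so a 9216-byte MTU should allow a payload of 9216 - 28 = 9188 bytes.)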

When I check the counters on int port-channel x or int eth 1/1-4, they look good:

  260057 input packets  47133516 bytes

    5719 jumbo packets  0 storm suppression bytes

    0 runts  0 giants  0 CRC  0 no buffer

 

show queuing shows:

Ethernet1/27 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             11
        1       WRR             14
        2       WRR             22
        3       WRR             20
        4       WRR             18
        5       WRR             15

  RX Queuing
    qos-group 0
    q-size: 62400, HW MTU: 9216 (9216 configured)
    drop-type: drop, xon: 0, xoff: 62400
    Statistics:

My QoS configuration is as follows:

policy-map type network-qos system_nq_policy
  class type network-qos class-platinum
    pause no-drop
    mtu 9216
  class type network-qos class-silver
    mtu 9216
  class type network-qos class-bronze
    mtu 9216
  class type network-qos class-gold
    mtu 9216
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos
  service-policy type qos input system_qos_policy
  service-policy type queuing input system_q_in_policy
  service-policy type queuing output system_q_out_policy
  service-policy type network-qos system_nq_policy

I also tried a minimal configuration like:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos
  service-policy type network-qos jumbo
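In either case, the MTU actually applied in hardware can be checked per interface (a quick check, using Ethernet 1/27 from the output above):

show policy-map system type network-qos
show queuing interface ethernet 1/27 | include MTU

Both should report 9216 for the relevant classes, as the queuing output above already does.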

But the result is the same, and I wonder where my error is and how to check it.

Or perhaps ping is not the best tool for such a test.

Regards

Eric

4 Replies

Robert Burns
Cisco Employee

Are you trying to ping between the N5K and ESX VMK?

If so, have you:

-Configured a UCS CoS/QoS for Jumbo MTU?

-Assigned the QoS policy to your UCS Service Profile vNIC?

-Set the vNIC MTU within the UCS Service Profile vNIC to 9000?

-Configured the ESX vmk for Jumbo MTU?

-Configured the ESX vSwitch for Jumbo MTU?

The N5K side looks good, but you haven't detailed anything on UCS or ESX.
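If all of those are in place, a jumbo don't-fragment ping from the ESXi host should go through; a sketch, reusing the destination from your post (8972 = 9000 minus the 28 bytes of IP and ICMP headers):

esxcfg-vmknic -l                  # confirm the vmk MTU is 9000
vmkping -d -s 8972 10.203.136.2   # -d sets the DF bit, -s the payload size

If the 8972-byte vmkping fails but a default-size vmkping works, the MTU mismatch is somewhere on the UCS/ESX side.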

Robert

Hi,

I know that the MTU values to set are:

9000 for ESXi and N1K

9216 for the FI and UCS

Performance is very good with an MTU of 1500 (in fact I left the default value) and vMotion works well, but at 9000 or 9216 vMotion fails.

That is why I used the basic configuration between both N5Ks (only a port-channel as the link), and I have the same issue: above 1472 the Cisco ping fails.

I am also using a NetApp device connected to the N5K; at 9000 the performance is slow (4 to 8 Mbit/s), while at 1500 the performance is normal for a gigabit link (500 to 600 Mbit/s).

Regards

Eric

Hi Eric

Did you try to ping with jumbo MTU through the N5K? Do not send jumbo pings from or to the N5K itself, as that may give wrong results.

Try connecting 2 PCs to the N5Ks and pinging through the N5K devices to check whether jumbo frames pass through them correctly.
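For example, from the PCs (reusing the address from your first post; 8972 leaves room for the 28 bytes of IP and ICMP headers within a 9000-byte host MTU):

ping -f -l 8972 10.203.136.2      (Windows: -f sets DF, -l the payload size)
ping -M do -s 8972 10.203.136.2   (Linux: -M do sets DF, -s the payload size)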

HTH,

Alex

Hi all,

I found the solution. What I understand is that the network-qos service policy works at Layer 2; if you want to ping at Layer 3, you must also configure an MTU of 9216 on each interface VLAN (SVI).
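A sketch of the fix, assuming a hypothetical SVI for the VLAN carrying the 10.203.136.x subnet:

interface Vlan136      (hypothetical VLAN number)
  mtu 9216

Pings sourced from or destined to the switch's own SVI use this L3 MTU, which defaults to 1500, regardless of the network-qos MTU on the ports.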

Now I can ping both Nexus 5Ks, between themselves and from a device connected to our UCS architecture.

Regards

Eric
