Input packet drops on uplink port-profile
04-02-2013 06:51 AM
Hi,
I'm using Nexus 1000v with vSphere 5.1.
I just migrated some physical servers to VMs and have some odd reporting issues. Just to rule out a network problem, I was asked to verify that nothing was overlooked on the Nexus side.
Everything checked out, but I'm seeing a lot of input packet drops on the physical ports of the system uplink port-profile. I double-checked the configs on the VSM and the Catalyst stack, and everything is configured properly.
Should I be concerned about these input packet drops that I'm seeing on the VSM on the physical interfaces of my uplink port-profile? If so, could the NICs in the ESX host be the issue?
Any feedback would be appreciated.
Thanks.
- Labels: Server Networking
04-02-2013 06:55 AM
If you're using MAC pinning, this is expected.
Since MAC pinning is a host-based port channel (no northbound channel configuration required), the upstream switch sends broadcast traffic down each uplink on a VEM. The VEM has a designated receiver for each VLAN, so it forwards one copy of the broadcast from the DR uplink and drops the rest. Those drops are likely the input drops you're seeing.
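The designated-receiver behavior described above can be sketched as a toy model (the uplink names and VLAN-to-DR assignment are hypothetical, and this is an illustration of the counting effect, not actual VEM code):

```python
# Toy model of designated-receiver (DR) handling with MAC pinning:
# the upstream switch floods a broadcast down every uplink of a VEM,
# the VEM accepts only the copy arriving on the DR uplink for that
# VLAN, and counts each duplicate copy as an input drop.

def receive_broadcast(vlan, arriving_uplink, dr_for_vlan, counters):
    """Return True if the frame is forwarded, False if dropped."""
    if arriving_uplink == dr_for_vlan[vlan]:
        counters["forwarded"] += 1
        return True
    counters["input_drops"] += 1
    return False

dr_for_vlan = {988: "vmnic2", 989: "vmnic3"}  # hypothetical DR election
counters = {"forwarded": 0, "input_drops": 0}

# One broadcast on VLAN 988 arrives on both uplinks of the VEM.
for uplink in ("vmnic2", "vmnic3"):
    receive_broadcast(988, uplink, dr_for_vlan, counters)

print(counters)  # one copy forwarded, one counted as an input drop
```

The point is that the input-drop counter climbs steadily in a perfectly healthy MAC-pinning setup, which is why those drops alone are not a cause for concern.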
If you're not using MAC pinning, then I'll need to see your VSM uplink port-profile config, as well as the upstream switch interface config.
Regards,
Robert
04-02-2013 07:10 AM
Hi,
I'm not using MAC pinning.
Here is the VSM uplink port-profile config:
port-profile type ethernet uplink
vmware port-group
mtu 1500
switchport mode trunk
switchport trunk allowed vlan 1,988-989
switchport trunk native vlan 900
channel-group auto mode on
no shutdown
system vlan 1,988-989
state enabled
Here is my Catalyst 3750 stack config for the port-channel and its member interfaces:
interface Port-channel9
switchport trunk encapsulation dot1q
switchport trunk native vlan 900
switchport trunk allowed vlan 1,801,802,914,981,988,989
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
end
interface GigabitEthernet1/0/9
switchport trunk encapsulation dot1q
switchport trunk native vlan 900
switchport trunk allowed vlan 1,801,802,914,981,988,989
switchport mode trunk
switchport nonegotiate
channel-group 9 mode on
spanning-tree portfast trunk
end
interface GigabitEthernet1/0/6
switchport trunk encapsulation dot1q
switchport trunk native vlan 900
switchport trunk allowed vlan 1,801,802,914,981,988,989
switchport mode trunk
switchport nonegotiate
channel-group 9 mode on
spanning-tree portfast trunk
end
interface GigabitEthernet2/0/6
switchport trunk encapsulation dot1q
switchport trunk native vlan 900
switchport trunk allowed vlan 1,801,802,914,981,988,989
switchport mode trunk
switchport nonegotiate
channel-group 9 mode on
spanning-tree portfast trunk
end
interface GigabitEthernet2/0/9
switchport trunk encapsulation dot1q
switchport trunk native vlan 900
switchport trunk allowed vlan 1,801,802,914,981,988,989
switchport mode trunk
switchport nonegotiate
channel-group 9 mode on
spanning-tree portfast trunk
end
04-02-2013 07:21 AM
The next thing I'd do is a vempkt capture on the port-channel interface: filter on "drop" and see what traffic the VEM is dropping on ingress. Once you've captured the dropped traffic, you'll need to use Wireshark to analyze the packets and determine which traffic is being dropped.
See the attached doc if you aren't sure how to use vempkt.
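Once the drop capture is exported as a pcap, one quick way to answer the "which traffic" question is to tally the 802.1Q VLAN tag of each dropped frame. A minimal stdlib-only sketch (no Wireshark required); it assumes a classic pcap file containing Ethernet frames, and the file name in the usage line is hypothetical:

```python
# Tally dropped frames per 802.1Q VLAN ID from a classic pcap capture.
import struct
from collections import Counter

def vlan_histogram(pcap_bytes):
    """Count frames per VLAN ID; untagged frames land under 'untagged'."""
    magic = pcap_bytes[:4]
    if magic == b"\xd4\xc3\xb2\xa1":
        endian = "<"            # little-endian pcap
    elif magic == b"\xa1\xb2\xc3\xd4":
        endian = ">"            # big-endian pcap
    else:
        raise ValueError("not a classic pcap file")
    counts = Counter()
    off = 24                    # skip the 24-byte global header
    while off + 16 <= len(pcap_bytes):
        # per-record header: ts_sec, ts_usec, captured len, original len
        _, _, incl_len, _ = struct.unpack_from(endian + "IIII", pcap_bytes, off)
        off += 16
        frame = pcap_bytes[off:off + incl_len]
        off += incl_len
        if len(frame) < 14:
            continue
        ethertype = struct.unpack_from("!H", frame, 12)[0]
        if ethertype == 0x8100 and len(frame) >= 16:
            tci = struct.unpack_from("!H", frame, 14)[0]
            counts[tci & 0x0FFF] += 1     # low 12 bits are the VLAN ID
        else:
            counts["untagged"] += 1
    return counts
```

Usage would be something like `vlan_histogram(open("drops.pcap", "rb").read())`; a single VLAN dominating the histogram points straight at the offending traffic.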
Let me know your findings.
Regards,
Robert
08-28-2013 05:41 PM
Hi Robert, do you know if vempkt works on Hyper-V? I ran the capture from my VSM and it didn't work: "vempkt display detail all" doesn't show anything, and exporting the capture produces an empty file. Is it an unsupported feature on Hyper-V?
ACANWXN1KV01(config)# module vem 3 execute vempkt clear
Cleared log
ACANWXN1KV01(config)# module vem 3 execute vempkt capture all-stages
Successfully set packet capture specification
ACANWXN1KV01(config)# module vem 3 execute vempkt show capture info
Stage : Ingress
LTL : Unspecified
VLAN : Unspecified
Filter : Unspecified
Stage : Egress
LTL : Unspecified
VLAN : Unspecified
Filter : Unspecified
Stage : Drop
LTL : Unspecified
VLAN : Unspecified
Filter : Unspecified
Stage : Aipc
LTL : Unspecified
VLAN : Unspecified
ACANWXN1KV01(config)# module vem 3 execute vempkt start
Started log
ACANWXN1KV01(config)# module vem 3 execute vempkt display brief all
Need to stop vempkt before show all/show last n
ACANWXN1KV01(config)# module vem 3 execute vempkt stop
Will suspend log after next 0 entries
ACANWXN1KV01(config)# module vem 3 execute vempkt display brief all
ACANWXN1KV01(config)# module vem 3 execute vempkt display detail all
ACANWXN1KV01(config)#
Regards,
08-29-2013 10:33 AM
What does "a lot of input packet drops" mean in numbers? Let's take a look at the interface counters.
If a VEM receives a frame on an uplink with a destination MAC address that is not in that VEM's MAC address table, it will drop the frame.
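That forwarding rule can be modeled in a few lines (the MAC address and port name below are hypothetical; this is an illustration of the rule, not VEM code):

```python
# Minimal model of VEM uplink ingress: a unicast frame is delivered
# only if the destination MAC maps to a local vEthernet port in the
# VEM's MAC table; otherwise it is dropped and the uplink's
# input-drop counter increments.

def vem_ingress(dst_mac, mac_table, counters):
    """Return the local port for the frame, or None if it is dropped."""
    port = mac_table.get(dst_mac)
    if port is None:
        counters["input_drops"] += 1
        return None
    return port

mac_table = {"00:50:56:aa:bb:01": "Veth1"}   # hypothetical local VM
counters = {"input_drops": 0}

assert vem_ingress("00:50:56:aa:bb:01", mac_table, counters) == "Veth1"
assert vem_ingress("00:50:56:aa:bb:99", mac_table, counters) is None
print(counters["input_drops"])  # the unknown destination was dropped
```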
09-11-2013 07:09 AM
I have the same symptoms on three different Nexus 1000v deployments. All three run the same version, 4.2(1)SV2(1.1); VMware is 5.0 SP1, and the hardware for the ESXi hosts is more or less the same (at least the server blade model and CNA).
We have tried using vempkt to capture traffic, but no traffic is captured when we filter on drops, even though the counters on the port-channel and member Ethernet interfaces increase. On the hosts where we tried vempkt, we see about 20 drops per second. Here is some info; I have removed some irrelevant output.
NRK-VSM-001# show int po 14
port-channel14 is up
Members in this channel: Eth6/3, Eth6/4
6172 input packet drops <- Increases
NRK-VSM-001# show mod 6
Mod Sw Hw
--- ------------------ ------------------------------------------------
6 4.2(1)SV2(1.1) VMware ESXi 5.0.0 Releasebuild-1024429 (3.0)
Mod Server-IP Server-UUID Server-Name
--- --------------- ------------------------------------ --------------------
6 10.16.1.12 4c4c4544-0034-3010-8036-b4c04f33354a nrk-vi01-h07.nt.se
From the ESXi host:
~ # vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port Type
19 Eth6/3 UP UP F/B* 305 0 vmnic2
20 Eth6/4 UP UP F/B* 305 0 vmnic3
~ # vempkt show capture info
Stage : Drop
LTL : 305
VLAN : Unspecified
Filter : Unspecified
Even if we let the capture run for several minutes we see no drops. I set it to capture 31 packets.
~ # vempkt show info
Enabled : Yes
Total Packet Entries : 0 <- Never increases even if the capture is running filtered like above
Wrapped Packet Entries : 0
Lost Packet Entries : 0
Skipped Packet Entries : 560145
Available Packet Entries : 14169
Packet Capture Size : 88
Packet Capture Mode : Un Reliable
Stop After Packet Entry : 31
In our case, could the input drops be because we allow VLANs from the upstream hardware switch to the VEM that do not exist on the N1000v, and could that also be the reason we cannot capture the dropped packets?
Any ideas?
PS: We see drops on uplinks on all VEMs
09-16-2013 05:17 PM
I have been seeing drops on my uplink ports and ended up opening a TAC case for them. It turned out to be an issue in version SV2(1.1) that occurs when the VLANs allowed on the ESX host side are not the same as the VLANs allowed on the physical switch side. Packets with VLAN tags that were not allowed in the uplink port-profile were being dropped, as they should be, but the input-drop counter was being incremented for each dropped packet. This is a known bug and is scheduled to be fixed in the first quarter of next year. This could be what you are seeing.
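The mismatch described above can be spotted by diffing the two allowed-VLAN lists. This sketch parses Cisco-style VLAN range strings (the values are taken from the configs posted earlier in the thread) and shows which VLANs the upstream switch trunks down that the N1KV uplink does not allow, which is exactly the traffic whose drops bump the counter:

```python
# Diff the allowed-VLAN lists of the N1KV uplink port-profile and the
# upstream Catalyst trunk to find VLANs that will arrive tagged but be
# dropped (and counted) at the VEM.

def parse_vlan_list(spec):
    """Expand a Cisco VLAN list like '1,988-989' into a set of ints."""
    vlans = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            vlans.update(range(int(lo), int(hi) + 1))
        else:
            vlans.add(int(part))
    return vlans

n1kv_uplink = parse_vlan_list("1,988-989")                  # from the uplink PP
upstream    = parse_vlan_list("1,801,802,914,981,988,989")  # from Port-channel9

# VLANs trunked by the Catalyst but not allowed on the N1KV uplink:
print(sorted(upstream - n1kv_uplink))  # [801, 802, 914, 981]
```

Pruning those VLANs from the upstream trunk (or allowing them on the uplink port-profile) should make the counters stop incrementing even before the fix ships.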
Tom Kunkel
Sent from Cisco Technical Support Android App
09-17-2013 01:18 AM
Thanks a lot, Tom! This is what I suspected, since mismatched VLANs between ESX and the physical switch are exactly what we have in our case. Our next troubleshooting step was to match up the allowed VLANs on the N1K uplinks and the N5500s.
The thing is, I could not find a bug ID when searching the bug tool. If you could check which bug ID TAC referred to and post it, I would be very happy.
/Stefan
PS: I sure hope they fix the "last clearing of show interface: never" bugs (CSCtd82221 and CSCtf84230) as well.
09-17-2013 04:03 AM
The ID I was given is CSCue73513.
Tom
Sent from Cisco Technical Support iPhone App
