Have a customer seeing intermittent connectivity problems across the 1000v. The physical setup: each ESX server has six 1 Gb NICs, with three connected to one 6500 and the other three to a second 6500. We're using mac-pinning mode since they can't do VSS. Pings will work, then stop, then work again. When they do a "show interface" they see this at the bottom: "91441147 Input Packet Drops 0 Output Packet Drops". Should we be seeing that many input packet drops? If not, what would cause them? And if the drops aren't a concern, any clues on where to look for the cause of the connectivity problems?
Here is the output from "show interface"
Ethernet3/5 is up
Hardware: Ethernet, address: 0050.5651.b975 (bia 0050.5651.b975)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 0/255, txload 0/255, rxload 0/255
Port mode is trunk
full-duplex, 1000 Mb/s
Beacon is turned off
Auto-Negotiation is turned off
Input flow-control is off, output flow-control is off
Auto-mdix is turned on
Switchport monitor is off
5 minute input rate 184752 bits/second, 196 packets/second
5 minute output rate 5848 bits/second, 4 packets/second
167595253 Input Packets 37285450 Unicast Packets
38509478 Multicast Packets 91800325 Broadcast Packets
6502306 Output Packets 6474307 Unicast Packets
24924 Multicast Packets 3075 Broadcast Packets 24650 Flood Packets
91441147 Input Packet Drops 0 Output Packet Drops
1 interface resets
Can you send the running configuration of the three ports on each 6500?
Regarding the drop count, do you see it increasing on all six ports? In a vPC-HM topology, only one port is designated to receive broadcast and unknown unicast/multicast traffic from the network. On all the non-designated ports, that traffic is counted as input drops.
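If it helps, you can check per-port state (including the sub-group each port is pinned to under mac-pinning) from the VSM with something like the following. The module number 3 is just an example, and the exact output columns vary by N1Kv release:

```
show port-channel summary
module vem 3 execute vemcmd show port
```

`vemcmd show port` runs on the VEM and should show an SGID column indicating the pinning assignments, which lets you see which uplink is actually carrying each port's traffic.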
Provide the running configuration of the uplink profile used by these 6 ports.
Thanks for the help. Below is some information from the client. Each vSphere server has six NICs connected and used for VM traffic. I think I mentioned it earlier, but if I didn't: 3 NICs connect to one 6500 and 3 NICs connect to the other:
Here are the latest findings.
This morning I have been testing ping results from two Windows 2008 guests while vMotioning them to and from different vSphere hosts.
Pings succeed, then fail, then resume.
I used vsphere3 as my test host.
With servers 2008test1 and 2008test2 on vsphere3, I was dropping pings in random patterns.
I began removing physical NICs from the Nexus switch; they were not reassigned to another virtual switch.
With all NICs removed except one to Switch1, pinging worked fine.
Added vmnic8, connected to Switch2: pings begin failing.
Removed vmnic8: pinging begins working.
I then added vmnic5 on Switch1: pinging remains OK. Added vmnic6: pinging OK. All Switch1 NICs now connected.
Went back and added vmnic9 (Switch2): dropped pings begin occurring.
Removed vmnic4 through vmnic6 (Switch1): pings become normal on vmnic9.
Added vmnic8 and vmnic10 to Switch2: pinging OK.
Then I vMotioned to different hardware, and pinging became erratic again.
The only host I have had time to test with is vsphere3. I can repeat the same process on other hosts if needed.
So it looks to me like if they only connect physical NICs to one switch, all is well; when they go to two, it gets bad. Here is the config for each port:
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 200-202,320,323,991,992,995
switchport mode trunk
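One thing worth double-checking on the 6500 side: with mac-pinning, the upstream ports must remain independent trunks, with no channel-group configured toward the host. A sketch of what each port might look like (the interface name is an example, and "spanning-tree portfast trunk" is a common recommendation for edge ports facing a host, not something taken from their config):

```
interface GigabitEthernet1/1
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 200-202,320,323,991,992,995
 switchport mode trunk
 spanning-tree portfast trunk
```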
The next thing you're going to ask for is the N1Kv running-config, and I've requested it. The uplink configuration is very straightforward. We're using mac-pinning as the distribution method, but I'll get the config to confirm that with you.
Here is the configuration for the VM network uplink:
port-profile type ethernet VM_DATA_Uplink
switchport mode trunk
switchport trunk allowed vlan 200-202,303-304,320,323,396,993-995
channel-group auto mode on mac-pinning
system vlan 993-995
Status update: TAC reproduced the problem today and found that it appears to be a bug. IGMP snooping is on by default on the 1000v; disabling it fixed the issue. I don't have any details yet on why it hit this particular environment.
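For anyone hitting the same symptoms, disabling IGMP snooping globally on the N1Kv looks roughly like this (a sketch only; exact syntax can vary by release, and disabling it per VLAN may be preferable to a global change):

```
conf t
 no ip igmp snooping
end
copy running-config startup-config
```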