Nexus 1000V and Cisco SMB switch(es)

Aki1
Level 1

Another interesting note about the Nexus 1000V.

This is kind of a continuation of my previous message...

Once we had our Nexus 1000V system up and running, we started to bring up VMs inside it. Previously all ESX hosts were connected to a Cisco 2960 switch, and when we moved the setup we also changed the upstream switch, this time to a Cisco SMB SLM2024. This led us into a very interesting troubleshooting session.

On each ESX host we have four VMs running: 2 Windows XP and 2 Windows Server 2003. All VMs inside one host were on the same port-profile and thus on the same VLAN (a rough sketch of that VM-facing port-profile follows the list below). When they were powered up and configured correctly for network operation, the following happened:

* All VMs (XP/W2003) were able to ping each other.

* XP VMs were able to ping an external default gateway.

* W2003 VMs were not able to ping an external default gateway.
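
For reference, the VM-facing port-profile was along these lines. This is only a sketch from memory; the profile name and VLAN number below are placeholders rather than our exact values, and the syntax may vary a bit between 1000V releases:

port-profile type vethernet VM-Data
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled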

We did all the troubleshooting we could possibly figure out: MAC address tables etc. Nothing came up. Finally we gave up and swapped the SLM2024 back to the original switch. Boom - everything worked. Talk about strange behaviour.

BR,

Aki Anttila

3 Replies

pfazzone
Level 1

Hi Aki,

Did you also move the console interfaces of the ESX hosts over to the SLM2024 during your tests? Could you ping those?

pf

Robert Burns
Cisco Employee

This might be an obvious one, but are you sure you had created the Control, Packet and all other VLANs in use on the SMB SLM2024? Also, did you check spanning-tree to see if any of the paths to the ESX hosts were blocking? I'd be curious to see the output of "show mac address-table vlan xx" and whether anything is blocking when some VMs are able to ping the GW and others can't. Also watch for native VLANs, they can really cause headaches if you're using them. Stick to tagging all packets at the 1000v level, and leave your physical connections as trunks allowing only the VLANs you need.
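
As an illustration, the host-facing ports on the upstream switch should look roughly like this (IOS syntax shown for a Catalyst, and the VLAN list is only an example; on the SLM2024 you'd apply the equivalent trunk and allowed-VLAN settings through its management interface):

interface GigabitEthernet0/1
 description to ESX host vmnic
 switchport mode trunk
 switchport trunk allowed vlan 11,12,100
 spanning-tree portfast trunk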

Also, it's best practice to create port-channels on the physical switches if you're using multiple links on your uplink profiles. You'll also need to create port-channels on the 1000v side, either static or vPC-HM.
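
A rough sketch of the static port-channel case, assuming two uplinks going to the same upstream switch (names and VLANs are examples only; if the uplinks go to two different upstream switches, use vPC-HM with "channel-group auto mode on sub-group cdp" on the 1000v instead and don't bundle the links on the upstream side):

On the 1000v:
port-profile type ethernet VMUplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100
  channel-group auto mode on
  no shutdown
  state enabled

On the upstream switch:
interface range GigabitEthernet0/1 - 2
 switchport mode trunk
 switchport trunk allowed vlan 100
 channel-group 1 mode on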

There must be some discrepancy between the configs of the 2960 and the SLM2024.

Robert

Hi!

Thanks for your comments.

For PF: yes, the SC (Service Console) interfaces of the ESX hosts were all reachable.

For Robert: everything worked nicely with the other switch, so the Nexus 1000V side was quite OK. All necessary VLANs were created, all necessary port-profiles existed, and all uplinks were configured to carry the necessary VLANs. My set-up had (a rough sketch of the two uplink profiles follows the list):

* ESX servers with two PNICs.

* One PNIC was attached to SystemUplink (carrying VLANs 1,11,12).

* One PNIC was attached to VMUplink (carrying VLANs 100 and up).
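
The two uplink profiles looked approximately like this. Take it as a sketch reconstructed from memory: the exact names and VLAN ranges may differ slightly, and the physical interface assignments are omitted:

port-profile type ethernet SystemUplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,11,12
  system vlan 11,12
  no shutdown
  state enabled

port-profile type ethernet VMUplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-199
  no shutdown
  state enabled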

On the SLM side all necessary VLANs were created and added to the ports leading to the ESX hosts. Spanning-tree was not blocking any of the ports, and the MAC tables pointed to the correct interfaces.
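
For what it's worth, these were the kinds of checks I ran on the VSM (standard show commands; VLAN 100 here is just an example from the VM range):

show port-profile name VMUplink
show interface trunk
show mac address-table vlan 100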

I have been playing with (and training on) Cisco switches for years and I have never come across this kind of a problem. I could not understand what was going on.

If you guys want to take a look at the situation, I can provide you access to our lab. At the moment the SLM is not connected, but we can agree on a suitable date and time and then you can see and troubleshoot it yourselves.

But of course, before that, I will give it another try.

BR,

Aki
