03-21-2016 12:55 AM - edited 03-01-2019 12:39 PM
Hi,
I'm a Cisco worker, and we have an issue.
I'll explain it from the VMware side first, and then move to the UCSM side.
We have a DVS in VMware, and there is a strange issue: one VLAN works only on 4-5 hosts, and not on the others...
We did a little troubleshooting, and we saw that the hosts that do work with this VLAN are blades in the same chassis.
Note that all of our blades were created from the same Service Profile Template, so they all got the same VLANs.
Hope I explained it well.
Does anyone have an idea about it? What can I check?
Thanks in advance,
Ron
03-21-2016 03:34 AM
Hi Ron
I can hardly believe that this is related to UCS or the SP. Are you sure that your ESXi installations are all identical? Can you ping the ESXi hosts that have the problem? Are HA and load balancing identical on all ESXi hosts?
Walter.
03-21-2016 03:47 AM
Hi Walter, thanks for your response.
OK, got it.
I can ping the ESXi hosts that have the problem.
The hosts [the "good" ones and the "not good" ones] are in the same cluster and all have the same HA settings, but regarding load balancing, how can I check it on a specific host? I only know how to check it for the whole cluster, under "Cluster Settings".
Thanks
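For what it's worth, one way to see the DVS state as an individual host sees it (rather than at the cluster level) is from the ESXi shell. A sketch, assuming SSH access to the host; the grep pattern is illustrative and may need adjusting to your `net-dvs` output:

```shell
# List the distributed switches this host participates in
# (shows the DVS name, uplinks, and MTU as seen by this host)
esxcli network vswitch dvs vmware list

# Dump the host-local DVS port state, including the VLAN
# programmed on each port; run this on a "good" host and a
# "bad" host and diff the output
net-dvs -l | grep -i vlan
```

Comparing the two dumps side by side is often the quickest way to spot a port whose VLAN was never pushed down to a particular host.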
03-21-2016 03:53 AM
Are the ones that have problems using exactly the same northbound uplinks as the ones that are working OK?
Can you vMotion a VM from a failing ESXi host to a good one, and does it then work?
Are both UCS fabrics operational?
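To check the fabrics, you can confirm the VLAN is created and actually learning MACs on both FIs from the NX-OS shell. A sketch, assuming VLAN 100 as a placeholder for the affected VLAN ID:

```shell
# From the UCSM CLI, drop into the NX-OS shell of fabric A
connect nxos a

# Is the VLAN created, and which interfaces carry it?
show vlan id 100

# Are MACs from the affected VMs being learned in that VLAN?
show mac address-table vlan 100

# Repeat the same checks on fabric B: connect nxos b
```

If the VLAN shows up on fabric A but the interface list or MAC table looks different on fabric B, that points at the fabric rather than the hosts.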
03-21-2016 05:33 AM
1. Same northbound uplinks.
2. I can vMotion VMs on other VLANs from a "working" to a "non-working" ESXi host [I can also vMotion on this VLAN, and it will succeed, but the VM's networking will be down].
3. Both fabrics are operational [one is primary, and the other is secondary].
03-21-2016 07:44 AM
Reg. 2) Do I understand you correctly that the problem moves with the VM? If yes, I would compare the vNIC configuration, e.g. port group, .....
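Comparing the per-host views of the port group side by side makes any difference jump out. A minimal sketch in Python, assuming you have already exported each host's settings (e.g. via PowerCLI or pyvmomi) into plain dicts; the field names and values here are illustrative, not an actual API:

```python
def diff_configs(good, bad):
    """Return {key: (good_value, bad_value)} for every setting that differs."""
    keys = set(good) | set(bad)
    return {k: (good.get(k), bad.get(k))
            for k in keys if good.get(k) != bad.get(k)}

# Illustrative data: port-group settings as seen from two hosts
good_host = {"vlan": 215, "teaming": "route_based_on_originating_port",
             "active_uplinks": ["uplink1", "uplink2"]}
bad_host = {"vlan": 215, "teaming": "route_based_on_originating_port",
            "active_uplinks": ["uplink1"]}

print(diff_configs(good_host, bad_host))
# {'active_uplinks': (['uplink1', 'uplink2'], ['uplink1'])}
```

In this made-up example the "bad" host is missing an active uplink; the same diff works for VLAN IDs, teaming policy, or any other exported setting.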
03-22-2016 12:02 AM
Hi, thanks again for the assistance & time.
The problem is not moving the VM; the VM migrates successfully between all the hosts, but the VM's networking [ping] works only on 3 hosts....
[All hosts are connected to the same VDS.]
03-22-2016 12:30 AM
So a VM which cannot be pinged is moved to another ESXi host, and there it works? Correct?
- Which ESXi version?
- Which UCS version? (I assume the proper enic/fnic drivers are installed, the same on all blades.)
- Are the blades that are working and not working in the same chassis, or are the non-working ones in another chassis?
03-22-2016 12:42 AM
See my answers inline in bold:
- Which ESXi version? - ESXi 5.5
- Which UCS version? (I assume the proper enic/fnic drivers are installed, the same on all blades.) - UCSM 2.2(5a), and the FIs are UCS-FI-6248UP. The same drivers are installed properly, and it's the same on all blades.
- Are the blades that are working and not working in the same chassis, or are the non-working ones in another chassis? - The working blades are in the same chassis, and all the non-working ones are in other chassis [but the same model of chassis/blades].
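Since the pattern follows the chassis, it is worth checking how the server interfaces of the non-working chassis are pinned to the FI uplinks, and whether those uplinks actually carry the VLAN. A sketch from the FI's NX-OS shell, again using VLAN 100 as a placeholder:

```shell
connect nxos a

# Show which border (uplink) interface each server interface
# is pinned to; if the non-working chassis pins to a different
# uplink, that uplink may be missing the VLAN upstream
show pinning border-interfaces

# Confirm the VLAN is in the allowed list on those uplinks
show interface trunk
```

A chassis-correlated failure with identical service profiles often comes down to one uplink (or the upstream switch port behind it) not trunking the VLAN.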
03-22-2016 01:31 AM
Did you install on local disk, or boot from SAN?
Very strange! Would it be possible to move one blade from a non-working chassis to the one which seems to be OK?
03-22-2016 01:35 AM
All the same, local boot.
Hmm.... nice idea, I'll check with my manager.