04-01-2015 01:17 PM - edited 03-01-2019 12:06 PM
Hi
When I configured a service profile on a blade and deployed ESXi, it worked correctly for a while, but then I found the following error: "VIF 716 link state is down" (see attached). The server management IP is pingable, and the ESXi IP is also pingable during the error.
I did the same on another blade with no problem.
04-01-2015 01:32 PM
Additional info.
When I removed the IOM-1 cables to the FI and then reconnected them, the VIF came up again.
04-01-2015 04:35 PM
Hi Amr
Do you have an SP with 2 vNICs, one attached to fabric A and the other to B?
I assume you also see on the vSwitch that one link is down.
How many links do you have between IOM and FI?
Do both FIs have northbound uplinks?
It could be that either the link between IOM and FI, or all uplinks of that fabric's FI, are down.
04-02-2015 11:00 AM
Hi Walter
I hope you are fine
you are really helpful
1- Do you have an SP with 2 vNICs, one attached to fabric A and the other to B?
yes
2- How many links do you have between IOM and FI?
2 links per IOM
3- Do both FIs have northbound uplinks?
yes, uplinks to the same core
I tried unplugging the FI-IOM cables, and it worked correctly.
But it's a big problem if, when we lose one link from FI to IOM, some VMs and ESXi hosts go down. I mean, is there no failover?
04-01-2015 04:53 PM
Hello Amr,
Is the interface up on the OS side?
Do you lose services when the error comes up?
Please check the following bug:
https://tools.cisco.com/bugsearch/bug/CSCtt38889/?reffering_site=dumpcr
Thanks,
Saurabh
04-02-2015 10:55 AM
Hello Saurabh
The interfaces on the ESXi side were up, and I could connect to the host through vKVM; there was no service loss, but the ESXi host was not pingable.
There is no guest OS on the ESXi hypervisor yet, so I do not know the connection states.
I suppose it may be a bug.
thanks
04-02-2015 01:04 PM
Hi Amr
Does your vSwitch have 2 links, one to fabric A and the other to B?
How is your load balancing on ESXi done? I hope it's "port ID".
Which UCS version? Which ESXi version?
Are both IOM links to the FI up? If not, you might have to re-acknowledge the chassis (the default global connection policy is 1).
Walter.
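For context on why Walter asks about "port ID": ESXi's "route based on originating virtual port ID" teaming policy pins each virtual port to one uplink, spreading VMs across both fabrics. A minimal Python sketch of the idea follows (the vmnic names and the modulo hash are illustrative assumptions, not ESXi internals):

```python
def select_uplink(port_id: int, active_uplinks: list) -> str:
    """Pick an uplink for a virtual port; each port sticks to one
    uplink, chosen round-robin by port ID (simplified model)."""
    if not active_uplinks:
        raise RuntimeError("no active uplinks")
    return active_uplinks[port_id % len(active_uplinks)]

# With both fabrics up, ports spread over vmnic0 (fabric A) and vmnic1 (fabric B).
uplinks = ["vmnic0", "vmnic1"]
print(select_uplink(7, uplinks))        # -> vmnic1
# If fabric A's uplink disappears, every port maps onto the survivor.
print(select_uplink(7, ["vmnic1"]))     # -> vmnic1
```

The point of the policy is that no single VM's traffic is split across fabrics, so failover behavior stays predictable.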
04-02-2015 02:55 PM
Hi Walter
Does your vSwitch have 2 links, one to fabric A and the other to B?
yes, as in the attached img1 and img2
How is your load balancing on ESXi done? I hope it's "port ID".
yes, it's port ID by default I think; see the attached LB image
Which UCS version? Which ESXi version?
6.0.0
Are both IOM links to the FI up? If not, you might have to re-acknowledge the chassis (the default global connection policy is 1).
yes, both are up, and the global chassis connection policy is 2; see the attached global_discovery image
04-03-2015 12:27 AM
Hi Amr
I see the following issues
- If the global connection policy is 2, it means that a minimum of 2 links have to be up; otherwise chassis discovery will fail on that fabric and the remaining link will not be brought up. Please change the global default down to 1 link!
- In your SP you have selected the "hardware failover" flag; please remove it for both vEth interfaces. Best practice is to let the vSwitch handle failover, not UCS!
- Is the IP pool used for KVM in the same subnet/VLAN as the out-of-band management addresses of the FIs and the VIP?
Walter.
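Walter's first point can be sketched as a simple rule (a simplified model for illustration, not UCS code): discovery on a fabric succeeds only when the number of live IOM-to-FI links meets the policy minimum, which is exactly why a 2-link policy leaves the surviving link down after one of two links fails.

```python
def fabric_links_usable(up_links: int, policy_min_links: int) -> bool:
    """Chassis discovery succeeds on a fabric only if at least
    policy_min_links IOM-to-FI links are up (simplified model)."""
    return up_links >= policy_min_links

# 2 physical links, policy requires 2: losing one link fails discovery,
# so the remaining link is not brought into service.
print(fabric_links_usable(up_links=1, policy_min_links=2))  # -> False
# With the policy lowered to 1 link, the surviving link stays usable.
print(fabric_links_usable(up_links=1, policy_min_links=1))  # -> True
```

This matches the symptom in the thread: pulling and replugging the IOM cables forced rediscovery with both links up, which is why the VIF recovered temporarily.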
04-03-2015 09:03 AM
Hi Walter
thanks for helping
I modified the global discovery policy to 1 link.
But regarding the HW failover configuration, I really do not know why it is not recommended; I think it's better than the failover of the VMware VSS.
Also, if we do not use ESXi but instead OVM, Red Hat, or Hyper-V, or use a Server 2012 bare-metal installation, could we do the SW failover in those cases?
Yes, the KVM IP pool and VLAN are the same as the FI management.
Amr
04-03-2015 11:33 AM
Hi Amr
This has been discussed many times before; see e.g. https://communities.vmware.com/message/2183957
"But regarding the HW failover configuration, I really do not know why it is not recommended; I think it's better than the failover of the VMware VSS."
Agreed that it should work, but I'd prefer to enable and configure failover behavior in ESXi rather than UCS. In my research on this issue, I found a thread over on the Cisco UCS forum that has a clear statement regarding this configuration: https://supportforums.cisco.com/thread/2187653.
"Never use fabric failover with a hypervisor vswitch. Let the failover mechanism of the vSwitch handle the network outage."
So while it may be a valid design, it looks like the consensus is to let your vSwitch handle the failover.
Cheers
Walter.
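The vSwitch-side failover that the quote recommends amounts to walking the teaming order and using the first uplink whose link is up. A minimal sketch, assuming simple link-state failure detection (an illustrative model, not VMware code):

```python
def choose_uplink(team, link_up):
    """Return the first uplink in teaming order whose link is up,
    mimicking a vSwitch failing over from fabric A to fabric B."""
    for nic in team:
        if link_up.get(nic):
            return nic
    return None  # all uplinks down

team = ["vmnic0", "vmnic1"]  # vmnic0 -> fabric A, vmnic1 -> fabric B
print(choose_uplink(team, {"vmnic0": True,  "vmnic1": True}))   # -> vmnic0
print(choose_uplink(team, {"vmnic0": False, "vmnic1": True}))   # -> vmnic1
```

Because the hypervisor sees the vmnic go down, it can move traffic itself; with UCS fabric failover enabled instead, the failover happens below the hypervisor, and the two mechanisms can mask or fight each other.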
04-02-2015 01:11 PM
"The interfaces on the ESXi side were up, and I could connect to the host through vKVM; there was no service loss, but the ESXi host was not pingable."
How is your management VLAN for ESXi set up? Two choices:
- either you define the VLAN ID on ESXi and mark it in UCS as non-native
- or you don't specify any VLAN in the ESXi configuration and mark the VLAN in UCS as native
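The two options above come down to one rule: whether ESXi sends the frame tagged must agree with whether UCS expects that VLAN untagged (native) on the vNIC. A rough consistency check, as a simplified illustrative model only:

```python
def config_consistent(esxi_vlan_id, ucs_native_vlan):
    """ESXi tags management frames only when a non-zero VLAN ID is set
    on the port group; UCS handles untagged frames only for the VLAN
    marked native on the vNIC. The two ends must agree, or management
    traffic is dropped (simplified model)."""
    esxi_sends_tagged = esxi_vlan_id not in (None, 0)
    return esxi_sends_tagged != ucs_native_vlan

print(config_consistent(esxi_vlan_id=0, ucs_native_vlan=True))    # -> True  (untagged + native)
print(config_consistent(esxi_vlan_id=100, ucs_native_vlan=False)) # -> True  (tagged + non-native)
print(config_consistent(esxi_vlan_id=0, ucs_native_vlan=False))   # -> False (mismatch: frames dropped)
```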
04-02-2015 02:42 PM
Hi Walter
I didn't specify any VLAN in the ESXi configuration, and I marked it in UCS as the native VLAN.