02-02-2010 03:21 PM - edited 03-01-2019 09:36 AM
We have a pair of 6120s connected to a single chassis. We have the management interfaces set up as
Fabric a = 10.1.239.2
Fabric b = 10.1.239.3
Virtual IP 10.1.239.1
The HA cluster is showing active and operational.
Issues:
We can access the system fine by browsing to 10.1.239.1.
We can ping .1 and .2.
We cannot ping .3.
We have re-checked addressing, masks, default gateway etc. and all seem to be in order.
If I power down Fabric A, the virtual address does not come active on Fabric B.
The system continues to pass user data to the ESX servers we have configured on the blades; only management access seems to not fail over correctly.
Anyone have any thoughts on this? We are running the latest code, 1.0(2j).
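For reference, the reachability checks above can be scripted as a quick sweep. This is a minimal sketch (the addresses come from this thread; the `ping` flags assume Linux `iputils` syntax, so adjust for your platform):

```python
import subprocess

# Addresses from this thread; adjust for your environment.
HOSTS = {
    "VIP":           "10.1.239.1",
    "Fabric A mgmt": "10.1.239.2",
    "Fabric B mgmt": "10.1.239.3",
}

def is_reachable(ip: str, timeout_s: int = 2) -> bool:
    """Send one ICMP echo request and report whether it succeeded."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def sweep(hosts: dict) -> dict:
    """Map each label to True/False reachability."""
    return {name: is_reachable(ip) for name, ip in hosts.items()}

# Example usage: print a one-line status per address.
# for name, ok in sweep(HOSTS).items():
#     print(f"{name}: {'OK' if ok else 'NO RESPONSE'}")
```

In a healthy cluster all three addresses should answer; only one FI (the primary) owns the VIP at any given time.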
02-02-2010 03:30 PM
+ How do you have the cluster links on the Fabric Interconnects connected? Single, dual?
+ How long are you allowing for the failover to occur? I've seen it take 2-3 mins for the VIP to move to the other FI.
Robert
02-02-2010 05:03 PM
Thanks for the reply Robert,
The Fabric Interconnects are dual cluster-attached (port 0 to port 0 and port 1 to port 1), just a couple of 3-foot CAT 5 cables.
I've waited well over 5 minutes, but still nothing. Powering Fabric A back up, the virtual IP eventually comes live again.
Can you tell me if you were able to ping both management interfaces (.2 and .3 in my case) in your installs?
02-02-2010 05:16 PM
Yes, you should always be able to ping both of the individual Fabric Interconnect mgmt addresses as well as the VIP.
I'll run a test today and advise exactly how long it takes for the VIP to fail over to the secondary FI when the primary is failed.
**Update** During my tests it took 1 min 35 sec to fail the VIP over to the secondary Fabric Interconnect.
Regards,
Robert
02-02-2010 05:52 PM
Thanks Robert,
I plan on running more tests tomorrow as well. I think I need to get into the CLI and have a look around.
02-02-2010 05:59 PM
Guys,
In UCSM, under the Admin tab, check the management interface and ensure all relevant protocols are enabled for both FIs.
Also check whether you can SSH and HTTP to the virtual interface and the primary FI. The secondary should say HTTP is not enabled.
I'm not sure whether the uplinks are configured correctly.
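You can also confirm HA health from the CLI. A hedged console sketch (session and hostnames are illustrative; verify the exact commands and output against your UCSM release):

```
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster state
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY
```

If the cluster reports anything other than HA READY, the VIP may not move on a primary failure.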
Try these and report back.
gd
02-02-2010 05:59 PM
If you hit the same issues again let me know. We'll grab your config and topology if you're still having problems.
Robert
02-03-2010 07:46 AM
Well, it finally dawned on me what the issue is. Because of a lack of copper SFPs, we only had the management interface on Fabric A connected to the network. As soon as I moved the mgmt connection from A to B, 10.1.239.3 on Fabric B was pingable.
Sorry to waste everyone's time on a stupid mistake. I will get both mgmt interfaces connected, and I'm sure failover will be fine.
02-04-2010 12:53 AM
No worries, "unexpected cabling" is in the Top 3 root causes of issues :-) The most common one I find is "unexpected fibre-channel crossover", where 6120-a is connected to MDS-b :-) That's a bugger when you are trying to zone and can't see the pWWNs... because they're going to the wrong fabric!
Happens to the best of us! :-)