01-30-2020 07:25 AM
First post here on the community. I am installing my first HX cluster with Cisco Intersight, and during validation I am getting: Summary - Summary Step: IP address: 169.254.1.20 at default port: 443 is reachable, and Cluster Data IP Not Used: IP address: 169.254.1.20 at default port: 443 is reachable. Any advice on how to get past this so the install will continue and not fail?
02-05-2020 08:47 AM
Hi and welcome to the community.
First, double check and make sure there are no other devices (outside of HX) that are on the storage VLAN that use link local IP addressing.
Second, I believe this can happen if you are re-installing a cluster that was previously configured, or if you've attempted the install of a new cluster multiple times and the installation has progressed past a certain point before failing (where it has already assigned the IP addresses previously). We are working on some enhancements to the installation process to safeguard against this type of behavior, but they have not been rolled out yet in Intersight.
To resolve, I would try downloading the HyperFlex-specific ESXi install ISO (from the HyperFlex section of software.cisco.com) and re-installing ESXi on your HyperFlex nodes. After you've re-installed the HX-specific ESXi ISO on each node, try the installation again through Intersight.
When you run the installation again through Intersight, please go into the "Storage Configuration" policy for your HyperFlex Cluster profile and make sure to check the box for "Clean up Disk Partitions" as well.
Hope that helps.
Thanks,
Mike
02-05-2020 10:02 AM
I was also able to get the system deployed with the HyperFlex installer, so I know there are no underlying issues. I received the error message and re-installed with the Cisco ESXi image. I then installed with the HyperFlex installer and was able to get it deployed. I then re-imaged with the Cisco ESXi image, tried again with Intersight, and got the error again.
Thanks
02-05-2020 01:44 PM
Hello. Is this a fresh install, or have these nodes been previously installed into a cluster? I believe this error means there is an IP address conflict because the address already exists from a previous install attempt.
02-06-2020 06:39 AM
This is a fresh install. I am able to deploy with the HyperFlex installer with no issues. I wiped the systems with the Cisco HX ESXi image and redeployed with Intersight, and I get the error.
03-24-2020 10:59 AM
This is a response I got back from Cisco that fixed the problem. I did not apply it myself; someone on my network and firewall teams fixed the issue.
We've seen this before. We were able to work around it by blackholing the traffic, routing it to a null interface with `ip route 169.254.1.0 255.255.255.0 Null 0` (verify the routing table first). If you go this route, you will want to exclude it from route redistribution if static routes are being redistributed.
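For reference, here is a sketch of what that workaround might look like on an IOS/IOS-XE L3 device. The prefix-list and route-map names and the OSPF process number are assumptions for illustration; adjust them to your environment:

```
! Blackhole the HX storage link-local subnet so probes to it fail
ip route 169.254.1.0 255.255.255.0 Null0
!
! If static routes are redistributed (OSPF process 1 assumed here),
! exclude the blackhole route from redistribution
ip prefix-list NO-LINKLOCAL seq 5 deny 169.254.1.0/24
ip prefix-list NO-LINKLOCAL seq 10 permit 0.0.0.0/0 le 32
!
route-map STATIC-TO-OSPF permit 10
 match ip address prefix-list NO-LINKLOCAL
!
router ospf 1
 redistribute static subnets route-map STATIC-TO-OSPF
```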
03-25-2020 04:55 PM
This happened to me twice in one day for two different installs (same customer, different DCs)
Now, I'm not sure if it has any bearing on the situation, but in each case the Data VLAN was not completely local to the FIs and the immediate upstream devices. In other words, there actually WERE other devices on the network. I'm not sure exactly how the Intersight installer tests whether there is a device on the VLAN (one would assume an ARP request followed by a TCP SYN on port 443), but I suspect it is not quite what it should be.
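One way to sanity-check this by hand before re-running the installer is to look for anything already answering in the storage VLAN's link-local range. From an L3 device with an interface in that VLAN (the target IP below is the one flagged in the original post):

```
! Does anything already claim the address the installer flagged?
ping 169.254.1.20
!
! Any link-local entries sitting in the ARP table?
show ip arp | include 169.254
!
! Does something actually answer on TCP 443?
telnet 169.254.1.20 443
```

If the ping or telnet succeeds when no HX node should be up yet, something upstream (often proxy ARP) is answering on behalf of the subnet.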
So that is just my 2c
12-16-2020 03:38 PM
We ran into the same problem.
Here are some options to fix this issue:
a) Disable the ‘proxy arp’ on the L3 device for the 169.254.173.x subnet
b) Create a null route that dumps 169.254.x.x traffic in a black hole, since that subnet should be L2 only
Example: `ip route 169.254.173.0 255.255.255.0 Null 0`
NOTE: If you go with this option, verify the routing table first, and you may want to exclude it from route redistribution if you are redistributing static routes.
c) Create an ACL deny all to port 443 on 169.254.173.0/24
NOTE: 169.254.173.x looks to be what this HX cluster is using for storage, but you can also use 169.254.0.0/16 in the options above if you are deploying multiple clusters, since the next cluster may get a different 169.254.x.x subnet.
d) Use the HyperFlex Installer: configure HyperFlex with it, then claim the cluster into Intersight.
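For option (c), here is a minimal IOS-style sketch of the ACL. The ACL name and the SVI it is applied to are assumptions for illustration; apply it wherever the installer's probe traffic would actually transit, and widen the match to 169.254.0.0 0.0.255.255 if you are deploying multiple clusters, per the note above:

```
ip access-list extended DENY-HX-LL-443
 deny   tcp any 169.254.173.0 0.0.0.255 eq 443
 permit ip any any
!
interface Vlan3172
 ip access-group DENY-HX-LL-443 out
```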