This topic is a chance to clarify your questions about Cisco HyperFlex installation, upgrade, and troubleshooting best practices. During the session, Cisco experts will answer questions about how to install, upgrade, and troubleshoot common issues on Cisco HyperFlex, including cluster expansion problems.
Questions related to general issues seen in the HyperFlex environment are also welcome and encouraged.
To participate in this event, please use the button below to ask your questions.
Ask your questions from Monday, January 13th to Friday, January 24th, 2020.
Afroj & Aasim might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center community.
**Helpful votes encourage participation!**
Please be sure to rate the answers to your questions.
Your question is too open-ended.
For the HyperFlex cluster configuration itself, you don't need VSANs configured on UCS or on the upstream switch (Nexus or MDS).
However, you can present FCoE LUNs to the HX cluster's ESXi hosts; for that, VSANs can be configured and the service profiles (SPs) associated with the HX servers have to be updated, just as you would do for other non-HX servers.
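Once the service profiles are updated and the LUNs are presented, a quick way to confirm that the HX ESXi hosts can see them is from the ESXi shell. This is only a generic verification sketch using standard esxcli commands, not anything HX-specific:

```
# List storage adapters (the FCoE vHBAs should appear here)
esxcli storage core adapter list

# Rescan all adapters so newly presented LUNs are discovered
esxcli storage core adapter rescan --all

# Confirm the new LUNs show up as devices
esxcli storage core device list | grep -i "Display Name"
```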
Here is a reference for UCSM configuration:
VMware vSAN and Cisco HyperFlex work a little differently.
Cisco HyperFlex offers both hybrid and all-flash options just like vSAN; however, its scalability is limited to eight storage nodes in a hybrid configuration and 16 in an all-flash configuration, whereas vSAN scales to 64 nodes.
These are good reads:
Top 5 Reasons to Choose Cisco HyperFlex Systems:
Hope this helps!
We would like to add an additional pair of vNICs to our Cisco HyperFlex hosts. The Cisco HX Data Platform Installer wizard did not seem to provide any options for adding additional vNICs to the hosts, and we would like to know whether doing so manually is likely to cause difficulties with any future upgrades or general operation of the platform.
Thanks for your interest!
Yes, you can safely add additional vNICs to the HX nodes, with the limitation that you keep the existing "core" networking configuration untouched, i.e. you do not delete or change the management and data vSwitches and/or move their ports to another vSwitch. I recommend keeping the vMotion vNIC configuration as is, too.
If you require an additional management interface on the ESXi side, you can bind it to the newly created pair as well. However, keep in mind that the number of VMkernel adapters is limited. I also recommend checking the existing vSwitches' active/standby configuration for the physical NICs. The recommendation is to set up the same active/standby policy across all nodes, both to keep a given traffic type within a single FI and to load-balance the overall traffic across both FIs; the sketch below shows one way to check this.
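If it helps, here is one way to review the existing vSwitch uplinks and failover policy on each host before and after the change. This is only a sketch using standard esxcli commands; vswitch-hx-inband-mgmt is the usual name of the HX management vSwitch, so adjust it if your deployment differs:

```
# List the standard vSwitches and the uplinks (vmnics) attached to each
esxcli network vswitch standard list

# Check the active/standby (failover) policy of a specific vSwitch,
# e.g. the HX management vSwitch
esxcli network vswitch standard policy failover get -v vswitch-hx-inband-mgmt

# List existing VMkernel adapters to confirm you stay within the limit
esxcli network ip interface list
```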
In our HX environment, we are seeing "Virtual machine memory usage" alarms in a critical state for the controller VMs in vCenter.
Can you please let me know why I am getting these alarms and how I can limit the memory usage to avoid such alarms in the future?
Thanks in advance!
Please check the following:
# Are the controller VMs (CVMs) configured with 72 GB of memory?
# Confirm there is enough free memory inside the controller VM.
Share the output of "free" from the SCVM (see the example below).
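For reference, a minimal way to gather this, assuming you can SSH into the controller VM's management IP:

```
# From an SSH session to the controller VM (SCVM), show memory in MB
free -m

# Or in human-readable units
free -h
```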
# Are the controller VMs (CVMs) configured with 72 GB of memory? Yes, it is 78 GB.
Below is the output:
total used free shared buff/cache available
Mem: 74234748 42211016 29987484 6912 2036248 31464156
Swap: 0 0 0
Thanks for your reply. It sounds like you are hitting a known VMware issue that is being tracked under these defects:
The workaround for this issue is to disable the virtual machine memory usage alarm to avoid false positives, as mentioned by Aasim in one of his responses.
To disable the alarm definition:
Select the vCenter object in the navigation pane of the vSphere Web Client
Click the "Monitor" tab, then the "Issues" sub-tab
Here, select "Alarm Definitions" and search for "Virtual machine memory usage"
Highlight the alarm and click "Edit", then un-check "Enable this alarm"
For reference, see the VMware article: https://recommender.vmware.com/solution/SOL-12357
Thanks for the reply Afroj!
I have another question: Can we remove a converged node and then add the same node back into the same cluster?
If not, what is the reason, and what would be the workaround to add the same node back into the cluster?
More than happy to help you.
Can we remove and add the same converged node back in the same cluster? No.
You cannot add the same converged node back into the cluster because ZooKeeper (ZK) maintains the mapping of disk UUIDs to pNode IDs. If you remove a node and then want to add it back with a redeployed SCVM, the pNode ID changes; ZK would then have to map the same disk UUIDs to the new pNode ID, which creates a conflict, so it is not possible.
The only way I see is to replace all the disks.
Thank you for your interest.
It is quite normal for controller VM (SCVM) memory utilization to run high; since the storage controller is a pass-through device, all I/O is handled by the SCVM.
Therefore, you may repeatedly see critical alerts in vCenter about high memory utilization for the SCVMs.
It would be best to suppress this alarm in vCenter or simply ignore it.
Also, as mentioned in the reply above, check the output of "free" over SSH on the SCVM, and verify the memory allocated to the controller VM (vSphere client -> VM -> Edit Settings -> Memory).