Cisco HyperFlex and Dell VxRail.

Hello All,

Recently we were discussing our current data center infrastructure and its contracts, deciding which ones will be renewed and which will not.

We found that our VNX SAN storage, which is connected through Cisco MDS switches to the UCS 5108 chassis and Cisco 6248UP Fabric Interconnects, will soon no longer be supported by Dell EMC. That leads us to reconsider the whole Cisco chassis setup as well, since it will reach end of support by 2024.

They're thinking about Dell VxRail, as it is simplified and compact: no separate storage array or storage switching (MDS) is needed.

I wanted to get your advice, as experts, on the way forward: how can I stay with Cisco, my preferred vendor, and answer questions about how Cisco has simplified things the way Dell's new VxRail does?

4 Replies

Any response that may help?

TIA

VxRail is a VMware vSAN-based hyperconverged solution.

Cisco has a hyperconverged storage solution called HyperFlex, which uses UCSM-managed Fabric Interconnects and integrated rack servers, and can use B-Series servers as compute nodes. This solution provides compute, storage, hypervisor, and networking, eliminating the need for a separate SAN network and storage array.

See https://www.cisco.com/c/en/us/products/hyperconverged-infrastructure/hyperflex-hx-series/index.html

 

It sounds like you might be able to reuse some of your existing Cisco gear with a HyperFlex solution.

 

Kirk...

RedNectar (VIP), Accepted Solution

I've been watching this post hoping someone would give a better reply than what I can give.  But anyway here goes.

First, I guess you've read things like https://www.trustradius.com/compare-products/cisco-hyperflex-vs-vxrail where one reviewer writes:

"HX stacks up well ahead of the competition (we are running all the three HCI's) in terms of performance, stability, seamless integration with network backbone (no conflicts!), unified management of UCSM and most importantly, it helped us setup data centre level HA just with 2 node cluster per data centre and it is working flawlessly in case of failure scenarios. Cisco TACs prompt and skilled support and fastest deployment time (without issues). Even the node addition in cluster and data equalisation is superb"

But when I talk about HyperFlex, I tell people that the thing that gets me excited about HX is the beautiful internal design that enables it to deliver enterprise-grade storage.

So let me break that down a bit. I'll talk about:

  1. Enterprise-grade storage
  2. The internal design - both the software data platform and the UCS hardware

By Enterprise-grade storage I mean storage that performs as well as or better than traditional siloed storage, where the image for a VM lives NOT locally on the node that is running the VM, but on external storage.  This gives you much greater flexibility to move VMs from one node to another to balance loads and get optimum performance. In fact, products like VMware’s DRS and Cisco’s CWOM will automate this process for you.

The way Cisco achieves this is by clever internal design:

  • Firstly, the hardware builds on the already proven UCS platform to ensure that all the storage traffic between the HX nodes gets storage-grade priority.  In fact, reads and writes of data between nodes across the low-latency Fabric Interconnects are just as fast as reading and writing locally – about 2µs.  Keep that fact in mind as you read how Cisco has capitalised on this to provide truly scalable performance in a Hyper-Converged Infrastructure.
  • Secondly, and probably more importantly, there is the HXDP (HyperFlex Data Platform) software itself, which was designed from the ground up as a pointer-based distributed storage operating system.  Everything about the HXDP software is distributed.

The .vmdk file that makes up the VM's virtual disk is distributed across all the nodes in the cluster.  A second copy (and a third copy if using replication factor 3) is also distributed across all the nodes in the cluster.

Within each node, the pieces of the .vmdk file that the node is responsible for are distributed across the disks within the node using a log-structured file system specifically designed to optimise performance on both spinning disks and flash storage.

Because the .vmdk file is distributed, should the VM need to be moved from one host to another, none of the underlying .vmdk data needs to move, just like with enterprise-grade siloed storage. This is because the data is already distributed across all nodes.

When a VM reads or writes its underlying .vmdk file (or any other part of HXDP storage), the read or write requests are distributed across ALL the storage nodes. That means if you have, say, 6 storage nodes, every VM can and will use all 6 storage controller VMs to service its read and write requests, giving the VM a very consistent level of performance.
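
To make that "everything is distributed" idea a bit more concrete, here is a minimal Python sketch. It is purely my own toy model with invented node names (not HXDP code): each block of a .vmdk is hashed to a set of nodes, so the data, its replica copies, and therefore the read/write traffic are spread across the whole cluster instead of being pinned to the node that runs the VM.

```python
# Toy model of distributed block placement (illustration only, not HXDP code).
import hashlib
from collections import Counter

NODES = ["node1", "node2", "node3", "node4", "node5", "node6"]
REPLICATION_FACTOR = 2   # use 3 for RF3

def placement(block_id: str, rf: int = REPLICATION_FACTOR) -> list:
    """Deterministically pick rf distinct nodes to hold one block."""
    digest = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    first = digest % len(NODES)
    # primary copy plus rf-1 replicas on the following nodes (wrapping around)
    return [NODES[(first + i) % len(NODES)] for i in range(rf)]

# Every block of a (hypothetical) disk lands somewhere in the cluster, so a
# vMotion-style move of the VM never relocates the underlying data, and its
# I/O is serviced by every controller VM, not just the local one.
blocks = [f"web01.vmdk/block{i:04d}" for i in range(1000)]
primaries = Counter(placement(b)[0] for b in blocks)
print("primary copies per node:", dict(primaries))
```

The real placement logic is of course far more sophisticated; the point is simply that no single node "owns" a VM's data.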

If a disk in a node fails, all 6 nodes (sticking with my example of 6 storage nodes) will work together to rebuild the lost data from that disk.

If a node fails, the remaining 5 nodes work together to rebuild the lost data on that node.

If a storage controller VM fails, the remaining 5 storage controller VMs continue to service the VMs running on all 6 nodes – VMs on the node that lost the storage controller VM will be no more affected than any other VM (because ALL VMs now only have 5 storage controller VMs to share the load). In other words, because the storage system is distributed, loss of one part of the distributed system has minimal impact.

This also highlights the fact that you don’t NEED a storage controller VM to run VMs.  This is how Cisco’s HXDP supports diskless nodes.

If a flash cache drive fails in our 6-node example, the storage controller VMs continue to use the flash cache of the other 5 nodes – recall that reads and writes across the low-latency network are as fast as reading locally. This is also another way diskless nodes make use of the total capacity of the cluster.
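
Here is a similar toy sketch of that failure handling (again just an illustration under simplified assumptions, not the actual HXDP rebuild algorithm): when a node is lost, the surviving nodes share the work of re-creating the missing replica copies, so both the rebuild load and the rebuilt data stay spread across the cluster.

```python
# Toy model of a distributed rebuild after node failure (illustration only).
NODES = {"node1", "node2", "node3", "node4", "node5", "node6"}

# block -> nodes currently holding a copy (RF2 in this tiny example)
replicas = {
    "blockA": {"node1", "node2"},
    "blockB": {"node2", "node3"},
    "blockC": {"node3", "node4"},
    "blockD": {"node5", "node6"},
}

def rebuild_after_failure(failed: str, rf: int = 2) -> dict:
    """Assign each under-replicated block to the least-loaded surviving node."""
    survivors = NODES - {failed}
    work = {}                                   # survivor -> blocks it must rebuild
    for block, holders in replicas.items():
        holders.discard(failed)
        while len(holders) < rf:
            target = min(survivors - holders, key=lambda n: len(work.get(n, [])))
            holders.add(target)
            work.setdefault(target, []).append(block)  # copied from a surviving holder
    return work

print(rebuild_after_failure("node2"))
# e.g. {'node4': ['blockA'], 'node5': ['blockB']}  (exact spread may vary)
```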

Finally, I'll say a word about the pointer-based nature of the distributed storage.  Because files, such as the .vmdk file that defines a disk on a VM, are distributed across many disks on multiple nodes, the system keeps a set of pointers that define how the pieces are connected together.  This means that to snapshot or clone a VM, no data has to be replicated.  All that needs to be done is to copy the set of pointers that define that .vmdk file!

And this highlights another feature of the file system that is built in, always on, and can't be turned off. It needs no special hardware (unlike some other systems that attempt this) and takes no extra overhead, because it is just the way the file system works! So what is this marvellous feature? You may have guessed already.  It is de-duplication.  Yes: by using a pointer-based distributed storage system, Cisco HX has de-duplication built in as an integral part of the way the file system operates.

Furthermore, the pointers are based on fingerprints, so whenever another piece of a file is about to be written to disk, its fingerprint is calculated and the system detects that the data for that fingerprint already exists, so no data is written. Like I said: de-duplication is built in.
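
And one last toy model, with class and function names invented for this post (not HXDP internals), showing how a pointer-and-fingerprint design gives you metadata-only snapshots and clones plus always-on de-duplication as a side effect of how writes work:

```python
# Toy model of fingerprint-based writes and pointer-copy snapshots.
import hashlib

block_store = {}                       # fingerprint -> data, shared cluster-wide

def write_block(data: bytes) -> str:
    """Return the fingerprint for 'data', storing the data only if it is new."""
    fp = hashlib.sha256(data).hexdigest()
    if fp not in block_store:          # de-duplication happens right here
        block_store[fp] = data
    return fp

class VirtualDisk:
    """A .vmdk modelled as nothing but an ordered list of block pointers."""
    def __init__(self, pointers=None):
        self.pointers = list(pointers or [])

    def append(self, data: bytes) -> None:
        self.pointers.append(write_block(data))

    def snapshot(self) -> "VirtualDisk":
        # No data moves: copying the pointer list IS the snapshot/clone.
        return VirtualDisk(self.pointers)

disk = VirtualDisk()
disk.append(b"guest OS block")
disk.append(b"guest OS block")         # duplicate data: stored once, pointed to twice
snap = disk.snapshot()                 # instant, near-zero-cost snapshot
print(len(disk.pointers), "pointers,", len(block_store), "unique block stored")
# -> 2 pointers, 1 unique block stored
```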

I hope this helps


Don't forget to mark answers as correct if they solve your problem. This helps others find the correct answer when they search for the same problem.


RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.

Thank you so much for the great explanation.

