Vikram Hosakote
Cisco Employee

An OpenStack cloud has a lot of virtual features. Almost every piece of hardware has been virtualized, and this makes me think of a cloud as a very "soft" system.


Hubs, switches, routers, interfaces/ports, line cards, gateways, servers, firewalls, wires/cables/buses, motherboards, keyboards, mice, memory/RAM, processors/CPUs, hard drives/storage, PCI slots, power switches -- all of these have a virtualized version in an OpenStack cloud!  They are no longer things that we un-box and install.  They are all virtual, software-based, Pythonic Linux processes!


Luckily, nobody has virtualized CPU heatsinks, headphones, LCD projectors, power supplies, antennas, server chassis, laptop cases, web cameras and DVD/Blu-ray players yet!


As hard as companies in the software, networking, cloud, SDN, NFV and telecom fields are trying to virtualize everything, I see an equally strong effort in the cloud industry to replace virtualization with great hardware.


In today's cloud and data center industry, there is clearly a great amount of competition between software and hardware, between virtualization and bare metal.  "Bare metal" is the buzzword of the day and has become famous cloud jargon.  No more "bare" software!


Below are a few relevant diagrams.


[Diagrams pic1.png, pic2.png and pic3.png]


"It is virtually impossible to virtualize everything". Right ?  There is clearly a war between virtualization and bare metal. We all know who wins when there is a war between software and hardware.


Below are the bare metal technologies that can replace virtualized features in an OpenStack cloud. The advantages of bare metal technologies over virtualized features are:

    • High performance
    • Very high scalability
    • High throughput
    • Large number of parallel processes
    • Very high processing speed
    • 100% dedicated resources (CPU, memory, cache, hard disk, networking ports)

  1. OpenStack Ironic provisions a bare metal physical server that replaces a virtual machine.
  2. SR-IOV (supported by Intel and Mellanox cards) and Cisco's VM-FEX extend a physical network adapter directly to a virtual machine, bypassing the virtual layer-2 switch (Open vSwitch or Linux Bridge); see the sketch after this list.
  3. Cisco's ASR 1000 Series Router (ASR1K) plugin can be used to do layer-3 forwarding in OpenStack Neutron, replacing Neutron's virtual router (which uses neutron-l3-agent and NAT software) with a physical ASR1K.
  4. Cisco's CPNR (Cisco Prime Network Registrar) plugin can be used with Neutron to replace dnsmasq (virtual DHCP and DNS server in Neutron) with a highly-scalable physical DHCP and DNS enterprise server.  The CPNR Neutron plugin has been open-sourced by Cisco on GitHub.
  5. Cisco's Nexus driver for Neutron can be used to replace Neutron's virtual router and virtual layer-2 switch and do layer-2 switching and layer-3 forwarding on a physical Nexus switch (N3k, N5k, N6k, N7k and N9k).
  6. Cisco's APIC plugin for Neutron can be used to replace Neutron's virtual router and virtual layer-2 switch and do layer-2 switching and layer-3 forwarding on Cisco's physical ACI (Application Centric Infrastructure) fabric that uses UCS and Nexus 9000.
  7. Linux containers and Docker containers can be created on a bare metal physical server with no hypervisor, no virtual machine and no virtual layer-2 switch.
  8. Neutron's provider networks can be used to do layer-2 switching and layer-3 forwarding on a physical switch and a physical router respectively, thereby bypassing Neutron's virtual gateway, virtual router and virtual layer-2 switch.
  9. NAT hardware acceleration can be used to replace Neutron's NAT software and iptables with a physical layer-3 device.
  10. SDN and OpenDaylight can be integrated with OpenStack to "virtually" use any bare metal physical device that supports REST/OpenFlow in the cloud.
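
As a concrete illustration of item 2, here is a minimal sketch of how an SR-IOV port could be attached to an instance so that its traffic bypasses Open vSwitch / Linux Bridge. This is only a sketch, not a definitive recipe: it assumes SR-IOV has already been enabled on the compute host and in the Neutron ML2 configuration, it uses the legacy python-neutronclient and python-novaclient libraries of that era, and the credentials, NETWORK_ID, IMAGE_ID and FLAVOR_ID values are placeholders for your own cloud.

from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client

# Placeholder credentials -- replace with your own Keystone details.
USERNAME = 'admin'
PASSWORD = 'secret'
TENANT = 'admin'
AUTH_URL = 'http://controller:5000/v2.0'

neutron = neutron_client.Client(username=USERNAME, password=PASSWORD,
                                tenant_name=TENANT, auth_url=AUTH_URL)

# Ask Neutron for a port backed by an SR-IOV virtual function (VF).
# vnic_type 'direct' tells the SR-IOV mechanism driver to hand the VF
# straight to the instance, bypassing the software layer-2 switch.
# 'NETWORK_ID' is a placeholder for a network with SR-IOV-capable segments.
port = neutron.create_port({'port': {
    'network_id': 'NETWORK_ID',
    'name': 'sriov-port',
    'binding:vnic_type': 'direct',
}})['port']

nova = nova_client.Client('2', USERNAME, PASSWORD, TENANT, AUTH_URL)

# Boot the instance on the pre-created SR-IOV port.
# 'IMAGE_ID' and 'FLAVOR_ID' are placeholders.
server = nova.servers.create(name='sriov-vm',
                             image='IMAGE_ID',
                             flavor='FLAVOR_ID',
                             nics=[{'port-id': port['id']}])
print(server.id)

From the workload's point of view nothing else changes; the instance simply receives a NIC that is wired to a physical virtual function instead of a port on a virtual switch.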

As far as NFV (Network Functions Virtualization) is concerned, using bare metal technologies is completely the opposite of NFV ("un-NFV" or "NFun-V"?).


Below is a table that shows what is "in" and what is "out" when we "un-virtualize" the cloud.


OUT | IN
Virtual machine | Ironic provisioning a bare metal server
Virtual layer-2 switch (Open vSwitch or Linux Bridge) | SR-IOV
Virtual layer-2 switch (Open vSwitch or Linux Bridge) | VM-FEX
Neutron's virtual router (neutron-l3-agent and NAT software) | Cisco's ASR 1000 Series Router (ASR1K) plugin
Neutron's virtual DHCP and DNS server (dnsmasq) | Cisco's CPNR (Cisco Prime Network Registrar) plugin
Neutron's virtual router and virtual layer-2 switch | Cisco's Nexus driver for Neutron
Neutron's virtual router and virtual layer-2 switch | Cisco ACI fabric (UCS and Nexus 9000)
Containers on a virtual machine using a hypervisor and a virtual layer-2 switch | Containers on a bare metal physical server
Neutron's virtual router and virtual gateway | Neutron's provider networks
Neutron's NAT software and iptables | NAT hardware acceleration
Low performance, reduced scale limit, low throughput and slow processing | High performance, high scale limit, high throughput and fast processing
Shared resources (CPU, memory, cache, hard disk, networking ports) | 100% dedicated resources (CPU, memory, cache, hard disk, networking ports)
Cost-effective cloud | Expensive cloud
Small network topology | Large network topology
Cloud that is built quickly | Cloud that needs more time to build
Minimum hardware expertise | A lot of hardware expertise
Less hardware to maintain | More hardware to install and maintain
Less power usage | More power usage
Environmentally friendly | Not environmentally friendly
Money | Great hardware

As we clearly see in the table above, using bare metal technologies to un-virtualize a cloud also has some disadvantages.

Do let me know if you virtualize or un-virtualize your cloud in the comments below!

4 Comments
bhouser
Cisco Employee

Good summary Vikram.  I'd add that I think OpenStack would like to abstract "everything" behind an API.  Virtualized items are easier to do that with.  But if real hardware could be abstracted behind an API, then it would be a game changer because you get the same convenience of API-driven config, but with the performance of real hardware.  The ASR ML2 Neutron plugin is just such a thing.

eckelcu
Cisco Employee

The cost of building and operating the cloud infrastructure is one big cost consideration; however, I think an even bigger cost is that of developing and operating the huge number of applications and services that run on top of that cloud. One could argue that the apps and services costs are often paid by a different team/group/company than the team that pays to build and operate the cloud (e.g. public vs. private clouds), but those costs end up flowing upstream/downstream at the end of the day, so each team impacts the other. The most damaging cost of all for the cloud operator would be if their APIs got cluttered with arcane options for bare metal use cases that do not apply at all to virtual hardware use cases, such that few application developers and service providers write their applications in a way that benefits from the bare metal capability of the cloud. This results in a higher-performance, higher-cost cloud that no application or service provider can justify paying to use.

Vikram Hosakote
Cisco Employee

Thanks for the comment Britt!

Yes, hardware should be abstracted behind APIs.  OpenStack Ironic, SDN, OpenFlow, Cisco's ASR1K Neutron ML2 plugin, Cisco's Neutron Nexus plugin, Cisco's CPNR DHCP plugin, Cisco's APIC plugin, Cisco's UCSM APIs, Cisco's NETCONF config driver -- all of these configure hardware through APIs.

Right, *every* hardware manufacturer must API-ize their gear!

Vikram Hosakote
Cisco Employee

Agreed!  Thanks a lot for your comments Charles!
