kramesh, Cisco Employee

This document provides an overview of DPDK and steps to configure and troubleshoot DPDK for OVS in NFVIS. 

DPDK (Data Plane Development Kit) is a set of libraries that improve data plane performance.

 

DPDK for OVS is supported starting with NFVIS release 3.10.1. NFVIS release 3.12.2 is recommended when using OVS-DPDK.

 

NOTE:

DPDK-OVS feature summary:

Service chain throughput                        Near SR-IOV; better than non-DPDK OVS
NFVIS default cores + additional CPU for DPDK   1+1 CPU on <=16-core systems; 2+2 CPU on >16-core systems
NFVIS default + additional memory for DPDK      3+1 GB on <=32 GB systems; 4+2 GB on <=64 GB systems; 4+4 GB on <=128 GB systems
Driver requirements in VNF                      Virtio required
Supported platforms                             ENCS (3.10.1 onwards); UCSE, UCS-C, CSP5K (3.12.1 onwards)

Caveats

Packet/traffic capture: not supported with DPDK.

SPAN traffic on a PNIC: not supported with DPDK.

After OVS-DPDK is enabled, it cannot be disabled as an individual feature. The only way to disable DPDK is a factory reset.

 

Configuration Steps

OVS-DPDK can be enabled via the CLI or the NFVIS GUI:

GUI: VM Life Cycle -> Networks
CLI: 'system settings dpdk enable'

 

 

Verify the DPDK status via the CLI or the NFVIS GUI:

'show system settings dpdk-status'

[Figure: ENCS5412 CPU allocation with DPDK enabled]
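
For reference, a minimal end-to-end CLI session (an illustrative sketch; prompts and confirmation messages vary by NFVIS release):

nfvis# configure terminal
nfvis(config)# system settings dpdk enable
nfvis(config)# commit
Commit complete.
nfvis(config)# end
nfvis# show system settings dpdk-status
system settings dpdk-status enabled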

 

To unconfigure DPDK, a factory reset must be used.

There are three factory reset options; each one disables DPDK:

factory-default-reset option     DPDK enabled   Networks retained   VNF deployments removed   Registered images removed
all                              No             No                  Yes                       Yes
all-except-images                No             No                  Yes                       No
all-except-images-connectivity   No             Yes                 Yes                       No
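
For example, to wipe the configuration while keeping registered images and connectivity (a sketch based on the options above; the command prompts for confirmation before resetting):

nfvis# factory-default-reset all-except-images-connectivity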

Troubleshooting 

There are three possible DPDK statuses reported by the 'show system settings dpdk-status' command:

  • Enabled - DPDK has been enabled successfully; no further action and no reboot are required.

nfvis# show system settings-native dpdk-status
system settings-native dpdk-status enabled

  • Error - When DPDK is enabled but the system cannot reserve enough CPUs, or not enough contiguous hugepage memory is available, the status resolves to an error.

nfvis# show system settings-native dpdk-status
system settings-native dpdk-status error

  • Enabling - After DPDK is enabled, if the status has not yet resolved to 'enabled' or 'error', it shows 'enabling' while the operation is still in progress.

nfvis# show system settings-native dpdk-status
system settings-native dpdk-status enabling

 

To recover the system from a stuck 'enabling' DPDK status:

1. Collect the tech-support logs and report them to your TAC engineer.

   VM Life Cycle -> Host -> Diagnostics -> Download Tech-support (a tar file with all the log files)

2. Run a factory reset.

3. Enable DPDK again.

*If, after initially enabling DPDK, the status fails to resolve to 'enabled' or 'error' within 5 minutes, collect the tech-support logs and report them to your TAC engineer.

Reference

https://www.cisco.com/c/en/us/td/docs/routers/nfvis/config/3-12-1/nfvis-config-guide-3-12-1/setup-system-config.html

 

Huge Pages overview

  • Memory is generally allocated in chunks of 4KB, called pages.
  • HugePages are memory chunks bigger than the default 4KB pages, such as 2MB.
  • When a CPU looks up a page, it first inspects the TLB cache.
  • The TLB can be accessed extremely fast.
  • The TLB cache has a limited number of entries for pages.
  • By increasing the page size, the TLB cache is used more efficiently. With 256 TLB entries, for example:
  • 4KB * 256 = 1024KB = 1MB of memory covered
  • 2MB * 256 = 512MB = 0.5GB of memory covered
  • The TLB reach increases by a factor of 512.

HugePages always stay in memory and cannot be swapped. Thus, as soon as HugePages are allocated, they are no longer available to the rest of the system, even if they are not currently in use.

HugePages are allocated from the free memory of the system, which can be less than the total memory. However, not all free memory can be allocated as HugePages; there is always some remainder.

 

Troubleshoot: 'show system memory'

Troubleshoot: 'show system mem-info'
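
At the Linux host level, the same hugepage accounting is exposed in /proc/meminfo (these are standard kernel fields; the values shown are placeholders, and host shell access is assumed):

grep -i huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:    3072
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB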

Troubleshoot: VM domain XML

For a VM to use HugePages, one must ensure that the XML extract depicted below is included inside the VM domain XML.

nfvis# support virsh dumpxml ROUTER

<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

<qemu:commandline>

    <qemu:arg value='-numa'/>

    <qemu:arg value='node,memdev=mem'/>

    <qemu:arg value='-mem-prealloc'/>

    <qemu:arg value='-object'/>

    <qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>

  </qemu:commandline>

</domain>

If DPDK is enabled, each newly deployed VM should include this section. If it does not, check /var/log/esc/forever.log for any DPDK-related errors.
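
A quick way to surface such errors, assuming shell access to the log (directly on the host or inside an extracted tech-support bundle), is a case-insensitive grep:

grep -i dpdk /var/log/esc/forever.log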

Troubleshoot: network XML

A vNIC on a network that resides on a service bridge has the following configuration. Here vnic2 is on test-net, which belongs to the service bridge test-br. Notice, though, that the configuration does not contain any reference to test-br.

<interface type='vhostuser'>
      <mac address='52:54:00:76:a4:88'/>
      <source type='unix' path='/run/vhostfd/vnic2' mode='server'/>
      <target dev='vnic2'/>
      <model type='virtio'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
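
Because the source element points at a UNIX socket, a quick sanity check is to confirm that the socket file exists on the host (host shell access assumed; the path comes from the XML above):

ls -l /run/vhostfd/vnic2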

 

Verify the vnic2 configuration on the bridge test-br:

nfvis# support ovs vsctl show          

92171152-1c1d-4d95-935a-f8dba97527b6

…..

    Bridge test-br

        Port test-br

            Interface test-br

                type: internal

        Port "vnic2"

            Interface "vnic2"

                type: dpdkvhostuserclient

                options: {vhost-server-path="/run/vhostfd/vnic2"}

…..

    ovs_version: "2.9.2"
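
Assuming the 'support ovs vsctl' wrapper passes its arguments through to the standard ovs-vsctl utility, the port type can also be queried directly instead of scanning the full 'show' output (a sketch, not a documented NFVIS command form):

nfvis# support ovs vsctl get Interface vnic2 type
dpdkvhostuserclient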

 

Logs - /var/log/esc/forever.log

VM Deployment log

 

2018-12-03T22:20:23.968Z - debug: Check if dpdk is enabled.
2018-12-03T22:20:23.968Z - debug: Executing: /opt/platform-config/dpdk.sh passing: enabled
2018-12-03T22:20:24.218Z - debug: DPDK Status: true
2018-12-03T22:20:24.219Z - debug: Check if network mgmt-net is dpdk backed.
2018-12-03T22:20:24.219Z - debug: Executing: /opt/platform-config/dpdk.sh passing: network,mgmt-net
2018-12-03T22:20:24.445Z - debug: Network: mgmt-net dpdk status: true
2018-12-03T22:20:24.446Z - debug: Check if dpdk is enabled.
2018-12-03T22:20:24.446Z - debug: Executing: /opt/platform-config/dpdk.sh passing: enabled
2018-12-03T22:20:24.694Z - debug: DPDK Status: true
2018-12-03T22:20:24.694Z - debug: Check if network mgmt-net is dpdk backed.
2018-12-03T22:20:24.696Z - debug: Executing: /opt/platform-config/dpdk.sh passing: network,mgmt-net
2018-12-03T22:20:24.900Z - debug: Network: mgmt-net dpdk status: true
2018-12-03T22:20:24.901Z - debug: Creating vnic vnic0 on network mgmt-net
2018-12-03T22:20:24.901Z - debug: Executing: '/opt/platform-config/dpdk.sh' passing: create,mgmt-net,vnic0
2018-12-03T22:20:25.533Z - debug: Vhost FD: /run/vhostfd/vnic0

Logs - /var/log/vhostuser.log

VM Deployment:

2018-12-11 01:01:30,299 DEBUG: Bridge: test-br

2018-12-11 01:01:30,299 DEBUG: Checking if bridge test-br is DPDK backed

2018-12-11 01:02:02,294 DEBUG: Removing port vnic6

2018-12-11 01:02:02,295 DEBUG: Finding bridge for port vnic6

2018-12-11 01:02:02,344 DEBUG: Bridge: test-br

2018-12-11 01:02:02,409 DEBUG: Removed port: vnic6

2018-12-11 01:02:20,575 DEBUG: Removing port vnic9

2018-12-11 01:02:20,575 DEBUG: Finding bridge for port vnic9

2018-12-11 01:02:20,638 DEBUG: Bridge: test-br

2018-12-11 01:02:20,701 DEBUG: Removed port: vnic9
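
To follow a single vNIC through these add/remove events, filter the log with a plain grep (shell access to the log file assumed, for example from an extracted tech-support bundle):

grep vnic6 /var/log/vhostuser.log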
