Network Performance Monitoring with Catalyst 9300 Application Hosting

Cisco Employee

Hosting Native Docker Container on C9300


Powered by an x86 CPU, the application hosting solution on Cisco® Catalyst® 9000 series switches provides the intelligence required at the edge. This gives administrators a platform for leveraging their own tools and utilities, such as a security agent, an Internet of Things (IoT) sensor, or a traffic monitoring agent.

Application hosting on Cisco Catalyst 9000 family switches opens up new opportunities for innovation by converging network connectivity with a distributed application runtime environment, including hosting applications developed by partners and developers.

Cisco IOS XE 16.12.1 introduced native Docker container support on Catalyst 9300 series switches. With the Cisco IOS XE 17.1.1 release, the C9404 and C9407 models also support Docker containers. This enables users to build and bring their own applications without additional packaging. Developers don't have to reinvent the wheel by rewriting their applications every time the infrastructure changes: once packaged within Docker, an application will run on any infrastructure that supports Docker containers.

In this blog, you will see how to use a native Docker image (iPerf) from Docker Hub to measure network performance by hosting it on a Cisco Catalyst 9300 switch.


Step-by-step installation and configuration


1. Download the iPerf Docker image to the local laptop.

Download the latest iPerf image from Docker Hub to the laptop.



Docker Engine must be installed on the laptop so it can pull the iPerf image from Docker Hub.

MyPC$ docker pull mlabbe/iperf3


Save the downloaded iPerf docker image as a tar archive:

MyPC$ docker save mlabbe/iperf3 > iPerf.tar


2. Log in to the Catalyst 9300 and copy the iPerf.tar archive to the flash: drive.

copy usbflash0:iPerf.tar flash:

3. Configure network connectivity to the iPerf Docker container.



a. Create the VLAN and VLAN interface:

      conf t
      interface Vlan123
      ip address


b. Configure the AppGigabitEthernet1/0/1 interface:

       interface AppGigabitEthernet1/0/1
       switchport mode trunk

The above configuration allows VLAN 123 on the AppGigabitEthernet port.
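If you want to limit the trunk to just the app VLAN, the standard allowed-VLAN command applies to the AppGigabitEthernet port as well (a hardening sketch using VLAN 123 from above):

```
interface AppGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 123
```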


c. Map vNIC interface eth0 of the iPerf container to VLAN 123 on the AppGigabitEthernet1/0/1 interface:

       conf t
       app-hosting appid iPerf
        app-vnic AppGigabitEthernet trunk
         vlan 123 guest-interface 0
          guest-ipaddress netmask
        app-default-gateway guest-interface 0
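Putting step 3 together, a complete example might look like the following. The addresses here (10.10.123.1 for the SVI and gateway, 10.10.123.2/24 for the container) are hypothetical placeholders; substitute your own:

```
conf t
 interface Vlan123
  ip address 10.10.123.1 255.255.255.0
 app-hosting appid iPerf
  app-vnic AppGigabitEthernet trunk
   vlan 123 guest-interface 0
    guest-ipaddress 10.10.123.2 netmask 255.255.255.0
  app-default-gateway 10.10.123.1 guest-interface 0
 end
```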


4. Enable and verify App Hosting.


a. Verify that the USB SSD-120G flash storage is used.

     dir usbflash1:
     Directory of usbflash1:/
    11 drwx           16384    Mar 25 2019 22:32:36 +00:00    lost+found
    118014062592 bytes total (105824313344 bytes free)
Note: The SSD-120G is shown as usbflash1: in the IOS-XE CLI. Internal flash and the front-panel USB port (usbflash0:) are not supported for application hosting.


b. Configure iox for App Hosting:

 conf t
  iox
 end


show iox-service
IOx Infrastructure Summary:
 IOx service (CAF)    : Running
 IOx service (HA)     : Running
 IOx service (IOxman) : Running
 Libvirtd             : Running
 Dockerd              : Running


5. Install, activate, and run the iPerf Docker application on the Cat9k.


a. Deploy the iPerf Docker application:

   app-hosting install appid iPerf package flash:iPerf.tar
   Installing package 'flash:iPerf.tar' for iPerf. Use 'show app-hosting list' for progress.

    show app-hosting list
    App id                                   State
    iPerf                                  DEPLOYED

 b. Activate the iPerf Docker application:

    app-hosting activate appid iPerf
    iPerf activated successfully
    Current state is: ACTIVATED

c. Start the iPerf Docker application:

    app-hosting start appid iPerf
    iPerf started successfully
    Current state is: RUNNING

d. Verify that the iPerf Docker application is running:

    show app-hosting list
    App id                                   State
     iPerf                                  RUNNING
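To double-check the server process inside the container, you can attach to its console session. The / $ prompt and BusyBox ps shown here are what the mlabbe/iperf3 image happens to provide, so treat this as a sketch:

```
app-hosting connect appid iPerf session
/ $ ps | grep iperf3
```

Type exit to leave the container session and return to the switch CLI.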


6. Working with the app.

a. Check app details.

   show app-hosting detail appid iperf
   App id                 : iperf
   Owner                  : iox
   State                  : RUNNING
   Type                   : docker
   Name                   : mlabbe/iperf3
   Version                : latest
   Description            :
   Path                   : usbflash0:iperf3_sai.tar
   Activated profile name : custom

   Resource reservation
     Memory               : 2048 MB
     Disk                 : 4000 MB
     CPU                  : 7400 units
     VCPU                 : 1 units

   Attached devices
     Type              Name                Alias
     serial/shell      iox_console_shell   serial0
     serial/aux        iox_console_aux     serial1
     serial/syslog     iox_syslog          serial2
     serial/trace      iox_trace           serial3

   Network interfaces
     MAC address          : 52:54:dd:50:b5:ce
     IPv4 address         :
     Network name         : mgmt-bridge193

   Run-time information
     Command              :
     Entry-point          : iperf3 -s
     Run options in use   :

   Application health information
     Status               : 0
     Last probe error     :
     Last probe output    :

b. Check app utilization.

   show app-hosting utilization appid iPerf
   Application: iPerf
   CPU Utilization:
     CPU Allocation: 7400 units
     CPU Used: 1.49 %
   Memory Utilization:
     Memory Allocation: 2048 MB
     Memory Used: 893 KB
   Disk Utilization:
     Disk Allocation: 4000 MB
     Disk Used: 0.00 MB
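The reservations above come from the "custom" profile shown in the detail output. If the defaults don't fit, the profile can be tuned under the app-hosting configuration before (re)activation. This is a sketch; the exact sub-commands and units can vary by IOS-XE release, and the values below simply mirror the reservation shown above:

```
conf t
 app-hosting appid iPerf
  app-resource profile custom
   cpu 7400
   memory 2048
   persist-disk 4000
 end
```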


7. Monitor performance between the laptop and the C9300.


MyPC$ iperf3 -c

Connecting to host, port 5201

[  5] local port 59408 connected to port 5201

[ ID] Interval           Transfer     Bitrate

[  5]   0.00-1.00   sec  7.51 MBytes  63.0 Mbits/sec                 

[  5]   1.00-2.00   sec  8.24 MBytes  69.2 Mbits/sec                 

[  5]   2.00-3.00   sec  10.0 MBytes  84.2 Mbits/sec                 

[  5]   3.00-4.00   sec  9.52 MBytes  79.9 Mbits/sec                 

[  5]   4.00-5.00   sec  9.36 MBytes  78.5 Mbits/sec                 

[  5]   5.00-6.00   sec  10.8 MBytes  90.8 Mbits/sec                 

[  5]   6.00-7.00   sec  10.1 MBytes  84.9 Mbits/sec                 

[  5]   7.00-8.00   sec  9.62 MBytes  80.7 Mbits/sec                 

[  5]   8.00-9.00   sec  10.9 MBytes  91.2 Mbits/sec                 

[  5]   9.00-10.00  sec  4.51 MBytes  37.7 Mbits/sec                 

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval           Transfer     Bitrate

[  5]   0.00-10.00  sec  90.6 MBytes  76.0 Mbits/sec                  sender

[  5]   0.00-10.01  sec  90.1 MBytes  75.5 Mbits/sec                  receiver
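The run above measures laptop-to-switch throughput, since an iperf3 client sends by default. Standard iperf3 flags let you vary the test without touching the container; the server address placeholder below is hypothetical, substitute the container's IP:

```
MyPC$ iperf3 -c <server-address> -R        # reverse: server sends, laptop receives
MyPC$ iperf3 -c <server-address> -P 4 -t 10  # four parallel streams for 10 seconds
```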


Liqun Yang
Cisco Employee

Very interesting. I assume this will work on other Cat9k models, such as the 9500, as well, right? Is 16.12.1 the first release to support this feature?




Cisco Employee
Hi Leon,

9500 support is part of the roadmap. Yes, native Docker support starts with the IOS XE 16.12.1 release.

Hi, thanks for the posting. I believe some of the commands in your post have now been deprecated, and your example will need an update!

I can't exactly work out why these have changed, as I was running the version...


Gibraltar-16.12.1 with release dated: Aug 1st 2019 


My examples below will also probably be outdated before you know it too. :)


I achieved my testing with two 9300s back to back, using the data ports on access VLANs with the AppGi1/0/1 interfaces, but I only got it working using DHCP; I just couldn't seem to get the configuration to work with static addressing.


Note: all the other groundwork that you've laid down above was the same process.

!! same config on both switches
interface GigabitEthernet1/0/1
 description * DHCP SERVER INTERFACE *
 switchport access vlan 4000
 switchport mode access
interface GigabitEthernet1/0/2
 description * BACK TO BACK 9300 *
 switchport access vlan 4000
 switchport mode access
!
interface AppGigabitEthernet1/0/1
 description * APP INTERFACE *
 switchport access vlan 4000
 switchport mode access
!
interface Vlan4000
 ip address dhcp
!
app-hosting appid DEMO_IPERF
 app-vnic AppGigabitEthernet access guest-interface 0


from the CLI

! launch a session into the container


app-hosting connect appid DEMO_IPERF session



/ $ iperf3 -c
Connecting to host, port 5201
[ 5] local port 54396 connected to port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 116 MBytes 970 Mbits/sec 0 1.58 MBytes
[ 5] 1.00-2.00 sec 111 MBytes 933 Mbits/sec 0 1.58 MBytes
[ 5] 2.00-3.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
[ 5] 3.00-4.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
[ 5] 4.00-5.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
[ 5] 5.00-6.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
[ 5] 6.00-7.00 sec 111 MBytes 933 Mbits/sec 0 1.58 MBytes
[ 5] 7.00-8.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
[ 5] 8.00-9.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
[ 5] 9.00-10.00 sec 112 MBytes 944 Mbits/sec 0 1.58 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.10 GBytes 944 Mbits/sec 0 sender
[ 5] 0.00-10.02 sec 1.10 GBytes 939 Mbits/sec receiver









The other example I got working was as follows:


This uses the management interfaces on the back of both 9300 switches.


(9300) Gi0/0 --- (generic switch) --- Gi0/0 (9300)



The iperf3 container's default entry point runs it as a server on the switches, so you only need to log on to one switch with app-hosting connect and run  / $ iperf3 -c


sho app-hosting detail

! omit 

! omit 

Run-time information
Command :
Entry-point : iperf3 -s





interface GigabitEthernet0/0
 vrf forwarding Mgmt-vrf
 ip address
 negotiation auto
interface AppGigabitEthernet1/0/1

app-hosting appid DEMO_IPERF
 app-vnic management guest-interface 0
  guest-ipaddress netmask



interface GigabitEthernet0/0
 vrf forwarding Mgmt-vrf
 ip address
 negotiation auto
!
interface AppGigabitEthernet1/0/1
!
app-hosting appid DEMO_IPERF
 app-vnic management guest-interface 0
  guest-ipaddress netmask




Is there a way to run iPerf from a Cat 4500? I assume not, as this appears to be all new with the 9000 series.


Thank you 


@JohnKaftan2984 Just to answer your question: no, the Catalyst 4500 series switches don't use IOS-XE like the Catalyst 9K switches do. IOS-XE (>= Denali 16.1.1) uses an x86 architecture and runs IOS as a daemon (IOSd). It also has the ability to run third-party applications in a container using LXC (Linux Containers). That is a very watered-down explanation. I found this FAQ that Cisco released, which will help answer some questions about Cisco IOS-XE programmability:

Service containers are applications that can be hosted directly
on Cisco IOS® XE routing platforms. The apps use the Linux
aspects of the IOS XE operating system to host both Linux
Virtual Containers (LXC) and Kernel virtual machines (KVM) on
Cisco 4000 Series Integrated Services Routers (ISR), Cisco
ASR 1000 Series Aggregation Services Routers, and Cisco
Cloud Services Routers 1000V.


For more info, I would check out the Cisco Live session on the Catalyst 9300 switching architecture, BRKARC-3863; the slide deck is available there.


I think the answer is yes, but is app hosting available on the 9300L series? Thank you

Cisco Employee
Yes, 9300-L is supported.

My switch complains that a bunch of files are missing in the iperf3.tar.

Any idea? 


package.yaml and many more.

I guess the Docker image, or the archive created with the docker save command, is missing some required files?


Thanks for the help.