

IOS-XE application hosting and service container throughput limitation

Does anyone know if there are throughput limitations for the ISR 4K and Catalyst 8000 routers when hosting KVM or Docker/LXC applications?


I have tried building a few simple KVM VMs (just a minimal Ubuntu OS) and Docker containers with iperf installed. If the containers/VMs are connected to the same VirtualPortGroup interface, they can achieve very good throughput numbers. If I run an iperf test to a container/VM on another VirtualPortGroup interface, or out a physical interface on the router, the throughput plummets and is nowhere near line rate. I see the same results if I install iperf in GuestShell.
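For reference, the test topology was roughly along these lines (VirtualPortGroup numbers, app names, and addresses here are illustrative, not my exact config):

```
! Two VirtualPortGroups; iperf runs between apps attached to different groups
interface VirtualPortGroup0
 ip address 192.168.10.1 255.255.255.0
!
interface VirtualPortGroup1
 ip address 192.168.20.1 255.255.255.0
!
app-hosting appid iperf1
 app-vnic gateway0 virtualportgroup 0 guest-interface 0
  guest-ipaddress 192.168.10.2 netmask 255.255.255.0
 app-default-gateway 192.168.10.1 guest-interface 0
```

Traffic between two apps on VirtualPortGroup0 stays on the same bridge; traffic from VirtualPortGroup0 to VirtualPortGroup1 (or out a physical port) has to be routed, which is where the numbers drop.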


I have tried this on the following routers - ISR 4331, ISR 4451, Catalyst 8300, Catalyst 8500, ASR 1001X with essentially the same results. All were running IOS-XE 17.3.2.


1) Trying to find out if this is an intended limitation or a bug.


2) Does the IOS-XE hypervisor and container runtime not support SR-IOV?




I have been intermittently testing iperf via GuestShell on ISR4321s and have noticed reduced throughput when the physical interfaces are not configured with negotiation auto.
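For anyone repeating the test: GuestShell on the ISR4K is CentOS-based, so iperf3 can be installed and run roughly like this (the server address is just an example):

```
Router# guestshell enable
Router# guestshell run bash
[guestshell@guestshell ~]$ sudo yum install -y iperf3
[guestshell@guestshell ~]$ iperf3 -c 192.168.10.1 -t 30
```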


Also, please keep in mind that the throughput levels of the ISR4Ks are controlled by licensing. For example, ISR4321 aggregate throughput without a license is 50 Mbps, while it increases to 100 Mbps with licensing. Note that these are aggregate throughput figures; real-world results are closer to 75-80% of the aggregate.
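As a quick sanity check of what to expect at a given licensed level, applying the rough 75-80% rule of thumb above (the function name and 0.75 default are just for illustration):

```python
def effective_throughput_mbps(licensed_mbps: float, factor: float = 0.75) -> float:
    """Estimate real-world throughput from the licensed aggregate level,
    using the ~75-80% rule of thumb (0.75 here as a conservative default)."""
    return licensed_mbps * factor

# ISR4321 examples: 50 Mbps base, 100 Mbps with the throughput license
print(effective_throughput_mbps(50))   # -> 37.5
print(effective_throughput_mbps(100))  # -> 75.0
```

So even with the 100 Mbps license, seeing only ~75 Mbps in an iperf test is expected behavior on that platform.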


Thanks for the reply. I am more concerned about the Catalyst 8300, 8500, and ASR 1K performance, which is well below line rate even for a 1 Gbps interface. I was hoping to use an on-box virtual performance agent to validate SLAs rather than a separate hardware performance probe. Latency/jitter/loss validation seems possible, either through a virtual agent or through IP SLA functions such as TWAMP; throughput, however, looks like an issue due to the Docker/KVM networking implementation. Hoping someone could shed some light on whether this is intentional, and if it's going to be fixed in a future release.
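For the latency/jitter/loss piece, an on-box IP SLA probe along these lines is what I had in mind (responder address, port, and operation number are examples):

```
ip sla 10
 udp-jitter 198.51.100.10 16384 num-packets 100 interval 20
 frequency 60
ip sla schedule 10 life forever start-time now
!
! On the far-end router:
ip sla responder
```

That part works fine today; it is only the throughput side that runs into the container networking limitation.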

According to TAC, this is an architecture limitation on the Catalyst 8300/8500. I submitted a feature request. Still waiting to hear whether it's a potential software fix and if there is an implementation timeline.


I am surprised at this limitation, since I would think that the BU would want to support "full" throughput (i.e. 1 Gbps or 10 Gbps with a 1500-byte payload) to a KVM/LXC/Docker instance running one of the applications that Cisco sells for the Catalyst 8000, such as the ThousandEyes agent or WAAS, not to mention third-party applications sold by Cisco partners.
