The control plane and data plane are separated into distinct Linux containers.
The data plane is part of the Universal Virtual Forwarder (UVF) container. It is composed of two parts:
1. A DataPath Agent (DPA) that communicates with a controller in the XR container to obtain the route and feature configuration, which is then programmed into the datapath.
2. The datapath forwarding code, which comprises DPDK (Intel's Data Plane Development Kit), VPP (infrastructure code), and the XRv9k dataplane code.
Punt/inject packets travel between the SPP and the DPA.
Show commands for VPP Graph Nodes
There are many VPP graph nodes under the RX, FWD, TM, TX and DPA blocks. A node processes a frame of packets (up to 256 packets) using batch processing, which yields cache-efficiency gains. The infra (VPP) code keeps running in the background even when there are no packets to process, so the CPUs always run at 100%. This is by design.
VPP graph node show commands take the form show controller dpa graph ...
The running state of the VPP graph nodes is displayed by show controller dpa graph runtime
Per-node counters are displayed with show controller dpa graph counters
Inter-core frame queues are displayed with show controller dpa graph frame-queue
Punt/Inject Packet Path
Use the following CLI to troubleshoot the punt/inject packet path:
show spp sid [table|stats|node-counters|client]
show netio forwarder stats
show lpts pifib hardware entry [stat|brief|policer|context info]
show lpts pifib stat
Packets arriving from the line or inject packets may be traced as follows:
debug controllers dpa packet-trace [line|inject] <# to trace>