The control plane in a networking device is the intelligent logic that decides what the device does; routing protocols such as OSPF, BGP, or EIGRP are the best examples.
Routes are installed into the RIB and FIB based on control plane decisions; the data plane then comes into the picture to forward transit traffic, typically using ASICs.
The management plane, on the other hand, is how configuration and monitoring of the device are done. SNMP is a management plane protocol used to monitor device status.
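The split described above can be pictured with a toy model. This is purely illustrative, assuming made-up class and method names (not any vendor API): one function plays the control plane role (deciding what goes in the table), one plays the data plane role (forwarding using that table), and one plays the management plane role (reporting status).

```python
# Toy model of the three planes in a router/switch.
# All names here are illustrative, not a real device API.

class Device:
    def __init__(self):
        self.rib = {}  # prefix -> next hop, populated by the control plane

    # Control plane: protocol logic (OSPF/BGP/EIGRP on real devices)
    # decides which routes go into the RIB.
    def control_plane_learn(self, prefix, next_hop):
        self.rib[prefix] = next_hop

    # Data plane: forwards transit traffic using the table the control
    # plane built (done by ASICs on hardware devices).
    def data_plane_forward(self, prefix):
        return self.rib.get(prefix, "drop")

    # Management plane: monitoring/configuration access (e.g. SNMP, SSH).
    def management_plane_status(self):
        return {"routes": len(self.rib)}

dev = Device()
dev.control_plane_learn("10.0.0.0/8", "192.0.2.1")
print(dev.data_plane_forward("10.0.0.0/8"))   # 192.0.2.1
print(dev.management_plane_status())          # {'routes': 1}
```

The point of the separation is visible even in this sketch: the forwarding function never runs protocol logic, it only consults the table the control plane handed it.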
Hope it helps.
Rate if it helps.
To add to Ganesh.
Using an L3 switch as an example, the control plane is responsible for the L2 protocols such as CDP/STP/VTP etc. and the L3 routing protocols, i.e. establishing neighbors, exchanging routes, and building the routing table (RIB).
It is also responsible for building an adjacency table with ARP and, together with the RIB, creating a forwarding table (FIB) that is used for CEF.
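The RIB-plus-adjacency idea above can be sketched in a few lines. This is a simplified, hypothetical model (the prefixes, next hops, and MACs are invented): the RIB maps prefixes to next-hop IPs, the ARP adjacency table maps next-hop IPs to MACs, and combining the two gives a CEF-style FIB entry where the L2 rewrite is already resolved, so forwarding is a single longest-prefix-match lookup.

```python
import ipaddress

# Simplified RIB: prefix -> next-hop IP (built by the routing protocols).
rib = {
    "10.0.0.0/8":  "192.168.1.1",
    "10.1.0.0/16": "192.168.1.2",
    "0.0.0.0/0":   "192.168.1.254",
}

# Simplified ARP adjacency table: next-hop IP -> next-hop MAC.
arp_adjacency = {
    "192.168.1.1":   "aa:bb:cc:00:00:01",
    "192.168.1.2":   "aa:bb:cc:00:00:02",
    "192.168.1.254": "aa:bb:cc:00:00:fe",
}

# FIB: prefix -> (next hop, rewrite MAC), precomputed from RIB + adjacency
# so the forwarding path never has to consult ARP or the routing protocols.
fib = {
    ipaddress.ip_network(p): (nh, arp_adjacency[nh])
    for p, nh in rib.items()
}

def fib_lookup(dst):
    """Longest-prefix match, as the forwarding plane does in hardware."""
    addr = ipaddress.ip_address(dst)
    matches = [n for n in fib if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)
    return fib[best]

print(fib_lookup("10.1.2.3"))  # ('192.168.1.2', 'aa:bb:cc:00:00:02')
print(fib_lookup("8.8.8.8"))   # ('192.168.1.254', 'aa:bb:cc:00:00:fe')
```

Note how 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16 but takes the /16, which is the longest-prefix-match behavior CEF implements in hardware.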
The data or forwarding plane is responsible for the actual forwarding of packets using the FIB.
On an L3 switch the control plane is handled in software by the general CPU and the data plane is handled in hardware using dedicated ASICs (Application Specific Integrated Circuits). Most packets are forwarded in hardware but there are times when packets have to be handled in software.
The more packets that are handled in software, the worse the performance you get from the switch.
With routers you have software only routers which handle both planes in software and hardware based routers which separate the planes between software and hardware much as L3 switches do.
Thanks for your reply. So if there is high CPU utilization on the device, which affects the control plane, will it also directly affect the data plane, so that clients may experience intermittent connectivity?
For software only devices pretty much yes because everything is handled by the CPU.
For hardware based devices it is more complicated.
For example, people sometimes test latency in their network by pinging from an end device to the switch's IP address (or one of them if it is L3).
This is not a good indicator of latency because any traffic sent to the switch itself is processed in software.
To get a true indicator of latency you need to ping through the switch, i.e. from one end client to another, because that traffic is handled by the ASICs and not the main CPU.
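The effect described here can be shown with a toy latency model. The numbers and the linear CPU-load scaling are invented purely for illustration, not measured from any real device: transit traffic takes the ASIC path whose latency is independent of CPU load, while traffic addressed to the switch itself is punted to the CPU, so its apparent latency climbs as CPU load climbs.

```python
# Toy model: why pinging *to* a hardware switch can look slow while
# pinging *through* it stays fast. All numbers are made up for illustration.

ASIC_LATENCY_MS = 0.05  # hardware forwarding path, independent of CPU load
CPU_BASE_MS = 0.5       # software (process-switched) reply path baseline

def ping_through_switch(cpu_load):
    # Transit traffic is forwarded by the ASICs; CPU load is irrelevant.
    return ASIC_LATENCY_MS

def ping_to_switch(cpu_load):
    # Traffic destined to the switch itself is handled by the main CPU,
    # so its latency grows with CPU load (simplified linear model).
    return CPU_BASE_MS * (1 + 10 * cpu_load)

for load in (0.1, 0.9):
    print(f"CPU {load:.0%}: to switch {ping_to_switch(load):.2f} ms, "
          f"through switch {ping_through_switch(load):.2f} ms")
```

Under this model a CPU burst makes pings to the switch look terrible while end-to-end latency through the ASICs is unchanged, which is exactly why pinging the switch itself is a poor latency test.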
So you may just get a short burst of CPU activity and, if you happen to be pinging the switch at the same time, the latency looks really bad when in reality it isn't bad at all.
That said though, if you are continually running at a very high CPU level on a hardware based switch that is usually a sign of something wrong with the switch itself, either a bug or a configuration issue causing a lot of packets to be sent to the main CPU.
And then you can start seeing performance issues.
If the CPU gets too heavily loaded it can start missing control plane packets, which in worst-case scenarios could lead to losing neighborships with other L3 devices, for example.
So it really depends with hardware based devices.
Thanks for your wonderful explanation. So what are the possible reasons that the client may experience performance/latency issues and what should be the approach to diagnose/troubleshoot those areas?
Difficult to give a complete answer because it depends on so many things.
General performance and/or latency issues could simply be congested links, in which case QoS or upgrading the link speed are viable options.
CPU usage on software routers comes down to how much traffic and how many features you have enabled, because all features use CPU and memory and everything is handled by the main CPU.
So with software based devices it's very important to factor that in.
Hardware based devices can be trickier because as I say CPU usage does not necessarily reflect how well the device is performing in the forwarding of traffic.
As a general rule, and excluding things like STP loops etc. which will obviously affect performance, if your hardware device is running at very high CPU and you are seeing performance issues, you have to work out what is using the CPU.
More often than not it is packets being process switched, i.e. forwarded in software rather than hardware, which can severely degrade throughput.
The causes of packets being process switched are many, and Cisco has a lot of good troubleshooting documents that go into detail for hardware based switches.
I guess some care has to be taken with these terms as they can be somewhat idealized. For example, control plane traffic, management traffic, and data plane traffic will often share the same physical medium between devices.
Best practice design often dictates that management plane traffic should be separated from user-generated data plane traffic. An example of this segregation would be using a management VRF and potentially a different infrastructure for management, so that a data plane problem doesn't break your management plane.
Control plane traffic clearly needs to run alongside data plane traffic, as this is how adjacencies are maintained between devices.