Question about the drawbacks of process switching and cache-based switching.



I want to know about its drawbacks.


The Cisco book says:

'Switching components are in place only after the packets enter the router, and they are removed when such packets are not being forwarded by the router. If there are large numbers of packets with unpredictable patterns, switching performance is degraded significantly.'


What exactly does this mean?

As far as I understand it:

Even though the router builds a fast-cache entry, the entry is removed automatically once no more packets are using it. After that, the router needs to process switch again to rebuild the fast cache.


Am I right?


Please correct me. 

Thank you in advance :)


I am also looking for a more precise answer to this question. :)



Hall of Fame Cisco Employee

Hi guys,

Let's first reiterate how process switching works. This will help to understand the drawbacks in fast switching.

When a packet comes into a router, this is roughly what the router would do if it handled the packet in the process switching path:

  1. Look up the destination IP address in the routing table. If the lookup returns only the next hop IP address without the information about the egress interface, perform another lookup in the routing table, this time looking for the next hop IP address found in the previous iteration. Repeat this until the lookup returns information about the egress interface, remembering the last found next hop IP address as well.
  2. After the egress interface for the next hop IP address is identified, use the Layer3-to-Layer2 mapping table for this interface (such as ARP table, NHRP table, IP/DLCI, IP/VPI-VCI) to determine the Layer2 address of the last next hop that was identified in the previous step.
  3. Encapsulate the IP packet into a newly constructed frame using the Layer2 addressing determined in the previous step and send it out the egress interface.

I have omitted the usual steps with verifying the checksum of the ingress IP packet, decrementing the TTL, recalculating the checksum etc. - these need to be done anyway.
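The recursive lookup in steps 1 and 2 can be sketched in a few lines of Python. This is only a toy model; the prefixes, interface names and MAC addresses are all made up, and the longest-prefix match is reduced to a string check:

```python
# Illustrative routing table: prefix -> (next hop IP, egress interface).
# A None interface means the entry is recursive and needs another lookup.
ROUTING_TABLE = {
    "10.0.0.0/8": ("192.168.1.2", None),      # recursive: next hop only
    "192.168.1.0/24": (None, "Ethernet0"),    # connected: egress interface known
}

ARP_TABLE = {  # Layer3-to-Layer2 mapping per egress interface
    ("Ethernet0", "192.168.1.2"): "aa:bb:cc:dd:ee:02",
}

def longest_match(ip):
    """Toy stand-in for a longest-prefix match on the routing table."""
    if ip.startswith("10."):
        return ROUTING_TABLE["10.0.0.0/8"]
    if ip.startswith("192.168.1."):
        return ROUTING_TABLE["192.168.1.0/24"]
    return None

def process_switch(dest_ip):
    """Steps 1-2: resolve the egress interface and the next hop's L2 address."""
    next_hop = dest_ip          # for a connected route, the next hop is the destination
    lookup_ip = dest_ip
    while True:
        entry = longest_match(lookup_ip)
        if entry is None:
            raise LookupError("no route to " + dest_ip)
        hop, interface = entry
        if hop is not None:
            next_hop = hop      # remember the last next hop found
        if interface is not None:
            # Step 2: consult the L3-to-L2 table for the egress interface.
            return interface, ARP_TABLE[(interface, next_hop)]
        lookup_ip = next_hop    # no interface yet: look up the next hop itself

print(process_switch("10.1.2.3"))  # ('Ethernet0', 'aa:bb:cc:dd:ee:02')
```

Note how a single packet to 10.1.2.3 already needs two passes through the routing table before the frame can be built.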

While this process seems to be relatively straightforward, it is at the same time quite inefficient.

  • No information about the lookup results is ever stored. For one million packets going to the same destination, this process repeats itself one million times, each time producing the very same result but never storing it for further reuse.
  • Recursive lookups in the routing table are expensive by themselves, and if the routing table has thousands or even more entries, going over them even once is an expensive operation. Multiply this by the number of recursive lookups necessary for a single packet, and multiply that again by the number of packets going to the same destination, and you'll see soon why this doesn't scale.

Fast switching, also neatly called route cache, is a concept that can be summarized as "Route once, forward many times". Whenever a packet for a new, not-yet-seen destination comes in, it is routed by the process switching exactly as described before. However, the resulting information from these daunting lookups (egress interface, Layer2 address of the next hop) will then be stored in a tree-based route cache organized for rapid lookups. The next time a packet for the same destination comes in, the information for its routing/forwarding is already readily available in the route cache with a single non-recursive lookup - just use it to encapsulate the packet into a frame and forward it.
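The "Route once, forward many times" idea can be sketched as a cache wrapped around the slow path (a toy model; the slow-path result here is hard-coded rather than computed):

```python
route_cache = {}          # destination IP -> (egress interface, next-hop MAC)
process_switch_count = 0  # packets that had to take the slow path

def slow_path(dest_ip):
    """Stand-in for the full recursive routing-table + ARP lookup."""
    global process_switch_count
    process_switch_count += 1
    return ("Ethernet0", "aa:bb:cc:dd:ee:02")   # made-up lookup result

def fast_switch(dest_ip):
    if dest_ip not in route_cache:       # cache miss: process switch once...
        route_cache[dest_ip] = slow_path(dest_ip)
    return route_cache[dest_ip]          # ...then every hit is a single lookup

for _ in range(1_000_000):               # a million packets to one destination
    fast_switch("10.1.2.3")

print(process_switch_count)  # 1 -- routed once, forwarded a million times
```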

While this seems to be a perfect idea, there is - apart from other disadvantages related to housekeeping of the cache - still a major drawback: If your router suddenly experiences an inrush of millions of packets to yet-unseen destinations, they will all need to be process switched because the route cache does not yet contain information about them. This means that all advantages of the fast switching are lost, and in fact, the incurred load of process switching these gazillions of packets to yet-unseen destinations can truly bring the router down to its knees. This can happen easily when, for example, a backbone router comes online, routing protocol converges and the router suddenly becomes a transit router for many destinations.

This is what, I believe, the book tries to explain.

Best regards,

Hall of Fame Guru

Just to add a side note to Peter's explanation and to expand on what I assume he means when he says "apart from other disadvantages related to housekeeping of the cache"

Cache entries are invalidated partly to conserve memory resources on the router, but also precisely because they are cached entries.

As Peter has described the router uses two tables to forward the packet, the routing table and the L2 adjacency table for the L2 header rewrite. When the first packet is process switched an entry is then made into the fast cache. But it is exactly that, a cached entry based on the contents of those tables at the time the packet was process switched.

The problem now is that these tables can change and if they do there is no way of communicating that to the cached entry in the fast cache.

So in addition to invalidating fast cache entries to preserve memory they are also invalidated to ensure consistency with the routers tables.
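That consistency problem can be illustrated with a toy example (all addresses invented): the cache holds a snapshot of the L2 rewrite as it was when the first packet was process switched, and a later ARP-table change is invisible to it until the entry is invalidated.

```python
arp_table = {"192.168.1.2": "aa:bb:cc:dd:ee:02"}   # live L2 adjacency info
route_cache = {}                                   # dest -> cached L2 rewrite

def fast_switch(dest_ip, next_hop):
    if dest_ip not in route_cache:                 # snapshot taken on first packet
        route_cache[dest_ip] = arp_table[next_hop]
    return route_cache[dest_ip]

fast_switch("10.1.2.3", "192.168.1.2")             # caches ...ee:02

# The next hop's NIC is swapped out; its MAC changes in the ARP table...
arp_table["192.168.1.2"] = "aa:bb:cc:dd:ee:99"

stale = fast_switch("10.1.2.3", "192.168.1.2")     # ...but the cache still
print(stale)                                       # answers aa:bb:cc:dd:ee:02

del route_cache["10.1.2.3"]                        # invalidation forces a fresh,
fresh = fast_switch("10.1.2.3", "192.168.1.2")     # consistent lookup
print(fresh)                                       # aa:bb:cc:dd:ee:99
```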

Hope I haven't just confused the issue or misrepresented what Peter was alluding to.



Jon,

I got 2 questions:

1) How fast/frequently are the fast cache entries invalidated?

2) How does CEF ensure the consistency of its tables? Does it also use the same invalidation technique?


Hall of Fame Guru

According to the docs, 1/20 of the entries are invalidated per minute, unless the router is suffering in terms of memory, in which case this is increased to 1/5 every minute.
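A rough sketch of such a periodic ager (the 1/20 and 1/5 fractions come from the docs quoted above; the cache contents and function name are invented):

```python
import random

def invalidate_fraction(cache, low_memory=False):
    """Randomly invalidate 1/20 of the entries (1/5 when memory is tight)."""
    denominator = 5 if low_memory else 20
    victims = random.sample(list(cache), len(cache) // denominator)
    for dest in victims:
        del cache[dest]   # the next packet to dest will be process switched

cache = {f"10.0.0.{i}": ("Ethernet0", "aa:bb:cc:dd:ee:02") for i in range(100)}
invalidate_fraction(cache)                    # normal minute: 5 of 100 dropped
print(len(cache))                             # 95
invalidate_fraction(cache, low_memory=True)   # memory pressure: 19 of 95 dropped
print(len(cache))                             # 76
```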

CEF doesn't have to do this because CEF does not use cached entries.

It maintains two separate tables, a forwarding table and an adjacency table, and it does not build these on demand; it builds them before any packets are forwarded, so there is no need to process switch any packets.

The forwarding table (FIB) is a mirror image of the routing table (RIB) and if the RIB is updated so is the FIB.

The two tables are built independently of each other and there is no need to store the resulting lookup in a separate cache.
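The contrast with the route cache can be sketched like this (a toy model, not IOS code): the FIB is rebuilt whenever the RIB changes, before any packet needs it, so a forwarding lookup never misses and never falls back to a slow path.

```python
class CefRouter:
    """Toy CEF model: the FIB is kept as a mirror of the RIB at all times."""

    def __init__(self):
        self.rib = {}        # prefix -> next hop (maintained by routing protocols)
        self.adjacency = {}  # next hop -> L2 rewrite string
        self.fib = {}        # prefix -> (next hop, L2 rewrite), always current

    def _rebuild_fib(self):
        # Event-driven: runs when the RIB changes, not when a packet arrives.
        self.fib = {prefix: (nh, self.adjacency.get(nh))
                    for prefix, nh in self.rib.items()}

    def rib_update(self, prefix, next_hop):
        self.rib[prefix] = next_hop
        self._rebuild_fib()          # the FIB never lags behind the RIB

    def forward(self, prefix):
        return self.fib[prefix]      # single lookup, no slow path, no cache miss

r = CefRouter()
r.adjacency["192.168.1.2"] = "aa:bb:cc:dd:ee:02"
r.rib_update("10.0.0.0/8", "192.168.1.2")
print(r.forward("10.0.0.0/8"))   # ('192.168.1.2', 'aa:bb:cc:dd:ee:02')
```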



Jon,

I got the point now.

So the CEF FIB and adjacency tables are more dynamic in nature than the fast cache.


Hall of Fame Guru

In the sense that CEF does not cache entries that may not reflect the true state of the router's tables, yes, that is one way to put it.

