I want to know the drawbacks of fast switching.
The Cisco book says:
'Switching components are in place only after the packets enter the router, and they are removed when such packets are not being forwarded by the router. If there are large numbers of packets with unpredictable patterns, switching performance is degraded significantly.'
What exactly does this mean?
As far as I understand, even though the router (switch) builds a fast cache entry, the entry is removed automatically when no more packets are using it, and after that the router (switch) needs to do process switching again to rebuild the fast cache.
Am I right?
Please correct me.
Thank you in advance : )
Let's first reiterate how process switching works. This will help to understand the drawbacks in fast switching.
When a packet comes into a router, this is roughly what the router would do if it handled the packet in the process switching path:
1. Look up the destination address in the routing table, finding the longest matching prefix.
2. If the next hop is not directly connected, resolve it recursively until reaching a directly connected network and an egress interface.
3. Look up the Layer 2 address of the next hop (for example, in the ARP table on Ethernet).
4. Rewrite the Layer 2 header, encapsulating the packet into a frame, and forward it out the egress interface.
I have omitted the usual steps of verifying the checksum of the ingress IP packet, decrementing the TTL, recalculating the checksum, etc. - these need to be done anyway.
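To make the cost of this slow path concrete, here is a minimal Python sketch of a process-switching lookup. The table contents and the `longest_prefix_match`/`process_switch` helpers are invented for illustration; a real router does this with optimized data structures, not dictionaries.

```python
import ipaddress

# Hypothetical tables -- entries invented for illustration only.
routing_table = {
    "10.0.0.0/8": {"next_hop": "192.168.1.2", "interface": None},  # recursive entry
    "192.168.1.0/24": {"next_hop": None, "interface": "Gi0/1"},    # connected network
}
arp_table = {"192.168.1.2": "aa:bb:cc:00:11:22"}

def longest_prefix_match(dst):
    """Walk the whole table and keep the most specific matching prefix."""
    best, best_len = None, -1
    for prefix, entry in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net and net.prefixlen > best_len:
            best, best_len = entry, net.prefixlen
    return best

def process_switch(dst):
    """Resolve a destination the slow way: possibly recursive routing
    table lookups, then an ARP lookup for the Layer 2 rewrite."""
    entry = longest_prefix_match(dst)
    # Recursive resolution: chase the next hop until we reach a
    # directly connected interface.
    while entry is not None and entry["interface"] is None:
        dst = entry["next_hop"]
        entry = longest_prefix_match(dst)
    if entry is None:
        return None, None
    return entry["interface"], arp_table[dst]

process_switch("10.1.2.3")  # -> ("Gi0/1", "aa:bb:cc:00:11:22")
```

Note that every single packet pays for the full (possibly recursive) lookup chain, which is exactly the inefficiency fast switching tries to avoid.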
While this process seems relatively straightforward, it is at the same time quite inefficient.
Fast switching, also neatly called route cache, is a concept that can be summarized as "Route once, forward many times". Whenever a packet for a new, not-yet-seen destination comes in, it is routed by process switching exactly as described before. However, the resulting information from these daunting lookups (egress interface, Layer 2 address of the next hop) will then be stored in a tree-based route cache organized for rapid lookups. The next time a packet for the same destination comes in, the information for its routing/forwarding is already readily available in the route cache with a single non-recursive lookup - just use it to encapsulate the packet into a frame and forward it.
While this seems to be a perfect idea, there is - apart from other disadvantages related to housekeeping of the cache - still a major drawback: if your router suddenly experiences an inrush of millions of packets to yet-unseen destinations, they will all need to be process switched because the route cache does not yet contain information about them. This means that all advantages of fast switching are lost, and in fact, the incurred load of process switching these gazillions of packets to yet-unseen destinations can truly bring the router to its knees. This can happen easily when, for example, a backbone router comes online, its routing protocol converges, and the router suddenly becomes a transit router for many destinations.
This is what, I believe, the book tries to explain.
Just to add a side note to Peter's explanation, and to expand on what I assume he means when he says "apart from other disadvantages related to housekeeping of the cache":
Cache entries are invalidated partly to conserve memory resources on the router, but also simply because they are cached entries.
As Peter has described, the router uses two tables to forward the packet: the routing table and the L2 adjacency table for the L2 header rewrite. When the first packet is process switched, an entry is then made in the fast cache. But it is exactly that: a cached entry, based on the contents of those tables at the time the packet was process switched.
The problem is that these tables can change, and if they do, there is no way of communicating that change to the cached entry in the fast cache.
So in addition to invalidating fast cache entries to preserve memory, they are also invalidated to ensure consistency with the router's tables.
Hope I haven't just confused the issue or misrepresented what Peter was alluding to.
I have two questions:
1) How fast/frequently are fast cache entries invalidated?
2) How does CEF ensure the consistency of its tables? Does CEF also use the same invalidation technique?
According to the docs, 1/20 of the entries are invalidated per minute, unless the router is suffering in terms of memory, in which case the rate is increased to 1/5 per minute.
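That ageing behaviour can be sketched as follows. The 1/20 and 1/5 fractions per minute are from the docs quoted above; the function, the choice of random victims, and the table contents are invented for illustration.

```python
import random

def age_cache(cache, low_memory=False):
    """Invalidate a fraction of the fast-cache entries: 1/20 normally,
    1/5 when the router is short on memory (per the quoted docs)."""
    fraction = 5 if low_memory else 20
    victims = random.sample(list(cache), len(cache) // fraction)
    for dst in victims:
        # The next packet to dst must be process switched again,
        # rebuilding the entry from the current tables.
        del cache[dst]

# Invented cache contents for illustration.
cache = {f"10.0.0.{i}": ("Gi0/1", "aa:bb:cc:00:11:22") for i in range(100)}

age_cache(cache)                    # drops 100 // 20 = 5 entries
print(len(cache))                   # 95
age_cache(cache, low_memory=True)   # drops 95 // 5 = 19 entries
print(len(cache))                   # 76
```

As a side effect, this periodic ageing is also what eventually flushes entries that have become stale relative to the routing and adjacency tables.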
CEF doesn't have to do this, because CEF does not use cached entries.
It maintains two separate tables, a forwarding table and an adjacency table, and it does not build these on demand; it builds them before any packets are forwarded, so there is no need to process switch any packets.
The forwarding table (FIB) is a mirror image of the routing table (RIB) and if the RIB is updated so is the FIB.
The two tables are built independently of each other and there is no need to store the resulting lookup in a separate cache.
In the sense that CEF does not cache entries which may then fail to reflect the true state of the router's tables - yes, that is one way to put it.