11-09-2016 05:43 AM - edited 03-05-2019 07:26 AM
Hi,
We have a 3925 router at one of our customer sites, terminating a 70 Mbps ILL (Internet Leased Line) link.
The router works fine at first, but after two or three days the internet becomes very slow. If we restart the router it works fine again, but the same issue keeps repeating.
Earlier the router was connected to a 40 Mbps link and we did not face any issue; the problem started only after the link was upgraded from 40 Mbps to 70 Mbps.
Router model: C3925/K9
IOS: c3900-universalk9-mz.SPA.152-4.M7.bin
Can anyone tell me how much WAN bandwidth this router supports? I found that it supports up to 100 Mbps, but we are seeing issues at 70 Mbps.
Thanks in advance.
Thanks and regards,
Ashok
11-09-2016 07:04 AM
Hi Ashok
It should support up to 100 Mbps easily, even with encryption/NAT and full duplex running. It's rated at around 400 Mbps; as a rough estimate you lose half of that with encryption, then more with NAT and other features.
What are the logs showing? Did you take a show tech or grab anything from the router before rebooting? It could be a memory leak, a jammed process, etc.; there are multiple possible causes. I doubt it's because you have a 70 Mbps circuit. We really need to see what's happening before the reboot, because rebooting is obviously wiping the issue, which makes me think it's memory or process related, as both get freed on reboot.
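Something like the below, captured while it's slow and before the reboot, would be a good starting point (just a suggested list, adjust as needed):
show processes cpu sorted
show processes memory sorted
show memory statistics
show logging
show tech-support
The first couple should show fairly quickly whether a single process or interrupt-level switching is chewing up CPU or memory.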
11-09-2016 09:06 AM
Cisco recommends the 3925 for up to 100 Mbps of WAN bandwidth, and since their recommendations are conservative, the router should normally be able to deal with 70 Mbps.
Since you mention Internet, how do you route to it? Reason I ask, if you have something like:
ip route 0.0.0.0 0.0.0.0 g0/1
With an interface-only default route, the router has to ARP for every Internet destination it forwards to, and the resulting ARP cache can exhaust the router's memory.
If you do have something like the foregoing, try changing it to use a next-hop IP rather than the outbound interface.
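As an example, a minimal sketch (the next-hop address 203.0.113.1 is only a placeholder for whatever address your ISP assigned):
no ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/1
ip route 0.0.0.0 0.0.0.0 203.0.113.1
With a next hop configured, the router keeps a single ARP entry for the upstream gateway instead of ARPing for every destination.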
11-09-2016 11:57 PM
Hi,
I have checked that the route is pointed to next hop IP address not name of exit interface.
Thanks and regards,
Ashok
11-09-2016 09:13 AM
Ashok
You wrote that all was well when the router was handling 40 Mbps of throughput, and that it becomes unresponsive after you upgraded the WAN to 70 Mbps. Going by this experience, the bottleneck is almost certainly the router itself.
To validate this further, look at the historical CPU utilization numbers from before the WAN upgrade. If it was at 55% or higher, adding (almost) twice the bandwidth would naturally push the CPU to 100% (or very close to it).
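As a sketch, two commands that are usually enough for this (assuming the router has stayed up long enough to retain the history):
show processes cpu history
show processes cpu sorted
The first prints per-second, per-minute and per-hour CPU graphs covering roughly the last 72 hours; the second shows the current split between process-level and interrupt-level (packet forwarding) CPU.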
As far as performance goes, two factors contribute to what throughput you can expect:
For us to comment meaningfully, it would help if you can share
In the meantime, take a look at the ISR G2 Performance document. This will give you some idea of what to expect from the different models of the G2 series.
Kind regards ... Palani
11-10-2016 12:02 AM
Hi,
We have not configured ACLs or encryption, very simple configuration with interfaces and route.
The issue seems to be with the default buffers; I have collected the "show buffers" output while the router is in a normal working state.
Please find the below output:
sh buffers
Buffer elements:
539 in free list
3177072 hits, 0 misses, 617 created
Public buffer pools:
Small buffers, 104 bytes (total 62, permanent 50, peak 86 @ 14:03:09):
50 in free list (20 min, 150 max allowed)
744332 hits, 101 misses, 267 trims, 279 created
0 failures (0 no memory)
Middle buffers, 600 bytes (total 63, permanent 25, peak 84 @ 05:04:40):
61 in free list (10 min, 150 max allowed)
358722 hits, 79 misses, 199 trims, 237 created
0 failures (0 no memory)
Big buffers, 1536 bytes (total 50, permanent 50, peak 51 @ 1d01h):
49 in free list (5 min, 150 max allowed)
846224 hits, 0 misses, 1 trims, 1 created
0 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 10, permanent 10, peak 11 @ 1d01h):
10 in free list (0 min, 100 max allowed)
0 hits, 0 misses, 1 trims, 1 created
0 failures (0 no memory)
Large buffers, 5024 bytes (total 1, permanent 0, peak 1 @ 1d01h):
1 in free list (0 min, 10 max allowed)
0 hits, 0 misses, 9 trims, 10 created
0 failures (0 no memory)
Huge buffers, 18024 bytes (total 5, permanent 0, peak 5 @ 1d01h):
5 in free list (4 min, 10 max allowed)
0 hits, 0 misses, 18 trims, 23 created
0 failures (0 no memory)
Interface buffer pools:
CF Small buffers, 104 bytes (total 101, permanent 100, peak 101 @ 1d01h):
101 in free list (100 min, 200 max allowed)
0 hits, 0 misses, 9 trims, 10 created
0 failures (0 no memory)
CF Middle buffers, 600 bytes (total 101, permanent 100, peak 101 @ 1d01h):
101 in free list (100 min, 200 max allowed)
0 hits, 0 misses, 9 trims, 10 created
0 failures (0 no memory)
Syslog ED Pool buffers, 600 bytes (total 133, permanent 132, peak 133 @ 1d01h):
101 in free list (132 min, 132 max allowed)
58 hits, 0 misses
IPMUX SF buffers, 1500 bytes (total 500, permanent 500):
500 in free list (0 min, 1000 max allowed)
0 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
CF Big buffers, 1536 bytes (total 26, permanent 25, peak 26 @ 1d01h):
26 in free list (25 min, 50 max allowed)
0 hits, 0 misses, 9 trims, 10 created
0 failures (0 no memory)
pak coalesce engine buffers, 1536 bytes (total 512, permanent 256, peak 512 @ 1d01h):
0 in free list (0 min, 512 max allowed)
384 hits, 128 misses, 0 trims, 256 created
0 failures (0 no memory)
512 max cache size, 502 in cache
837123 hits in cache, 0 misses in cache
IPC buffers, 4096 bytes (total 2, permanent 2):
1 in free list (1 min, 8 max allowed)
1 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
CF VeryBig buffers, 4520 bytes (total 3, permanent 2, peak 3 @ 1d01h):
3 in free list (2 min, 4 max allowed)
0 hits, 0 misses, 9 trims, 10 created
0 failures (0 no memory)
CF Large buffers, 5024 bytes (total 2, permanent 1, peak 2 @ 1d01h):
2 in free list (1 min, 2 max allowed)
0 hits, 0 misses, 9 trims, 10 created
0 failures (0 no memory)
IPC Medium buffers, 16384 bytes (total 2, permanent 2):
2 in free list (1 min, 8 max allowed)
0 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
IPC Large buffers, 65535 bytes (total 17, permanent 16, peak 17 @ 1d01h):
17 in free list (16 min, 16 max allowed)
0 hits, 0 misses, 1545 trims, 1546 created
0 failures (0 no memory)
Header pools:
Header buffers, 0 bytes (total 768, permanent 768):
256 in free list (128 min, 1024 max allowed)
512 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
512 max cache size, 512 in cache
530 hits in cache, 0 misses in cache
Particle Clones:
1024 clones, 0 hits, 0 misses
Public particle pools:
F/S buffers, 256 bytes (total 768, permanent 768):
256 in free list (128 min, 1024 max allowed)
512 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
512 max cache size, 512 in cache
0 hits in cache, 0 misses in cache
Normal buffers, 1548 bytes (total 3840, permanent 3840):
3456 in free list (128 min, 4096 max allowed)
896 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Private particle pools:
HQF Particle buffers, 0 bytes (total 2000, permanent 2000):
2000 in free list (500 min, 2000 max allowed)
0 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
IDS SM buffers, 240 bytes (total 128, permanent 128):
0 in free list (0 min, 128 max allowed)
128 hits, 0 fallbacks
128 max cache size, 128 in cache
0 hits in cache, 0 misses in cache
IPMUX particle pool buffers, 512 bytes (total 500, permanent 500):
0 in free list (0 min, 1000 max allowed)
500 hits, 1 misses
1000 max cache size, 500 in cache
0 hits in cache, 0 misses in cache
GigabitEthernet0/0 buffers, 1664 bytes (total 768, permanent 768):
0 in free list (0 min, 768 max allowed)
768 hits, 0 fallbacks
768 max cache size, 512 in cache
17739087 hits in cache, 0 misses in cache
GigabitEthernet0/1 buffers, 1664 bytes (total 768, permanent 768):
0 in free list (0 min, 768 max allowed)
768 hits, 0 fallbacks
768 max cache size, 512 in cache
21259498 hits in cache, 0 misses in cache
GigabitEthernet0/2 buffers, 1664 bytes (total 768, permanent 768):
0 in free list (0 min, 768 max allowed)
768 hits, 0 fallbacks
768 max cache size, 768 in cache
0 hits in cache, 0 misses in cache
The above output shows that some of the buffer pools have been created up to their maximum limit and some have exceeded it. Kindly check and revert if you find any issue.
I will try to get the router show tech output.
Thanks and regards,
Ashok
11-10-2016 06:45 AM
Hi Ashok
I have reasonable experience in this area. The buffers you cited have nothing to do with transit traffic. Moreover, there is no problem with the buffer counters you posted. Please do not get side-tracked.
A router not responding is almost always because the CPU is tied up with moving packets, virtually non-stop. This causes huge delays in process-based activities (erratic ping response, slow replies to Telnet/SSH, etc.).
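As a rough guide to reading the output (the numbers below are only illustrative), the first line of show processes cpu looks like:
CPU utilization for five seconds: 99%/97%; one minute: 98%; five minutes: 97%
The second figure (97%) is interrupt-level CPU, i.e. packet forwarding. When it tracks close to the first figure, the box is busy switching packets rather than being held up by any single process.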
We really need the following at minimum, to make a determination:
- term exec prompt timestamp
- term len 0
- show ver
- show inv
- show run
- show int summary
- show ip cache flow (Assuming you have Netflow enabled).
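If NetFlow is not yet enabled, a minimal sketch for turning it on at the WAN interface would be as below (interface name assumed, substitute your own):
interface GigabitEthernet0/0
 ip flow ingress
 ip flow egress
After it has run for a few minutes, show ip cache flow will list the top sources, destinations and protocols, which usually points directly at whatever is saturating the link.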
Sincerely ... Palani