09-05-2012 02:16 AM
Hi Experts,
I have a WAAS device that exceeds the CPU thresholds even though optimized connections are only at about 50% of the connection limit. Is this expected, or is something wrong? I would appreciate your expert opinion.
top - 08:43:29 up 297 days, 2:38, 2 users, load average: 8.02, 8.90, 9.32
Tasks: 441 total, 1 running, 440 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 28.9%sy, 41.0%ni, 11.3%id, 2.3%wa, 0.2%hi, 16.3%si, 0.0%st
Mem: 12188284k total, 12058508k used, 129776k free, 50176k buffers
Swap: 12582664k total, 1168k used, 12581496k free, 6453680k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10079 admin 30 10 5744m 3.3g 43m S 68.1 28.8 195037:10 so_dre64
10103 admin 30 10 1251m 38m 2712 S 1.2 0.3 1361:27 httpmuxd
2228 admin 30 10 1081m 764m 74m S 1.1 6.4 1523:15 java
2227 admin 30 10 1081m 764m 74m S 1.0 6.4 1532:30 java
10732 admin 30 10 1251m 38m 2712 S 1.0 0.3 1014:33 httpmuxd
10733 admin 30 10 1251m 38m 2712 S 1.0 0.3 1011:43 httpmuxd
3818 admin 30 10 1081m 764m 74m S 0.7 6.4 723:58.13 java
3816 admin 30 10 1081m 764m 74m S 0.7 6.4 722:14.13 java
2272 admin 30 10 1081m 764m 74m S 0.6 6.4 936:58.89 java
3812 admin 30 10 1081m 764m 74m S 0.6 6.4 722:39.58 java
3804 admin 30 10 1081m 764m 74m S 0.5 6.4 723:01.25 java
3814 admin 30 10 1081m 764m 74m S 0.5 6.4 723:06.14 java
10734 admin 30 10 1251m 38m 2712 S 0.3 0.3 367:00.92 httpmuxd
16 admin 15 -5 0 0 0 S 0.2 0.0 486:48.88 events/1
3813 admin 30 10 1081m 764m 74m S 0.2 6.4 350:52.23 java
3819 admin 30 10 1081m 764m 74m S 0.2 6.4 352:16.43 java
3822 admin 30 10 1081m 764m 74m S 0.2 6.4 350:18.69 java
7822 admin 30 10 1081m 764m 74m S 0.2 6.4 119:42.06 java
17318 admin 30 10 2300 1296 824 R 0.2 0.0 0:00.12 top
15 admin 15 -5 0 0 0 S 0.2 0.0 367:12.92 events/0
17 admin 15 -5 0 0 0 S 0.2 0.0 152:58.99 events/2
18 admin 15 -5 0 0 0 S 0.2 0.0 149:40.85 events/3
322 admin 15 -5 0 0 0 S 0.2 0.0 197:08.50 kswapd0
2224 admin 30 10 1081m 764m 74m S 0.2 6.4 226:29.45 java
3807 admin 30 10 1081m 764m 74m S 0.2 6.4 352:20.70 java
3815 admin 30 10 1081m 764m 74m S 0.2 6.4 119:40.81 java
3820 admin 30 10 1081m 764m 74m S 0.2 6.4 119:58.79 java
3821 admin 30 10 1081m 764m 74m S 0.2 6.4 350:46.55 java
7820 admin 30 10 1081m 764m 74m S 0.2 6.4 119:51.71 java
11438 admin 30 10 1081m 764m 74m S 0.2 6.4 119:56.24 java
15703 admin 20 0 0 0 0 S 0.2 0.0 0:00.68 pdflush
1035 admin 39 19 0 0 0 S 0.1 0.0 2876:38 kipmi0
2207 admin 30 10 1081m 764m 74m S 0.1 6.4 28:53.42 java
2208 admin 30 10 1081m 764m 74m S 0.1 6.4 28:52.18 java
2216 admin 30 10 1081m 764m 74m S 0.1 6.4 116:17.43 java
2223 admin 30 10 1081m 764m 74m S 0.1 6.4 16:11.50 java
2229 admin 30 10 1081m 764m 74m S 0.1 6.4 263:55.23 java
2233 admin 30 10 1081m 764m 74m S 0.1 6.4 27:36.34 java
2235 admin 30 10 1081m 764m 74m S 0.1 6.4 27:41.57 java
3817 admin 30 10 1081m 764m 74m S 0.1 6.4 77:40.14 java
10108 admin 30 10 73720 4540 1748 S 0.1 0.0 84:33.82 nfs_ao
10202 admin 30 10 244m 59m 9968 S 0.1 0.5 29:34.82 java
10737 admin 30 10 1251m 38m 2712 S 0.1 0.3 56:56.72 httpmuxd
10738 admin 30 10 1251m 38m 2712 S 0.1 0.3 56:43.84 httpmuxd
10739 admin 30 10 1251m 38m 2712 S 0.1 0.3 56:05.68 httpmuxd
10741 admin 30 10 1251m 38m 2712 S 0.1 0.3 55:58.25 httpmuxd
10742 admin 30 10 1251m 38m 2712 S 0.1 0.3 56:02.11 httpmuxd
10744 admin 30 10 1251m 38m 2712 S 0.1 0.3 56:11.04 httpmuxd
10750 admin 30 10 85364 32m 5204 S 0.1 0.3 257:23.30 mapi_ao
10751 admin 30 10 85364 32m 5204 S 0.1 0.3 257:02.83 mapi_ao
xxxxxxxxxxxxxxx#show statistics connection
Current Active Optimized Flows: 5619
Current Active Optimized TCP Plus Flows: 5939
Current Active Optimized TCP Only Flows: 675
Current Active Optimized TCP Preposition Flows: 0
Current Active Auto-Discovery Flows: 7
Current Reserved Flows: 188
Current Active Pass-Through Flows: 208
Historical Flows: 623
xxxxxxxxx#show tfo detail
Policy Engine Config Item Value
------------------------- -----
State Registered
Default Action Use Policy
Connection Limit 12000
Effective Limit 11809
Keepalive timeout 3.0 seconds
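(For reference, the "about 50%" figure comes from comparing the optimized flow count with the connection limit in the outputs above. A quick sketch in Python, with those values hard-coded purely to show the arithmetic:

current_optimized = 5619      # "Current Active Optimized Flows" above
connection_limit = 12000      # "Connection Limit" from show tfo detail
effective_limit = 11809       # "Effective Limit" from show tfo detail

print(f"Optimized flows vs. configured limit: {current_optimized / connection_limit:.0%}")
print(f"Optimized flows vs. effective limit:  {current_optimized / effective_limit:.0%}")
# Both come out around 47-48%, i.e. the "about 50%" mentioned above.
)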
09-05-2012 07:43 AM
This can be expected. Look at your top output: so_dre64 is consuming 68% CPU. This is probably normal and depends on the traffic flow and on the amount and type of data being processed by DRE. If you suspect performance issues because of the high CPU, I suggest you open a TAC case and provide more details than just a top output. A sysreport would need to be analyzed, along with a description of the specific problem or concern.
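If it helps, one quick way to confirm which processes dominate is to total %CPU per command from a saved top snapshot. A rough Python sketch (the file name "top_snapshot.txt" is just a placeholder, and the column positions assume the default field order shown in your output):

from collections import defaultdict

usage = defaultdict(float)
with open("top_snapshot.txt") as f:        # placeholder: paste your top output into this file
    for line in f:
        fields = line.split()
        # Process rows start with a numeric PID and have 12 columns:
        # PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        if len(fields) >= 12 and fields[0].isdigit():
            try:
                usage[fields[11]] += float(fields[8])
            except ValueError:
                continue

for cmd, cpu in sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{cmd:12s} {cpu:5.1f} %CPU")
# On the snapshot above, this puts so_dre64 far ahead of everything else.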
Regards,
Mike
09-05-2012 03:29 PM
Hello Shankar,
In addition to Michael's reply, here are the steps to get detailed CPU statistics. It is not easy to say whether a box is really getting overloaded from a single output; we should look at the CPU behavior over months, weeks, days, and hours of utilization.
1- Get the sysreport from the device. (This is a heavy process; make sure you don't run the command when there is a lot of traffic on your customer's network, and schedule it ahead of time if needed.)
WAE#copy sysreport ftp
2- Open the sysreport and browse to the following path:
state/actona/conf/Manager/MRTG/stats/gw/
3- Along with log analysis, the CPU charts should help your client get an idea of how the WAE and the network traffic really behave, and either identify possible network issues or confirm that the CPU utilization is normal (see the sketch after this list for one rough way to summarize the MRTG data).
4- I suggest using the Cisco employee tech zone for any further questions on any technology.
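As mentioned in step 3, here is one rough way to eyeball the long-term CPU behavior once you have the sysreport open. This is only a Python sketch, and it assumes the files under .../MRTG/stats/gw/ are plain MRTG-style logs (whitespace-separated lines starting with a Unix timestamp followed by an averaged value); the file name "cpu.log" is just a placeholder, so verify the actual layout in your own sysreport first.

from collections import defaultdict
from datetime import datetime

# Collect (timestamp, value) samples from an MRTG-style log file.
samples = []
with open("cpu.log") as f:          # placeholder name; use the real file from the sysreport
    for line in f:
        fields = line.split()
        if len(fields) >= 2 and fields[0].isdigit():
            samples.append((datetime.fromtimestamp(int(fields[0])), float(fields[1])))

# Crude weekly summary instead of a chart: average and peak per week.
weekly = defaultdict(list)
for ts, value in samples:
    weekly[ts.strftime("%Y-%W")].append(value)

for week in sorted(weekly):
    values = weekly[week]
    print(f"week {week}: avg {sum(values) / len(values):.1f}   max {max(values):.1f}")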
Regards,
Felix
Cisco TAC
09-05-2012 07:48 PM
Thanks Michael & Felix. I will look at the sysreport.
09-17-2012 06:29 PM
You're welcome, Shankar. If you found it helpful, please rate it so other customers can find the answer too.
And remember, in case of doubt, rate 5 =)
Felix