02-16-2006 10:45 AM - edited 03-03-2019 01:52 AM
Hi,
We have two Microsoft ISA servers attached across a backbone in separate locations. They are configured to cache HTTP traffic and load-balance using multicast. All clients, on separately routed VLANs, point their proxy settings at the virtual IP address of the ISA server cluster.
Ever since the cluster went live, the CPU utilization on each backbone switch has hit 90%. The utilization goes up and down as you would expect during office hours; out of hours, the CPU is at about 6%. Around 12 noon, when everyone wants to use the internet, the CPU on both core routers for the VLAN that the ISA servers sit on hits 90%.
I have checked the VLAN interface and the input packet count is huge, with a high percentage of dropped packets and the input queue filling up regularly.
Is it possible that the SVI is having to process-switch the multicast packets from the client PCs in order to get to the ISA server somehow?
PIM sparse-dense mode is configured on all routed SVIs, with two RPs and one mapping agent.
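For reference, a configuration like the one described would typically look something like this on a 6500 (the interface, loopback, and scope values here are illustrative placeholders, not taken from the actual network):

interface Vlan200
 ip pim sparse-dense-mode
!
! On each candidate RP:
ip pim send-rp-announce Loopback0 scope 16
!
! On the mapping agent:
ip pim send-rp-discovery Loopback0 scope 16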
The SVI counters below were cleared about two hours before taking the output. All other SVIs are nowhere near this input:
Vlan200 is up, line protocol is up
Hardware is Cat6k RP Virtual Ethernet, address is 0030.8515.4d02 (bia 0030.8515.4d02)
Description: VL200_MPLS_Gateway
Internet address is x.x.x.x/24
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters 04:49:29
Queueing strategy: fifo
Output queue 0/40, 0 drops; input queue 0/75, 31118 drops
5 minute input rate 1435000 bits/sec, 1314 packets/sec
5 minute output rate 1398000 bits/sec, 1298 packets/sec
139328802 packets input, 4129904029 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
138326110 packets output, 3927745634 bytes, 0 underruns
0 output errors, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
Can anyone help?
02-19-2006 10:50 AM
Hi,
Not sure about this one, it's quite a puzzle! I am just going to throw out a few ideas; hopefully one gives you a clue. Could you possibly be experiencing dense-mode flooding only? Have you checked whether your PIM routers are switching to sparse mode after they have received traffic from the RP? If you have dense-mode flooding problems there is obviously an issue somewhere; however, Cisco has introduced the Auto-RP listener feature to avoid such scenarios. It runs in sparse mode.
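For what it's worth, a minimal sketch of the Auto-RP listener approach (my example, assuming an IOS release that supports the command): the interfaces run pure sparse mode, and only the Auto-RP groups 224.0.1.39 and 224.0.1.40 are flooded dense:

ip pim autorp listener
!
interface Vlan200
 ip pim sparse-mode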
Another occasion when I had a 90% CPU load on 6500 switches was when there was an STP loop. Have you checked the sh proc cpu output, does it give any clues? What does the output of the sh int command say on the physical ports the multicast servers are connected to?
Are the RPs and MA on the same routers which have the problem? If yes, have you tried moving them?
Have you tried storm control on the switch ports where the multicast servers connect, to see if the problem goes away?
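As a rough sketch (the port and thresholds are made-up values, and exact syntax varies by platform and IOS version), storm control on the server-facing ports would look something like:

interface GigabitEthernet3/1
 storm-control broadcast level 5.00
 storm-control multicast level 5.00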
Rgds
E.
02-19-2006 12:27 PM
First, you need to determine what is contributing to the high CPU utilization.
sh proc cpu
CPU utilization for five seconds: 0%/0%; one minute: 0%; five minutes: 0%
In the five-second interval, the first number is total CPU utilization and the second number is the portion spent at interrupt level. If most of the load is at interrupt level then it's traffic related, but you can also look at which process is hogging the CPU. Second, if you determine that the high CPU is on the RP and it's traffic related, then determine what kind of traffic it is and why it's hitting the RP. Packets should be switched in hardware and not processed in software by the RP.
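As a worked example of reading that line (made-up numbers): if you saw

CPU utilization for five seconds: 90%/75%; one minute: 85%; five minutes: 80%

then 75 of the 90 points are being spent at interrupt level, i.e. punted traffic, and only about 15 points in actual processes. On IOS images that support it, sh proc cpu sorted puts the hungriest processes at the top of the list.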
Hope this leads you to the right path in resolving the issue.
02-19-2006 01:48 PM
Do you have ip multicast mls configured on your interfaces? Good link: http://www.cisco.com/en/US/products/hw/switches/ps700/products_command_reference_chapter09186a008007f2a0.html
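In case it helps, on a native-IOS 6500 hardware multicast switching is typically enabled and then checked roughly like this (a sketch only; verify the commands against the linked reference for your software version):

ip multicast-routing
mls ip multicast
!
show mls ip multicast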
02-22-2006 04:15 AM
Is this a Microsoft NLB cluster? If so, I have seen the same thing.
02-27-2006 07:24 AM
Hi,
Thanks for all the responses. The two ISA proxy servers are running on Windows Server 2003, using NLB in IGMP multicast mode.
I have raised a TAC case with Cisco who are currently looking through all the diagrams / info I have sent them.
No word yet though as to what the problem is.
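Worth noting (this is based on Cisco's published guidance for NLB on Catalyst switches, not on anything confirmed in this thread): NLB answers ARP for a unicast virtual IP with a multicast MAC address, which IOS routers ignore by default, so cluster traffic can end up flooded and punted. The commonly cited workaround for multicast-mode NLB is a static ARP entry plus static MAC-table entries pinning the cluster MAC to the server ports; every address and port below is a placeholder:

arp 10.1.1.100 03bf.0a01.0164 ARPA
mac-address-table static 03bf.0a01.0164 vlan 200 interface GigabitEthernet3/1 GigabitEthernet3/2

In IGMP multicast mode the cluster MAC is instead in the 0100.5e7f.xxxx range, and IGMP snooping should constrain the flooding, provided snooping and an IGMP querier are actually working on that VLAN.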