12-02-2010 05:31 AM - edited 02-21-2020 05:00 PM
Hello everybody,
we're facing a strange performance problem with our leased line, over which about 25-30 IPsec VPN tunnels run. The line is physically 25 Mbit/s, and there is one router on each side.
On side A the router is connected to the public network; on side B the other router (a Cisco 2811) is connected to a Cisco ASA, which terminates all those VPNs.
The line itself has no problem: if we connect a test device on each side, we get the full 25 Mbit/s without any packet loss.
But during normal operation, when all VPN tunnels are active and pushing about 10-15 Mbit/s in each direction (so the line is not over-utilized), we see several problems:
1. The VPN tunnels go down and come back up many times a day.
2. Pings between the locations show packet loss.
3. And what is much stranger: we also see packet loss when the edge routers on each side ping each other directly over the leased line. When there is no traffic on the line, or only a small amount (~2-3 Mbit/s), there is no packet loss at all, but as soon as the traffic starts to flow, the drops appear.
We've already activated traffic shaping on both sides of the leased line in order to "smooth" the peaks, but it hasn't helped at all.
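The shaping setup is the standard MQC pattern, roughly like the sketch below (the policy and interface names are placeholders, and the 20 Mbit/s rate is only an example of shaping somewhat below the physical 25 Mbit/s):

policy-map SHAPE-WAN
 class class-default
  shape average 20000000
  queue-limit 5000 packets
!
interface FastEthernet0/0
 service-policy output SHAPE-WAN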
What I can imagine is this: since most of the traffic on the line is IPsec, and IPsec (ESP, or UDP with NAT-T) has no "auto-adjustment" mechanism like TCP (window size, backoff and so on), the IPsec traffic simply overwhelms the leased line and some frames get dropped. But then why doesn't the traffic shaping prevent those drops?
If somebody has had a similar experience, I'd be glad to hear some advice!
12-05-2010 12:00 PM
Konstantin,
If the tunnel itself renegotiates, it most likely means that DPDs between the IKE peers are being dropped.
Possible reasons:
- CAC mechanism engaged
- High CPU
- Interface buffer overloaded
- A few others... less likely.
Can you confirm points 2 and 3 (CPU and interface buffers) for me?
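To check those on the edge routers, output along these lines would help (Serial0/0 is just a placeholder for your WAN interface):

show processes cpu history
show interfaces Serial0/0
show buffers
show policy-map interface Serial0/0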
Debugging this might be a lengthy process; would you consider opening a TAC case (or maybe you already have)?
Marcin
12-06-2010 04:40 AM
The CPU is not an issue on the routers; about the ASAs I don't have any information - they are out of our control.
The interface buffers on the edge routers are not full and look like this:
Input queue: 0/75/108/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: Class-based queueing
Output queue: 0/1000/0 (size/max total/drops)
As I said, we implemented traffic shaping on both sides, and I can observe that some packets are being held in the shaper queue on one side:
queue limit 5000 packets
(queue depth/total drops/no-buffer drops) 0/318/318
and some "drops" on other side
155579818 packets, 150854373784 bytes
30 second offered rate 2070000 bps, drop rate 1000 bps
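(For reference, the queue counters further up are what show interfaces reports, and the two shaper snippets come from show policy-map interface on the WAN interfaces.)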
I don't work much with IPsec VPN tunnels and I don't know all the possible "hidden" problems, but what I can see is that more than 40% of all packets on the line are UDP packets.
Could that produce some kind of performance problem?
We haven't opened a TAC case yet, because we don't know where the problem could be and are still trying to narrow it down.
P.S. What do you mean by "DPD packets" and "CAC"?
12-06-2010 10:44 AM
Konstantin,
It seems I don't fully grasp the topology; could you maybe attach a diagram?
I understood that the 25-30 Mbit load was on the ASA, and it's from there that I wanted to check the logs ;-)
DPD is Dead Peer Detection; it's a control-plane mechanism sent over the main IKE channel (usually udp/500, or udp/4500 in the case of NAT-T).
Failure to respond to multiple DPD messages will result in the tunnel being torn down, on the assumption that the peer is dead.
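On IOS the DPD timers are tunable; a typical setting looks like this (the values are only illustrative):

crypto isakmp keepalive 10 3 periodic

That sends a keepalive every 10 seconds and, once a reply is missed, retries every 3 seconds before declaring the peer dead.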
CAC = Call Admission Control - a mechanism on the ASA and in IOS to make sure that IKE negotiations don't overflow the CPU or eat too many resources.
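On IOS you can inspect and cap it, for example (the limit of 30 here is arbitrary):

show crypto call admission statistics
crypto call admission limit ike sa 30

The second command caps the number of IKE SAs the router will accept, so a renegotiation storm can't exhaust the box.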
Marcin