Pre-fragmentation failures in Cisco PIX to ASA S2S VPN

balajis27
Level 1

Hi All

I need some help with an issue I am facing on site-to-site VPN connections: three different PIX firewalls each have a separate IPsec site-to-site VPN tunnel to a single remote ASA 5550.

The PIX side is the tunnel initiator and the ASA is the responder. The project users behind all three PIX devices complain of slowness when they access the remote servers, even though bandwidth utilisation on the link is normal. While troubleshooting this I came across an error counter on all three S2S tunnels, which I am pasting below. The PMTU counter increases as the day progresses, and I suspect this might be related to the issue. The users bring the tunnel up fresh every day, and the counter gradually climbs to about 40 to 50 on each of the three PIX devices.

5f# show crypto ipsec sa
interface: yellow
Crypto map tag: cm_OneNet, seq num: 542, local addr: 9.252.227.21

access-list al_vpn_BCBS-V542 extended permit ip 170.226.241.32 255.255.255.240 10.69.124.0 255.255.255.0
local ident (addr/mask/prot/port): (170.226.241.32/255.255.255.240/0/0)
remote ident (addr/mask/prot/port): (10.69.124.0/255.255.255.0/0/0)
current_peer: 170.69.248.225

#pkts encaps: 51665, #pkts encrypt: 51631, #pkts digest: 51631
#pkts decaps: 65691, #pkts decrypt: 65691, #pkts verify: 65691
#pkts compressed: 0, #pkts decompressed: 0
#pkts not compressed: 51665, #pkts comp failed: 0, #pkts decomp failed: 0
#pre-frag successes: 0, #pre-frag failures: 34, #fragments created: 0
#PMTUs sent: 34, #PMTUs rcvd: 0, #decapsulated frgs needing reassembly: 0
#send errors: 0, #recv errors: 0
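In case it helps anyone reproduce this, the counters can be watched through the day with something along these lines (a rough sketch; it assumes 7.x-style output filters, and as far as I know the counters keyword resets only the statistics rather than tearing down the SA):

! watch only the fragmentation and PMTU related counters
show crypto ipsec sa | include frag
show crypto ipsec sa | include PMTU
! reset the per-SA statistics before a test window
clear crypto ipsec sa counters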
I suspected an MTU or DF-bit state mismatch between my three PIXes and the customer ASA, but the settings match: an MTU of 1500 bytes at both ends and the DF bit left in the default "copy" state. I am not sure what might be causing the issue.
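In case it matters, this is roughly how the settings can be checked on each box (a sketch only; note that defaults such as copy-df normally do not appear in the running configuration unless they have been changed):

! interface MTU values
show running-config mtu
! crypto configuration, including any df-bit or fragmentation overrides
show running-config crypto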
I also took a capture on one of the three PIXes; the result is below. The users run RDP sessions on TCP 3389 to the remote machines and work over that. Could it be something to do with the machines connecting to the PIX and their MSS size? (A possible MSS adjustment is sketched after the capture.)
22: 08:37:53.696176 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: S 786427996:786427996(0) win 65535 <mss 1460,nop,wscale 2,nop,nop,sackOK>
  23: 08:37:56.659954 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: S 786427996:786427996(0) win 65535 <mss 1460,nop,wscale 2,nop,nop,sackOK>
  24: 08:37:56.920057 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: S 907433021:907433021(0) ack 786427997 win 1460 <mss 1380,nop,wscale 0,nop,nop,sackOK>
  25: 08:37:56.920759 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: . ack 907433022 win 65044
  26: 08:37:56.921339 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786427997:786428016(19) ack 907433022 win 65044
  27: 08:37:57.424370 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: . ack 786428016 win 65516
  28: 08:37:57.664073 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433022:907433033(11) ack 786428016 win 65516
  29: 08:37:57.665782 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428016:786428444(428) ack 907433033 win 65041
  30: 08:37:57.927945 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433033:907433366(333) ack 786428444 win 65088
  31: 08:37:57.928983 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428444:786428456(12) ack 907433366 win 64958
  32: 08:37:57.929196 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428456:786428464(8) ack 907433366 win 64958
  33: 08:37:58.188268 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: . ack 786428464 win 65068
  34: 08:37:58.188588 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433366:907433377(11) ack 786428464 win 65068
  35: 08:37:58.189641 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428464:786428476(12) ack 907433377 win 64955
  36: 08:37:58.448325 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433377:907433392(15) ack 786428476 win 65056
  37: 08:37:58.449210 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428476:786428488(12) ack 907433392 win 64951
  38: 08:37:58.708200 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433392:907433407(15) ack 786428488 win 65044
  39: 08:37:58.709039 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428488:786428500(12) ack 907433407 win 64947
  40: 08:37:58.967906 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433407:907433422(15) ack 786428500 win 65032
  41: 08:37:58.968669 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428500:786428512(12) ack 907433422 win 64944
  42: 08:37:59.227573 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433422:907433437(15) ack 786428512 win 65020
  43: 08:37:59.228473 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428512:786428524(12) ack 907433437 win 64940
  44: 08:37:59.487218 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433437:907433452(15) ack 786428524 win 65008
  45: 08:37:59.488271 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: P 786428524:786428536(12) ack 907433452 win 64936
  46: 08:37:59.747214 802.1Q vlan#485 P0 10.69.124.228.3389 > 170.227.182.51.1387: P 907433452:907433467(15) ack 786428536 win 64996
  47: 08:37:59.941204 802.1Q vlan#485 P0 170.227.182.51.1387 > 10.69.124.228.3389: . ack 907433467 win 64932
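
If the client MSS does turn out to be the problem, one option might be to clamp TCP MSS on the PIX itself so the RDP segments leave room for the ESP and tunnel overhead. A minimal sketch (1350 is only an illustrative value, not taken from this capture; the PIX/ASA default is 1380):

! clamp the MSS advertised in TCP SYNs passing through the firewall
! 1350 is an example value leaving extra headroom for IPsec overhead
sysopt connection tcpmss 1350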

Please let me know how I can overcome this issue.

Regards

S.Balaji

1 Reply

manish arora
Level 6

Hi Sri,

I would set the crypto DF bit to clear rather than copy on both ends using:

crypto ipsec df-bit clear-df

Also, you can use a command like:

crypto ipsec fragmentation before-encryption 

which, as the name says, makes fragmentation happen before encryption. This helps avoid packets arriving out of order and then being dropped because they fall outside the 64-packet anti-replay window, etc.
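For example, something along these lines on each peer (a rough sketch only; on 7.x code the interface name is appended, while older PIX code takes the commands without it, and the remote ASA would use its own interface name):

! clear the DF bit on the outer header and fragment before encryption
! "yellow" is assumed from the show output above; adjust per device
crypto ipsec df-bit clear-df yellow
crypto ipsec fragmentation before-encryption yellow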

Manish