help in mitigating the effects of serialization delay

blues22
Level 1

What are the recommended methods of mitigating the effects of serialization delay on data circuits that do not carry video conferencing or VoIP? We have remote offices of just a few people, with bandwidths ranging from 128k to 512k. Users of high-bandwidth applications complain of slow response times, yet the offices do not want to pay for circuit upgrades. I haven't sniffed these applications yet, but I suspect we are sending 1514-byte frames over these frame-relay circuits.

9 Replies

thiland
Level 3

Just a few questions:

What kind of applications? Web-based?

Are these frame relay circuits?

If frame relay, have you implemented any type of compression or traffic-shaping? What is the CIR?

What kind of queuing are you using on the serial interfaces?

Have you implemented any QoS?

You may be able to gain performance benefits by adjusting queue sizes, implementing a different queuing strategy, etc.

-tanner
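
As an illustration of that last suggestion, a minimal sketch of what checking and changing the queuing strategy might look like in IOS; the interface number and queue depth are placeholders, not values from this thread:

! Check the current queueing strategy, drops, and load on the WAN interface
show interfaces Serial0/0
!
! Replace FIFO with weighted fair queueing and deepen the output hold queue
interface Serial0/0
 fair-queue
 hold-queue 100 out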

One of their major complaints is anything that uses Microsoft's RPC protocol, which is used for Exchange, screen prints, and file transfers as well. For example, it takes 3 minutes or so to receive a 6 MB Word email attachment. In the case of a 2- or 3-person office with a 128k CIR, they complain of perceived poor application response times. We monitor router interfaces and see that there is plenty of headroom. No compression or traffic shaping, and default FIFO queuing. The MTU is set to the Ethernet default (1500) throughout our enterprise.

Hi,

I would suggest starting with class-based bandwidth allocation using CBWFQ.

Create ACLs matching the RPC protocol's port numbers and a class map that matches each ACL, then bind the classes into a policy map and allocate the minimum required bandwidth for each. You can create classes like this for other applications too and allot bandwidth to them accordingly. The final step is to apply the policy to your WAN interface. Hope this helps resolve your congestion problem.

Sample config:

class-map match-all SMTP
 match access-group 102
class-map match-all FTP
 match access-group 101
class-map match-all X.400
 match access-group 103
class-map match-all REST
 match access-group 104
!
policy-map FR1
 class FTP
  police 16000 1500 1500 conform-action set-prec-transmit 4 exceed-action set-prec-transmit 0
 class SMTP
  police 16000 1500 1500 conform-action set-prec-transmit 4 exceed-action set-prec-transmit 0
 class X.400
  police 16000 1500 1500 conform-action set-prec-transmit 4 exceed-action set-prec-transmit 0
 class REST
  police 16000 1500 1500 conform-action set-prec-transmit 4 exceed-action set-prec-transmit 4
!
ip cef
!
interface Serial6/0
 ip address x.x.x.x y.y.y.y
 load-interval 30
 service-policy output FR1
!
access-list 101 permit tcp any eq ftp any
access-list 102 permit tcp any eq smtp any
access-list 103 permit tcp any eq 102 any
access-list 104 permit tcp any eq 45 any
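
Note that the policy above polices and re-marks the traffic. If the goal is the minimum-bandwidth guarantee described earlier (CBWFQ), a minimal sketch might look like the following; the class name, ACL number, port 135 (the Microsoft RPC endpoint mapper), and the 64 kbps figure are assumptions for illustration, not values from this thread, and applying this policy to Serial6/0 would replace FR1:

class-map match-all MS-RPC
 match access-group 110
!
policy-map FR1-CBWFQ
 class MS-RPC
  bandwidth 64
 class class-default
  fair-queue
!
interface Serial6/0
 service-policy output FR1-CBWFQ
!
! Port 135 is only the RPC endpoint mapper; the actual RPC sessions use
! dynamically negotiated high ports, so this ACL alone may not catch them
access-list 110 permit tcp any eq 135 any
access-list 110 permit tcp any any eq 135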

Regards

Thanks for the reply. However, I'm not sure that congestion is the issue here. Some of these WAN links are only 11 or 12% utilized at the max, so I don't think these apps are competing much for WAN access. I think the perceived poor application response time seen from the remote sites is due to the disproportionate serialization delay inherent in, say, a 128k frame relay circuit. Of course the applications in question, although intended for use across the WAN, were not built with that end in mind; no WAN modeling or testing was included in the design process, and the result is somewhat "chatty" transactions. It may be that the only way to address this now, since the application is already deployed, is to throw bandwidth at it. If I'm wrong about the bandwidth contention, sort me out.

Hello,

Since serialization delay is frame size divided by link bandwidth, you can reduce the delay by fragmenting the frames into smaller chunks. Use the frame-relay fragment command as follows:

map-class frame-relay EIC_Policy
 frame-relay fragment 320

Apply this class to the appropriate interface.
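A minimal sketch of how the map-class might be attached; frame-relay traffic shaping has to be enabled on the physical interface for the fragment size to take effect, and the interface and DLCI numbers below are placeholders:

interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
interface Serial0/0.1 point-to-point
 frame-relay class EIC_Policy
 frame-relay interface-dlci 100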

Regards

Pradeep

I think one thing to realize here is that downloading a 6 MB file over a 128k link will take at least 6 minutes or so. 6 MB is equivalent to 48,000,000 bits. If you do the math and, let's say, they pull the file down at 100k, it will take 480 seconds, or 8 minutes. I think the reason you see the utilization so low is that the bandwidth command under that frame relay circuit is set to the default of 1544k. Change it with the command bandwidth 128 and you should see the utilization go up.

Jason Smith

www.smif101.com
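
A sketch of that change, on a placeholder subinterface and using the PVC's CIR as the value:

interface Serial0/0.1 point-to-point
 ! Informational only: affects the interface load calculation and routing
 ! metrics, not the actual throughput of the circuit
 bandwidth 128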

My network sniffer tells me that this transaction involves a transfer of about 15,000,000 bits in 150,000 frames. There must be some compression at work...

Also, the bandwidth statements are accurate...

Given the number of frames, you can significantly improve performance by reducing the serialization delay.

The delay for a 1500-byte frame on a 128k link is about 94 ms. If you fragment the frames down to, say, 320 bytes, the serialization delay is reduced to 20 ms. I know that more frames will introduce more overhead in other areas, but the tradeoff may be worth it.

Regards

Pradeep
