
ACNS and UDP receive window

gnijs
Level 4

Hi all,

I am just starting to use ACNS equipment to stream video to our LAN and our WAN. I have a strange issue at remote sites.

On our LAN, video streaming over UDP works. However, ACNS uses a window size of > 1500 bytes, so the stream gets chopped into segments of +/- 4000 bytes, and each segment is fragmented on the network into +/- 4 packets.

At remote sites, I am unable to connect to the video stream. When I run a sniffer, I can see the packets arriving. However, of every 4 packets in a segment, I only receive 3; after that, nothing. Windows Media Player stays in the "buffering" state and after a while disconnects, because it misses the last packet of each segment. So my questions are:

1) Can I increase the UDP receive window on XP so that it can receive all fragments? I suspect it is a UDP receive window issue on Windows.

2) Can I configure the ACNS device NOT to use a segment size of +/- 4000 bytes when doing UDP streaming, but instead a "segment" and "packet" size of 1500 bytes, thereby preventing ANY kind of fragmentation of the UDP stream packets?

regards,

Geert

3 Replies

gnijs
Level 4

Don't bother. I found the cause of my own problem. It is those f***ing QoS buffer settings on a 3750 switch again!

I changed the queue-set on an interface from queue-set 2 to queue-set 1 and now it isn't dropping any UDP packets anymore.
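For reference, the change itself is a single interface-level command. A minimal sketch, assuming a FastEthernet access port (the interface name below is an example, not the actual port in this network):

configure terminal
 interface FastEthernet0/24
  queue-set 1
 end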

Cisco also made it very easy (hum hum) to see these drops (yeah right).

With "sh platform port-asic stats drop" I can now see that I have constant drops in the Port 2 Tx queue:

Port-asic Port Drop Statistics - Summary
========================================
RxQueue 0 Drop Stats: 0
RxQueue 1 Drop Stats: 0
RxQueue 2 Drop Stats: 0
RxQueue 3 Drop Stats: 0
Port 0 TxQueue Drop Stats: 0
Port 1 TxQueue Drop Stats: 0
Port 2 TxQueue Drop Stats: 1514616
Port 3 TxQueue Drop Stats: 0
Port 4 TxQueue Drop Stats: 0
Port 5 TxQueue Drop Stats: 0

Port 2 TxQueue Drop Statistics
  Queue 0
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 5517
  Queue 1
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 1038986
  Queue 2
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 110
  Queue 3
    Weight 0 Frames 83
    Weight 1 Frames 0
    Weight 2 Frames 471496

So, Queue 1, Weight 2 (the ASIC output counts queues from 0, so this is queue 2, threshold 3 in the configuration) is mapped to:

mls qos srr-queue output dscp-map queue 2 threshold 3 24 25 26 27 28 29 30 31

DSCP 26, indeed AF31, the video traffic...
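If you want to double-check which egress queue/threshold a given DSCP value lands in without reading through the dscp-map configuration, the map can also be displayed directly. A minimal sketch (with the map above, the entry for DSCP 26 should come back as queue 2, threshold 3):

show mls qos maps dscp-output-q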

The difference must be somewhere in these numbers (?):

sh mls qos queue-set 1
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      10      10      26      54
threshold1:     138     138      36      20
threshold2:     138     138      77      50
reserved  :      92      92     100      67
maximum   :     138     400     318     400

sh mls qos queue-set 2
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      16       6      17      61
threshold1:     149     118      41      42
threshold2:     149     118      68      72
reserved  :     100     100     100     100
maximum   :     149     235     272     242

This is supposed to be high-priority traffic that should NOT have been dropped. Besides, the output interface is completely unsaturated.

In this case, QoS is causing problems instead of solving them.

Hi Geert.

Thanks for providing the solution to the problem. It can be of great help to others.

5+ for that.

Regards, Ingolf

Well, I dug a bit further into the "problem". The "problem" seemed to be the buffer-size allocation to Q2 in queue-set 2. The "6" in "buffers : 16 6 17 61" seems just too small. I guess the buffer is simply too small to hold the packets arriving on 2 gigabit fiber interfaces and going out on a single 100 Mbps interface. A single additional video UDP stream of +/- 600 kbps, marked with AF31, was enough to saturate Q2T3.

(I tried to simulate this on a lab switch, but failed: no problem in the lab. Of course, I don't have the real-world OSPF, PIM and best-effort traffic there, and the port assignments (ASICs) were also different, so it is very difficult to simulate.)

When I changed the maximums to "400 400 400 400" in production, the problem went away.
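For the record, the maximum threshold is set per queue with the queue-set threshold command. A minimal sketch, assuming the change is applied to queue-set 2 and keeping the threshold1/threshold2/reserved values shown in the output above:

configure terminal
 mls qos queue-set output 2 threshold 1 149 149 100 400
 mls qos queue-set output 2 threshold 2 118 118 100 400
 mls qos queue-set output 2 threshold 3 41 68 100 400
 mls qos queue-set output 2 threshold 4 42 72 100 400
 end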

I also noticed that the switch was sometimes dropping Q4 traffic (best effort) even though the utilisation of the 100 Mbps link doesn't go above 30 Mbps. Playing around with the "maximum" or "reserved" thresholds didn't help here. The cause seemed to be the default "25 25 25 25" buffer space allocation (when not using AutoQoS). When I changed this to "10 30 10 50", the drops in Q4 went away. I also noticed that I could set the maximum anywhere from 1 to 3200, whereas Cisco uses 400 as a maximum everywhere (older IOS versions had this limit). I have now put them at 1000 (just a number). So my settings are now:

Queue     :       1       2       3       4
----------------------------------------------
buffers   :      10      30      10      50
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :    1000    1000    1000    1000

and I haven't had a single drop in 5 days now (woohoo). Your mileage may vary, however...
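In configuration terms those settings correspond to something like the following sketch, assuming they are applied to queue-set 1 (adjust the queue-set number to whatever your interfaces actually reference):

configure terminal
 mls qos queue-set output 1 buffers 10 30 10 50
 mls qos queue-set output 1 threshold 1 100 100 50 1000
 mls qos queue-set output 1 threshold 2 200 200 50 1000
 mls qos queue-set output 1 threshold 3 100 100 50 1000
 mls qos queue-set output 1 threshold 4 100 100 50 1000
 end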

PS. Some very good test information regarding QoS on the C3560 can be found here:

http://blog.internetworkexpert.com/2008/06/26/quick-notes-on-the-3560-egress-queuing
