File Transfer QoS Configuration

06-06-2017 09:42 PM - edited 03-05-2019 08:39 AM
We recently implemented the Twelve-Class (1P7Q1T+DBL) egress queuing model on our core 4500 switches. We have seen significant improvements for VoIP traffic (Skype for Business, EF). We would like a similar benefit for file transfers between two sites. At the moment the speeds average 700-800 Kbps on a 500 Mbps link between the sites. Would file transfer between the sites improve by altering the QoS policy? Can we mark the traffic as AF21 (Transactional Data)? Let me know your thoughts on this.
Labels: Routing Protocols
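
For reference, a minimal sketch of how the file-transfer traffic could be classified and marked AF21 at the access edge closest to the servers. The ACL name, subnets, TCP port (445/SMB is only an example), and interface below are placeholders to be replaced with real values:

! sketch only - names, subnets, port and interface are placeholders
ip access-list extended FILE-XFER-ACL
 remark file-transfer servers at the two sites
 permit tcp 10.1.10.0 0.0.0.255 10.2.10.0 0.0.0.255 eq 445
 permit tcp 10.2.10.0 0.0.0.255 10.1.10.0 0.0.0.255 eq 445
!
class-map match-any FILE-XFER
 match access-group name FILE-XFER-ACL
!
policy-map MARK-FILE-XFER
 class FILE-XFER
  set dscp af21
!
interface GigabitEthernet1/1
 description placeholder - edge port facing the file server
 service-policy input MARK-FILE-XFER

If the traffic then reaches the core already marked AF21, the Transactional Data egress queue (which matches the AF2x code points) would pick it up without any change to the egress policy itself.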

06-06-2017 11:19 PM
Hello,
can you identify the file transfer flows? With only 700-800 Kbps, it sounds like your traffic is somehow landing in the scavenger or best-effort queue.
Post the config you have so far.
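
One quick way to see where the transfer currently lands, for example, is to snapshot the per-class byte counters of the egress policy on the inter-site uplink before and after a controlled test copy; whichever class's "bytes output" grows by roughly the size of the file is carrying the flow (the interface name below is a placeholder for that uplink):

show policy-map interface TenGigabitEthernet5/12 output | include Class-map|bytes output

Run the test transfer, then repeat the same command and compare which class counter moved.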
06-07-2017 05:42 PM
Yes, I think the traffic is going to the default queue, which has 25% of the bandwidth allocated.
Below is our configuration and the interface policy-map output.
Policy Map PoolOutput-Policy
  Class PoolOutput-Priority-Queue
    priority
    police cir percent 30
      conform-action transmit
      exceed-action drop
  Class PoolOutput-Control-Mgmt-Queue
    bandwidth remaining 10 (%)
  Class PoolOutput-Multimedia-Conf-Queue
    bandwidth remaining 10 (%)
    dbl
  Class PoolOutput-Multimedia-Strm-Queue
    bandwidth remaining 10 (%)
    dbl
  Class PoolOutput-Trans-Data-Queue
    bandwidth remaining 10 (%)
    dbl
  Class PoolOutput-Bulk-Data-Queue
    bandwidth remaining 4 (%)
    dbl
  Class PoolOutput-Scavenger-Queue
    bandwidth remaining 1 (%)
  Class class-default
    bandwidth remaining 25 (%)
    dbl
#show policy-map interface t5/12
 TenGigabitEthernet5/12

  Service-policy input: PoolTrust-Dscp-Input-Policy

    Class-map: class-default (match-any)
      225027924429 packets
      Match: any
      QoS Set
        dscp dscp table PoolTrust-Dscp-Table

  Service-policy output: PoolOutput-Policy

    queue stats for all priority classes:
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/5
      (bytes output) 137869841276

    Class-map: PoolOutput-Priority-Queue (match-any)
      9391962010 packets
      Match: dscp cs4 (32) cs5 (40) ef (46)
        9391962010 packets
      Priority: Strict, b/w exceed drops: 5

      police:
        cir 30 %
        cir 3000000000 bps, bc 93750000 bytes
        conformed 135361482842 bytes; actions:
          transmit
        exceeded 0 bytes; actions:
          drop
        conformed 246000 bps, exceeded 0000 bps

    Class-map: PoolOutput-Control-Mgmt-Queue (match-any)
      313805202 packets
      Match: dscp cs2 (16) cs3 (24) cs6 (48) cs7 (56)
        313805202 packets
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/0
      (bytes output) 294042057
      bandwidth remaining 10%

    Class-map: PoolOutput-Multimedia-Conf-Queue (match-any)
      2548401638 packets
      Match: dscp af41 (34) af42 (36) af43 (38)
        2548401638 packets
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/1
      (bytes output) 25160834390
      bandwidth remaining 10%
      dbl
        Probabilistic Drops: 0 Packets
        Belligerent Flow Drops: 4 Packets

    Class-map: PoolOutput-Multimedia-Strm-Queue (match-any)
      114950200 packets
      Match: dscp af31 (26) af32 (28) af33 (30)
        114950200 packets
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/0
      (bytes output) 13072359153
      bandwidth remaining 10%
      dbl
        Probabilistic Drops: 0 Packets
        Belligerent Flow Drops: 0 Packets

    Class-map: PoolOutput-Trans-Data-Queue (match-any)
      110951001 packets
      Match: dscp af21 (18) af22 (20) af23 (22)
        110951001 packets
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/0
      (bytes output) 22317639635
      bandwidth remaining 10%
      dbl
        Probabilistic Drops: 0 Packets
        Belligerent Flow Drops: 0 Packets

    Class-map: PoolOutput-Bulk-Data-Queue (match-any)
      785709124 packets
      Match: dscp af11 (10) af12 (12) af13 (14)
        785709124 packets
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/0
      (bytes output) 41479401625
      bandwidth remaining 4%
      dbl
        Probabilistic Drops: 0 Packets
        Belligerent Flow Drops: 0 Packets

    Class-map: PoolOutput-Scavenger-Queue (match-any)
      230146 packets
      Match: dscp cs1 (8)
        230146 packets
      Queueing
      queue limit 680 packets
      (queue depth/total drops) 0/0
      (bytes output) 211785
      bandwidth remaining 1%

    Class-map: class-default (match-any)
      214760652709 packets
      Match: any
      Queueing
      queue limit 648 packets
      (queue depth/total drops) 0/0
      (bytes output) 613318368189
      bandwidth remaining 25%
      dbl
        Probabilistic Drops: 0 Packets
        Belligerent Flow Drops: 0 Packets
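
Reading those counters: class-default has sent roughly 613 GB out of this interface versus roughly 22 GB in the Trans-Data queue, which supports the suspicion that the transfers are arriving unmarked (DSCP 0) and falling into class-default, since the input policy only translates DSCP through a table map rather than classifying by application. A few read-only checks could confirm what the trust/translation policy is actually doing (standard IOS show commands, assumed to be available on this platform):

show table-map PoolTrust-Dscp-Table
show policy-map PoolTrust-Dscp-Input-Policy
show policy-map interface TenGigabitEthernet5/12 input

If the transfers do show up as DSCP 0, the practical fix is to mark them (e.g. AF21) at the access edge closest to the servers so the marking survives end to end, rather than re-weighting the egress queues here.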
06-07-2017 11:06 PM
Hello,
the default queue has 25% of the remaining bandwidth, which still doesn't explain the 700-800 Kbps. I agree with Joseph that the problem might be elsewhere; your QoS counters don't show any drops.
What you could do is match a host-to-host connection and put it in the priority queue, then transfer a large file between those hosts and check whether that makes a difference.
That said, how are the files being transferred?
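
A sketch of that test, assuming two placeholder host addresses and an edge port that doesn't already carry an input service policy; marking the pair EF means the existing PoolOutput-Priority-Queue (which matches cs4/cs5/ef) picks it up at egress. Since that queue also carries the voice traffic and is policed at 30%, this is only meant for a short, controlled test:

ip access-list extended TEST-PAIR
 permit ip host 10.1.10.50 host 10.2.10.50
 permit ip host 10.2.10.50 host 10.1.10.50
!
class-map match-any TEST-PAIR-CLASS
 match access-group name TEST-PAIR
!
policy-map MARK-TEST-PAIR
 class TEST-PAIR-CLASS
  set dscp ef
!
interface GigabitEthernet1/1
 description placeholder - port facing the test host
 service-policy input MARK-TEST-PAIR

If the copy still crawls at 700-800 Kbps even from the priority queue, the bottleneck is not the egress queuing.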
06-07-2017 05:25 AM
It's possible QoS might improve the bulk file transfer rate, but there is insufficient information to know for sure.
Things QoS can impact include ensuring drops aren't prematurely throttling the transfer rate, or conversely using minimal drops to keep the transfer from bursting beyond link capacity.
However, besides QoS, if you're using TCP, some hosts may have an insufficiently sized RWIN to fully utilize high-bandwidth WAN links.
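As a rough illustration (the RTT figures below are assumptions, not measurements): a single TCP flow tops out at about RWIN / RTT. A legacy 64 KB window over a 50 ms path gives roughly 64 KB / 0.05 s ≈ 1.3 MB/s (~10 Mbps), and the observed 700-800 Kbps corresponds to about a 64 KB window at ~650 ms RTT, or an 8 KB window at ~80 ms. Filling a 500 Mbps path at 50 ms needs a window of roughly 500 Mbps × 0.05 s ≈ 3 MB, i.e. TCP window scaling enabled on both hosts.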
