2174 Views · 0 Helpful · 1 Reply

Traffic Shaping dual stack IPv4 and IPv6

bcoverstone
Level 1

We started running IPv6 at our office about a year ago.  I've noticed that services that use IPv6 and run through our router are sometimes very slow.  Wireshark captures show large delays between received IPv6 packets; sometimes a frame and all of its retransmissions arrive at the same moment.

After some troubleshooting, I found two things.

  1. This condition only occurs when traffic shaping has kicked in and the class matching my service has a queue depth greater than 0.
  2. If I disable IPv6 on my host and use the same service on IPv4, the packets are not experiencing large delays when congestion occurs.
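To spot these stalls without eyeballing the capture, here is a minimal sketch of the check I was doing by hand in Wireshark: given packet arrival timestamps exported from a capture, flag any inter-packet gap above a threshold. The timestamps and threshold below are made up for illustration, not from my actual capture.

```python
def find_large_gaps(timestamps, threshold):
    """Return (index, gap) pairs where consecutive arrival
    times differ by more than `threshold` seconds."""
    gaps = []
    for i in range(1, len(timestamps)):
        gap = timestamps[i] - timestamps[i - 1]
        if gap > threshold:
            gaps.append((i, gap))
    return gaps

# Example arrival times (seconds): a steady stream, then a stall
# followed by a burst -- the pattern I saw on IPv6 once shaping kicked in.
arrivals = [0.00, 0.02, 0.04, 0.06, 0.56, 0.56, 0.56]
print(find_large_gaps(arrivals, threshold=0.1))  # flags the ~0.5 s stall at index 4
```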

I have a hypothesis.  I wanted to bring it up here in case someone had some ideas that I hadn't thought of yet.  Or maybe I've discovered some interesting behavior about how MQC handles dual stack queues.

My thought is that the traffic shaper will service IPv4 packets in a queue first, and then if there are any remaining tokens it will service IPv6 packets.  Given the delays I've seen, I believe this may be happening at every traffic shaping Tc interval, thus starving the IPv6 packets in the queue.
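To make the hypothesis concrete, here is a toy simulation of my mental model (this is not how IOS internals actually work, just the behavior I suspect): each Tc the shaper gets a fixed token budget, drains the IPv4 queue first, and only spends leftover tokens on IPv6. When IPv4 alone can consume the whole budget, IPv6 starves.

```python
from collections import deque

def simulate(tc_intervals, tokens_per_tc, v4_per_tc, v6_per_tc):
    """Strict IPv4-first service: each Tc interval, spend tokens on
    queued IPv4 packets, then on IPv6 with whatever tokens remain.
    One token = one packet. Returns packets sent and final backlog."""
    v4, v6 = deque(), deque()
    sent = {"v4": 0, "v6": 0}
    for _ in range(tc_intervals):
        v4.extend("p" * v4_per_tc)   # new IPv4 arrivals this Tc
        v6.extend("p" * v6_per_tc)   # new IPv6 arrivals this Tc
        tokens = tokens_per_tc
        while tokens and v4:
            v4.popleft(); sent["v4"] += 1; tokens -= 1
        while tokens and v6:
            v6.popleft(); sent["v6"] += 1; tokens -= 1
    return sent, {"v4": len(v4), "v6": len(v6)}

# 10 Tc intervals with 5 tokens each; IPv4 offers exactly 5 packets
# per Tc, so IPv6 never sees a leftover token and its queue only grows.
sent, backlog = simulate(10, 5, 5, 2)
print(sent, backlog)  # v4 sends 50, v6 sends 0 and backlogs 20
```

If the shaper really behaved this way, IPv6 would only get serviced in the intervals where IPv4 under-runs its budget, which matches the bursty gaps I measured.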

 

I am running a test that splits IPv4 and IPv6 into different queues to see if this makes a difference.  To accomplish this, I have created a new policy-map called DUALSTACK and applied it as a child policy under the service class:

class-map match-any IPV6
 match protocol ipv6
class-map match-any IPV4
 match protocol ip

policy-map DUALSTACK
 class IPV6
  bandwidth percent 49
  fair-queue 1024
 class IPV4
  bandwidth percent 49
  fair-queue 1024

policy-map OUTPOLICY
 class SOMESERVICE
  bandwidth percent 20
  service-policy DUALSTACK

 

The router is a 7206VXR (NPE-G1) running IOS 15.2(4)M5, so queueing is handled by HQF (Hierarchical Queueing Framework).
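For anyone trying to reproduce this, I'm watching the per-class counters with the standard show command (the interface name here is just a placeholder for whichever interface carries the OUTPOLICY service-policy):

```
! Per-class stats: offered rate, drops, and current queue depth
! for SOMESERVICE and its DUALSTACK child classes
show policy-map interface GigabitEthernet0/1 output
```

The interesting fields are the queue depth and drop counters under each child class, since the hypothesis predicts the IPV6 class backing up while IPV4 stays near empty.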

1 Reply

bcoverstone
Level 1

I wanted to provide an update on this topic.  It turns out the traffic I was testing with also matched another class's match statement, and that class had a much lower bandwidth percentage.

After making the corrections, IPv4 and IPv6 work very well together in the same queues.  And now that fair queueing can run per class, I'm actually impressed with how well it is working.

Now if only I could classify traffic based on the packet/byte counts seen in NetFlow... then I could shape some really nice QoS policies!