02-19-2022 03:24 PM
1. I have a Hub and several Spokes. I have completed Per-Tunnel QoS from Hub to Spokes and Spoke-to-Spoke.
2. But I have delay-sensitive VoIP traffic in both directions - Hub->Spoke and Spoke->Hub - and the latter is not covered by any QoS policy.
3. The physical interface has a policy applied that puts all IPsec traffic into the LLQ, giving it 30% of the total bandwidth of the 1-Gig interface (so all DMVPN traffic is prioritized toward the Internet; a rough sketch of this is below). But I want to classify traffic within DMVPN with more granularity (VoIP, Zabbix, Windows AD, and so on).
4. With Per-Tunnel QoS I achieve this for Spoke-to-Spoke, but not for Spoke->Hub.
5. How do I add a Spoke-to-Hub policy without affecting the existing Spoke-to-Spoke QoS policy?
6. Are there any commands on the Spoke, like the Hub's nhrp map group ... service-policy output ..., to select just Spoke-to-Hub traffic?
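For context, the physical-interface policy looks roughly like this (names and exact numbers are illustrative, not my literal config):

ip access-list extended ACL-IPSEC
 permit esp any any
!
class-map match-all CM-DMVPN
 match access-group name ACL-IPSEC
!
policy-map PM-WAN-OUT
 class CM-DMVPN
  priority percent 30
 class class-default
  fair-queue
!
interface GigabitEthernet0/0/0
 service-policy output PM-WAN-OUT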
02-19-2022 03:37 PM
I have a suggestion here:
Configure two tunnels.
One will be used to forward VoIP traffic to the hub, and the other will be used for spoke-to-spoke traffic.
One tunnel will advertise and receive the routes for VoIP;
the other tunnel will advertise and receive the routes for the spoke-to-spoke connections.
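Just to illustrate the idea, a minimal sketch on a spoke, assuming two mGRE tunnels toward the same hub (all names, addresses, and keys are made up):

interface Tunnel1
 description DMVPN cloud 1 - VoIP prefixes only
 ip address 10.0.1.11 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.1.1 nbma 192.0.2.1 multicast
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 tunnel key 1
!
interface Tunnel2
 description DMVPN cloud 2 - all other traffic, spoke to spoke
 ip address 10.0.2.11 255.255.255.0
 ip nhrp network-id 2
 ip nhrp nhs 10.0.2.1 nbma 192.0.2.1 multicast
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 tunnel key 2

The routing protocol would then advertise the VoIP prefixes over Tunnel1 only and everything else over Tunnel2, and each tunnel can carry its own QoS policy.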
02-20-2022 04:50 AM
That sounds rather convoluted. Two DMVPNs? Are you serious? One per kind of traffic? What if a third kind arises? A fourth?
I was thinking instead about different matching strategies in the "Spoke-to-Hub" and "Spoke-to-Spoke" policies, plus overall IPsec matching on the outgoing physical interface of a Spoke.
02-20-2022 04:51 AM
Or could you please give me an example?
02-20-2022 03:17 PM - edited 02-21-2022 02:52 AM
Never mind.
I made a suggestion; it's up to you.
For my suggestion:
...
02-21-2022 12:41 AM
I saw this document. It is not very useful (it does not explain anything - just a simple example). It seems I have found the solution!
1. The Spoke sends its GROUP to the Hub, and the Hub applies the POLICY mapped to that group in the configuration of its Tunnel interface. Right?
2. The Hub also CAN send a group to the Spokes, and the Spokes also CAN apply the corresponding policy! This is the right way!
3. When all the Spokes send their GROUP to the Hub, the other Spokes receive this GROUP at the moment the Spoke-to-Spoke tunnel is built, and they CAN apply a different policy toward Spokes than the one used for the Hub.
4. The nhrp attribute group command (or its older form ip nhrp attribute group): a) did not work for me, and b) it replaces (does not add to) nhrp group.
5. The above does not replace or cancel QoS on the physical interface that is used as the tunnel source. It is enough simply to match IPsec in a DMVPN class and to reserve some percentage of the overall bandwidth for that class. A sketch of how the pieces fit together is below.
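For completeness, a minimal sketch of how these pieces fit together (all names, percentages, and rates are just examples, not my exact config; newer code uses "nhrp ..." where older code uses "ip nhrp ..."):

! Hub - the policy attached to each spoke is chosen by the NHRP group that spoke registers
class-map match-any CM-VOIP
 match dscp ef
class-map match-any CM-MGMT
 match dscp cs2
!
policy-map PM-DMVPN-CHILD
 class CM-VOIP
  priority percent 30
 class CM-MGMT
  bandwidth percent 10
 class class-default
  fair-queue
!
policy-map PM-PER-SPOKE
 class class-default
  shape average 50000000
  service-policy PM-DMVPN-CHILD
!
interface Tunnel0
 nhrp map group SPOKE-GROUP service-policy output PM-PER-SPOKE

And on the Spoke:

! Spoke - sends its group to the hub inside the NHRP registration
interface Tunnel0
 nhrp group SPOKE-GROUP

As far as I understand, the per-tunnel policy needs a shaping parent like this, and the physical-interface policy that matches IPsec and reserves bandwidth for DMVPN stays in place underneath it.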
02-21-2022 07:59 AM
The document mentions that this Per-Tunnel QoS enhancement is only supported in some IOS XE versions; other versions do not support it.
So this should only be configured if all spoke DMVPN routers support the enhancement.
The NHRP group is exchanged between a Spoke and the Hub; the other Spokes do not learn it from the Hub.
The NHRP group is sent in the registration request, while the NHRP VPE attribute is sent in the resolution request/reply.
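If your release does support the enhancement, the spoke side would look roughly like this (names and rates are just examples, and note the caveat above that on some code nhrp attribute group may replace, rather than add to, the plain nhrp group):

policy-map PM-SPOKE-TO-SPOKE
 class class-default
  shape average 20000000
  ! a child queuing policy can be nested here with "service-policy <child>"
!
interface Tunnel0
 ! group sent to the hub in the NHRP registration (used for spoke-to-hub QoS on the hub)
 nhrp group SPOKE-GROUP
 ! group carried in the NHRP resolution request/reply (used for spoke-to-spoke QoS)
 nhrp attribute group SPOKE-S2S-GROUP
 ! the receiving spoke maps the learned group to its own policy, like a hub does
 nhrp map group SPOKE-S2S-GROUP service-policy output PM-SPOKE-TO-SPOKE

show dmvpn detail should then show which policy got attached to each spoke-to-spoke session.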
Good Luck....
02-23-2022 10:35 AM
From what you describe, if I understand it correctly, you're doing QoS incorrectly.
To use QoS effectively, you need to control/manage all bandwidth.
As a simple example, if you're "sharing" your interconnect between DMVPN and "other traffic", although you're LLQing DMVPN outbound, what about inbound? (I.e. not just hub traffic - inbound too.)
Doing DMVPN "right" (regarding QoS) is difficult enough when managing bandwidth between hub and spokes, i.e. ensuring the spokes' aggregate doesn't overrun the hub's ingress, balanced against being unable to "use" bandwidth that other spokes aren't using at that moment. Compounding this problem, when you enable spoke-to-spoke traffic, many-to-one aggregation is again the issue.
One of Cisco's recent DMVPN variants supports dynamic (adaptive) shaping between DMVPN end points. Using that (if available on your platform) allows one location's egress to shape to another location's available ingress bandwidth. However, for real-time traffic, like VoIP, you might run into situations where dynamic shaping doesn't provide enough bandwidth for such traffic.
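For reference, the feature I have in mind is Adaptive QoS over DMVPN. A rough sketch of the shaper (the rates are just examples, and the exact syntax and availability depend on your IOS XE release):

policy-map PM-ADAPTIVE-PARENT
 class class-default
  ! the shaper adapts between the two bounds based on feedback from the far end,
  ! instead of shaping to a fixed rate
  shape adaptive upper-bound 100000000 lower-bound 20000000
  service-policy PM-QUEUING-CHILD
!
interface Tunnel0
 nhrp map group SPOKE-GROUP service-policy output PM-ADAPTIVE-PARENT

PM-QUEUING-CHILD here stands for whatever queuing/priority child policy you already use; as noted, the LLQ'd real-time traffic still has to fit within whatever rate the shaper adapts down to.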
If you haven't had any problems so far - you mention at least one gig interface - perhaps you simply haven't had enough congestion to cause any issues. I.e. possibly later, if not at the moment, you'll bump into "QoS" problems.
I would suggest a further review of your "QoS" design.