Choose one of the SD-WAN Resources topics below to help you on your journey with SD-WAN.
This community is for technical, feature, configuration and deployment questions.
For production deployment issues, please contact the TAC!
We will not comment or assist with your TAC case in these forums.
To participate in this event, please use the button below to ask your questions
Ask questions from Friday, March 20 to Friday, April 3, 2020
**Helpful votes encourage participation!**
Please be sure to rate the answers to questions.
Hi Daniel and David,
Thank you so much for delivering such an amazing session this past Tuesday.
These are some of the questions that were not covered during the live session:
Hopefully it will be included in VIRL2 when it is released, but I haven't seen any definite information on that. It's also possible to use various virtualization platforms to set up your own lab.
I need to use a tracker on the DIA route on the cEdge, so that if the internet link fails, the router can use the MPLS tunnel to the DC and start using the internet from the DC. Is there a workaround to solve this?
Currently, trackers aren't supported on cEdge, but support is coming. I don't know of a workaround so far. One might become possible once there is support for adding CLI templates to devices managed by vManage, which isn't supported today.
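For reference, on vEdge platforms (where the feature is supported today) a DIA tracker is defined under `system` and attached to the transport interface. A rough sketch, with the tracker name, probe endpoint, and interface name all chosen as examples:

```
system
 tracker dia-tracker
  endpoint-ip 192.0.2.1        ! example probe target reachable only via DIA
!
vpn 0
 interface ge0/0               ! example internet-facing transport interface
  nat
  tracker dia-tracker          ! withdraw the DIA NAT route if the probe fails
```

When the tracker goes down, the local DIA default route is removed and traffic can fall back to the tunnel toward the DC. On cEdge you would need the equivalent feature once it ships.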
Destination-based NAT is supported in service VPNs: https://www.cisco.com/c/en/us/support/docs/routers/sd-wan/215106-service-side-destination-based-network-a.html
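Based on my reading of that document, the destination match-and-translate is driven by a centralized data policy. A hedged sketch of the data-policy piece (list names, VPN list, and pool number are all placeholders, and the matching NAT pool must also be defined on the device):

```
policy
 data-policy DNAT-POLICY
  vpn-list SERVICE-VPN-LIST
   sequence 10
    match
     destination-data-prefix-list REAL-SERVER-PREFIX
    action accept
     nat pool 1                 ! translate to the service-side NAT pool
   default-action accept
```

See the linked document for the full configuration, including the pool definition and how the policy is applied from vSmart.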
Would you mind elaborating on your query so we can pinpoint your requirement?
For vManage HA (high availability) using the disaster recovery concept, is it necessary to have a vManage cluster at both sites (the central hub and the secondary site), or can we enable it with only two vManages (one at each site, plus one as the arbitrator)?
We have an on-premises deployment, and 3 + 3 + 1 vManages is too much resource consumption.
You don't necessarily need the arbitrator (if you're trying to reduce resource usage), since switchover can be done manually by specifying a switchover threshold.
The high availability documentation does not state which services should be running, only that the cluster must have three nodes:
"Prior to configuring disaster recovery, make sure you have met the following requirements:
You must have two vManage clusters with three nodes in each cluster. If automated recovery option is selected, then another vManage node is required."
If you define the services to be enabled on every vManage node, you would have to consider the services enabled for all of them:
Select the services to run on the vManage server:
Application Server—Each vManage NMS in the cluster must be a web application server, which is an HTTP or HTTPS web server for user sessions to the vManage NMS. Through these sessions, a logged-in user can view a high-level dashboard summary of network events and status, and can drill down to view details of these events. A user can also manage network serial number files, certificates, software upgrades, device reboots, and configuration of the vManage cluster itself from the vManage application server.
Statistics Database—Stores all real-time statistics from all Cisco vEdge devices in the network. These are the statistics that are displayed on the various vManage screens. You can run up to three iterations of the statistics database in a vManage cluster.
Configuration Database—Stores the inventory and state and the configurations for all Cisco vEdge devices. You can run up to three iterations of the configuration database in a vManage cluster.
Messaging Server—Each vManage NMS in the cluster must be a messaging server. The messaging server provides a communication bus among all the vManage servers in the cluster. This bus is used to share data and to coordinate operations among the vManage instances in the cluster.
I would not recommend running fewer than three vManage instances per cluster. But you can definitely reduce compute resources by not using the arbitrator.
Thanks for this opportunity of policy discussion.
I am trying to build a control policy for my customer. The setup is as follows:
Central Hub (CH) --> MPLS (blue) --> Regional Hub (RH) --> Internet (biz-internet/public-internet) --> Multiple Spokes (MS)
CHs and RHs are connected via MPLS using color blue, while RHs and MSs are connected via the internet.
***There is also a direct internet-based tunnel from CH to MS (this is to be less preferred and used only in case of RH failure).
I've explored the Cisco documentation and discovered the 'tloc-action' feature, which seems to suit my needs. Kindly let me know how I can use it. For now, I've designed my CH policy as below, and it seems to always prefer the direct internet-based tunnel instead of going via the RH:
site-list sl-MS_ALL <-- All site-id of MS routers
tloc-list tl-RH_SDWGW_SITE_MPLS_TLOCS <-- mpls tlocs of RHs
From CH to MS
Please let me know if you need more logs
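To illustrate how `tloc-action` is typically combined with a `tloc-list` in a centralized control policy, here is a hedged sketch built around your list names. The sequence numbers, policy name, and apply direction are assumptions; the intent is that routes toward the MS sites carry the RH MPLS TLOCs as the primary path, with the original (direct internet) TLOC kept as backup:

```
policy
 control-policy cp-PREFER-RH
  sequence 10
   match route
    site-list sl-MS_ALL
   action accept
    set
     tloc-action primary
     tloc-list tl-RH_SDWGW_SITE_MPLS_TLOCS
  default-action accept
!
apply-policy
 site-list sl-CH                     ! hypothetical list of CH site-ids
  control-policy cp-PREFER-RH out
```

With `tloc-action primary`, the set TLOCs are preferred and the route's original TLOC is used only if they fail, which matches your requirement of using the direct CH-to-MS tunnel only on RH failure. If the direct tunnel is still preferred, it would be worth checking which direction the policy is applied in and whether the affinity/preference on the TLOCs overrides it.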
They can definitely have different configurations and therefore be selected as a "better" option, but it's highly advised not to do that. The recommended approach is to let vManage push the configuration to both, which will make sure it is consistent between them.