Note: This lab utilizes the dCloud SD-WAN platform. You must schedule this ahead of time before proceeding below!
Digitization is placing unprecedented demands on IT to increase the speed of services and applications delivered to customers, partners, and employees, all while maintaining security and a high quality of experience. With the adoption of multi-cloud infrastructure, the need to connect multiple user groups in an optimized and secure manner places additional demands on IT teams.
The traditional architectural method of delivering traffic optimization (e.g. load balancing, security policy, WAN optimization, etc.) relied on centralized provisioning of elements such as Firewalls, Intrusion Detection/Prevention sensors, Data Leak Prevention systems, URL filtering, Proxies and other such devices at aggregation points within the network (most commonly the organization’s Data Centers). For SaaS applications and Internet access, this approach resulted in backhauling user traffic from remote sites into the main Data Centers, which increased application latency and negatively impacted overall user experience. For applications hosted in the Data Center, this approach resulted in the potential waste of Data Center bandwidth resources. Additionally, this architecture made it challenging to effectively mitigate security incidents, such as virus outbreaks, malware exploits and internally sourced denial of service attacks.
Today, as we move into the era of SDWAN, this problem is exacerbated by the architectural shift into a distributed access model. Branches and users are now free to access SaaS applications and Internet resources directly – bypassing the aggregation points highlighted above. While this provides a much more efficient method of moving data from point A to point B, it poses a challenge to IT teams looking to maintain their traditional optimization and security policies.
Cloud onRamp for CoLocation solves these challenges by virtualizing your network edge and extending it to colocation centers – bringing the cloud to you, rather than you extending to the cloud. Cisco Cloud onRamp for CoLocation provides virtualization, automation and orchestration for your enterprise – negating the need to design infrastructure for future requirements up front by providing an agile way of scaling up and down as required.
Step 1: Building a Cloud onRamp CoLocation Cluster
Cloud onRamp for CoLocation is designed to be prescriptive and turn-key. Hence, when the solution is purchased, the equipment can be drop-shipped to the colocation facility of choice, where it will be racked, stacked and cabled by local resources.
Upon initial boot-up, the inbuilt components of NFVIS (namely, vDaemon) will begin the process of auto-provisioning. This is where our lab picks up. Our assumption here is that the equipment is racked, cabled and booted up. It has built a control channel to vManage and is awaiting further provisioning by the administrator.
The following requirements must be met when implementing Cisco SDWAN Cloud onRamp for CoLocation:
You must be running IOS-XE v16.10.1 or greater on your Catalyst switches
You must be running Network Advantage or greater on your Catalyst switches
CSPs must be running NFVIS v3.11.1 or greater
You must be running vManage v19.1 or greater
You must have a DHCP server (capable of DHCP Option 43) running in the management subnet of the colocation facility (a sample scope is sketched below)
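For reference, a minimal DHCP scope for the colocation management subnet might look like the following IOS-style sketch. It assumes the management addressing used later in this lab (10.1.0.0/24, gateway 10.1.0.1) and points Option 43 at the Switch PnP Server IP (10.1.0.100) so the Catalyst switches can locate the Colo Configuration Manager; the exact PnP option string is an assumption and should be verified against the Cisco Plug and Play documentation for your release.

ip dhcp excluded-address 10.1.0.1 10.1.0.100
ip dhcp pool COLO-MGMT
 network 10.1.0.0 255.255.255.0
 default-router 10.1.0.1
 ! Hypothetical PnP string: K4 = HTTP transport, I = PnP server IP, J = TCP port
 option 43 ascii "5A1N;B2;K4;I10.1.0.100;J80"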
1. The first step to cluster creation is to allocate the devices appropriately within vManage. Navigate to Configuration, Network Hub (Note: this menu item will be renamed with v19.1+).
2. Click the Configure & Provision Cluster button at the bottom of the screen.
3. Provide a Name, Site ID, Location and Description for your cluster. With the exception of Site ID, these values are somewhat arbitrary and useful for distinguishing clusters in the event that your organization has many. In a real-world deployment, ensure that the value you choose for Site ID falls in line with the organization's Site ID structure for other overlay elements. Here, I will use the following values:
4. Next, we need to assign switches and CSPs to this cluster. Click on the Switch1 icon in the graphic.
5. Provide a name for your switch, select a serial number from the dropdown list and click the Save button (Note: it does not matter which serial number you choose. The switches will form a Virtual Switch Stack and operate as one logical entity):
6. Repeat the previous step for Switch2 and each CSP (Note: the CSPs have identical cabling to each Catalyst switch in the cluster. Hence, it does not matter which serial number you choose from the dropdown for each CSP).
7. Click the Credentials button on the right pane.
8. Since the solution ships with a blank configuration, it is important to configure credentials on each device. While a customer is unlikely to access the console of these devices directly, credentials will be necessary for Cisco TAC troubleshooting and for more advanced review of the configuration. Click the pencil icon next to the existing admin credentials already present (or, create your own with the New User button):
9. Provide a username, password and role (or simply a password, if using the default admin account) and click the Save Changes button.
10. Click the Save button to close out the credential window.
11. Click the Resource Pool button.
12. Here, we will set the "underpinnings" of how the cluster will stitch together the various service chains that we build. As VNFs are built on each cluster, the solution needs a pool of VLANs (L2) and IP Addresses (L3) to pull from that can be used to stitch devices together. For the DTLS Tunnel IP (a.k.a. System IP), enter a pool of IP Addresses that can be assigned as System IPs to devices that will join the SDWAN overlay (in the real world, this should fall in line with the organization's System IP policy). For this lab, we will use 184.108.40.206 through 220.127.116.11:
13. Enter the VLAN pool that can be used in the Service Chain VLAN Pool. These VLANs are (typically) not exposed outside of the cluster, though they can be. Ensure that you are using VLANs that do not overlap with other Data Center resources. This pool will be used to stitch VNFs together from a Layer 2 perspective. Here, I will use 1050 through 1100:
14. Since the solution is capable of importing basic values into VNFs as they boot up (as part of Day0 provisioning), provide a pool of IP Addresses that the system can use to assign to data interfaces of VNFs. This value stitches the VNFs together in a Service Chain at Layer 3. Here I will use 10.2.0.1 through 10.2.0.254:
15. Next, enter an IP Address pool that can be used by each VNF for management purposes. If the VNF has a dedicated management interface, the solution will automatically assign an IP Address to it from this pool. This address should be routable throughout your organization so that you have management access to each individual VNF. Here, I will use 10.1.0.101 through 10.1.0.254:
16. Next, set the Management Gateway Prefix (i.e. the Default Gateway) for the management subnet you just entered. Here, I will use 10.1.0.1.
17. Set the Management Mask to the correct subnet mask for your management subnet. Here, I will use /24.
18. Lastly, enter the IP Address that will be used for Colo Configuration Manager (CCM) in the Switch PNP Server IP field. Since the Catalyst switches do not run vDaemon (and, hence, cannot connect directly with vManage), a separate container is spawned on the CSPs called Colo Configuration Manager. This "mini server" acts as a proxy between the switches and vManage. vManage pushes configuration for the switches to the CCM container, which then relays that configuration directly to the Catalyst 9500s (Note: CCM does not have a GUI, nor is it expected that the customer will interface directly with CCM. Outside of providing the IP address it should use, the install is silent). Here, I will use 10.1.0.100/24:
19. Click the Save button.
20. Next, click the Cluster Settings button, followed by NTP. Notice that you can also specify a Syslog server using this method.
21. Enter the NTP server(s) you wish to use for this cluster (Note: NTP is an important consideration here as authentication to the SDWAN overlay relies on certificate validation). Here, I will use 10.1.0.1:
22. Click the Save button.
23. Click the Save button again to save the entire cluster configuration.
24. Now that our cluster is configured, we need to activate it. This process generates the templates that will be used to configure the CSPs as well as CCM. Click on the ellipsis (three dots) to the right of your new cluster and choose Activate.
25. In the screen that follows, you may preview the configuration that was generated by clicking on each of your CSPs:
26. Click the Configure Devices button.
27. In the window that appears, confirm that three devices will be configured (2x CSP and 1x CCM).
28. Your cluster will now be activated and each device configured. This process can take upwards of 15 minutes. If your devices have not yet built a control channel to vManage, the cluster status will change to Pending. If your devices have already built their control channels, they will begin configuration.
Step 2: Building a Service Chain and Adjusting Policy
The next step in our journey is to build our Service Chains and start pushing traffic through them. To do this, we must first upload the necessary VNFs to vManage's Software Repository.
1. Navigate to Maintenance, Software Repository.
2. Click the Virtual Images tab.
3. Your dCloud lab has a few packages already downloaded for you to upload as practice.
4. Click the Upload Virtual Image button.
5. Click the Browse button and navigate to the C:\Users\Administrator\Desktop\SD-WAN Demo\Software folder.
6. There are two *.tar.gz files in this folder: one for vEdge Cloud and one for Firepower Threat Defense (FTD). Select these two files (Shift + Click) and click the Upload button.
7. Allow the files a few moments to upload (30-60 seconds).
8. Next, navigate to Configuration, Network Hub.
9. Click the Service Group tab at the top of the page. Since multiple Service Chains can be created that each serve a broader purpose, the concept of Service Groups has been introduced. As an example, suppose you have a Service Group called "Business Partners." Some business partners may not require the same level of scrutiny that others do, while some may require very specific optimization policies. Hence, you can create multiple Service Chains to satisfy these needs all under the broader Service Group. Additionally, Service Groups allow for both lateral and North/South movement between Service Chains. In essence, traffic can enter one Service Chain, but be handed off to a different Service Chain within the Group, when/where necessary.
10. Click the Create Service Group button.
11. Provide a Name and Description for your Service Group. Here, we can use "SDWAN_Router_FirewallSG" and "SDWAN Router with Firewall":
12. Click the Add Service Chain button.
13. In the pane that appears, enter the details of your Service Chain. Specifically, enter a Bandwidth (used to ensure that the chain is placed on a CSP that can support your requirement and for allocating the correct QoS shaping/policing values), Input VLAN (used to tell the solution how you will deliver traffic to the Service Chain), Output VLAN (used to determine how traffic will exit the Service Chain) and Service Chain structure (used to derive the order of VNFs). Here, we will use 1024 Mbps for the bandwidth, VLAN 10 for the Input VLAN, VLAN 20 for the Output VLAN and Create Custom:
14. Click the Add button.
15. In the window that appears, click and drag the router and firewall icons to the blank Service Chain pane:
16. Click the router icon in the Service Chain.
17. In the window that appears, select the software package that will be used to instantiate this VNF (in our case, vEdge Cloud).
18. Click the Fetch VNF Properties button that appears.
19. Name your router and select CPU, memory and Disk allocations (these should pre-populate for you as part of the package).
20. Select an available Serial Number from the dropdown list (any available is fine, since these are all vEdge Cloud serials).
21. Enter the WAN IP Address and Gateway. Here, I will use 192.168.10.180 and 192.168.10.1, respectively.
22. Enter the Service VPN Number. Devices that follow in the Service Chain will exist within this VPN/VRF on the SDWAN overlay. Here, I will use VPN 10.
23. Click the Configure button.
24. Next, click on the firewall icon within the Service Chain.
25. Select your FTD package.
26. Notice the option for HA appears as well as a VNF termination mode (Routed/Transparent). For capable VNFs, these options will be presented and, if checked, automatically configured for you (i.e. dual FTD appliances will be built and automatically joined as an HA pair). Click the Fetch VNF Properties button.
27. Configure your FTD appliance as follows:
28. Click the Configure button.
29. Click the Save button to save your Service Chain.
30. In the screen that follows, click on the three dots (ellipsis) to the right of your new Service Chain and choose Attach Cluster. Select your newly built cluster and click the Attach button. This will fail in your dCloud lab since no actual hardware is present!
vManage will then begin the process of installing the necessary software and providing basic configuration to your new Service Group VNFs. After a brief period (3-5 minutes), your Service Group is ready to use. At this point, you may want to finish configuration on these VNFs outside of vManage (e.g. if this is a Firewall, you may want to finish setting up a more granular security policy). Once finished, it's time to adjust policy to influence traffic through your Service Chain. There are two ways you can push traffic through Service Chains: one is through traditional routing (i.e. have the Service Chain advertise a Default Route into the overlay), while the second is through a Service Insertion policy (explained below). The following picture, though not specific to this lab, illustrates this concept:
1. Navigate to Configuration, Policies.
2. Click the Add Policy button.
3. Identify which applications will qualify for service insertion. Here, we will use the default Microsoft_Apps Application Family, though you can create your own.
4. Click the Next button.
5. Click the Next button again, since we will not be modifying the topology for this VPN.
6. Click the Traffic Data tab (we will be creating a Centralized Data Policy).
7. Click the Create Policy button, followed by Add New.
8. Give your policy a Name and a Description. Here, we will use CoR-CoLo-Policy and Cloud onRamp for CoLo Policy.
9. Click the Sequence Type button, followed by Service Chaining:
10. Click the Sequence Rule button.
11. In the Match tab, click the Applications/Application Family List button.
12. Select your Application Family list created in the above step. In our case, we will select Microsoft_Apps:
13. Click the Actions tab at the top of the window pane.
14. Click the Service button.
15. Choose the Firewall service in the dropdown, enter 10 in the Service: VPN field, enter 18.104.22.168 in the Service: TLOC IP field, select biz-internet from the Color dropdown and select IPSEC from the Encapsulation dropdown. This entry establishes which SDWAN router to forward Microsoft traffic to (a CLI sketch of the resulting policy appears after these steps):
16. Click the Save Match and Actions button.
17. Click the Save Data Policy button:
18. Click the Next button.
19. Click the Traffic Data tab on the Policy Application screen.
20. Enter a Name and Description for your new policy.
21. Click the New Site List and VPN List button.
22. Select the From Service radio button.
23. Select the AllBranches list in the Select Site List dropdown (created for you).
24. Select the dataVPN list in the Select VPN List dropdown (created for you).
25. Click the Add button.
26. Click the Save Policy button.
27. Activate your policy by clicking on the three dots (ellipsis) next to your new policy and clicking Activate.
28. Validate traffic flow by generating traffic from a branch location to any Microsoft application on the Internet (instructor only!).
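For reference, the centralized data policy built in the steps above renders on vSmart to CLI roughly like the sketch below. The generated policy and list names are assumptions (vManage auto-generates them); the service type FW corresponds to the Firewall service selected in the GUI, and the VPN, TLOC IP, color and encapsulation match the values entered in step 15.

policy
 data-policy _dataVPN_CoR-CoLo-Policy
  vpn-list dataVPN
   sequence 1
    match
     app-list Microsoft_Apps
    !
    action accept
     set
      service FW vpn 10 tloc 18.104.22.168 color biz-internet encap ipsec
     !
    !
   !
   default-action accept
  !
 !
!
apply-policy
 site-list AllBranches
  data-policy _dataVPN_CoR-CoLo-Policy from-service
 !
!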
Step 3: Monitoring
1. As a final step, you can monitor your new service chains and clusters via vManage. Navigate to the Monitor, Network menu.
2. Under normal circumstances, any SDWAN devices that are part of the overlay show in this screen - including those hosted on the Cloud onRamp for CoLo cluster. Notice the Network Hub Clusters menu at the top of the screen. Click on this link.
3. This screen provides a brief status overview of all clusters (both CSP and VNF operational status). Click on your cluster.
4. The screen that appears shows the status of individual cluster components. Notice the CPU, RAM and Disk Space allocations. Notice also that switch status appears on this screen. Click on the Services tab at the top:
5. Had your Service Chain successfully attached (with real hardware), each chain and its associated status would show on this screen (see instructor screen):
6. Click on the VNF tab at the top.
7. For individual monitoring of VNFs, you can utilize this tab. Clicking on any of the VNFs will result in a monitoring screen that displays elements such as CPU, Disk and Network usage (see instructor screen):
Note: This lab utilizes the dCloud SD-WAN platform. You must schedule this ahead of time before proceeding below!
Cloud onRamp for IaaS extends the fabric of the Cisco SDWAN overlay network into public cloud instances, allowing branches with cEdge/vEdge routers to connect directly to public-cloud application providers. By eliminating the need for a physical Data Center, Cloud onRamp for IaaS improves the performance of SaaS applications.
Step 1: Configure Pre-requisites
The connection between the SDWAN overlay network and a public-cloud application is provided by two redundant vEdge Cloud or cEdge routers, which act together as a transit between the overlay network and the application. Using two routers to form the transit offers path resiliency to the public cloud. In addition, having redundant routers assists in brownout protection to improve the availability of public-cloud applications. Together, the two routers can remediate link degradation that might occur during brownouts.

Cloud OnRamp for IaaS discovers any existing private cloud instances in cloud regions and allows you to select which of them to make available for the overlay network. In such a brownfield scenario, Cloud OnRamp for IaaS allows simple integration between legacy public-cloud connections and the Cisco SDWAN overlay network.

You configure and manage Cloud OnRamp for IaaS through the vManage NMS server. A configuration wizard in the vManage NMS automates the provisioning and connections between public-cloud applications and the users of those applications at branches in the overlay network. Cloud OnRamp for IaaS works in conjunction with AWS virtual private clouds (VPCs) and Azure virtual networks (VNets).
All Cisco SD-WAN devices configured for the Cloud onRamp for IaaS service must meet these requirements:
You must have at least two available vEdge Cloud or cEdge devices in the vManage device listing with generated bootstrap parameters (these will be used in the gateway VPC/VNET).
You must create a configuration template (to be defined below) to attach to the two vEdge Cloud/cEdge devices. This template can be a CLI or Feature Template.
The above template must define at least one Service Side VPN.
In the case of Microsoft Azure, the host VNet Default Subnet and VNet Gateway Subnet must be part of host VNet Address Block. For example:
Host VNet Address Block – 192.168.1.0/24
Default Subnet – 192.168.1.240/28
VNet Gateway Subnet – 192.168.1.0/28
Each gateway VPC/VNet can accommodate 16 host VPCs/VNets (considering an IKE-IPsec limit of 64 sessions per vEdge Cloud)
This feature is currently only supported on vEdges (cEdge support for AWS will come in March, 2019 with Azure and GCP support following in July, 2019 on CSR1KV).
1. We first need to create a template to attach to our new IaaS routers. Navigate to Configuration, Templates.
2. Click the Feature tab at the top of your screen, followed by the Add Template button:
3. Click on vEdge Cloud:
4. Click on the VPN button:
5. First, specify a name for your new Feature Template. In our case, since this template will define the parameters for our WAN VRF/VPN, we’ll use WAN-VPN0.
6. Next, specify a description for your template. We will use WAN – VPN0 Configuration.
7. Modify the VPN field under the Basic Configuration heading. Set this field to Global and its value to 0:
8. At the bottom of the page, click the Save button to save the template.
9. Next, we will create an Interface Feature Template to define interface parameters (IP Addressing, NAT, speed, duplex, etc.).
10. Click the Add Template button.
11. Click on vEdge Cloud as the device type.
12. Click the VPN Interface Ethernet button.
13. Provide a name for your interface template. This template will define parameters for the Internet-facing interface of the vEdge (ge0/0). Hence, we will name our template VPN0-Internet-Interface.
14. Provide a description for the template. We will use Internet WAN Interface.
15. Under the Basic Configuration heading, set the Shutdown option to Global and its value to No. This will ensure that the interface is physically enabled.
16. Set the Interface Name option to Global as well and its value to ge0/0. This value will ensure that the configuration push selects the correct interface to apply the configuration (note that this is case sensitive):
17. Under IPv4 Configuration, change the radio button to Dynamic.
18. Next, scroll down to the Tunnel section.
19. Set the Tunnel Interface option to Global and its value to On.
20. Set the Color option to Global and select biz-internet for the value.
21. Set the Restrict option to Global and its value to On.
22. Set the Control Connection option to Global and its value to On.
23. Click the Save button.
24. We have now finished configuring the WAN side templates of the router. Next, we will focus on the LAN side (a CLI sketch of what these feature templates render to on the vEdge appears after this step list). Start by clicking on Add Template.
25. Choose vEdge Cloud as the device type.
26. Click the VPN button.
27. Name your template. Since this template will refer to Service VPN 10, we will use LAN-VPN10.
28. Provide a description for your template. We will use LAN – VPN10 Configuration.
29. Under Basic Configuration set the VPN parameter to Global and its value to 10.
30. Click on the Save button.
31. In the Feature Templates listing, click the Add Template button.
32. Choose vEdge Cloud as the device type.
33. Click the VPN Interface Ethernet button.
34. Provide a name for your template. This template will address the physical interface properties for our Service VPN 10 interface, so we will name ours VPN10-LAN-Interface.
35. Provide a description for your template. We will use VPN 10 LAN Interface Configuration.
36. Under Basic Configuration set the Shutdown parameter to Global and the value to No.
37. Set the Interface Name parameter to Global and the value to ge0/2.
38. Under IPv4 Configuration, change the radio button to Dynamic.
39. Click the Save button at the bottom.
40. Now it’s time to build your parent template. Click the Device tab in the upper left corner, or simply navigate to Configuration, Templates.
41. Click the Create Template button, followed by From Feature Template.
42. Choose vEdge Cloud as the Device Model.
43. Provide a name for your template. Here, we will use AWS-vEdge-Template.
44. Provide a description for your template. Here, we will use AWS Device Template.
45. Next, we need to fill in the blanks with the templates we created previously along with a few that were created for you.
46. Under the Basic Information section, click the down arrow next to each parameter and set the following:
47. Scroll down to the Transport & Management VPN section and set the following parameters. You will likely need to click the icon to the right of the VPN512 Management VPN section:
48. In the Service VPN section, click the plus sign to add a new Service VPN.
49. For your new Service VPN, click the button on the right to add a new VPN interface.
50. Configure your Service VPN as follows:
51. Lastly, under the Additional Templates section, set the following parameters:
52. Click the Create button.
53. Navigate to Configuration, Templates.
54. Click the ellipsis (three dots) to the right of your new AWS template, followed by the Attach button.
55. Select two available vEdge Cloud Chassis IDs from the window that appears and click the right arrow:
56. Click the Attach button.
57. Since several components of the Device Template were set to Device Specific, we need to input these variables here. Click the ellipsis on the far right of each vEdge Cloud and input the following:
58. Click the Next button.
59. Click the Configure Devices button.
60. Click the Confirm checkbox, followed by the OK button.
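For reference, the feature templates attached above render on a vEdge Cloud to CLI roughly as follows. The interface names and DHCP addressing mirror the template values; the encapsulation line reflects the template default and is an assumption, and the VPN 512 and system sections are omitted for brevity.

vpn 0
 interface ge0/0
  ip dhcp-client
  tunnel-interface
   encapsulation ipsec
   color biz-internet restrict
  !
  no shutdown
 !
!
vpn 10
 interface ge0/2
  ip dhcp-client
  no shutdown
 !
!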
Step 2: Enable Cloud onRamp for IaaS
1. Navigate to Configuration, Cloud onRamp.
2. Click the Add New Cloud Instance button.
3. Choose the AWS radio button.
4. Enter the API Key and Secret Key obtained from your lab instructor. In a real-world deployment, you can obtain these items from the AWS Dashboard under the My Security Credentials menu. In this menu, click the Create Access Key button to generate a new API Key and Secret Key:
5. Click the Login button.
6. In the Choose Region dropdown, enter us-west-1:
7. Name your Transit VPC. Here, we will use Test-Pod1-Transit.
8. Select the version of code to run on the device from the WAN Edge Version dropdown box. We will use vedge-18.3.3.
9. Select the CPU allocation for the vEdge(s) in the Size of Transit vEdge dropdown. We will use c3.large (2 vCPU) since our deployment is rather small.
10. For the Device 1 and Device 2 dropdown menus, select each of your vEdges allocated in the preceding steps.
11. (Optional) Under Advanced Options, set the Transit VPC CIDR value (the IP Range that will be used on the LAN interface of your vEdge) and select your PEM key (generated and uploaded to AWS).
12. Click the Proceed to Discovery and Mapping button to allow vManage to auto-discover the AWS Host VPCs.
13. Choose an account from the dropdown menu (there should only be one in this lab) and click the Discover Host VPCs button.
14. You will notice several "Test-PodX" VPCs in the output that follows. Choose the VPC that corresponds to your pod, as dictated by the instructor. Do not choose any other VPCs:
15. Click the Next button.
16. In the screen that follows, ensure that your pod's VPC is still selected and click the Map VPCs button.
17. In the window that appears, your Transit VPC and VPN should be set for you. Review for correctness and click the Map VPCs button.
18. Click the Save and Complete button.
19. The task manager window will appear showing the status of the build. This process may take up to 45 minutes to complete!
20. (AWS Dashboard access required) You can review the progress from the AWS dashboard in several places. To see the new Transit VPC that's being built, navigate to the Services menu, followed by VPCs. Click on the VPCs link in the window that appears.
21. The VPCs window should show your new Transit VPC being created:
22. Your vEdge Clouds will be instantiated within this VPC and automatically begin route propagation with the VPCs that you mapped to (in this case, Test-Pod1 VPC - 172.16.1.0/24). To view the status of these vEdge Clouds, navigate to Services, followed by EC2. Click on the Running Instances link:
23. Once this process completes, you should be able to see your VPC routes (172.16.x.x/24) within the VPN10 routing table:
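If you have SSH or console access to one of the routers in the overlay, a quick way to confirm this from the CLI is sketched below (a hedged example; the exact prefix will match your pod's VPC):

vEdge# show ip routes vpn 10 172.16.1.0/24
vEdge# show omp routes vpn 10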
Note: This lab utilizes the dCloud SD-WAN platform. You must schedule this ahead of time before proceeding below!
Cloud onRamp is one of the most popular features to enable within Cisco SDWAN. As many organizations start to make their shift to the cloud, many are looking for solutions to help facilitate the transition. As organizations increasingly take advantage of Software as a Service (SaaS), it becomes increasingly difficult to provide a positive user experience (i.e. we can’t control the Internet!). Cloud onRamp for SaaS is a feature that enables us to squeeze maximum efficiency out of our Internet links by selecting the best performing circuit to get a user’s traffic to the SaaS destination. Similar to how your Cisco SD-WAN solution monitors your transports via Bidirectional Forwarding Detection (BFD), Cloud onRamp utilizes HTTP ‘ping’ packets to monitor the loss, latency and response time for a SaaS application. These packets are sent every few seconds from each configured Internet transport. When a user accesses the configured SaaS application, the best transport is chosen based on the telemetry collected.
Step 0: Pre-requisites
Since this lab serves multiple use-cases, we must prep it with a bit of pre-configuration prior to starting with Cloud onRamp.
1. From the Configuration, Templates menu, click on the ellipsis to the right of the template named DC-vEdges and choose Edit.
2. Scroll down to the Service VPN sub-section.
3. Click the OSPF button to the right of the Service VPN 10 instance (VPN10withService).
4. Click the down arrow for your new OSPF instance and select the Feature Template (OSPF-DC-Lab6):
5. Click the Update button at the bottom of the screen.
6. Click the Next button.
7. In the next window, you can review the configuration changes to the devices by clicking on them in the left-hand pane, or click the Configure Devices button to proceed.
8. In the Configure Devices window that appears, check the Confirm configuration changes for 4 devices box and click OK.
9. From the Configuration, Templates menu click on the ellipsis to the right of the template named BR1-VE1-Template and choose Edit.
10. Enable the LAN-facing interface by editing line 118 to read no shutdown:
Step 1: Enable Cloud onRamp for SaaS
You can enable the Cloud onRamp for SaaS service (f.k.a. CloudExpress) in sites with Direct Internet Access (DIA) and in DIA sites that access the Internet through a secure web gateway such as Cisco Umbrella. You can also enable the Cloud onRamp service in client sites that access the Internet through another site in the overlay network, called a gateway site. Gateway sites can include regional data centers or carrier-neutral facilities. When you enable the Cloud onRamp service on a client site that accesses the Internet through a gateway, you also enable the Cloud onRamp service on the gateway site.
All Cisco SD-WAN devices configured for the Cloud onRamp service must meet these requirements:
The devices must run Viptela Software Release 16.3 or higher.
The devices must run in vManage mode.
You must configure a DNS server address in VPN 0.
You must configure local exit interfaces in VPN 0:
If the local interface list contains only physical interfaces, you must enable NAT on those interfaces. You can use normal default IP routes for next hops.
If the local interface list contains only GRE interfaces, you do not need to enable NAT on those interfaces. You can add default routes pointing to the destination IP addresses of the GRE tunnels (a vEdge CLI sketch of these VPN 0 prerequisites follows this list).
This feature is currently only supported on vEdges (cEdge will come in July, 2019).
Additionally, the Cloud onRamp service runs on IPv4 only. It does not support IPv6 at this time.
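On a vEdge, the VPN 0 prerequisites above translate roughly to the configuration sketched below (the DNS server, interface name and color are placeholders for illustration):

vpn 0
 dns 208.67.222.222 primary
 interface ge0/0
  ip dhcp-client
  nat
  !
  tunnel-interface
   color biz-internet
  !
  no shutdown
 !
!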
1. To begin our configuration, let’s make sure the feature is enabled globally. Navigate to Administration, Settings.
2. Find the Cloud onRamp for SaaS feature. It should be enabled for you, but if not, click on the Edit link and enable it:
3. Next, head to the Cloud onRamp menu at the top of the screen and click on Cloud OnRamp for SaaS:
4. You will be presented with the following screen. Review the instructions and click Dismiss:
5. As mentioned in the instructions you just reviewed, the first step to configuring the service is to add the Applications and Service VPNs you wish to monitor. Click the Manage CloudExpress button in the upper right and choose Applications:
6. Click on the Add Applications and VPN button.
7. In the window that appears, choose the SaaS application you wish to monitor and enter VPN 10 in the VPN field. We’ll choose DropBox in our example:
8. Click the Add button, followed by the Save Changes button.
9. Next, we need to let the system know where our Client Sites are (i.e. the sites that will be using this service). Click the Manage CloudExpress button and choose Client Sites.
10. Click the Attach Sites button.
11. Choose your branch locations (denoted by a hostname beginning with "BR") in the window that appears and click the Attach button (note, Branch 3 has a cEdge allocated to it and, hence, may not be available in the output below):
12. The system will then provision the branch locations as necessary.
13. Now we need to tell the system where our Direct Internet Access sites are located. Click on the cloud in the upper right corner, followed by Cloud OnRamp for SaaS (Cloud Express).
14. Click the Manage CloudExpress button in the upper right and choose Direct Internet Access (DIA) Sites.
15. Click the Attach DIA Sites button.
16. In the window that appears, choose your two branch locations (Site ID 300, and potentially 400) and click the Attach button:
17. The system will then push the necessary configuration to your DIA locations.
18. Lastly, we need to tell the system where our gateway sites are located. In the event of latency or loss at the branch locations, traffic can be backhauled to a gateway location for access. Click on the cloud in the upper right corner, followed by Cloud OnRamp for SaaS (Cloud Express).
19. Click on the Manage CloudExpress button and choose Gateways.
20. Click the Attach Gateways button.
21. Select your two Data Center Site IDs and click the right arrow:
22. Click the Attach button.
23. The system will then begin provisioning the DC routers.
Now, let's monitor your new application. After giving the system a few minutes to begin polling the SaaS application, head back to the main Cloud onRamp for SaaS dashboard (click on the cloud in the upper right corner, followed by Cloud OnRamp for SaaS). Notice that your application now has a vQoE score (vQoE, or Quality of Experience, is a measurement between 1 and 10 that rates the performance of an application over the given transports):
Clicking on the application will pull up additional information (choose VPN – 10 from the VPN List dropdown):
Clicking on the respective vQoE scores will show you how the application has performed historically. In our case, the DC routers aren’t performing quite as well as the Branch routers:
Lastly (optional), you can use the Simulate Flows tool (Device Dashboard, Troubleshooting, Simulate Flows) on each vEdge to see exactly where a router would forward this traffic at any given moment:
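If you prefer the CLI, the same forwarding decision can be checked directly on a vEdge with the CloudExpress show commands (a hedged sketch; exact command availability varies by release):

vEdge# show cloudexpress applications
vEdge# show cloudexpress local-exits
vEdge# show cloudexpress gateway-exits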
Great! Glad to help! Yes, the documentation on TrustSec/MACSec can get a bit confusing as to what does what and what's needed for each feature. You are correct on SGT 2 - I just used it in my example, but you can pick anything you want. It very much compares to the 802.1q Native VLAN, even from a security perspective (i.e. don't use the Native VLAN for data transport and change it from 1 to an unused/restricted VLAN). In a TrustSec mindset, you may want to use this feature similarly as a bit bucket or quarantine tag. That way, if any rogue infrastructure devices pop up on the network and start sending traffic, they (and their traffic) will be properly segmented. Hope this helps! Aaron
Nope - you got it! Yes, you can expect a brief disruption when you enable 'cts manual'. Are you planning to run MACSec encryption on your uplinks/downlinks? If you're just preparing to send/receive SGTs for TrustSec, you don't need a PSK on either end. Just enable 'cts manual' and set up your propagation policy:

interface GigabitEthernet1/0/1
 cts manual
  policy static sgt 2 trusted

The policy above dictates the following:
The peer switch is statically authorized (as opposed to dynamically via 802.1X)
SGT 2 will be assigned to traffic that enters the switch without a tag
Incoming tagged frames will be trusted rather than remarked

...and yes, if you have a co-worker at the other end it will make the change go smoother

Aaron
Inline tagging would be the preferred option as this traffic is processed within hardware (which maintains line-rate performance) - alleviating CPU involvement. SXP on the other hand, being a Control Plane protocol, will chew up CPU cycles depending on how many tags you are propagating back and forth and how often. Generally, this doesn't affect the Data Plane, but can impact other Control Plane functions of the switch.

That said, going back to your question, there are a couple of different ways to skin this cat. My recommendation is as follows. To start with, no, you don't need to enable inline tagging globally... in fact, it's best to start small and localized. Since your C6K is aggregating all of this traffic, your first option could be to just enable inline tagging on the downlinks to specific closets (and conversely, uplinks from the C4Ks). This will allow the C6K to glean edge tagging information in-band. From there, you can peer the C6K with ISE via SXP to glean tagging information on the rest of the network out-of-band. You would still enjoy the benefits of intra-VLAN/closet segmentation while allowing the C6K to perform inter-closet segmentation using a mix of SXP and inline tagging.

Inline tagging-learned tags will trump SXP-learned tags. Thus, as you schedule new maintenance windows and enable inline tagging deeper within your network, the propagated tags from these newly enabled segments will supersede the SXP-learned tags from ISE... eventually negating the need for the peering.

Hope this helps! Aaron
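For reference, the SXP peering toward ISE described above would look roughly like the following on the C6K (the source IP, peer IP and password are placeholders; the switch is configured as the listener so that it learns IP-to-SGT mappings from ISE):

cts sxp enable
cts sxp default source-ip 10.10.10.1
cts sxp default password MySxpSecret
! ISE acts as the SXP speaker; the C6K listens for IP-to-SGT bindings
cts sxp connection peer 10.10.10.50 password default mode local listener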