sitirkey
Cisco Employee

Introduction

SD-WAN Overview  

The Cisco SD-WAN solution builds an overlay WAN that operates over standard network transport services. It is based on the principle of separating the control and data planes of the WAN. The control plane manages the rules for routing traffic through the overlay network, while the data plane securely transports the actual data packets between the network devices. On the data plane, the virtualized network runs as an overlay on cost-effective hardware, whether physical routers or virtual machine routers, called vEdge or cEdge (Viptela routers are called “vEdge” and Cisco IOS XE SD-WAN routers are called “cEdge”). The following figure shows an overview of the SD-WAN components.  

Cisco SD-WAN Solution Architecture  

image.png

 

 SD-WAN Manager  

SD-WAN Manager is the Network Management System (NMS) component of the SD-WAN solution. It centralizes the provisioning, management, and monitoring functions of the SD-WAN network and provides an easy-to-use graphical dashboard with capabilities to monitor, configure, and maintain all SD-WAN devices and links in the overlay network. The SD-WAN Manager NMS also exposes these capabilities through a northbound REST API, which can be consumed by other systems such as orchestration and service assurance platforms.  

  • A single pane of glass for Day0, Day1, and Day2 operations  
  • Centralized provisioning  
  • Policies and Templates  
  • Troubleshooting and Monitoring  
  • Software upgrades  
  • GUI with RBAC  
  • Programmatic interfaces (REST, NETCONF)  
  • Each SD-WAN Manager server can manage up to about 2,000 WAN Edge routers in the overlay network. 

SD-WAN Controller  

The centralized Controllers oversee the control plane of the SD-WAN network. The SD-WAN Controller devices centralize all routing and policy intelligence in the overlay, following a model similar to the "route reflector" in BGP. The Controllers do not take any part in the data plane; they are dedicated exclusively to the control plane and influencing the networking behavior of the SD-WAN routers.  

The Controllers are the centralized brain of the Cisco SD-WAN solution, controlling the flow of data traffic throughout the network. They work with the SD-WAN Validator to authenticate Cisco SD-WAN devices as they join the network and to orchestrate connectivity among the WAN Edge routers.  

The major functions of the Controllers are listed below:

  • Control Plane Connections: Each Controller establishes and maintains a control plane connection with each WAN Edge router in the overlay network. The DTLS connection between a Controller and a WAN Edge router is permanent.  
  • OMP (Overlay Management Protocol): OMP is a routing protocol, similar to BGP, that runs inside the DTLS control plane connections and carries the routes, next hops, keys, and policy information needed to establish and maintain the overlay network. OMP runs between the Controllers and the WAN Edge routers and carries only control plane information.  
  • Key Reflection and Rekeying: The Controller receives data-plane keys from a WAN Edge router and reflects them to the other WAN Edge routers that need to send it data plane traffic.  
  • Policy Engine: The Controller provides rich inbound and outbound policy constructs to manipulate routing information, access control, segmentation, extranets, and other network needs.  

Each Controller can support up to 5,400 control connections from WAN Edge routers over a single transport, and each router connects to two Controllers by default. With two transports at each branch site, a single Controller supports approximately 2,000 WAN Edge connections. This brings the scale to 8,000 connections for the Banking Network Solutions network with 4 pairs of Controllers.  
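The scale arithmetic above can be sketched in a few lines (the connection limits are the figures quoted in this document; the arithmetic is illustrative planning math, not a Cisco sizing tool):

```python
# Control-plane scale estimate for the SD-WAN Controllers, using the
# figures quoted in this document (illustrative arithmetic only).
MAX_CONNECTIONS_PER_CONTROLLER = 5400   # control connections per Controller
TRANSPORTS_PER_BRANCH = 2               # each branch router has two transports

# Each WAN Edge router forms one control connection per transport toward
# a given Controller, so the raw per-Controller router capacity is:
routers_per_controller = MAX_CONNECTIONS_PER_CONTROLLER // TRANSPORTS_PER_BRANCH
print(routers_per_controller)  # 2700 raw; ~2,000 is used as the planning figure

# With routers load-shared across Controller pairs, the supported scale
# grows with the number of pairs (using the ~2,000 planning figure):
controller_pairs = 4
total_connections = controller_pairs * 2000
print(total_connections)  # 8000
```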

SD-WAN Validator  

In addition to the Controllers, the SD-WAN control plane also includes the SD-WAN Validator. The main role of the SD-WAN Validator is to automatically identify and validate all other SD-WAN controllers and devices when they join the overlay network. The SD-WAN Validator performs authentication, validation, and orchestration of the control plane connections between the SD-WAN Controllers and the WAN Edge routers. It maintains persistent connections to the SD-WAN Controllers and SD-WAN Manager, and an initial transient connection to each WAN Edge router.  

The SD-WAN Validator automatically orchestrates connectivity between WAN Edge routers and Controllers. If any WAN Edge router or Controller is behind a NAT, the SD-WAN Validator also serves as the initial NAT-traversal orchestrator.  

Note: In the Banking Network Solutions network, the WAN Edge routers and Controllers will initially use a private IP network and hence will not be behind a NAT. In the future, once Internet transport is added, NAT will be enabled for the WAN Edge routers and for the Controllers in the DC/DR/NDR respectively.  

Major functions of the SD-WAN Validator are:

  • Orchestrates control and management plane  
  • The first point of authentication (white-list model)  
  • Distributes the list of SD-WAN Controllers and SD-WAN Manager to all WAN Edge routers  
  • Facilitates NAT traversal  
  • Highly resilient  

Each SD-WAN Validator server can manage up to about 1,500 WAN Edge routers at a time in the overlay network, which brings the scale to 3,000 WAN Edge routers with 2 SD-WAN Validators. A Cisco SD-WAN domain can have up to 8 SD-WAN Validators in the Banking Network Solutions network.  

WAN Edge  

The WAN Edge, available as a hardware appliance, sits at a physical site and provides secure data plane connectivity among the sites over the WAN transports. It is responsible for traffic forwarding, security, encryption, Quality of Service (QoS), and routing protocols such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF).  

The major functions of the WAN Edge are listed below.  

  • WAN edge router  
  • Provides secure data plane with remote WAN Edge routers  
  • Establishes a secure control plane with Controllers (OMP)  
  • Implements data plane and application-aware routing policies  
  • Exports performance statistics  
  • Leverages traditional routing protocols like OSPF, BGP, and VRRP

Banking Network Solutions Overview 

Scope and Audience for this Document  

This design guide serves as a comprehensive resource for implementing Cisco SD-WAN for the Banking and Financial Services Industry. It acts as a companion document to the associated design and deployment guides tailored for enterprise networks, providing insights into the specific considerations and requirements relevant to banking/financial institutions.  

The document focuses on the deployment of Cisco Software-Defined WAN (SD-WAN) solutions within the context of banking/financial organizations. For additional resources such as deployment guides, design guides, and white papers, please refer to the following:  

Cisco Enterprise Networking design guides: Cisco Design Zone  

This document is intended for network architects, IT administrators, security professionals, and stakeholders involved in the planning, deployment, and management of SD-WAN solutions within banking/financial institutions. It provides specialized guidance and best practices tailored to address the unique challenges and regulatory requirements of the banking/financial vertical.  

Banking Network Solutions Network Background  

Banking Network Solutions is one of the major BFSI organizations, with a network of over 6,000 branches and 10,000 ATMs spread across the region. They are revamping their current Wide Area Network (WAN) and have chosen to implement a Software-Defined WAN (SD-WAN) solution for the WAN infrastructure connecting their Data Centers, branch locations, support offices, and ATMs. The primary drivers for this initiative include operational simplicity, application-aware networking, and a robust and secure WAN infrastructure.  

From a solution architecture perspective, Banking Network Solutions is adopting Cisco's SD-WAN solution, leveraging the latest advances in software-defined networking technology. The SD-WAN solution separates the network’s control plane from the data plane, providing centralized configuration management and monitoring across the network. This approach offers agility, reduces deployment times, enhances visibility, secures traffic flows, and ensures consistent policy enforcement.  

The Cisco SD-WAN solution enables Banking Network Solutions to create a scalable SD-WAN-based network to effectively connect branch sites to Data Center sites hosting mission-critical applications. The solution streamlines site deployment through a template-based configuration approach and offers comprehensive network monitoring capabilities. Redundancy is ensured from both control and data plane perspectives, with active controllers deployed in Data Center sites and backup controllers in Disaster Recovery (DR) sites. Hardware-level redundancy, including box or link redundancy, is also available where applicable.  

Application-aware routing is proposed to prioritize traffic for critical applications based on predefined SLA parameters. A hub-and-spoke topology is proposed for communication across the SD-WAN fabric, ensuring efficient traffic flow. Additionally, to maintain secure traffic flows, all data traversing the SD-WAN overlay is encrypted using IPsec, providing end-to-end security for Banking Network Solutions' network infrastructure.  

Data Center Design

Banking Network Solutions operates across the country with a network of over 6,000 branches. The organization has a primary Data Center (DC), a Disaster Recovery (DR) site, and an additional Disaster Recovery site (NDR). Currently, mission-critical applications are hosted centrally in DC locations.

To ensure uninterrupted access to business applications, Banking Network Solutions' branches are connected to the data center sites over an MPLS network that spans more than 6,000 locations across the country. This MPLS network serves as the backbone for secure and reliable communication between the branches and the central data center, enabling efficient access to critical applications and services.

Below is a high-level summary of the existing Data Center design:

Routing Protocols:

  • WAN: BGP
  • LAN: OSPF

Network Connections:

  • Each MPLS CE router connects to two WAN switches on the LAN side.
  • All encryption and decryption tasks are handled by IPSec Hub routers.

Traffic Forwarding:

  • The IPSec VPN Hub router forwards remote site traffic to a firewall.
  • Locally destined DC traffic is routed to the core switch.

Routing Configurations:

  • Static routes are configured on the IPsec VPN Hub router via Reverse Route Injection (RRI).
  • These static routes are redistributed into OSPF towards the core switches.

Core Network:

  • Core switches are Layer 3 switches.

Data Center Interconnections:

  • There are point-to-point (P2P) links between the DC and Disaster Recovery (DR), DC and Near Disaster Recovery (NDR), and DR and NDR sites.
  • These links will remain part of traditional/legacy networks.

At the Data Center (DC), there are dual MPLS links from eight different service providers, functioning as active and standby links, terminating on different MPLS Customer Edge (CE) routers.

Banking Network Solutions - Existing DC Architecture

image.jpeg

 

Banking Network Solutions - Existing DR Architecture

image.jpeg

 

Banking Network Solutions - Existing NDR Architecture

 
image.jpeg
 
 

Banking Network Solutions Requirements

In Banking Network Solutions, the network infrastructure serves various critical applications and services essential for financial institutions' operations and customer service. These include, but are not limited to:

 Banking Applications

  • Online Banking Platforms
  • Core Banking Systems
  • ATM Networks
  • Point-of-Sale (POS) Systems
  • Payment Gateways

Trading and Investment Platforms

  • Trading Terminals
  • Market Data Feeds
  • Algorithmic Trading Systems
  • Risk Management Tools

Customer Service and Operations

  • Call Center Applications
  • Customer Relationship Management (CRM) Systems
  • Account Management Platforms
  • Loan Processing Systems

Internal Operations

  • Employee Workstations
  • Email and Collaboration Tools
  • HR and Payroll Systems
  • Document Management Solutions

Security and Compliance

  • Security Information and Event Management (SIEM) Systems
  • Identity and Access Management (IAM) Platforms
  • Fraud Detection Systems
  • Regulatory Compliance Tools

Data Centers and Cloud Services

  • Virtual Desktop Infrastructure (VDI)
  • Cloud-based Applications and Services
  • Disaster Recovery and Business Continuity Solutions

Networking Infrastructure

  • WAN Connectivity to Branch Offices and Data Centers
  • SD-WAN Solutions for Secure and Optimized Connectivity
  • Network Security Appliances (Firewalls, Intrusion Detection/Prevention Systems)
  • Network Monitoring and Management Tools

Peripheral Devices

  • Printers and Scanners for Document Handling
  • IP Phones and Communication Devices
  • Biometric Authentication Devices
  • Video Conferencing Systems

In the Banking Network Solutions, Cisco SD-WAN plays a crucial role in providing secure, reliable, and optimized connectivity for these applications and services. It ensures efficient traffic routing, application performance optimization, and robust security measures to protect sensitive financial data and comply with regulatory requirements. Additionally, SD-WAN enables seamless integration with cloud services, improves operational efficiency, and enhances the overall user experience for both employees and customers.

 

Challenges in Banking Network Solutions

In Banking Network Solutions, network challenges are unique due to the industry's specific requirements, regulatory constraints, and evolving technologies. Some of the challenges include:

End-to-End Segmentation for Security:

  • The need for comprehensive segmentation to protect sensitive financial data and comply with regulatory requirements (e.g., GDPR, PCI DSS).
  • Ensuring isolation between different departments, business units, and customer data to prevent unauthorized access and data breaches.

Compliance Requirements:

  • Adhering to stringent regulatory frameworks and compliance standards governing the financial industry.
  • Ensuring that network infrastructure and security measures meet regulatory requirements to avoid fines, penalties, and reputational damage.

Network and Service Level Resiliency:

  • Establishing resilient network architectures to ensure continuous availability and reliability of critical financial services.
  • Implementing redundant connectivity, failover mechanisms, and disaster recovery strategies to minimize downtime and ensure business continuity.

Management of Network Services and Infrastructure:

  • Efficiently managing day-to-day network operations, including configuration, monitoring, troubleshooting, and performance optimization.
  • Balancing network management tasks with security management to maintain operational efficiency while mitigating security risks.

Control and Complexity:

  • Managing the complexity of network environments, including diverse hardware and software platforms, legacy systems, and distributed architectures.
  • Simplifying network management, automation, and orchestration to reduce complexity and improve agility in responding to changing business needs.

Digital Transformation and Innovation:

  • Embracing digital transformation initiatives and innovative technologies to stay competitive in the rapidly evolving financial landscape.
  • Leveraging technologies such as artificial intelligence (AI), machine learning (ML), and cloud computing to enhance customer experiences, optimize operations, and drive business growth.

Business Process Alignment:

  • Aligning network infrastructure and technology investments with business objectives, processes, and strategic priorities.
  • Ensuring that technology deployments support key business initiatives, such as customer engagement, product innovation, risk management, and regulatory compliance.

Addressing these challenges requires a strategic approach that emphasizes security, compliance, resilience, efficiency, and innovation. By investing in advanced networking technologies, cybersecurity solutions, regulatory compliance measures, and business-aligned strategies, financial institutions can build resilient and future-ready networks that support their digital transformation journey while safeguarding sensitive financial data and ensuring regulatory compliance.

SD-WAN Benefits for Banking Network Solutions

SD-WAN offers several key benefits for financial institutions, enabling them to enhance network performance, security, agility, and cost-effectiveness. Some of the primary benefits include:

Improved Network Performance:

  • SD-WAN optimizes network performance by dynamically routing traffic based on real-time conditions, such as link quality, latency, and congestion.
  • Application-aware routing ensures that critical financial applications, such as trading platforms and banking systems, receive priority treatment to minimize latency and ensure responsiveness.

Enhanced Security:

  • SD-WAN provides advanced security features, including encryption, segmentation, and firewall capabilities, to protect sensitive financial data from unauthorized access and cyber threats.
  • Centralized policy management and enforcement ensure consistent security policies across distributed branch offices and remote locations, reducing the risk of security breaches.

Increased Agility and Flexibility:

  • SD-WAN enables rapid deployment and provisioning of network services, allowing financial institutions to quickly adapt to changing business requirements and market conditions.
  • Centralized management and orchestration simplify network operations, troubleshooting, and configuration changes, improving operational efficiency and agility.

Cost Savings:

  • By optimizing network traffic and intelligently routing traffic over cost-effective transport options, such as broadband internet and LTE, SD-WAN helps reduce reliance on expensive MPLS circuits.
  • Consolidating network infrastructure and leveraging virtualized network functions (VNFs) reduce hardware and operational costs, resulting in significant cost savings for financial institutions.

Business Continuity and Disaster Recovery:

  • SD-WAN improves business continuity by providing seamless failover and redundancy mechanisms to ensure continuous availability of critical financial services.
  • Dynamic path selection and traffic steering capabilities enable automatic rerouting of traffic in case of network outages or failures, minimizing downtime and disruption to operations.

Support for Cloud Services and Applications:

  • SD-WAN seamlessly integrates with cloud services and applications, enabling financial institutions to leverage cloud-based solutions for improved scalability, agility, and cost-effectiveness.
  • Direct cloud connectivity and application-aware routing optimize performance and user experience for cloud-based financial applications, such as SaaS platforms and data analytics tools.

Overall, SD-WAN offers financial institutions a strategic advantage by optimizing network performance, enhancing security, improving agility, and reducing costs. By embracing SD-WAN technology, financial institutions can modernize their network infrastructure, accelerate digital transformation initiatives, and deliver superior customer experiences while maintaining compliance with regulatory requirements.

Migration to SD-WAN

The decision by Banking Network Solutions to migrate to SD-WAN coincides with the refreshing of legacy router platforms within their branch locations, driven by the need to adapt to evolving networking technologies and enhance operational efficiency. The primary motivation behind this strategic move is the centralized configuration and policy management offered by SD-WAN solutions, which alleviates the requirement for additional staffing to deploy and maintain the banking network's wide-area network.

Furthermore, Banking Network Solutions recognizes the added benefits of SD-WAN, particularly the Application Visibility (AV) and Application Aware Routing (AAR) features provided by Cisco Catalyst SD-WAN. These features enable the bank to optimize bandwidth utilization across sites with multiple MPLS and/or Internet transports, ensuring efficient and reliable application performance.

Moreover, the bank anticipates future advantages from integrating with cloud-based security services through Secure Access Service Edge (SASE), as well as enabling Direct Internet Access (DIA) from branch locations for accessing Software-as-a-Service (SaaS) applications and facilitating guest access. This forward-looking approach ensures that Banking Network Solutions remains at the forefront of technological innovation while enhancing network security and user experience across its operations.

SD-WAN Design Overview - Banking Network Solutions

Overall, Cisco SD-WAN architecture is designed to provide centralized control, intelligent traffic routing, and secure connectivity for distributed enterprises, improving application performance, reducing costs, and enhancing user experience across the WAN.

This section provides a high-level overview of the Banking Network Solutions SD-WAN deployment. SD-WAN technology will provide an end-to-end segmented overlay network using a centralized control plane and a hub-and-spoke encrypted data plane. The SD-WAN solution provides secure encrypted transport over any Layer 3 network, regardless of the underlying transport.

The Banking Network Solutions SD-WAN network delivers secure end-to-end network virtualization. The Overlay Management Protocol (OMP) centrally updates all routes and policy information for each network segment. All of the components mutually authenticate each other, and all of the WAN Edge devices are authorized before they are allowed into the network. Every packet across the data plane, control plane, and management plane that flows through the network is authenticated and encrypted using Datagram Transport Layer Security (DTLS/TLS) and IP Security (IPsec) technologies.

Control plane components such as SD-WAN Manager, SD-WAN Validator, and Controllers will be located in the DC and DR locations. WAN Edge routers at remote sites are required to connect to the control plane hosts at both DC and DR for redundancy. WAN Edge routers in the DC, DR, and branches establish overlay IPsec tunnels for data plane traffic via the MPLS transport underlay networks. These MPLS transports are provided by various service providers in the Banking Network Solutions existing network. All branch sites are classified into different types depending on the number of MPLS and Internet transport links and the number of WAN Edge routers present in the branch. Control plane traffic from the SD-WAN branch sites reaches the controllers in the DC/DR directly over the secure DTLS tunnels they establish, while data plane traffic is forwarded into the DC/DR.

The control plane traffic exchanged amongst the SD-WAN controllers will also use DTLS connections. The figure below provides the Banking Network Solutions SD-WAN Architecture at a high level.

SD-WAN Overview – Banking Network Solutions

 

gkavyasn_0-1723572924518.png

 


Data Center Design

The following figure shows the physical connectivity of the Data Center site after migrating to SD-WAN.

sitirkey_4-1722370090109.png

 

 

In the DC, DR, and NDR sites, the following design considerations apply:

  • Four WAN Edge routers will be deployed in each of the DC, DR, and NDR sites.
  • WAN Edge routers are connected to MPLS CE routers via Layer 2 switches in the DC and DR sites, while in the NDR site, WAN edge routers are connected directly to MPLS CE routers.
  • Each WAN-Edge will have two underlay links representing two separate MPLS transports.
  • SD-WAN edge routers at the DC, DR, and NDR sites will use static routing over each underlay link with MPLS CE routers.
  • MPLS CE routers will have eBGP peering to upstream PE routers for the MPLS network.
  • MPLS CE routers will advertise WAN Edge transport locator (TLOC) IPs into MPLS networks and controller subnets over BGP peering.
  • MPLS CE routers will also advertise controller subnets into the underlay of MPLS networks through eBGP peering.
  • On the service side, each WAN Edge router will connect to Core Switches using 10G links and will be configured with 802.1q routed sub-interfaces for appropriate service VPNs.
  • In this design, OSPF is used as the routing protocol for peering with Core switches on the service side.
  • For on-premises controller deployment, servers will be connected to SD-WAN switches, which will be connected to active/standby firewalls.
  • In the future, Internet transports will be terminated on the WAN Edge routers.
  • For simplicity, 1:1 NAT between the WAN Edge public transport IPs and the controller IPs will be configured on the firewalls.

The following figure shows the logical connectivity of the Data Center site after migrating to SD-WAN.

sitirkey_5-1722370090109.png

 

The following figure shows the physical connectivity of the DR site after migrating to SD-WAN.

sitirkey_6-1722370090109.png

 

 
The following figure shows the logical connectivity of the DR site after migrating to SD-WAN.
 
sitirkey_7-1722370090109.png

 

 
The following figure shows the physical connectivity of the NDR site after migrating to SD-WAN.
 
sitirkey_8-1722370090109.png

 

 

The following figure shows the logical connectivity of the NDR site after migrating to SD-WAN.

sitirkey_9-1722370090109.png

 

Head-End SD-WAN Router Design

Banking Network Solutions realized early in the design process that they would need to use two methods of scaling the data plane of the head-end SD-WAN routers at the data center locations – vertical scaling and horizontal scaling. Both are necessary for the data centers to be able to handle the expected number of SD-WAN tunnels from all the branch sites, as well as the desired aggregated head-end throughput of up to 40 Gbps per data center.

Vertical scaling involves deploying head-end routers which can handle higher throughput and higher SD-WAN tunnel capacity. Throughput of SD-WAN routers is expressed in terms of millions of packets per second (Mpps) and gigabits per second (Gbps). This reflects the fact that throughput is constrained by how many packets per second the SD-WAN router can process. Hence the larger the packet size (for example 1,400 bytes), the higher the throughput in Gbps. Likewise, the lower the packet size (for example 64 bytes), the lower the throughput in Gbps. Actual customer networks do not have just one packet size. Therefore, for realistic throughput numbers, a mixture of packet sizes is used, based upon experience with existing customer networks. This is referred to as IMIX traffic. Hence, Banking Network Solutions based their decision as to the platform choice for their data center head-end routers on throughput capacity of the platform with IMIX traffic.
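The relationship between packets per second and Gbps described above can be illustrated with a short calculation. The forwarding rate and the IMIX average packet size below are hypothetical figures for illustration, not quoted platform specifications:

```python
def gbps(mpps: float, packet_size_bytes: int) -> float:
    """Convert a forwarding rate in millions of packets per second into
    Gbps for a given packet size: bits/s = packets/s * bytes * 8."""
    return mpps * 1e6 * packet_size_bytes * 8 / 1e9

RATE_MPPS = 5.0  # hypothetical per-router forwarding budget

# The same pps budget yields very different Gbps depending on frame size:
print(gbps(RATE_MPPS, 64))    # 64-byte packets  -> 2.56 Gbps
print(gbps(RATE_MPPS, 1400))  # 1,400-byte packets -> 56.0 Gbps

# IMIX approximates a realistic packet-size mix; simple IMIX profiles
# average a few hundred bytes per packet (417 bytes used here as an
# illustrative average):
print(gbps(RATE_MPPS, 417))   # -> 16.68 Gbps
```

This is why head-end platform selection is based on IMIX throughput rather than a single best-case packet size.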

Throughput is also based on the features enabled on the SD-WAN router platforms. Since Banking Network Solutions has requirements for SAIE (formerly known as DPI) / statistics collection, they based their decision as to the platform of choice for their data center head-end routers on the combination of feature sets which include IPsec encapsulation on the SD-WAN overlay tunnels, Quality of Service (QoS), DPI, and Flexible NetFlow (FNF) collection and export.

After discussing platform choices with their Cisco account team, Banking Network Solutions decided to implement ASR 1002-HX Series platforms as head-end SD-WAN routers within each of the data centers, for each of the overlays.

Banking Network Solutions decided to maintain the existing eight regional MPLS provider circuits and the Internet circuit within each data center. The MPLS circuits are presented to the fabric as two logical MPLS transports, so each SD-WAN head-end router has two WAN transport (VPN 0) tunnel interface connections toward the regional MPLS service providers, through their respective MPLS CE routers. In addition, each head-end SD-WAN router has a WAN transport (VPN 0) tunnel interface connection to the Internet via the Internet Edge firewall within the data center. Hence, each data center head-end SD-WAN router is configured with 3 TLOCs.

The following figure shows the TLOC colors implemented at the head-end (and branch) routers for the Banking Network Solutions SD-WAN network.

sitirkey_10-1722370090110.png

 

Service Provider              TLOC Color
Regional MPLS Provider1-8     private1, private2
Internet Service Provider     biz-internet

WAN Transport Identification:

  • Each WAN transport tunnel is uniquely identified by assigning it a specific identifier, referred to as a "color." This identifier is one of the Transport Locator (TLOC) parameters associated with the tunnel.

WAN Transport Methods:

  • The organization uses MPLS and Internet as its WAN transport methods. MPLS connections are currently provided by multiple different service providers. Additionally, the Internet is used as a transport method in the organization's SD-WAN fabric.

Dual Homed MPLS Links:

  • At the DC, DR, and NDR sites, all MPLS links are dual-homed in an active/standby setup on MPLS CE routers.

Service Provider Combinations:

  • Depending on the branch site, a combination of service providers will be used.

Logic For Consuming Multiple MPLS Providers

The organization uses multiple MPLS providers but does not extend each provider as a separate transport on the SD-WAN headend. Instead, the following approach is adopted:

Aggregation of MPLS Links:

  • All MPLS connections from different providers are aggregated and treated as a single transport method within the SD-WAN fabric.

TLOC Configuration:

  • Only two TLOCs (Transport Locators) are configured for all MPLS connections, representing two logical MPLS transports (MPLS-1 and MPLS-2).

Simplified Management:

  • This approach simplifies the management of the SD-WAN fabric by reducing the complexity associated with handling multiple separate transports. It allows the SD-WAN headend to treat all MPLS connections uniformly, thereby simplifying configuration and policy enforcement.

Redundancy and Load Balancing:

  • The dual TLOC configuration ensures redundancy and load balancing across the aggregated MPLS links. Traffic can be efficiently routed through the available MPLS connections based on the SD-WAN policies, ensuring optimal performance and resilience.

This method leverages the strengths of multiple service providers while maintaining a streamlined and manageable SD-WAN architecture.
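The aggregation scheme above can be sketched as a simple mapping. The provider-to-color split below is illustrative; the actual assignment depends on which provider circuit terminates on which head-end router:

```python
# Eight physical MPLS providers collapse into two logical SD-WAN
# transports (TLOC colors). The alternating split is an assumption for
# illustration, not the documented provider assignment.
providers = [f"Regional MPLS Provider{i}" for i in range(1, 9)]

tloc_color = {}
for i, provider in enumerate(providers):
    # Alternate providers across the two logical transports so each
    # head-end router carries both colors.
    tloc_color[provider] = "private1" if i % 2 == 0 else "private2"

# Regardless of how many providers exist, the fabric only ever sees
# two MPLS colors:
print(sorted(set(tloc_color.values())))  # ['private1', 'private2']
```

The point of the sketch is that policy and configuration are written against the two colors, not against the eight providers.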

WAN Transport Type   Service Provider            DC              DR              NDR
                                                 R-1    R-2      R-1    R-2      R-1    R-2
MPLS                 Regional MPLS Provider1     X      X        -      X        -      X
MPLS                 Regional MPLS Provider2     -      X        -      X        X      -
MPLS                 Regional MPLS Provider3     X      X        X      -        X      X
MPLS                 Regional MPLS Provider4     X      -        X      X        X      X
MPLS                 Regional MPLS Provider5     X      X        -      X        X      X
MPLS                 Regional MPLS Provider6     X      X        -      X        X      -
MPLS                 Regional MPLS Provider7     X      -        X      X        -      -
MPLS                 Regional MPLS Provider8     -      X        X      X        -      -
Internet             Internet Service Provider   X      X        X      X        X      X
 

TLOC Color Restriction

  • Branch and head-end SD-WAN routers can handle multiple WAN transport (VPN 0) tunnel interfaces or TLOCs.
  • Supporting multiple WAN transport interfaces typically leads to a rise in the number of SD-WAN tunnels between endpoints.
  • The number of SD-WAN tunnels formed depends on whether TLOC color restriction is implemented. TLOC color restriction prevents the formation of SD-WAN tunnels between TLOCs of different colors.
  • Implementing TLOC color restriction can significantly decrease the number of SD-WAN tunnels required, especially in a hub-and-spoke network topology.
  • Banking Network Solutions chose to implement color restrictions on each regional MPLS TLOC (private1 and private2) and the Internet TLOC (biz-internet) within each SD-WAN overlay. This decision aimed to reduce the number of SD-WAN tunnels needed on each head-end router within the data center sites.
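The effect of color restriction on tunnel count can be sketched with simple arithmetic. The sketch assumes both endpoints carry the same three colors and that, without restriction, every color pair is reachable; these are illustrative assumptions, not figures from this design:

```python
# Illustrative sketch: tunnels between one spoke router and one hub router.
# Assumes both routers carry the same TLOC colors and that, without
# restriction, every color pair is reachable and attempts a tunnel.

def tunnels_per_peer(colors, restricted):
    if restricted:
        # A restricted TLOC only builds tunnels to the matching color.
        return len(colors)
    # Unrestricted: a tunnel is attempted between every color pair.
    return len(colors) ** 2

colors = ["private1", "private2", "biz-internet"]
unrestricted = tunnels_per_peer(colors, restricted=False)  # 9 tunnels
restricted = tunnels_per_peer(colors, restricted=True)     # 3 tunnels
```

With thousands of spokes, the gap between the quadratic and linear cases is exactly what makes color restriction attractive at the head end.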

TLOC Color Restriction in a Hub-and-Spoke Topology

sitirkey_11-1722370090110.png

 

Color Value  | Color Type | Option      | Transport Description
private1     | Private    | restrict    | MPLS-1
private2     | Private    | restrict    | MPLS-2
biz-internet | Public     | no restrict | Internet
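On a Cisco IOS XE SD-WAN router, the color and restrict options in the table above are applied under the tunnel interface. The snippet below is a hedged sketch; interface names are placeholders, and the exact hierarchy should be verified against the command reference for your release:

```
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   color private1 restrict
 !
 interface GigabitEthernet0/0/1
  tunnel-interface
   color biz-internet
```

With restrict set, the private1 TLOC only builds tunnels to other private1 TLOCs; biz-internet is left unrestricted, matching the table.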

 

Site Type   | Site Description                                     | Colors
Site Type-1 | Single Router with Dual MPLS Links                   | private1, private2
Site Type-2 | Single Router with one MPLS & one 4G Link (MPLS)     | private1, private2
Site Type-3 | Dual Router with Dual MPLS Links                     | private1, private2
Site Type-4 | Single Router with one MPLS Link & one Internet      | private1, biz-internet
Site Type-5 | Single Router with one MPLS & one 4G Link (Internet) | private1, biz-internet
Site Type-6 | Dual Router with one MPLS Link & one Internet        | private1, biz-internet

Tunnel Groups:

  • In a hub-and-spoke fabric data plane topology, both throughput and maximum SD-WAN tunnel capacity constrain the data plane scale of head-end Data Center Sites (hubs).
  • Despite TLOC color restriction, the number of SD-WAN tunnels needed can exceed a single head-end router's capabilities when the number of branches in an overlay is large.
  • Redundant head-end routers can't resolve head-end tunnel scale issues since spoke routers still form SD-WAN tunnels to each head-end router.
  • Banking Network Solutions conducted calculations to determine the required number of SD-WAN tunnels at each data center head-end router for each overlay when all branch sites connected to a single pair of head-end routers.
  • The number of tunnels required depends on branch prototype designs and the numbers of each branch type.
  • Banking Network Solutions' overlay is divided into northern and southern regions, with MPLS providers primarily servicing branches within their respective areas.
  • With a single pair of SD-WAN routers per data center, each data center head-end router would need to support over 10,000 SD-WAN tunnels, exceeding hardware capabilities.
  • Horizontal data plane scaling involves provisioning multiple SD-WAN routers at the head-end and allowing only certain spoke routers to form SD-WAN tunnels to specific head-end routers.
  • In this document, this is achieved through Tunnel Groups; however, horizontal scaling can also be achieved through centralized control policy.
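The head-end scale problem described above can be sketched with back-of-the-envelope arithmetic. The branch count and tunnels-per-branch values are illustrative assumptions, not the document's exact figures:

```python
branches = 6000          # assumed total branch sites in the overlay
tunnels_per_branch = 2   # e.g. two restricted MPLS colors per branch

# Single pair of head-end routers: every branch builds tunnels to both
# routers, so each router carries the full branch load.
per_headend_single_pair = branches * tunnels_per_branch        # 12000

# Two Tunnel Groups splitting the branches roughly in half:
per_headend_two_groups = (branches // 2) * tunnels_per_branch  # 6000
```

The single-pair figure lands above the 10,000-tunnel mark noted in the text, while the split keeps each head end comfortably under the 8,000-tunnel target.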
 
sitirkey_12-1722370090110.png

TLOC Color Restriction in a Hub-and-Spoke Topology

 

sitirkey_13-1722370090110.png

 

  • Tunnel Groups dictate SD-WAN tunnel formation; routers only form tunnels with matching Tunnel Group numbers or with no Tunnel Group (default).
  • To enhance data center site scalability for supporting SD-WAN tunnels, Banking Network Solutions redesigned the data centers to accommodate two pairs of SD-WAN routers.
  • Each pair of SD-WAN routers within a data center belongs to a distinct SD-WAN Tunnel Group, as illustrated in the figure.
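A hedged IOS XE SD-WAN sketch of the Tunnel Group assignment follows; the interface name and group number are placeholders. Routers only form tunnels with peers carrying a matching group number, or with peers that have no group configured:

```
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   color private1 restrict
   group 1
```

Head-end pair 1 and its assigned branches would carry group 1; head-end pair 2 and its branches would carry group 2.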

TLOC Color Restriction in a Hub-and-Spoke Topology

sitirkey_14-1722370090110.png

 

  • Banking Network Solutions utilized the geographic area and region information in their Site ID numbering scheme to distribute branch sites into Tunnel Group 1 and Tunnel Group 2.
  • Branch sites in the "north" region were assigned to Tunnel Group 1, while those in the "south" region were assigned to Tunnel Group 2.
  • This allocation evenly distributed the SD-WAN tunnel count load between the data center head-end SD-WAN routers.
  • With this distribution, each set of routers is tasked with handling less than 8000 SD-WAN tunnels, aligning with the targeted capabilities of the router platform.
  • The design not only balances the load but also increases the throughput capacity of the overall data center site by spreading traffic across two sets of ASR 1002-HX head end routers.
  • Dual head-end SD-WAN routers within each Tunnel Group are planned at each data center to provide tunnel scale capacity with hardware redundancy.

SD-WAN Controller Design

On-Prem Controller DC & DR

Banking Network Solutions has deployed on-premises SD-WAN Controllers in both the Data Center (DC) and Disaster Recovery (DR) locations.

sitirkey_15-1722370090110.png
  • On-Prem SD-WAN Controllers are deployed in Data Center (DC) and Disaster Recovery (DR) locations.
  • 6 UCS (Unified Computing System) physical servers are deployed in both DC and DR locations.
  • Each UCS server will host 1 SD-WAN Manager, 1 SD-WAN Controller, and 1 SD-WAN Validator virtual machine on top of ESXi virtualization.
  • 6 SD-WAN Manager instances/nodes are deployed, forming one SD-WAN Manager cluster for effective management of the SD-WAN network.
sitirkey_16-1722370090110.png
  • In the Data Center (DC), an active 6-node SD-WAN Manager cluster is deployed, while a standby 6-node SD-WAN Manager cluster is set up in the Disaster Recovery (DR) location.
  • Both DC and DR locations have 4 active instances of SD-WAN Validators and 4 active instances of SD-WAN Controllers.
  • Additionally, in both DC and DR, 2 instances of SD-WAN Validators and 2 instances of SD-WAN Controllers are installed on the 5th and 6th UCS nodes as cold standby setups.
  • The SD-WAN Validator and Controllers operate in an Active-Active configuration, ensuring redundancy and high availability across the network infrastructure.

Controller VM Sizing

The resources required to run the controller instances on VMware vSphere ESXi server vary depending on the number of devices to be deployed in the overlay network.

As per Banking Network Solutions requirements, the server recommendation for the control elements per Data Center location is as follows:

Controller/VM     | vCPUs | RAM    | OS Volume | Additional Volume | # of Instances | # of vNICs
SD-WAN Manager    | 32    | 128 GB | 20 GB     | 10 TB             | 6              | 3 (tunnel interface, management, cluster)
SD-WAN Validator  | 4     | 8 GB   | 10 GB     | NA                | 4              | 2 (tunnel interface, management)
SD-WAN Controller | 8     | 16 GB  | 10 GB     | NA                | 6              | 2 (tunnel interface, management)

The tested and recommended limit of supported Cisco SD-WAN Validator instances in a single Cisco Catalyst SD-WAN overlay is eight.
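As a quick sanity check of the sizing table, the sketch below totals the per-host demand when one UCS server carries one VM of each type, as in this design. The arithmetic is illustrative only; actual host capacity planning should follow the official sizing and compatibility guides:

```python
# Per-VM demand taken from the sizing table above.
vms = {
    "SD-WAN Manager":    {"vcpu": 32, "ram_gb": 128},
    "SD-WAN Validator":  {"vcpu": 4,  "ram_gb": 8},
    "SD-WAN Controller": {"vcpu": 8,  "ram_gb": 16},
}

need_vcpu = sum(v["vcpu"] for v in vms.values())      # 44 vCPUs per UCS host
need_ram_gb = sum(v["ram_gb"] for v in vms.values())  # 152 GB RAM per UCS host
```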

Physical Deployment – UCSC

gkavyasn_0-1723573397274.png  
  • Each Data Center (DC) and Disaster Recovery (DR) location will deploy 6 x UCSC-C240-M5SX servers for the network infrastructure.
  • The UCS servers will be connected to WAN Cisco switches with 2x10G links in Active Standby mode. NIC teaming will not be configured on the server uplinks.
  • The CIMC (Cisco Integrated Management Controller) will be connected to the Banking Network Solutions MGMT network for management purposes.
  • ESXi virtualization platform will be installed on the UCS servers to host the SD-WAN controllers.
  • VPN0 and VPN512 will be configured on one UCS uplink pair in Active/Standby mode, while the Cluster VLAN will be configured on the other UCS uplink pair for efficient network traffic management.

Physical Deployment – UCSC

sitirkey_18-1722370090110.png

 

  • Each UCS server will be connected to both L2 switches with 2 links each for redundancy and load sharing.
  • NIC1 will be active for VPN0 & VPN512 VLANs, while NIC3 will be standby for these VLANs.
  • NIC4 will be active for the Cluster VLAN, and NIC2 will be standby for the Cluster VLAN.
  • Switch-A will be the Spanning Tree Protocol (STP) primary for VPN0 & VPN512 VLANs, while Switch-B will be the STP primary for the Cluster VLAN.
  • Both Switch-A and Switch-B will be dual-homed to both Firewalls in an Active/Standby setup.
  • Switch-A and Switch-B will be connected back-to-back with each other for the vPC (Virtual PortChannel) Peer link using a 2x10G Port-Channel and a vPC peer keep-alive link (1G or logical link).
  • VPN0, VPN512, and Cluster VLANs will be configured as vPC VLANs and allowed over the vPC Peer link for seamless communication and redundancy in the network infrastructure.
sitirkey_19-1722370090111.png

 

gkavyasn_1-1723573423169.png

 


 

SD-WAN Manager Network Management System

 

sitirkey_21-1722370090111.png

 

 

  • The SD-WAN Manager Network Management System (NMS) is a centralized network management system deployed as a virtual machine. It allows configuration, management, and troubleshooting of the overlay network through a graphical dashboard.
  • Each SD-WAN Manager instance has separate interfaces for Control, Management, and cluster communication, ensuring efficient and secure network operations.
  • A requirement of less than 5 ms latency between cluster members ensures smooth communication within the SD-WAN Manager cluster.
  • Separate VPNs are designated for control and management purposes, enhancing network security and segmentation.
  • Minimal configuration is necessary for the bring-up process, including settings for Connectivity, System IP, Site ID, Org-Name, and SD-WAN Validator IP.
  • In Banking Network Solutions, a total of 12 SD-WAN Manager instances will be deployed, with an active cluster of 6 SD-WAN Managers in the Data Center (DC) and a standby cluster of 6 SD-WAN Managers in the Disaster Recovery (DR) location, ensuring redundancy and failover capabilities between DC and DR.
  • Additionally, in both DC and DR, provisions will be made for 1 SD-WAN Manager instance in cold-standby mode to ensure redundancy within each location, ready to take over in case of a VM failure.

SD-WAN Manager Services Distribution

 

Service                | vManage 1 | vManage 2 | vManage 3 | vManage 4 | vManage 5 | vManage 6
Application Server     | P         | P         | P         | P         | P         | P
Statistics Database    | -         | -         | -         | P         | P         | P
Configuration Database | P         | P         | P         | -         | -         | -
Messaging Server       | P         | P         | P         | -         | -         | -
SDAVC                  | Optional  | Optional  | Optional  | Optional  | Optional  | Optional

Note: Control connections will be formed on all vManage instances.

  • Application Server: This service, which includes the web GUI and APIs utilizing Wildfly, will be enabled on all SD-WAN Manager instances to provide multiple access points to the SD-WAN Manager GUI.
  • Statistics Database: Utilizing Elasticsearch, this service maintains a database of statistical data, audit logs, alarms, events, and DPI information. It will be enabled on 3 SD-WAN Manager instances.
  • Configuration Database: Using Neo4j in SD-WAN version 18.2 and later, this service stores configuration information, policies, templates, certificates, and more. It will be enabled on 3 SD-WAN Manager instances.
  • Messaging Server: This service, which employs NATS, facilitates message passing between SD-WAN Manager devices within the cluster. It will be enabled on all 6 SD-WAN Manager instances to ensure seamless communication and coordination within the SD-WAN Manager network management system.

SD-WAN Controller

sitirkey_23-1722370090111.png

 

  • The SD-WAN Controller serves as the centralized control point in the Cisco SD-WAN solution, managing the flow of data traffic across the network.
  • It collaborates with the SD-WAN Validator to authenticate Cisco devices upon network entry and to coordinate connectivity among the cEdge routers.
  • In the Banking Network Solutions network, 8 SD-WAN Controllers will be deployed, with 4 in the Data Center (DC) and 4 in the Disaster Recovery (DR) site. Each set of SD-WAN Controllers in DC and DR will operate in an active-active mode to ensure continuous operation and load balancing.
  • Additionally, 4 more SD-WAN Controllers (2 in DC and 2 in DR) will be deployed in cold standby mode, ready to be activated when required for increased redundancy and failover capabilities.

Affinity settings will be utilized to establish redundancy and synchronization between the Controllers deployed in the DC and DR locations, enhancing network reliability and continuity.

SD-WAN Validator

sitirkey_24-1722370090111.png

 

  • The SD-WAN Validator plays a crucial role in automatically orchestrating connectivity between cEdge routers and Controllers in the Cisco SD-WAN solution.
  • Each SD-WAN Validator is deployed as a virtual machine with separate interfaces for control and management purposes, along with separate VPNs for control and management traffic.
  • Minimal configuration is required for the bring-up process, including settings for Connectivity, System IP, Site ID, Org-Name, and the local SD-WAN Validator IP.
  • Acting as the primary point of authentication using a white-list model, the SD-WAN Validator also distributes lists of Controllers and SD-WAN Manager to all cEdge routers, ensuring seamless connectivity and network operation.
  • In the Banking Network Solutions network, 8 SD-WAN Validators will be installed, with 4 in the Data Center (DC) and 4 in the Disaster Recovery (DR) location. Additionally, 4 more SD-WAN Validators (2 in DC and 2 in DR) will be deployed as cold standby nodes, ready to be activated when required to enhance redundancy and network availability.

Firewall Port Consideration

Because the controllers are placed behind firewalls in the DC and DR networks, certain ports must be opened on the firewalls so that devices in the Cisco SD-WAN overlay network can exchange traffic.

Port details are shown in the following diagram and table.

gkavyasn_2-1723573463556.png

Source Device                            | Source Port                                                                         | Destination Device               | Destination Port
SD-WAN Manager/SD-WAN Controller (DTLS)  | Core1-Core8 = UDP 12346, 12446, 12546, 12646, 12746, 12846, 12946, 13046           | SD-WAN Validator                 | UDP 12346
SD-WAN Manager (DTLS)                    | UDP 12346                                                                           | SD-WAN Validator                 | UDP 12346
SD-WAN Manager (DTLS)                    | UDP 12346                                                                           | SD-WAN Manager                   | UDP 12346
SD-WAN Validator (DTLS)                  | UDP 12346                                                                           | SD-WAN Validator                 | UDP 12346
WAN Edge (DTLS)                          | UDP 12346+n, 12366+n, 12386+n, 12406+n, and 12426+n (n = 0-19, the configured offset) | SD-WAN Validator                 | UDP 12346
WAN Edge (DTLS)                          | UDP 12346+n, 12366+n, 12386+n, 12406+n, and 12426+n (n = 0-19, the configured offset) | SD-WAN Manager/SD-WAN Controller | Core1-Core8 = UDP 12346, 12446, 12546, 12646, 12746, 12846, 12946, 13046
SD-WAN Manager (TLS)                     | TCP random port number > 1024                                                       | SD-WAN Validator                 | TCP 23456
SD-WAN Manager (TLS)                     | TCP random port number > 1024                                                       | SD-WAN Manager                   | TCP 23456
SD-WAN Validator (TLS)                   | TCP random port number > 1024                                                       | SD-WAN Validator                 | TCP 23456
WAN Edge (TLS)                           | TCP random port number > 1024                                                       | SD-WAN Manager/SD-WAN Controller | Core1-Core8 = TCP 23456, 23556, 23656, 23756, 23856, 23956, 24056, 24156
WAN Edge (IPsec)                         | UDP 12346+n, 12366+n, 12386+n, 12406+n, and 12426+n (n = 0-19, the configured offset) | WAN Edge                         | UDP 12346+n, 12366+n, 12386+n, 12406+n, and 12426+n (n = 0-19, the configured offset)

  • The secure sessions between WAN Edge routers and controllers (and among controllers) in the Cisco SD-WAN overlay network are established using Datagram Transport Layer Security (DTLS), which operates over the User Datagram Protocol (UDP). The default base source port for these secure sessions is 12346.
  • Port hopping is a feature where WAN Edge devices attempt to establish connections using different source ports if the initial connection attempt fails. This feature is enabled by default on WAN Edge routers but can be disabled globally or on a per-tunnel-interface basis if needed.
  • It's important to note that port hopping is disabled by default on the controllers (SD-WAN Manager and Controllers) and should remain disabled to ensure proper and consistent communication.
  • Control connections on SD-WAN Manager and Controllers with multiple cores utilize different base ports for each core, ensuring efficient and optimized communication within the network architecture.
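The port arithmetic described above can be expressed directly. This simply restates the formulas from the table; the helper names are our own:

```python
def wan_edge_ports(offset):
    """DTLS source ports a WAN Edge may try for a configured offset n (0-19)."""
    assert 0 <= offset <= 19
    return [base + offset for base in (12346, 12366, 12386, 12406, 12426)]

def core_base_port(core):
    """Per-core DTLS base port on a multi-core SD-WAN Manager/Controller (core 1-8)."""
    assert 1 <= core <= 8
    return 12346 + (core - 1) * 100

edge_ports = wan_edge_ports(0)  # [12346, 12366, 12386, 12406, 12426]
core8_port = core_base_port(8)  # 13046
```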

Design Recommendation:

  • Port hopping should be disabled on the head-end routers. This ensures a consistent and predictable path for traffic, enhancing stability and simplifying troubleshooting.

TLOC-Extension Port Range:

  • The TLOC-extension port range should be properly configured to allow seamless communication between TLOCs. This involves ensuring that the port range is open and available for TLOC extensions, facilitating reliable connectivity and failover.
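A hedged IOS XE SD-WAN sketch of the port-hopping recommendation follows; the interface name is a placeholder, and the exact keyword should be verified against the command reference for your software release:

```
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   no port-hop
```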

Ports for SD-WAN Manager Clustering and Disaster Recovery

For an SD-WAN Manager cluster, the following ports may be used on the cluster interface of the controllers. Ensure the correct ports are opened within firewalls that reside between cluster members.

SD-WAN Manager Service                                | Protocol/Port                         | Direction
Application Server                                    | TCP 80, 443, 7600, 8080, 8443, 57600  | bidirectional
Configuration Database                                | TCP 5000, 7474, 7687                  | bidirectional
Coordination Server                                   | TCP 2181, 2888, 3888                  | bidirectional
Message Bus                                           | TCP 4222, 6222, 8222                  | bidirectional
Statistics Database                                   | TCP 9200, 9300                        | bidirectional
Tracking of device configurations (NCS and NETCONF)   | TCP 830                               | bidirectional
Cloud Agent                                           | TCP 8553                              | bidirectional
Cloud Agent V2                                        | TCP 50051                             | bidirectional
SD-AVC                                                | TCP 10502, 10503                      | bidirectional

If disaster recovery is configured, ensure that the following ports are opened over the out-of-band interface across the data centers between the primary and standby cluster:

Summary of ports needed for SD-WAN Manager disaster recovery

SD-WAN Manager Service | Protocol/Port | Direction
Disaster Recovery      | TCP 8443, 830 | bidirectional

Branch SD-WAN Router Design

Common Branch Design Principles

  • Use of IP Prefix Aggregation: Banking Network Solutions implements IP prefix aggregation at all branch locations. Aggregation helps in reducing the size of routing tables and conserves address space by summarizing multiple contiguous IP addresses into a single, more efficient prefix.  
  • Use of TLOC Color Restriction: TLOC color restriction is employed uniformly across all branch locations by Banking Network Solutions. This restriction ensures that SD-WAN tunnels are formed only between TLOCs of the same color, enhancing security and network segmentation.
  • Use of Tunnel Groups: The deployment of Tunnel Groups is consistent across all branches within Banking Network Solutions. Tunnel Groups allow routers to form SD-WAN tunnels selectively, based on predefined groups, optimizing network traffic and resource utilization.

These design principles ensure a standardized and efficient network architecture across all branch locations within Banking Network Solutions' SD-WAN infrastructure, promoting scalability, security, and ease of management.
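The prefix-aggregation principle above can be illustrated with Python's standard `ipaddress` module; the branch prefixes below are made-up examples:

```python
import ipaddress

# Four contiguous branch /24s summarized into a single advertisement.
branch_prefixes = [
    ipaddress.ip_network("10.10.0.0/24"),
    ipaddress.ip_network("10.10.1.0/24"),
    ipaddress.ip_network("10.10.2.0/24"),
    ipaddress.ip_network("10.10.3.0/24"),
]

summary = list(ipaddress.collapse_addresses(branch_prefixes))
# One /22 now represents all four branch subnets in the routing table.
```

Advertising the single /22 instead of four /24s is exactly the routing-table reduction the design principle describes.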

Banking Network Solutions operates more than 6,000 branch sites, each varying in size and design to cater to different operational requirements.

Large/Regional Branch Sites:

  • These sites are deemed business-critical and are equipped with dual active circuits from distinct regional MPLS carriers. Both routers at these sites maintain Internet connectivity (via wired broadband or public LTE/5G) to enhance bandwidth and ensure high availability. Dual routers are deployed to manage WAN access at these paramount locations.

Medium-Sized Branch Sites:

  • Medium-sized branches typically feature either one MPLS circuit from a regional carrier alongside Internet connectivity (wired broadband or public LTE/5G), or dual MPLS circuits from two regional carriers, with both circuits actively utilized. While these sites are essential, their criticality does not necessitate dual routers. Instead, a strategy emphasizing the swift replacement of networking equipment has been adopted. Hence, only a single router is deployed at Medium-Sized Branch Sites.

Small Branch Sites and ATM:

  • Small branch sites encompass a broad spectrum, ranging from single ATM units co-located within larger retail establishments like grocery stores to compact kiosks offering limited financial services in shopping malls. These sites typically have a single provisioned circuit, either MPLS or Internet (wired broadband or LTE/5G), based on site availability. Small Branch Sites are perceived as having low criticality, leading to the deployment of a single router with integrated switch ports. In cases where the integrated switch ports are inadequate, a small standalone Layer 2 switch may supplement the router's networking capabilities.

The distribution of Large/Regional, Medium-Sized, and Small Branch Sites within the Banking Network Solutions network is outlined in the following table.

Branch Site Type 1 (Small Branch and ATM)

  The following figure shows the high-level branch type 1 design

gkavyasn_3-1723573585915.png

 

Below are the design considerations for Branch Site type 1.

WAN Edge Device Configuration:

  • Branch Site Type 1 includes a single WAN Edge device with two WAN transports.

  WAN Facing Ports:

  • Two specific GigabitEthernet interfaces will be designated as the WAN facing ports. These ports will be assigned to the primary VPN for WAN transport connections.

  LAN Facing Port:

  • Another GigabitEthernet interface will be designated as the LAN facing port. Dot1q sub-interfaces will be configured on this port, and these sub-interfaces will be assigned to the various service VPNs to handle internal traffic.
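A hedged IOS XE sketch of the LAN-facing dot1q sub-interface mapping follows; the interface, VLAN, VRF, and addressing values are placeholders. On IOS XE SD-WAN, a service VPN such as VPN 10 corresponds to VRF 10:

```
interface GigabitEthernet0/0/2.10
 encapsulation dot1Q 10
 vrf forwarding 10
 ip address 10.1.10.1 255.255.255.0
 no shutdown
```

One such sub-interface is created per service VPN carried on the LAN port.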

Branch Site Type 2 (Large/Medium Branch Sites)

  The following figure shows the high-level branch type 2 design

gkavyasn_4-1723573595841.png

 

Below are the design considerations for Branch Site Type 2 with dual MPLS transports and TLOC extension.

WAN Edge Device Configuration:

  • Branch Site Type 2 includes two WAN Edge devices, each with a single WAN transport.

TLOC Extensions:

  • TLOC extensions are configured between the two WAN Edge routers, allowing each device to access both WAN transports. These TLOC-extension links are part of the primary VPN for WAN transport connections.

WAN Facing Ports:

  • Specific GigabitEthernet interfaces on each WAN Edge device will be designated as the WAN facing ports. These ports will be assigned to the primary VPN for WAN transport connections.

LAN Facing Port:

  • Another GigabitEthernet interface on each WAN Edge device will be designated as the LAN facing port. Dot1q sub-interfaces will be configured on these ports, and these sub-interfaces will be assigned to the various service VPNs to handle internal traffic.

TLOC-Extension Ports:

  • A specific GigabitEthernet interface on each WAN Edge device will be designated as the TLOC-extension port. Dot1q sub-interfaces will be configured on these ports for different WAN transports.
  • If there is no direct connectivity between the WAN Edge routers, TLOC-extension sub-interfaces will be extended via the core switches. In this case, dot1q sub-interfaces on the designated LAN facing port will be used for this extension.
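A hedged IOS XE SD-WAN sketch of the TLOC-extension binding follows; interface names are placeholders. On the router that owns the WAN circuit, the extension interface facing the peer router is bound to the local WAN interface:

```
sdwan
 interface GigabitEthernet0/0/3
  tloc-extension GigabitEthernet0/0/0
```

The peer router then treats its link toward GigabitEthernet0/0/3 as an ordinary VPN 0 transport interface for that transport's color.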

Branch Site Type 3 (Small Branch and ATM)

The following figure shows the high-level branch type 3 design

gkavyasn_5-1723573605344.jpeg

 

Below are the design considerations for Branch Site Type 3 with one MPLS transport and one 4G/MPLS transport.

WAN Edge Device Configuration:

  • Branch Site Type 3 includes a single WAN Edge device with two WAN transports: one MPLS transport and one 4G/MPLS transport.

       WAN Facing Ports:

  • Specific GigabitEthernet interfaces will be designated as the WAN facing ports and assigned to the primary VPN for WAN transport connections.

       LAN Facing Port:

  • Another GigabitEthernet interface will be designated as the LAN facing port. Dot1q sub-interfaces will be configured on this port, and these sub-interfaces will be assigned to the various service VPNs to handle internal traffic.

Branch Site Type 4 (Small Branch)

The following figure shows the high-level branch type 4 design

gkavyasn_6-1723573613253.png

 

Below are the design considerations for Branch Site Type-4 with MPLS and Internet transport.

WAN Edge Device Configuration:

  • For Branch Site Type 4, there is a single WAN Edge device with dual WAN transports: one MPLS transport and one Internet transport.

WAN Facing Ports:

  • Designated GigabitEthernet interfaces will serve as the WAN facing ports, connected to the primary VPN for managing WAN transport connections.

LAN Facing Port:

  • Another GigabitEthernet interface will act as the LAN-facing port. Dot1q sub-interfaces will be configured on this port to connect to various service VPNs for internal traffic management.

Branch Site Type 5 (Large/Medium Branch Sites)

The following figure shows the high-level branch type 5 design

gkavyasn_7-1723573621515.png

 

Below are the design considerations for Branch Site Type 5 with MPLS and Internet transports and TLOC extension.

Devices: 2 WAN Edge devices

Transports:

  • MPLS
  • Internet

Topology and Connectivity:

  • TLOC extensions between WAN Edge devices for access to MPLS and Internet transports.
  • TLOC-Extension links integrated into VPN 0.

Port Configurations:

WAN Facing Ports:

  • Designated for WAN connectivity.
  • Assigned to VPN 0.

LAN Facing Port:

  • Designated GigabitEthernet interface.
  • Configured with dot1q sub-interfaces for Service VPNs.

TLOC-Extension Port:

  • Designated GigabitEthernet interface between WAN Edge devices.
  • Dot1q sub-interfaces assigned for MPLS and Internet transports.
  • Utilization of Core switches for TLOC-Extension in absence of direct connectivity.
  • Dot1q sub-interfaces on LAN facing port used for extension scenarios.

These considerations cover the Branch Site Type 5 configuration, involving two WAN Edge devices and dual transports.

Branch Site Type 6 (ATM)

The following figure shows the high-level branch type 6 design

gkavyasn_8-1723573631100.jpeg

 

Below are the design considerations for Branch Site Type-6 with MPLS and one 4G Internet transport.

Device Type: Single WAN edge device

Transports:

  • MPLS
  • 4G Internet

Port Configuration:

WAN Facing Ports:

  • Designated for WAN connectivity.
  • Assigned to VPN 0.

LAN Facing Port:

  • Designated GigabitEthernet interface.
  • Configured with dot1q sub-interfaces.
  • Sub-interfaces assigned to Service VPNs.

This setup ensures effective management of MPLS and 4G Internet transports, with clearly defined WAN and LAN facing ports tailored to specific VPN assignments and internal traffic segregation using dot1q sub-interfaces.

Future Design Options for Banking Networking Solutions

Hierarchical SD-WAN – MRF (Multi-Region Fabric) – Design Option 1

gkavyasn_9-1723573640690.png

 

Banking Network Solutions considered re-designing the network infrastructure within each SD-WAN overlay to be based on multiple smaller regions, hierarchically connected to an SD-WAN backbone. This hierarchical SD-WAN design offers several advantages and trade-offs:

  • Multi-Region Fabric (hierarchical) SD-WAN design simplifies centralized control policy creation for hub-and-spoke topology.
  • Large/Regional Branch Sites would serve as SD-WAN Regional Hub Sites, reducing the number of SD-WAN devices per regional fabric and the tunnels required within each regional full mesh.
  • A smaller number of SD-WAN devices per regional fabric allows the use of lower-priced SD-WAN routers within branch sites.
  • Significant effort would be required to redistribute circuits and circuit bandwidth from existing Data Center Sites to Regional Hub Sites to accommodate the new architecture.
  • The expanded role of Large/Regional Branch Sites as SD-WAN Regional Hub Sites necessitates re-evaluation of the support model, including staffing, physical space, power systems, and air conditioning requirements.
  • Banking Network Solutions decided to table the Multi-Region Fabric (hierarchical) SD-WAN design initially and consider it as a potential future design.

Benefits of MRF

Cisco SD-WAN Multi-Region Fabric offers Banking Network Solutions the capability to enhance, scale, and, more importantly, simplify their Cisco SD-WAN fabric across regions. Key benefits include:

  • Scalability: The hierarchical structure allows for easy scalability by adding new regions or expanding existing ones without significant impact on the overall architecture. This flexibility accommodates the growing needs of Banking Network Solutions.
  • Improved Performance: With optimized traffic management between regions, the hierarchical SD-WAN design enhances application performance and user experience. This is achieved by directing traffic along the most efficient paths within the network.
  • Centralized Management: Centralizing network management and configuration simplifies operations for Banking Network Solutions. With a hierarchical design, management tasks can be streamlined, reducing administrative overhead.
  • Resilience: Redundant links and failover mechanisms at both regional and backbone levels ensure high availability and resilience against network failures. This robustness minimizes disruptions to banking operations.
  • Security: Implementing security policies and segmentation at multiple levels enhances the overall security posture of Banking Network Solutions. With granular control over network access and traffic flows, the hierarchical SD-WAN design bolsters cybersecurity defenses.

In summary, the deployment of a multi-region fabric Cisco Catalyst SD-WAN design in a hierarchical structure empowers Banking Network Solutions to modernize their network infrastructure, meet evolving demands, and adapt to future challenges effectively.

https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/hierarchical-sdwan/hierarchical-sdwan-guide/migrate.html

Multiple Overlay – Design Option 2

In planning for future scalability, Banking Network Solutions can explore the option of deploying multiple overlays, replicating the design strategy employed in their single-overlay infrastructure. This approach accommodates increased network demands, provides redundancy and resilience to mitigate downtime, optimizes performance through traffic segregation, and facilitates geographic expansion while maintaining reliability. With proper tools and strategies, it also streamlines management and keeps the architecture ready for evolving technology and business needs, offering a robust and adaptable network for long-term growth.

Benefits of Multiple Overlays:

  • Scalability: Implementing multiple overlays allows for easier scalability as the network grows, enabling Banking Network Solutions to accommodate increasing demands and traffic without overburdening a single overlay.
  • Redundancy and Resilience: Having multiple overlays provides redundancy and resilience, ensuring that if one overlay experiences issues or failures, the other overlay can continue to operate, thus minimizing downtime and disruptions to banking operations.
  • Traffic Segregation: Different overlays can be dedicated to specific types of traffic or applications, allowing for better traffic segregation and prioritization based on business needs and requirements.
  • Geographic Expansion: Multiple overlays can support geographic expansion more effectively, allowing Banking Network Solutions to extend their network infrastructure to new regions or locations while maintaining performance and reliability.
  • Ease of Management: Managing multiple overlays may introduce some complexity, but with proper management tools and strategies, Banking Network Solutions can effectively oversee and control the entire network infrastructure while maintaining centralized management policies.
  • Future-Proofing: Considering multiple overlays as part of future scaling plans ensures that Banking Network Solutions can adapt to evolving technology trends and business requirements, providing a future-proof network architecture that can support long-term growth and innovation.
Overall, incorporating multiple overlays into future scaling plans offers scalability, redundancy, traffic segregation, support for geographic expansion, manageability, and a future-proof network infrastructure for Banking Network Solutions.

Multiple SD-WAN Overlay – Option A

gkavyasn_10-1723573729434.png

 


Multiple SD-WAN Overlay – Option B

gkavyasn_11-1723573737914.png

 

High Availability and Redundancy

When designing a Cisco SD-WAN solution for Banking Network Solutions, ensuring high availability and redundancy is crucial to maintain the continuous operation of critical financial services and applications. Here are key considerations for implementing high availability and redundancy in a Cisco SD-WAN deployment:

Dual-Active Edge Routers:

  • Deploy dual-active edge routers at branch offices to ensure redundancy and load balancing of traffic.
  • Configure these routers in an active-active mode, where both routers actively participate in forwarding traffic and can take over each other's traffic load in case of failure.
  • Be aware that active-active designs can introduce asymmetric routing, where the incoming and outgoing packets of a session take different paths through the network, which can complicate network monitoring, security enforcement, and Quality of Service (QoS) management.

Active-Standby Edge Routers:

  • Active-Standby edge routers ensure high availability by designating one router as active (handling traffic) and the other as standby (ready to take over if needed). 
  • This setup minimizes downtime during hardware or software failures, ensuring continuous service availability for network operations. 

Redundant WAN Links:

  • Establish redundant WAN links from branch offices to the SD-WAN fabric, including MPLS, Internet, and 4G/LTE connections.
  • Cisco SD-WAN does not currently support aggregation of WAN links. It does, however, support redundant WAN links from the same provider, allowing TLOCs to be configured on loopback interfaces to enhance redundancy and reliability.
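On Cisco IOS XE SD-WAN (cEdge) routers, a TLOC on a loopback interface can be sketched roughly as follows. This is a minimal illustration only; the interface names, color, and addresses are assumptions, not part of the Banking Network Solutions design.

```
! Sketch only - interface names, color, and addresses are illustrative
interface Loopback100
 ip address 192.0.2.1 255.255.255.255
!
sdwan
 interface Loopback100
  tunnel-interface
   encapsulation ipsec
   color custom1
   ! Bind the loopback TLOC to a physical WAN interface (bind mode)
   bind GigabitEthernet0/0/0
  !
```

In bind mode, traffic for the loopback TLOC is carried over the bound physical interface, so two loopback TLOCs can provide redundancy across two links from the same provider.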

Application-Aware Routing:

  • Dynamic Path Selection: Application-aware routing employs dynamic path selection algorithms to continuously evaluate network conditions such as link quality, latency, and jitter in real-time.
  • Policy-based Traffic Steering: It utilizes policies to define thresholds for network metrics. When these thresholds are exceeded, traffic is automatically redirected to alternative paths to ensure optimal performance for critical applications.
  • Enhanced Application Performance: By prioritizing application requirements over traditional routing metrics, application-aware routing enhances overall application performance and reliability, adapting to changing network conditions proactively.
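The policy-based steering described above is expressed in Cisco SD-WAN as an application-aware routing (app-route) policy with SLA classes on the controller. The sketch below is illustrative only; the SLA thresholds, list names, and preferred color are assumptions, not values from this design.

```
! vSmart centralized policy sketch - names and thresholds are illustrative
policy
 sla-class TRANSACT-SLA
  latency 150
  loss 1
  jitter 30
 !
 app-route-policy BRANCH-AAR
  vpn-list BRANCH-VPNS
   sequence 10
    match
     dscp 46
    !
    action
     sla-class TRANSACT-SLA preferred-color mpls
    !
   !
  !
 !
apply-policy
 site-list BRANCHES
  app-route-policy BRANCH-AAR
 !
```

Traffic matching the sequence is kept on the preferred color while that path meets the SLA thresholds, and is moved to an alternate path when the thresholds are exceeded.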

Active-Active Data Centers:

  • Deploy active-active data centers with redundant SD-WAN gateways to ensure continuous availability of services and applications hosted in the data center.
  • Use load balancing and traffic engineering techniques to distribute traffic across multiple data center links and servers, minimizing the risk of downtime and improving scalability.

Controller Redundancy:

  • Deploy redundant controllers, such as Cisco Controllers and SD-WAN Manager, in geographically diverse locations to ensure high availability of centralized management and orchestration functions.
  • Configure controllers in a cluster mode with automatic failover capabilities to provide seamless operation in case of controller failure or maintenance.

Resilient Control Plane:

  • Design a resilient control plane architecture with distributed control plane nodes to prevent single points of failure and improve fault tolerance.
  • Use Overlay Management Protocol (OMP) to establish peer relationships between control plane nodes and exchange routing information.

Monitoring and Alerting:

  • Implement proactive monitoring and alerting mechanisms to detect and respond to network faults and performance degradation in real time.
  • Use network management tools and platforms to monitor key performance indicators (KPIs), link utilization, and availability metrics, and trigger automated alerts and notifications for remediation.

By implementing these high availability and redundancy measures, financial institutions can ensure continuous operation of their SD-WAN infrastructure, minimize downtime, and maintain service reliability for critical financial services and applications.

SD-WAN Manager DR for High Availability

gkavyasn_12-1723573768823.png

In the Banking Network Solutions infrastructure, there are a total of 12 SD-WAN Manager instances distributed between the primary Data Center (DC) and the Disaster Recovery (DR) site:

DC SD-WAN Manager Cluster:

  • Six SD-WAN Manager instances located in the Data Center form an active cluster for centralized network management and control.

DR SD-WAN Manager Cluster:

  • Another six SD-WAN Manager instances located in the Disaster Recovery site serve as a standby cluster to provide redundancy and failover capability in case of DC failure.

IP Space Configuration:

  • The SD-WAN Manager instances in the DR site are configured with a different IP address space compared to those in the DC, ensuring distinct identification and separation.

Tunnel Interface Shutdown:

  • To prevent the SD-WAN Manager instances in the DR site from joining the overlay network unintentionally, their tunnel interfaces are placed in a shutdown mode until needed.

Configuration and Certification:

  • The SD-WAN Manager instances in the DR undergo a thorough configuration and certification process to ensure their readiness for activation during failover scenarios.

Database Backup:

  • Regular snapshots and backups of the SD-WAN Manager configuration database are taken to facilitate restoration and recovery processes in case of unexpected failures.

Standby SD-WAN Manager Restoration:

  • In the event of an unexpected failure of the active SD-WAN Manager NMS, the most recent configuration database backup is used to restore the standby SD-WAN Manager NMS. Once restored, the standby SD-WAN Manager NMS can be activated, ensuring minimal disruption to network operations.

By following these measures, Banking Network Solutions can maintain high availability and resilience in their SD-WAN management infrastructure, ensuring continuous network operations even in the face of unforeseen challenges or disasters.

SD-WAN Controller High Availability

gkavyasn_13-1723573781814.png

 

SD-WAN Controller Deployment:

  • Banking Network Solutions plans to deploy 8 SD-WAN Controller instances organized into 8 groups to effectively manage their SD-WAN infrastructure.
  • Controller affinity assigns each Controller to a specific group, giving administrators control over which Controllers each WAN Edge router connects to.

Deployment Strategy:

  • SD-WAN Controllers are configured in an active-active mode:
    • 4 instances located in the primary Data Center (DC).
    • 4 instances situated in the Disaster Recovery (DR) site.
  • Redundancy is further ensured with 2 instances in cold-standby mode at both the DC and DR sites, serving as backups in case of virtual machine failures.

Affinity Group and Redundancy:

  • Affinity Group settings establish redundancy between Controllers in the DC and DR sites, facilitating seamless failover and continuity of network management operations.
  • This approach optimizes WAN Edge router connections, supporting efficient network operation and redundancy across Banking Network Solutions' SD-WAN infrastructure.
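On the WAN Edge side, controller affinity of the kind described above is configured with a controller group list and limits on control connections. The sketch below is a minimal illustration; the group IDs and interface name are assumptions, not the exact Banking Network Solutions values.

```
! WAN Edge affinity sketch - group IDs and interface are illustrative
system
 ! Prefer Controllers in group 1 (DC), fall back to group 2 (DR)
 controller-group-list 1 2
 ! Form OMP sessions with at most two SD-WAN Controllers
 max-omp-sessions 2
!
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   ! One control connection per transport toward the Controllers
   max-control-connections 1
  !
```

Each SD-WAN Controller is assigned its own group ID in its system configuration, so the edge's ordered group list determines primary and backup Controller preference.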

SD-WAN Validator High Availability

gkavyasn_14-1723573798892.png

 

SD-WAN Validator Deployment:

  • Banking Network Solutions plans to deploy 8 SD-WAN Validator instances in their network infrastructure.
  • Host entries will be configured instead of relying on DNS to ensure more reliable and predictable communication.

Redundancy and Grouping Strategy:

  • Validators are grouped to ensure redundancy between primary (DC) and secondary (DR) sites.
  • For the first 1500 WAN Edges, host entries point to SD-WAN Validators 1 & 2 in the DC, with additional entries pointing to SD-WAN Validators 1 & 2 in the DR. For the second 1500 WAN Edges, host entries point to SD-WAN Validators 2 & 3 in the DC, with additional entries pointing to SD-WAN Validators 2 & 3 in the DR. This grouping pattern repeats for subsequent groups of 1500 WAN Edges.

Cold-Standby Instances:

  • Both DC and DR environments include 2 SD-WAN Validator instances in cold-standby mode.
  • These standby instances act as backups in case of virtual machine failures, enhancing overall redundancy and reliability within the infrastructure.

SD-WAN Head-End Scalability

SD-WAN Head-End scalability is crucial for ensuring your network can handle growing bandwidth demands, increasing numbers of sites, and diverse traffic patterns. Here's an overview of key approaches to achieve SD-WAN Head-End scalability:

  Vertical Scaling:

  • Function:  Increases the processing power and memory of existing head-end hardware.
  • Benefits:
    • Simpler implementation, often requiring only hardware upgrades to existing equipment.
    • Potentially cost-effective for smaller increases in capacity.
  • Considerations:
    • Limited scalability compared to horizontal scaling.
    • May not be suitable for significant increases in network size or complexity.
    • Hardware upgrades might disrupt network operations, requiring careful planning and scheduling.

  Horizontal Scaling:

  • Function:  Adds additional head-end instances to distribute the workload and increase overall capacity.
  • Benefits:
    • Highly scalable, allowing significant increases in network size and complexity.
    • Enables elastic scaling, adding head-end instances as needed to meet growing demands.
    • Improves fault tolerance, with remaining head-ends continuing operations if one fails.
  • Considerations:
    • Requires additional hardware or virtual resources for each added head-end instance.
    • Complexity increases with more head-ends, requiring careful configuration and management.
    • May introduce potential performance impacts if not properly implemented and configured.

Banking Network Solutions Scalability Considerations

  • 6000+ branch sites in total, comprising:
  • 3 x DC/DR/NDR sites with 4 cEdge per site
  • 100 x sites with 2 cEdge per site
  • 6000 x sites with 1 cEdge per site
  • 3 service VPNs for branch sites for segmentation

Scalability Considerations – Controllers

gkavyasn_15-1723573846264.png

 SD-WAN Manager

sitirkey_39-1723558570132.png

Note: For Banking Network Solutions SD-WAN, two 6-node SD-WAN Manager clusters are deployed: an active cluster (DC) and a cold-standby cluster (DR).

 SD-WAN Controller

  • 2,000 WAN edge with control connections over a single transport = 2000 OMP sessions and 2000 DTLS per SD-WAN Controller
  • 2,000 WAN edge with control connections over dual transports = 2000 OMP sessions and 4000 DTLS per SD-WAN Controller
  • Configurations on the WAN Edge routers dictate how many total connections or OMP sessions are made to the multiple SD-WAN Controllers
  • Controller group configurations are used to achieve affinity, and help to manage scale & SD-WAN Controller connection preferences
  • For large deployments (> 2000 WAN Edges), affinity (an additional design tool) is used to manage SD-WAN Controller horizontal scaling

Note: For Banking Network Solutions SD-WAN, a horizontal scale-out model with 8 SD-WAN Controllers is deployed: 4 SD-WAN Controllers in the DC and 4 in the DR, along with controller affinity.

Additional Considerations:
  • Scalability limitations of different SD-WAN controller platforms (SD-WAN Manager and SD-WAN Validator) should be evaluated based on specific deployment requirements.
  • Network infrastructure, geographical distribution, and traffic patterns should be factored into the overall scalability plan.

 SD-WAN Validator

  • For <1500 WAN Edges with no HA requirement, a single SD-WAN Validator instance is adequate
  • For <1500 WAN Edges with an HA requirement, 2 SD-WAN Validators are recommended
  • For >1500 WAN Edges, horizontal scale-out with additional SD-WAN Validators is required

Note: For Banking Network Solutions SD-WAN, a horizontal scale-out model with 8 SD-WAN Validators is deployed: 4 SD-WAN Validators in the DC and 4 in the DR.

WAN Edge

There is a limit on the number of BFD session tunnels that each cEdge router can accommodate. A BFD session tunnel is defined as an active IPsec data plane tunnel.

  • For DC WAN Edge C8500: the maximum number of IPsec tunnels is 8000
  • For Branch WAN Edge C8200: the maximum number of IPsec tunnels is 1500 to 2500, depending on the hardware model.

Note: For the Banking Network Solutions SD-WAN hub-and-spoke topology, the following approach reduces the number of tunnels built on DC WAN Edge routers:

  • DC WAN Edge routers are grouped into two different groups
  • Each branch WAN Edge router forms tunnels to the DC WAN Edge routers of one group only
gkavyasn_16-1723573865357.png
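One way to realize the hub grouping above is a centralized control policy on the SD-WAN Controller that advertises only one hub group's TLOCs to each branch group. The sketch below is an assumption-laden illustration; the list names, site IDs, and TLOC addresses are invented for the example and the same result could also be achieved with tunnel groups.

```
! vSmart control-policy sketch - all names, site IDs, and TLOCs are illustrative
policy
 lists
  site-list BRANCH-GROUP-1
   site-id 1000-3999
  !
  tloc-list DC-HUBS-GROUP-2
   tloc 10.255.1.3 color mpls encap ipsec
   tloc 10.255.1.4 color mpls encap ipsec
  !
 !
 control-policy TUNNELS-TO-GROUP-1
  ! Hide group-2 hub TLOCs from group-1 branches
  sequence 10
   match tloc
    tloc-list DC-HUBS-GROUP-2
   !
   action reject
   !
  !
  default-action accept
 !
!
apply-policy
 site-list BRANCH-GROUP-1
  control-policy TUNNELS-TO-GROUP-1 out
 !
```

Because the filtered TLOCs are never advertised to group-1 branches, no BFD sessions or IPsec tunnels are attempted toward the group-2 hubs, halving the tunnel count on each DC WAN Edge.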

SD-WAN Topologies

SD-WAN deployments can utilize various topologies to connect different network locations and optimize traffic flow.

Hub and Spoke Topology

In the Banking Network Solutions SD-WAN solution, a hub-and-spoke topology will be used. In this topology, the WAN Edge routers at the DC/DR/NDR sites act as hubs that receive data traffic from all branch sites and redirect it to the appropriate destinations. This reduces the number of IPsec tunnels by preventing direct tunnels between branches, so spoke routers do not spend CPU cycles maintaining those tunnels. Hub-and-spoke topologies also save tunnel capacity, since tunnels are built only to the hub routers. Administratively, the topology makes it easy to apply centralized policies at the WAN Edge hub routers to control traffic between all sites.

  • Simplified management:  Centralized configuration and control of network policies and security settings from the hub.
  • Efficient bandwidth utilization:  Traffic can be aggregated and backhauled to the internet through the central hub, optimizing bandwidth usage.
  • Increased security:  Centralized security policies and potential firewall deployment at the hub can offer enhanced security posture.

The following diagrams illustrate Hub (DC) & Spoke and Partial Mesh topologies:

SD-WAN Topology – Hub/Spoke

gkavyasn_17-1723573884016.png

SD-WAN Topology – Partial Mesh

gkavyasn_18-1723573892204.png

The choice between Hub and Spoke and Partial Mesh Tunnels depends on your specific needs and network characteristics:

  • Network size and complexity:  Consider the number of sites, traffic patterns, and potential future growth.
  • Security requirements:  Evaluate your security needs and how each topology aligns with your security posture.
  • Performance and latency:  Choose the topology that optimizes performance and minimizes latency for your critical applications.
  • Hybrid approaches:  Combining Hub and Spoke with Partial Mesh Tunnels might be suitable in scenarios with diverse traffic patterns and varying needs across different sites.
  • SD-WAN solutions:  Many SD-WAN solutions offer built-in support for both Hub and Spoke and Partial Tunnel functionalities, providing flexibility in network design.

In Banking Network Solutions, a hybrid topology was chosen because voice traffic between Contact Centre locations must avoid the added latency of transiting the hub.

By understanding the advantages and limitations of each topology and carefully considering your specific needs, you can choose the most appropriate SD-WAN architecture for your organization.

SD-WAN policy Design

In SD-WAN policy design, various elements are used to define how traffic is handled, routed, and controlled across the network. Here are some key components:

  1. Color: A color identifies an individual WAN transport on a device (for example, mpls, biz-internet, or public-internet). Colors determine how tunnels are formed between devices: private colors build tunnels using private (pre-NAT) addresses, while public colors use post-NAT addresses.
  2. Transport Locator (TLOC): A TLOC identifies the point at which a WAN Edge router attaches to a transport. It is the tuple of system IP address, color, and encapsulation, and it is advertised to the controllers in OMP TLOC routes so that other routers know where to build tunnels.
  3. Tunnel-Group: By default, a WAN Edge attempts tunnels to all compatible remote TLOCs. When tunnel groups are configured, a TLOC attempts tunnels only to remote TLOCs that share the same tunnel-group ID (and to TLOCs with no tunnel group configured), regardless of color, which simplifies restricting tunnel formation across large topologies.
  4. Restrict: When the restrict option is set on a tunnel interface's color, that TLOC attempts tunnels only to remote TLOCs of the same color. This is commonly used on private transports such as MPLS to avoid unnecessary tunnel attempts toward dissimilar transports.
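The color, restrict, and tunnel-group attributes are all set under the tunnel interface on the WAN Edge. The fragment below is a minimal sketch; the interface name and group ID are illustrative assumptions.

```
! Tunnel-interface attribute sketch - interface and group ID are illustrative
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   encapsulation ipsec
   ! Build tunnels only to other mpls-colored TLOCs
   color mpls restrict
   ! Build tunnels only within tunnel group 100
   group 100
  !
```

Combining restrict with a tunnel group gives fine-grained control over which pairs of TLOCs ever attempt to form data-plane tunnels.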

Together, these attributes determine which tunnels are formed and how traffic is steered and controlled across the SD-WAN fabric. By carefully configuring and managing them within SD-WAN policies, organizations can ensure optimal performance, reliability, and security for their network traffic.

In Banking Network Solutions, many networks/routes will be advertised into the SD-WAN overlay network from the data centres. Below are some of the routes that we would receive from the DC/DR/NDR.  

  • Default route - 0.0.0.0/0
  • DC LAN subnets
  • DR LAN subnets
  • NDR LAN subnets
  • Public Cloud DX/ER Private Subnets via DR & NDR

Because the prefix set is large and can change at any time, the routes received on the service side at DC/DR/NDR will be matched using OSPF tags.

The following design considerations apply:

  • DC-originated LAN routes will be set to OSPF Tag 10 in the DC LAN
  • DR-originated LAN routes will be set to OSPF Tag 20 in the DR LAN
  • NDR-originated LAN routes will be set to OSPF Tag 30 in the NDR LAN
  • There are AWS Direct Connect and Azure Express route connectivity in DR and NDR.
  • Routes received from the Cloud providers will be tagged in DR with Tag 820 and in NDR with Tag 830 in OSPF with their respective LAN networks.
  • Default Route will be advertised from DC/DR/NDR into the SD-WAN fabric.
  • SDWAN Controller/vSmart will advertise this default route to all remote sites by default. However, this behaviour will be modified with policies as follows:
    • While the default route is advertised to remote sites, the OMP route preference (300, 200, 100) will be modified so that the default route from DC, DR, and NDR is preferred in that order
  • OSPF tag 10, 20, 30, 820 & 830 will be mapped to OMP tags while redistributing the OSPF to OMP in DC/DR/NDR Wan Edge routers.
  • Routes advertised from DC with Tag 10 will be set with the highest OMP preference of 300 and advertised to the remote branches
  • Routes advertised from DR with Tag 20 will be set with the highest OMP preference of 300 and advertised to the remote branches
  • Routes advertised from NDR with Tag 30 will be set with the highest OMP preference of 300 and advertised to the remote branches
  • Routes advertised from DR with Tag 820 will be set with an OMP preference of 200 and advertised to the remote branches
  • Routes advertised from NDR with Tag 830 will be set with an OMP preference of 300 and advertised to the remote branches
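On the SD-WAN Controller, the tag-to-preference mapping described above is implemented as a centralized control policy matching the OMP tag. The sketch below shows the pattern for two of the tags only; the policy and list names are illustrative assumptions.

```
! vSmart control-policy sketch - names are illustrative, only two tags shown
policy
 control-policy DC-DR-NDR-PREFERENCE
  sequence 10
   match route
    omp-tag 10          ! DC-originated LAN routes
   !
   action accept
    set
     preference 300
    !
   !
  !
  sequence 20
   match route
    omp-tag 820         ! cloud routes learned via DR
   !
   action accept
    set
     preference 200
    !
   !
  !
  default-action accept
 !
!
apply-policy
 site-list BRANCH-GROUP-1
  control-policy DC-DR-NDR-PREFERENCE out
 !
```

A second policy with different preference values would be applied to the group-2 branch site list, as shown in the figures that follow.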

  SD-WAN Control Policy for DC/DR/NDR Route Preference to Group 1 branches

gkavyasn_19-1723573908677.png

SD-WAN Control Policy for DC/DR/NDR Route Preference to Group 2 branches

gkavyasn_20-1723573917074.png

 

  • Routes advertised from the remote branches from VPN 10, will be set with OMP Tag 100 and advertised to DC/DR/NDR
  • Routes advertised from the remote branches from VPN 20, will be set with OMP Tag 200 and advertised to DC/DR/NDR
  • Routes advertised from the remote branches from VPN 30, will be set with OMP Tag 300 and advertised to DC/DR/NDR
  • While redistributing the remote branch routes into OSPF at DC/DR/NDR, OMP tags will be mapped to OSPF tags per the table below for each data centre.

 

Branch Site OMP Tag | Service VPN | DC OSPF Tag | DR OSPF Tag | NDR OSPF Tag
100 | 10 | 510 | 610 | 710
200 | 20 | 520 | 620 | 720
300 | 30 | 530 | 630 | 730


  • In order to avoid routing loops, the branch routes advertised into the DC/DR/NDR LAN will be dropped in OSPF by matching the OSPF tags in the range 510–730 shown in the table above, using table-map configurations under OSPF in the respective VRF/VPNs on the SD-WAN edge routers.

  SD-WAN Control Policy for OSPF - OMP Tagging and Route Loop Avoidance.

gkavyasn_21-1723573925234.png
  • At remote sites, static and connected routes will be redistributed into OMP with tagging and metrics
  • A remote site will advertise all prefixes available within that site only.
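The table-map loop-prevention step can be sketched in IOS XE as follows. The route-map name, process ID, and VRF are illustrative assumptions; the tag list reflects the ranges in the table above.

```
! Loop-avoidance sketch - route-map name, process ID, and VRF are illustrative
! Deny routes carrying SD-WAN-originated OSPF tags from being installed in the RIB
route-map BLOCK-SDWAN-TAGS deny 10
 match tag 510 520 530 610 620 630 710 720 730
!
route-map BLOCK-SDWAN-TAGS permit 20
!
router ospf 10 vrf 10
 table-map BLOCK-SDWAN-TAGS
```

With this in place, a DC/DR/NDR edge that re-learns a branch prefix from the LAN (carrying the tag another edge applied on redistribution) will not reinstall it, breaking the potential loop.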

Direct Internet Access (DIA)

Cisco SD-WAN Direct Internet Access is a solution that improves the user experience for SaaS applications at remote sites by eliminating the performance degradations related to backhauling Internet traffic to central data centers. DIA allows control of Internet access on a per VPN basis.

Direct Internet Access

gkavyasn_22-1723573933983.png

This section explains how the Internet Access at branches will be replaced by Direct Internet Access (DIA) provided by Cisco SD-WAN edges. Internet access at DCs will not use SD-WAN Edges since it is normally handled by dedicated devices in the DMZ within the Data Centers.

Benefits of using DIA include :

  • Reduced bandwidth consumption, latency and cost savings on WAN links by offloading Internet traffic from the private WAN circuit.
  • Improved branch office user experience by providing Direct Internet Access (DIA) for employees at remote site locations

Cisco SD-WAN Direct Internet Access is characterized by providing different options and large flexibility to customers. DIA can be controlled by routing, by policy or by a combination of the two and it can be also redirected to a cloud-based security solution (like Cisco Umbrella). Different options for primary and backup path can be used. This section first describes the general principles for branch DIA.

Branch Direct Internet Access (Plain DIA)

Branch DIA

Branch DIA covers two main use cases: branch internal employees and guests. As shown in the figure, branch (remote-site) employees are allowed direct access to the Internet for cloud-based applications and user web access. This is achieved by configuring the WAN Edge routers as an Internet exit point. Internal employee Internet traffic uses the directly connected Internet transport for direct Internet access, while corporate traffic uses the SD-WAN tunnels to its destination.

Branch Internal User DIA Traffic

gkavyasn_23-1723573956526.png

Branch users access the Internet directly for user web access and cloud-based applications, without routing their traffic via the internal network and through the Hub sites.

In general DIA use, segmentation is useful for keeping authenticated employees separate from guest users; it is achieved by assigning employees and guests to different SD-WAN service VPNs.

For DIA, NAT is enabled on the WAN Edge devices to translate packets exiting to the Internet at the branch. NAT is a required feature when providing a local breakout.

Several NAT types can be used: interface overload, NAT pool, or NAT loopback. The most common is NAT overload, which maps multiple unregistered IP addresses to a single registered IP address by using different ports. A NAT pool allows a larger, configurable pool of public IPs to be used. NAT loopback uses a loopback interface on the inside. The NAT operation on outgoing traffic is performed in VPN 0, which is always the transport VPN; the router's connection to the Internet is in VPN 0.

There are two ways in which traffic from the service VPNs (user or guest) can be redirected to the DIA on VPN0:

  • DIA with NAT DIA Route traffic redirection
  • Centralized data policy
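The NAT DIA route option can be sketched on a cEdge as follows. The interface names and ACL name are illustrative assumptions; only the `ip nat route vrf … global` statement is the piece specific to NAT DIA route redirection.

```
! NAT DIA route sketch - interface and ACL names are illustrative
! Transport-side NAT in VPN 0 using interface overload
interface GigabitEthernet0/0/1
 ip nat outside
!
ip nat inside source list dia-acl interface GigabitEthernet0/0/1 overload
!
! Redirect a default route from service VPN 10 into the transport VPN for DIA
ip nat route vrf 10 0.0.0.0 0.0.0.0 global
```

With the centralized data policy alternative, the `nat use-vpn 0` action in a vSmart data policy performs the redirection instead, which allows per-application rather than per-prefix breakout.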

Guest Access

Cisco SD-WAN provides an easy and secure way to create a guest segment that is isolated from the enterprise network and has its own security policies. Typical guest DIA traffic policies include:

  • Restricting bandwidth usage of guest users
  • Restricting access to certain Internet resources
  • Restricting access to internal enterprise resources
  • Protecting the network from malicious content

Security

Cisco SD-WAN allows the security stack to be deployed directly on the WAN Edge devices on-site. This reduces the need for separate security appliances at every branch by providing built-in security features, including DNS security, application-aware firewall, URL filtering, IPS/IDS, and Advanced Malware Protection (AMP).

In addition, instead of enabling the security stack at the WAN Edge routers, the DIA feature allows routing the traffic through a cloud security provider. In this case, the traffic from a particular remote site is routed to the cloud security provider through point-to-point IPsec tunnels. The cloud security provider then applies its predefined security policies to the traffic and routes it out to the Internet.

Application Prioritization and QoS

In Banking Network Solutions, application prioritization and Quality of Service (QoS) are crucial for ensuring optimal performance and reliability of critical financial applications. Here are key considerations for implementing application prioritization and QoS in a Cisco SD-WAN deployment:

The QoS feature on the Edge routers works by examining packets entering at the edge of the network. The localized data policy can be used to:

  • Provision QoS
  • Classify incoming data packets into multiple forwarding classes based on their importance level
  • Spread the classes across different interface queues
  • Schedule the transmission rate level for each queue.

A class map can be configured for each output queue to specify the bandwidth, buffer size, and packet loss priority (PLP) of the output queues. This allows the Edge to determine how to prioritize data packets for transmission to the destination. Depending on the priority of the traffic and configuration, the Edge router assigns packets higher or lower bandwidth, buffer levels, and drop profiles.

In Banking Network Solutions, the following four-class QoS model will be deployed. Critical and non-critical applications are differentiated for classification based on IP details.

 

Class | Traffic type | BW Allocation | Typical Expected Business Traffic
Class-1 | Control/Priority Traffic | 10% | Control/Priority Traffic
Class-2 | Critical Subnet Traffic | 40% | Interactive Video, Mission critical applications, Financial transactions
Class-3 | Non-Critical Subnet Traffic | 20% | E-mail, Client Server transactions, Intranet applications, Streaming Audio/Video
Class-4 | Best Effort/Default | 30% | Web browsing, FTP, LAN-to-LAN data transfer
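The four-class model above can be sketched in IOS XE class-maps and a policy-map. This is a simplified illustration; the class and policy names, the qos-group values, and the choice of a priority queue for Class-1 are assumptions, and the classification of traffic into qos-groups (e.g., by IP subnet) would be done separately.

```
! Four-class QoS sketch - names and qos-group values are illustrative
class-map match-any Class1
 match qos-group 0
class-map match-any Class2
 match qos-group 1
class-map match-any Class3
 match qos-group 2
!
policy-map WAN-QOS
 class Class1
  priority percent 10              ! Control/Priority traffic
 class Class2
  bandwidth remaining percent 40   ! Critical subnet traffic
 class Class3
  bandwidth remaining percent 20   ! Non-critical subnet traffic
 class class-default
  bandwidth remaining percent 30   ! Best effort/default
```

The policy-map would then be applied (typically under a shaper) on the WAN-facing interface's output direction.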

  Application Identification:

  • Utilize deep packet inspection (DPI) and application recognition capabilities to accurately identify and classify financial applications, such as trading platforms, banking portals, and transaction processing systems.
  • Create application-aware policies based on application signatures, protocols, or port numbers to prioritize and differentiate traffic flows based on their importance to business operations.

Traffic Prioritization:

  • Define QoS policies to prioritize traffic based on application criticality, latency sensitivity, and user requirements.
  • Allocate dedicated bandwidth, priority queues, and buffer resources to high-priority applications to ensure timely delivery and low latency for critical transactions and real-time data feeds.

Traffic Shaping and Policing:

  • Implement traffic shaping and policing mechanisms to regulate bandwidth utilization and enforce QoS policies across the SD-WAN fabric.
  • Shape outbound traffic to smooth bursty traffic patterns and prevent congestion, while policing inbound traffic to enforce rate limits and prevent bandwidth abuse.

Class-Based QoS:

  • Organize traffic into different classes or traffic profiles based on their characteristics, such as delay-sensitive, mission-critical, or best-effort traffic.
  • Apply differentiated QoS policies to each traffic class, specifying parameters such as bandwidth guarantees, latency thresholds, and packet drop probabilities to meet service level objectives (SLOs).

Dynamic Path Selection:

  • Leverage dynamic path selection algorithms to intelligently route traffic over the most suitable path based on application requirements and network conditions.
  • Consider factors such as link quality, latency, jitter, and available bandwidth when selecting optimal paths for different types of traffic, dynamically adjusting routing decisions as network conditions change.

Application Performance Monitoring:

  • Monitor application performance metrics, such as response times, throughput, and packet loss rates, to assess the effectiveness of QoS policies and identify areas for optimization.
  • Use performance monitoring tools and dashboards to visualize application performance across the SD-WAN fabric and troubleshoot performance issues proactively.

Continuous Optimization:

  • Continuously review and optimize QoS policies based on evolving business requirements, application usage patterns, and network performance metrics.
  • Collaborate with application owners, network engineers, and business stakeholders to refine QoS policies and ensure alignment with business priorities and performance objectives.

Per Tunnel QoS

This feature allows a Quality of Service (QoS) policy to be applied to individual tunnels, ensuring that branch offices with smaller throughput are not overwhelmed by larger aggregation/hub sites. Per-tunnel QoS on a hub shapes tunnel traffic to individual spokes, and it can also differentiate individual data flows going through a tunnel to a spoke for policing.

Per-tunnel QoS for Cisco SD-WAN provides the following benefits:

  • A QoS policy is configurable on the basis of session groups, thus providing the capability of regulating traffic from hub to spokes at a per-spoke level.
  • The hub cannot send excessive traffic to a small spoke and overrun it.
  • The maximum outbound bandwidth and QoS queue are set up automatically when each spoke registers with an Overlay Management Protocol (OMP) message.
  • The amount of outbound hub bandwidth that a “greedy” spoke can consume can be limited; therefore, the traffic can’t monopolize a hub’s resources and starve other spokes.

A few restrictions of the per-tunnel QoS feature should be noted:

  • Only Cisco IOS XE SD-WAN devices can be configured as hubs, but both Cisco IOS XE SD-WAN devices and Cisco vEdge devices can be configured as spokes.
  • Per-tunnel QoS can only be applied on hub-to-spoke network topologies.
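The hub/spoke roles for per-tunnel QoS are set under the tunnel interface, with the spoke advertising its downstream bandwidth to the hub via OMP. The sketch below is illustrative; the interface names and bandwidth value are assumptions.

```
! Per-tunnel QoS sketch - interface names and bandwidth are illustrative
! Hub side (Cisco IOS XE SD-WAN device only)
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   tunnel-qos mode hub
  !
!
! Spoke side: declare the role and the downstream bandwidth (kbps)
sdwan
 interface GigabitEthernet0/0/0
  tunnel-interface
   tunnel-qos mode spoke
  !
  bandwidth-downstream 50000
```

When the spoke registers, the hub automatically builds a per-spoke shaper from the advertised bandwidth, so a QoS policy on the hub's WAN interface is then applied per tunnel rather than in aggregate.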

[Figure: Per-Tunnel QoS]

In Banking Network Solutions, per-tunnel QoS will be deployed to take advantage of the features described above, since the Banking Network Solutions topology is primarily hub-and-spoke.

By implementing these best practices for application prioritization and QoS in a Cisco SD-WAN deployment, Banking Network Solutions can ensure reliable and responsive delivery of critical Banking applications, improve user experience, and maintain business continuity even during periods of high network congestion or degraded performance.

VPN Segmentation:

Service VPN Segmentation in Banking Network Solutions' SD-WAN

Following their SD-WAN migration, Banking Network Solutions implemented network segmentation using up to four Service VPNs for enhanced security.  

Service VPN Descriptions:

  • Service VPN 10 (Employee VPN) : Internal business applications for employees.
  • Service VPN 20 (ATM VPN): Financial applications accessed by ATMs and other devices. (Required at all branch sites)
  • Service VPN 30 (Monitoring VPN): Logging information (syslog/SNMP traps) and in-band management. (Required at all branch sites)

Site ID – Planned/Utilised in DC WAN Edge Routers

  • Site ID Definition: The site ID is a 32-bit unique integer used to identify locations within the SD-WAN network.
  • Purpose: It identifies the source location of an advertised prefix within the network.
  • Relevance for Controllers: While SD-WAN controllers do not directly utilize the site ID, it is still required to be set in the configuration.
  • Centralized Control Policy: Establishing a Site ID List helps enforce structure within the network's centralized control policy.  
  • Site Type Consideration: Policy is applied based on site type, so defining the range of site IDs for each type is crucial.  
  • Future Scaling: The site ID ranges are designed with future scaling in mind, ensuring that as the network grows, there is room for expansion without reconfiguration.

Traffic Engineering

Traffic engineering plays a crucial role in optimizing network performance and ensuring efficient utilization of resources in the financial sector. When deploying Cisco SD-WAN in this vertical, several key traffic engineering features should be leveraged:

Dynamic Path Selection:

  • Utilize dynamic path selection algorithms to intelligently route traffic over the most optimal paths based on real-time network conditions, application requirements, and business priorities.
  • Implement algorithms such as dynamic multipath optimization (DMPO) to dynamically distribute traffic across multiple paths and avoid congestion or performance degradation on any single link.

Application-Aware Routing:

  • Implement application-aware routing policies to prioritize and steer traffic based on the specific requirements of financial applications.
  • Define routing policies that consider application characteristics, such as latency sensitivity, bandwidth requirements, and quality of service (QoS) parameters, to ensure optimal performance and user experience.
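As a concrete illustration, an application-aware routing policy effectively filters paths against an SLA class. The sketch below is illustrative Python; the path names and thresholds are assumed, not taken from the source:

```python
# Keep only the paths whose measured loss (%), latency (ms), and jitter (ms)
# meet every threshold of the SLA class.
def paths_meeting_sla(paths, sla):
    return [name for name, m in paths.items()
            if m["loss"] <= sla["loss"]
            and m["latency"] <= sla["latency"]
            and m["jitter"] <= sla["jitter"]]

measured = {
    "mpls": {"loss": 0.1, "latency": 30, "jitter": 5},
    "internet": {"loss": 2.0, "latency": 120, "jitter": 20},
}
transactional_sla = {"loss": 1.0, "latency": 100, "jitter": 10}
# Only the MPLS path satisfies this transactional SLA.
compliant = paths_meeting_sla(measured, transactional_sla)
```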

By leveraging dynamic path selection and application-aware routing traffic-engineering features in Cisco SD-WAN, financial institutions can optimize network performance, ensure reliable connectivity for critical transactions, and enhance user experience across the network infrastructure.

BFD Path Quality

BFD is used not only to detect blackout conditions but also to measure various path characteristics such as loss, latency, and jitter. These measurements are compared against the configured thresholds defined by the application-aware routing policy, and dynamic path decisions can be made based on the results in order to provide optimal quality for business-critical applications.

For measurements, the WAN Edge router collects packet loss, latency, and jitter information for every BFD hello packet. This information is collected over the poll-interval period, which is 10 minutes by default, and then the average of each statistic is calculated over this poll-interval time. A multiplier is then used to specify how many poll-interval averages should be reviewed against the SLA criteria. By default, the multiplier is 6, so 6 x 10-minute poll-interval averages for loss, latency, and jitter are reviewed and compared against the SLA thresholds before an out-of-threshold decision is made. The calculations are rolling, meaning, on the seventh poll interval, the earliest polling data is discarded to accommodate the latest information, and another comparison is made against the SLA criteria with the newest data.


Since statistical averages are used to compare against configured SLA criteria, how quickly convergence happens depends on how far out of threshold a parameter is. Using default settings, the best case is an out-of-threshold condition that occurs after 1 poll interval is completed (10 minutes) and in the worst case, it occurs after 6 poll intervals are completed (60 minutes).  When an out-of-threshold condition occurs, traffic is moved to a more optimal path.

The following figure shows an example when an out-of-threshold condition is recognized when latency suddenly increases. When latency jumps from 20 ms to 200 ms at the beginning of poll-interval 7, it takes 3 poll intervals of calculations before the latency average over 6 poll intervals crosses the configured SLA threshold of 100 ms.
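The rolling calculation can be sketched in a few lines of Python; the numbers reproduce the example above (default multiplier of 6, SLA threshold of 100 ms, latency jumping from 20 ms to 200 ms):

```python
from collections import deque

def out_of_threshold(poll_averages_ms, multiplier=6, sla_latency_ms=100):
    """Return True once the mean of the last `multiplier` poll-interval
    averages exceeds the SLA latency threshold."""
    window = deque(maxlen=multiplier)
    for avg in poll_averages_ms:
        window.append(avg)
        if len(window) == multiplier and sum(window) / multiplier > sla_latency_ms:
            return True
    return False

# Two intervals at 200 ms: mean is (4*20 + 2*200)/6 = 80 ms -> still within SLA.
out_of_threshold([20] * 6 + [200] * 2)   # False
# Three intervals at 200 ms: mean is (3*20 + 3*200)/6 = 110 ms -> out of SLA.
out_of_threshold([20] * 6 + [200] * 3)   # True
```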

[Figure: out-of-threshold example as latency increases from 20 ms to 200 ms]

You may want to adjust application route poll-interval values, but exercise caution: settings that are too low can produce false positives for loss, latency, and jitter, and can cause traffic instability. It is important that there are enough BFD hellos per poll interval for the average calculation; otherwise, large loss percentages may be incorrectly tabulated when a single hello is lost. In addition, lowering these timers can affect the overall scale and performance of the WAN Edge router.

For 1-second hellos, the lowest application route poll-interval that should be deployed is 120 seconds. With 6 intervals, this gives a 2-minute best case and a 12-minute worst case before an out-of-threshold condition is declared and traffic is moved from the current path. Any further timer adjustments should be thoroughly tested and used cautiously.
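The best- and worst-case convergence times follow directly from the poll interval and multiplier (a hypothetical helper, just to make the arithmetic explicit):

```python
def convergence_bounds_minutes(poll_interval_s, multiplier=6):
    """Best case: one completed poll interval; worst case: `multiplier` intervals."""
    return poll_interval_s / 60, poll_interval_s * multiplier / 60

convergence_bounds_minutes(600)  # defaults: (10.0, 60.0) minutes
convergence_bounds_minutes(120)  # lowest recommended with 1 s hellos: (2.0, 12.0)
```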


Deployment

When deploying Cisco SD-WAN in the financial sector, it's essential to consider various deployment aspects to ensure a successful implementation. Here are key considerations for each phase of the deployment process:  

Implementation Prerequisites:

  • Assess network infrastructure readiness, including existing hardware, software, and connectivity requirements.
  • Define business and technical requirements for SD-WAN deployment, such as performance objectives, security policies, and regulatory compliance mandates.
  • Ensure sufficient resources, including skilled personnel, budget allocation, and project timelines, to support the deployment effort effectively.

Deployment Considerations:

  • Evaluate deployment options, such as phased rollout vs. full deployment, based on organizational priorities, risk tolerance, and resource availability.
  • Plan for network migration and cutover strategies to minimize disruption to business operations during the transition to SD-WAN.
  • Consider scalability, flexibility, and future growth requirements when designing the SD-WAN architecture to accommodate evolving business needs and technological advancements.

Deployment by Branch Size:

Large, medium, and small branch sites require three Service VPNs (Employee, ATM, Monitoring).

Optional: Service VPN 40 (Guest) where customer-facing representatives are present. Offering Service VPN 40 (Guest) in select small branches is a business decision based on cost-effectiveness.

Future Considerations:

  • Customer-facing web applications are currently on-premises but may move to the cloud.
  • Guest Wi-Fi access in small branches allows for potential future customer service opportunities.

Key Takeaways:

  • Banking Network Solutions prioritizes secure network segmentation using Service VPNs.
  • They tailor Service VPN deployment based on branch size and function.
  • Guest Wi-Fi access is a strategic decision for potential customer engagement.

 

Planning

 

Site ID Planning for DC Routers

| DC Name | DC Code | Group Range | Site-ID Range |
|---------|---------|-------------|---------------|
| DC      | 10      | 1-9         | 101-109       |
| DR      | 20      | 1-9         | 201-209       |
| NDR     | 30      | 1-9         | 301-309       |
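As a quick sanity check, the data-center ranges above can be encoded in a small lookup (a hypothetical helper, not part of any Cisco tooling):

```python
# Data-center site-ID ranges from the Site ID Planning table above.
DC_SITE_ID_RANGES = {
    "DC": range(101, 110),   # 101-109
    "DR": range(201, 210),   # 201-209
    "NDR": range(301, 310),  # 301-309
}

def dc_location(site_id):
    """Return the data-center name that owns `site_id`, or None."""
    for name, ids in DC_SITE_ID_RANGES.items():
        if site_id in ids:
            return name
    return None
```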

 

 

Site ID for DC/DR/NDR WAN Edges

| Site Name | Hostname    | Site ID |
|-----------|-------------|---------|
| DC        | DC-cEdge01  | 101     |
| DC        | DC-cEdge02  | 101     |
| DC        | DC-cEdge03  | 102     |
| DC        | DC-cEdge04  | 102     |
| DR        | DR-cEdge01  | 201     |
| DR        | DR-cEdge02  | 201     |
| DR        | DR-cEdge03  | 202     |
| DR        | DR-cEdge04  | 202     |
| NDR       | NDR-cEdge01 | 301     |
| NDR       | NDR-cEdge02 | 301     |
| NDR       | NDR-cEdge03 | 302     |
| NDR       | NDR-cEdge04 | 302     |

 

Site ID – Planned/Utilised For SD-WAN Manager, Controller & Validator

| Site Name | Manager Hostname | Site ID | Controller Hostname | Site ID | Validator Hostname | Site ID |
|-----------|------------------|---------|---------------------|---------|--------------------|---------|
| DC        | DC-Manager01     | 131     | DC-Controller01     | 121     | DC-Validator01     | 111     |
| DC        | DC-Manager02     | 131     | DC-Controller02     | 122     | DC-Validator02     | 112     |
| DC        | DC-Manager03     | 131     | DC-Controller03     | 123     | DC-Validator03     | 113     |
| DC        | DC-Manager04     | 131     | DC-Controller04     | 124     | DC-Validator04     | 114     |
| DC        | DC-Manager05     | 131     | DC-Controller05     | 125     | DC-Validator05     | 115     |
| DC        | DC-Manager06     | 131     | DC-Controller06     | 126     | DC-Validator06     | 116     |
| DR        | DR-Manager01     | 231     | DR-Controller01     | 221     | DR-Validator01     | 211     |
| DR        | DR-Manager02     | 231     | DR-Controller02     | 222     | DR-Validator02     | 212     |
| DR        | DR-Manager03     | 231     | DR-Controller03     | 223     | DR-Validator03     | 213     |
| DR        | DR-Manager04     | 231     | DR-Controller04     | 224     | DR-Validator04     | 214     |
| DR        | DR-Manager05     | 231     | DR-Controller05     | 225     | DR-Validator05     | 215     |
| DR        | DR-Manager06     | 231     | DR-Controller06     | 226     | DR-Validator06     | 216     |


Site ID - Planned/Utilised in Branch Edge Routers

 For Group-1 Branch Routers 

Site ID Planning for Group-1 Branch Routers

| Grp ID (1 digit) | Reg Code (2 digits) | Site Type (2 digits) | Site Code (4 digits) | Site ID Range (9 digits) |
|------------------|---------------------|----------------------|----------------------|--------------------------|
| 1                | 01-99               | 11-99                | 0001-9999            | 101110001-199999999      |

 

For Group-2 Branch Routers

Site ID Planning for Group-2 Branch Routers

| Grp ID (1 digit) | Reg Code (2 digits) | Site Type (2 digits) | Site Code (4 digits) | Site ID Range (9 digits) |
|------------------|---------------------|----------------------|----------------------|--------------------------|
| 2                | 01-99               | 11-99                | 0001-9999            | 201110001-299999999      |
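The digit fields above concatenate into the 9-digit branch site ID: 1-digit group, 2-digit region code, 2-digit site type, 4-digit site code. A hypothetical helper makes the composition explicit:

```python
def branch_site_id(group, region, site_type, site_code):
    """Compose a 9-digit branch site ID from its digit fields."""
    assert group in (1, 2) and 1 <= region <= 99
    assert 11 <= site_type <= 99 and 1 <= site_code <= 9999
    return int(f"{group}{region:02d}{site_type:02d}{site_code:04d}")

branch_site_id(1, 1, 11, 1)      # smallest Group-1 site ID: 101110001
branch_site_id(2, 99, 99, 9999)  # largest Group-2 site ID: 299999999
```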

System IP Planning

System IP (Persistent IPv4 address)

  • Uniquely identifies the device independently of interface address
  • Does not need to be routable or advertised; it is used as a marker for control policy

In Banking Network Solutions, System IP will be following the below guidelines:

  • Each cEdge and controller device needs a unique System IP assigned to it.
  • The Loopback 100 IP address is used as the System IP on cEdges.
  • System IPs for controllers are assigned per the table below.
  • System IPs will be allocated sequentially from the 10.237.0.0/16 IP block.
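Sequential allocation from the block can be sketched with Python's ipaddress module. This is an illustrative helper; the function name and offsets are assumptions, not part of the deployment:

```python
import ipaddress

def allocate_system_ips(block, count, offset=0):
    """Return `count` sequential host addresses from `block`,
    skipping the first `offset` hosts."""
    hosts = ipaddress.ip_network(block).hosts()
    addrs = [str(next(hosts)) for _ in range(offset + count)]
    return addrs[offset:]

# e.g. the first two System IPs out of the DC controller range:
allocate_system_ips("10.237.1.0/25", 2)  # ['10.237.1.1', '10.237.1.2']
```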

 

System IP for Controllers

| Site Location | System IP Range |
|---------------|-----------------|
| DC            | 10.237.1.0/25   |
| DR            | 10.237.2.0/25   |

 

 

 

|                              | DC            | DR            | Branch Locations                 |
|------------------------------|---------------|---------------|----------------------------------|
| System IP for Controllers    | 10.237.1.0/25 | 10.237.2.0/25 |                                  |
| System IP for DC Routers     | 10.237.3.0/25 | 10.237.4.0/25 |                                  |
| System IP for Branch Routers |               |               | 10.237.11.0/24 - 10.237.250.0/24 |

 

SD-WAN Controller IP Addressing – SD-WAN Manager

 

SD-WAN Manager IP Addressing

| Site Name | Hostname     | Site ID | System IP   | VPN 0 IP    | VPN 0 GW    | Cluster Interface IP | VPN 512 IP   | VPN 512 GW  |
|-----------|--------------|---------|-------------|-------------|-------------|----------------------|--------------|-------------|
| DC        | DC-Manager01 | 131     | 10.237.1.31 | 10.255.2.36 | 10.255.2.33 | 10.255.2.68          | 10.255.2.106 | 10.255.2.97 |
| DC        | DC-Manager02 | 131     | 10.237.1.32 | 10.255.2.37 | 10.255.2.33 | 10.255.2.69          | 10.255.2.107 | 10.255.2.97 |
| DC        | DC-Manager03 | 131     | 10.237.1.33 | 10.255.2.38 | 10.255.2.33 | 10.255.2.70          | 10.255.2.108 | 10.255.2.97 |
| DC        | DC-Manager04 | 131     | 10.237.1.34 | 10.255.2.39 | 10.255.2.33 | 10.255.2.71          | 10.255.2.109 | 10.255.2.97 |
| DC        | DC-Manager05 | 131     | 10.237.1.35 | 10.255.2.40 | 10.255.2.33 | 10.255.2.72          | 10.255.2.110 | 10.255.2.97 |
| DC        | DC-Manager06 | 131     | 10.237.1.36 | 10.255.2.41 | 10.255.2.33 | 10.255.2.73          | 10.255.2.111 | 10.255.2.97 |
| DR        | DR-Manager01 | 231     | 10.237.2.31 | 10.255.5.36 | 10.255.5.33 | 10.255.5.68          | 10.255.5.106 | 10.255.5.65 |
| DR        | DR-Manager02 | 231     | 10.237.2.32 | 10.255.5.37 | 10.255.5.33 | 10.255.5.69          | 10.255.5.107 | 10.255.5.65 |
| DR        | DR-Manager03 | 231     | 10.237.2.33 | 10.255.5.38 | 10.255.5.33 | 10.255.5.70          | 10.255.5.108 | 10.255.5.65 |
| DR        | DR-Manager04 | 231     | 10.237.2.34 | 10.255.5.39 | 10.255.5.33 | 10.255.5.71          | 10.255.5.109 | 10.255.5.65 |
| DR        | DR-Manager05 | 231     | 10.237.2.35 | 10.255.5.40 | 10.255.5.33 | 10.255.5.72          | 10.255.5.110 | 10.255.5.65 |
| DR        | DR-Manager06 | 231     | 10.237.2.36 | 10.255.5.41 | 10.255.5.33 | 10.255.5.73          | 10.255.5.111 | 10.255.5.65 |

 

 

SD-WAN Controller IP Addressing – SD-WAN Controllers

 

Controllers IP Addressing

| Site Name | Hostname        | Site ID | System IP   | VPN 0 IP    | VPN 0 GW    | VPN 512 IP   | VPN 512 GW  |
|-----------|-----------------|---------|-------------|-------------|-------------|--------------|-------------|
| DC        | DC-Controller01 | 121     | 10.237.1.21 | 10.255.2.42 | 10.255.2.33 | 10.255.2.112 | 10.255.2.97 |
| DC        | DC-Controller02 | 122     | 10.237.1.22 | 10.255.2.43 | 10.255.2.33 | 10.255.2.113 | 10.255.2.97 |
| DC        | DC-Controller03 | 123     | 10.237.1.23 | 10.255.2.44 | 10.255.2.33 | 10.255.2.114 | 10.255.2.97 |
| DC        | DC-Controller04 | 124     | 10.237.1.24 | 10.255.2.45 | 10.255.2.33 | 10.255.2.115 | 10.255.2.97 |
| DC        | DC-Controller05 | 125     | 10.237.1.25 | 10.255.2.46 | 10.255.2.33 | 10.255.2.116 | 10.255.2.97 |
| DC        | DC-Controller06 | 126     | 10.237.1.26 | 10.255.2.47 | 10.255.2.33 | 10.255.2.117 | 10.255.2.97 |
| DR        | DR-Controller01 | 221     | 10.237.2.21 | 10.255.5.42 | 10.255.5.33 | 10.255.5.112 | 10.255.5.97 |
| DR        | DR-Controller02 | 222     | 10.237.2.22 | 10.255.5.43 | 10.255.5.33 | 10.255.5.113 | 10.255.5.97 |
| DR        | DR-Controller03 | 223     | 10.237.2.23 | 10.255.5.44 | 10.255.5.33 | 10.255.5.114 | 10.255.5.97 |
| DR        | DR-Controller04 | 224     | 10.237.2.24 | 10.255.5.45 | 10.255.5.33 | 10.255.5.115 | 10.255.5.97 |
| DR        | DR-Controller05 | 225     | 10.237.2.25 | 10.255.5.46 | 10.255.5.33 | 10.255.5.116 | 10.255.5.97 |
| DR        | DR-Controller06 | 226     | 10.237.2.26 | 10.255.5.47 | 10.255.5.33 | 10.255.5.117 | 10.255.5.97 |

 

SD-WAN Controller IP Addressing - SD-WAN Validator

 

SD-WAN Validator IP Addressing

| Site Name | Hostname       | Site ID | System IP   | VPN 0 IP   | VPN 0 GW   | VPN 512 IP   | VPN 512 GW   |
|-----------|----------------|---------|-------------|------------|------------|--------------|--------------|
| DC        | DC-Validator01 | 111     | 10.237.1.11 | 10.255.2.4 | 10.255.2.1 | 10.255.2.118 | 10.255.2.97  |
| DC        | DC-Validator02 | 112     | 10.237.1.12 | 10.255.2.5 | 10.255.2.1 | 10.255.2.119 | 10.255.2.97  |
| DC        | DC-Validator03 | 113     | 10.237.1.13 | 10.255.2.6 | 10.255.2.1 | 10.255.2.120 | 10.255.2.97  |
| DC        | DC-Validator04 | 114     | 10.237.1.14 | 10.255.2.7 | 10.255.2.1 | 10.255.2.121 | 10.255.2.97  |
| DC        | DC-Validator05 | 115     | 10.237.1.15 | 10.255.2.8 | 10.255.2.1 | 10.255.2.122 | 10.255.2.97  |
| DC        | DC-Validator06 | 116     | 10.237.1.16 | 10.255.2.9 | 10.255.2.1 | 10.255.2.123 | 10.255.2.97  |
| DR        | DR-Validator01 | 211     | 10.237.2.11 | 10.255.5.4 | 10.255.5.1 | 10.255.5.118 | 10.255.5.118 |
| DR        | DR-Validator02 | 212     | 10.237.2.12 | 10.255.5.5 | 10.255.5.1 | 10.255.5.119 | 10.255.5.118 |
| DR        | DR-Validator03 | 213     | 10.237.2.13 | 10.255.5.6 | 10.255.5.1 | 10.255.5.120 | 10.255.5.118 |
| DR        | DR-Validator04 | 214     | 10.237.2.14 | 10.255.5.7 | 10.255.5.1 | 10.255.5.121 | 10.255.5.118 |
| DR        | DR-Validator05 | 215     | 10.237.2.15 | 10.255.5.8 | 10.255.5.1 | 10.255.5.122 | 10.255.5.118 |
| DR        | DR-Validator06 | 216     | 10.237.2.16 | 10.255.5.9 | 10.255.5.1 | 10.255.5.123 | 10.255.5.118 |

 

 
