kadsteph
Cisco Employee
 
 

 

Cisco Public



Cisco Software-Defined Access for Airports

 

Contents

 

Authors

Makesh Srinivasan
Customer Delivery Architect

Naveen Philip
Consulting Engineer

Nitesh Syal
Leader, Customer Delivery

Pavan Kuchinad
Consulting Engineer

 

SD-Access Overview

Managing a network is challenging in many ways due to manual configuration and the variety of tools and orchestration points involved. Manual operations are slow and error-prone, and these issues are exacerbated by a constantly changing environment with more users, devices, and applications. As users and new device types join the network, configuring user credentials and maintaining a consistent policy across the network becomes more complex. If policy is not consistent, there is the added complexity of maintaining separate policies for wired and wireless. As users move around the network, locating them and troubleshooting issues also become more difficult. Without adequate knowledge of who and what is on the network and how it is being used, network administrators cannot create endpoint inventories or map traffic flows, leaving them unable to properly control the network. The network cannot respond quickly to evolving business needs if changes must be made manually; such modifications take a long time and are error-prone. The traditional way of building a network does not cater to the needs of an evolving network and ever-growing security concerns.

Cisco® Software-Defined Access (SD-Access) is a solution within Cisco Digital Network Architecture (Cisco DNA) which is built on intent-based networking principles. Cisco SD-Access provides visibility-based, automated end-to-end segmentation to separate user, device, and application traffic without redesigning the underlying physical network. Cisco SD-Access automates user-access policy so organizations can make sure the right policies are established for any user or device with any application across the network. This is accomplished by applying unified access policies across LAN and WLAN, which creates a consistent user experience anywhere without compromising on security.

Benefits

  • Enhance visibility by using advanced analytics for user and device identification and compliance. Employ artificial intelligence and machine learning techniques to classify similar endpoints into logical groups.

  • Leverage policy analytics by a thorough analysis of traffic flows between endpoint groups and use it to define the right group-based access policies. Define, author, and enforce these policies using a simple and intuitive graphical interface.

  • Segment to secure by automatically setting up all wired and wireless network devices for granular two-level segmentation for complete zero-trust security and regulatory compliance.

  • Exchange operating policies and ensure consistency by utilizing Cisco’s intent-based networking multidomain architecture for enforcement throughout the access, WAN, and multi-cloud data center networks.

image2.png

Figure 1 SD-Access Overview


Scope and Audience for this Document

This deployment guide provides guidance for deploying SD-Access in an Airport network. It is a companion to the associated design and deployment guides for enterprise networks, which describe how to deploy the most common implementations of SD-Access.

This document discusses the implementation for Cisco Software-Defined Access (SD-Access) deployments for Airports. For the associated deployment guides, design guides, and white papers, refer to the following documents:


Implementation Overview

The SD-Access for an Airport deployment is based on the Cisco Software-Defined Access Design Guide: https://cs.co/sda-sdg. The design enables wired and wireless communications between devices in indoor and outdoor environments, as well as interconnection to the WAN and Internet edge at the network core.

This document provides implementation guidelines for a single-site SD-Access deployment. The validation topology showcases an example of a deployment site as described below.

This site in the deployment guide is connected directly to the fusion devices by IP transit links. Its node roles are divided so that the Cisco Catalyst 9500s act as the combined fabric border and control plane nodes, and Cisco Catalyst 9300s in a stacking configuration serve as the fabric edge nodes. In this implementation, the WLC is located in the shared services block.

Note: Border/control plane and edge node roles can be planned based on the SDA Compatibility Matrix.

For latency requirements from Cisco DNA Center to a fabric edge, refer to the Cisco DNA Center User Guide at the following URL:

https://www.cisco.com/c/en/us/support/cloud-systems-management/dna-center/products-user-guide-list.html

For network latency requirements from the AP to the WLC, refer to the latest Campus LAN and Wireless LAN

Design Guide at Cisco Design Zone:

https://www.cisco.com/c/en/us/solutions/design-zone/networking-design-guides/campus-wired-wireless.html#~campus-guides

Each site, building, floor, or geographic location has an enterprise access switch (for example, the Cisco Catalyst 9300) with at least two switches arranged in a stack. Ruggedized Cisco Industrial Ethernet (IE) switches are connected to the enterprise fabric edges as extended nodes or policy extended nodes and thus extend the enterprise network to the non-carpeted spaces. Figure 2 shows the validation topology.

image3.png

Figure 2 SD-Access Fabric Topology

Security policies are uniformly applied, which provides consistent treatment for a given service across the enterprise and Extended Enterprise networks. Controlled access is given to shared services and other internal networks by appropriate authorization profile assignments.

Application policies are applied to extended nodes using template functionality on Cisco DNA Center.


Reference Hardware/Software Matrix

Table 1 contains the reference hardware and software components used in this guide; it is not an exhaustive list.

Table 1 Reference Hardware/Software Matrix

Role / Platform                 | Part Number                                      | Software Version
Cisco DNA Center Appliance      | DN2-HW-APL (M5-based chassis)                    | 2.3.3.6
Cisco Identity Services Engine  | SNS-3695-K9                                      | 3.1
Cisco Wireless LAN Controller   | Cisco Catalyst 9800 Series Wireless Controllers  | IOS XE 17.6.4 or above
Fabric Border and Control       | Cisco Catalyst 9500 Series switches              | IOS XE 17.6.4 or above
Fabric Edge                     | Cisco Catalyst 9300 Series switches              | IOS XE 17.6.4 or above
Access Points                   | Cisco Catalyst 9100 Series Access Points         | IOS XE 17.6.4 or above
Extended Nodes                  | Cisco Catalyst IE4000 / IE5000 Series switches   | IOS 15.2(7)E1a or above

Notes:

  • The above list is only a reference for the hardware used in this guide. Refer to the SDA Compatibility Matrix for the supported hardware.

  • The fusion router used in this reference is a Catalyst 9300 Series switch; any device with support for BGP can be used as a fusion device.

  • A Cisco DNA Center 3-node cluster is recommended.


Airport Network

The Airport network serves various critical airport applications and acts as a managed service provider for airlines and concessionaires, including but not limited to the following:

image4.png

Figure 1 Airport Network Applications

  • Check-In Systems

  • Airport Operation Command Center User machines

  • Flight Information Display Systems

  • Workstations

  • Phones

  • Printers

  • Attendance Systems

  • Scanners

  • Kiosks

  • CCTV Cameras

  • CCTV Monitoring Stations

  • Access Control Systems


Requirements

Requirements in an Airport network include the following:

  • Digital transformation of airport network

  • Multi-tenant network

  • Increase operational efficiency

  • Improve security assurance

  • Enhance User/Travel Experience

  • Paving the way toward smart airports

  • Extend connectivity to rugged and outdoor areas

  • Meet compliance and regulatory goals


Airport Challenges

Some of the challenges seen in Airport networks include the following:

  • Need for end-to-end segmentation for security reasons

  • Compliance requirements

  • Day-to-day management of network services and infrastructure, along with security

  • Network- and service-level resiliency for a geographically dispersed network

  • Convergence of IT and OT

  • Control and complexity

  • Business process – people – tech – digital twins

Given the critical nature of the services delivered by Airports, the network architecture challenges they face are often quite unique compared to customers in other verticals. The following sections outline several of the most critical capabilities required by Airports when deploying a network.


Security Compliance Mandates

Airport systems need to protect highly sensitive personal records and information of passengers and are strictly bound by government regulations (for example, HIPAA in the United States and GDPR in the European Union). Smart connected Airport systems bring new opportunities and better performance, but also new risks as cyberattacks grow in number and sophistication. No computer system, including the systems used to fly aircraft or air traffic control, can ignore this fact, especially in view of the potentially devastating consequences of an attack.

Airport Systems and related entities must provide regulation-compliant wired and wireless networks. These networks must provide complete and constant visibility for network management and monitoring. Sensitive data and devices such as Airport Operational Data Base (AODB) servers must be protected so that malicious devices cannot compromise the network. The proliferation of the Internet of Things (IoT) devices and Bring Your Own Device (BYOD) policies is adding to the need for segmentation between different groups and also within the groups between different types of users and devices. This means that macro-segmentation and micro-segmentation capabilities must be top priorities for any Airport network architecture.

Finally, given that staff, Airlines, and passenger traffic all share the same network infrastructure, every group must be isolated from one another and restricted to only the resources they are permitted to access.


Mobility of Staff and Devices

Within the Airport environment, staff and equipment must often move between rooms and floors quickly. It is also quite common for devices to have static IP addresses, which means subnets must be stretched across the facility to allow the equipment to move anywhere it is needed. To increase operational efficiency, device and staff mobility should be completely dynamic, without network staff needing to constantly reconfigure networking infrastructure. Mobility must be available for devices and users whether they attach to the network via wired or wireless media.


Increased Reliance on IoT Endpoints

Airport networks are populated by a wide variety of devices in multiple locations. This makes locating and identifying all of the devices in a network time-consuming and tedious. In some cases, these devices will sit silently on the network and will not send any traffic unless they are first contacted by another device. In other cases, the network staff may not even know all of the types of devices in their network which makes onboarding an extremely difficult task. Modern security threats seek vulnerable points of entry, such as undetected or silent hosts, to exploit the network’s valuable resources. These devices must be identified and tracked to meet the secure framework of the network.


Resiliency and High Availability

Airport networks demand levels of high availability and resiliency beyond most other enterprises. The network is a key component in delivering access to key information and services, such as flight information, and it must be part of a wider resiliency and high availability strategy across IT. Any network solution must provide user and device access to critical resources in the event of loss of communication to the authentication infrastructure. To minimize service impact, strict network-level and application-level resiliency must exist.


Passenger Safety and Asset Tracking

Given the steadily increasing reliance on IoT endpoints in Airport networks, asset management (for example, baggage) is becoming more difficult as increasingly diverse devices become mobile and are shared across locations. There are times when asset tracking is crucial to ensure that an asset has not left a specific area (for example, when baggage is loaded or unloaded from a flight). The ability to track both passengers and devices in a single system is a critical requirement for every modern Airport network provider.


IT Network Operational Challenges

The combination of ever-increasing devices and security requirements on the network is putting immense operational pressure on Airport IT staff. These are manifested in the complexity of traditional IP-Based rule management and the struggle of managing those policies across large enterprise footprints. Timely troubleshooting is becoming more critical and difficult without onsite staff.


Cisco SD-Access Benefits for Airports

Some of the benefits of SD-Access for Airport networks include the following:

  • Unleash the power of data from the edge to gain operational insights and improve processes and systems

  • Enable new digital experiences for your customers and increase customer satisfaction

  • Generate new incremental revenue for your business by digitizing your airport

  • Manage the enterprise network centrally and reduce operating expenses (OpEx)

  • Simplify, secure, and control IT-run Airport environment


Design

Cisco SD-Access (Software-Defined Access) is a network architecture that provides a unified fabric for wired and wireless networks. It is designed to simplify network operations, improve security, and enhance user experience through automation and policy-based network management. The components, roles, and planes of Cisco SD-Access include:

Components of Cisco SD-Access:

Campus Fabric: It is the foundation of Cisco SD-Access and consists of network devices such as switches, routers, and wireless controllers that are part of the fabric. These devices are typically deployed in the campus network and form the underlay for the overlay network.

Fabric Edge Nodes: These are the network devices that connect end-user devices, such as PCs, laptops, and access points, to the campus fabric. They include access switches and wireless controllers that provide connectivity to end devices and enforce policies defined in the fabric.

Control Plane Nodes: These are the devices responsible for running the SD-Access control plane, which maintains the host tracking database that maps each endpoint to its current fabric edge location. Control plane nodes are often colocated with fabric border nodes, which connect the campus fabric to external networks.

Fabric Wireless Controller: It is a key component of Cisco SD-Access for managing wireless access points and wireless clients in the fabric. It provides wireless connectivity and enforces policies defined in the fabric.

Fabric Roles in Cisco SD-Access:

Fabric Edge Node: The Fabric Edge Node is responsible for connecting end-user devices, such as PCs, laptops, servers, and access points, to the campus fabric. It includes access switches and wireless controllers that provide wired and wireless connectivity to end devices. The Fabric Edge Node enforces policies defined in the fabric, such as segmentation and access control, and acts as the entry point for end devices into the fabric.

Colocated Border and Control Plane Node: A single device can act as both the fabric border node and the control plane node, as the Catalyst 9500s do in the topology validated in this guide. This colocation is common in small and medium fabric sites and reduces the number of devices that must be dedicated to fabric roles.

Fabric Border Node: The Fabric Border Node is responsible for connecting the campus fabric to external networks, such as the Internet, other campuses, or data centres. It provides the external connectivity for the fabric and acts as a gateway for traffic entering or leaving the fabric. The Fabric Border Node enforces policies defined in the fabric, such as access control and traffic filtering, at the border of the fabric.

Control Plane Node: The Control Plane Node is responsible for running the SD-Access control plane, which manages the overlay network. It hosts the LISP map server/map resolver functions that track where each endpoint is attached and answers location queries from fabric edge and border nodes so that traffic can be forwarded to the correct destination. The control plane node function can run on dedicated devices or be colocated with the fabric border nodes.

Fabric Wireless Controller: The Fabric Wireless Controller is responsible for managing wireless access points (APs) and wireless clients in the campus fabric. It provides wireless connectivity, enforces policies defined in the fabric, and manages wireless network operations, such as AP configuration, roaming, and security settings.

These different fabric device roles work together to create a unified fabric in Cisco SD-Access, providing automated, policy-based, and secure network access for wired and wireless devices in the campus network.

Planes of Cisco SD-Access:

Control Plane: It is responsible for managing the overlay network. Control plane nodes run the control plane protocols and maintain the endpoint-to-location mappings that drive the routing and forwarding decisions in the fabric.

Data Plane: It is responsible for forwarding data traffic between end devices in the fabric. Fabric edge nodes, including access switches and wireless controllers, perform data plane functions, such as switching, routing, and policy enforcement, based on the policies defined in the fabric.

Management Plane: It is responsible for managing the configuration, monitoring, and troubleshooting of the fabric. Fabric administrators and network administrators use management plane tools, such as Cisco DNA Center, to configure and manage the fabric, monitor its performance, and troubleshoot issues.

Overall, Cisco SD-Access comprises a distributed architecture with multiple components, roles, and planes that work together to provide automated, policy-based, and secure network access for wired and wireless devices in the campus network.

This section provides an overview of the topology used throughout this guide as well as the routing modalities used to provide IP reachability.

The validation topology represents a medium site as described in the companion Software-Defined Access Solution Design Guide. It shows a single building with multiple wiring closets that is part of a larger multiple-building network that aggregates each building’s core switches to a pair of super core routers. The shared services block contains a virtual switching system (VSS) Catalyst 6800/9600 switch providing access to the Wireless LAN Controllers, Cisco DNA Center, the Identity Services Engine, Windows Active Directory, and DHCP/DNS servers.

image3.png

Figure 2 Airport Network SD-Access Topology

 

Tech tip
For diagram simplicity and ease of reading, the intermediate nodes (distribution layer) are not shown in the topology.

image5.png

Figure 3 Airport Network SD-Access Routing Topology

The Airport network deployment uses static routes from the fusion device toward the firewall and WAN/Internet. This provides IP reachability between the fusion device, shared services, and the enterprise edge firewalls.
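As a hedged illustration only, the static routing described above might look like the following on the fusion device; the VRF names follow the examples in this guide, but the next-hop address and which VRFs need a default route are placeholder assumptions for a given deployment.

```
! Hypothetical sketch: per-VRF static default routes on the fusion device
! pointing at the enterprise edge firewall (198.51.100.1 is a placeholder)
ip route vrf AIRPORT_APP_VN 0.0.0.0 0.0.0.0 198.51.100.1
ip route vrf INFRASTRUCTURE_VN 0.0.0.0 0.0.0.0 198.51.100.1
```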

The building core switches operate as the fabric border and control plane nodes, creating the northbound boundary of the fabric site. Taking advantage of the LAN automation capabilities of Cisco DNA Center, the network infrastructure southbound of the core switches runs the Intermediate System to Intermediate System (IS-IS) routing protocol.

Between the fabric border and the fusion (super core), BGP is used. Routes from the existing enterprise network are redistributed into BGP on the enterprise edge firewall. Routes from the SD-Access fabric site are redistributed from IS-IS into BGP, and per-virtual-network default routes are advertised from BGP into IS-IS, allowing end-to-end IP reachability. Static routes are configured on the WAN/Internet routers toward the enterprise edge firewalls for reachability to the fabric subnets.
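The fusion side of the BGP handoff described above can be sketched as follows. This is an illustrative assumption, not the validated configuration: the AS numbers, neighbor address, and VRF name are placeholders, and Cisco DNA Center automates the border side of the handoff.

```
! Hypothetical sketch: fusion-side per-VRF eBGP peering toward the
! fabric border handoff interface (AS numbers/addresses are placeholders)
router bgp 65002
 address-family ipv4 vrf AIRPORT_APP_VN
  neighbor 10.255.0.1 remote-as 65001
  neighbor 10.255.0.1 activate
 exit-address-family
```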

IP prefixes for shared services must be available to both the fabric underlay and overlay networks while maintaining isolation among overlay networks. To maintain the isolation, VRF-lite extends from the fabric border nodes to a set of fusion routers. The peer device implements VRF route leaking using a BGP route-target import and export configuration. This is completed in later procedures, after the fabric roles are provisioned.
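A minimal sketch of the route-target leaking on the fusion device might look like the following. The `SHARED_SERVICES` VRF name and all RD/RT values are illustrative assumptions, not values from the validated deployment.

```
! Hypothetical sketch: leak shared-services routes into an airport VN on
! the fusion device via route-target import/export (values are placeholders)
vrf definition AIRPORT_APP_VN
 rd 65002:101
 address-family ipv4
  route-target export 65002:101
  route-target import 65002:101
  ! import the routes exported by the shared-services VRF
  route-target import 65002:999
 exit-address-family
!
vrf definition SHARED_SERVICES
 rd 65002:999
 address-family ipv4
  route-target export 65002:999
  ! import each airport VN that needs access to shared services
  route-target import 65002:101
 exit-address-family
```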

During initial discovery, BGP or static routes can be used for IP reachability between Cisco DNA Center and the fabric site. Once a seed device is discovered and managed by Cisco DNA Center, it is recommended practice not to manually add configuration, particularly any configuration that might be overridden through the automated configuration, placing the device in an unintended state. The introduction of the LISP Pub/Sub feature simplifies the communication between the border and control plane nodes.

Tech tip
In deployments with multiple points of redistribution between several routing protocols, using BGP for initial IP reachability northbound of the border nodes helps ensure continued connectivity after the fabric overlay is provisioned without the need for additional manual redistribution commands on the border nodes.

 

Airport Subsystems

The following Airport subsystems are typically present on an Airport network:

  • Modular Terminal Communications System (MTCS)

  • Building Management System (BMS)

  • Access Control System (ACS)

  • Closed-Circuit TV (CCTV)

  • Automated Flight Announcement System (AFAS)

  • Public Address System (PA)

  • Wi-fi

  • Videowall

  • Self-Baggage Drop (SBD)

  • Baggage Handling System (BHS)

  • Distributed Antenna System (DAS)

  • Lighting Management System (LMS)

  • Chiller Plant Manager

  • SCADA

  • First Bag Last Bag (FBLB)

  • Fire Alarm System (FAS)

  • Vertical & Horizontal Transport Systems (VHT)

  • Passenger boarding Bridge

  • TETRA Mobile Radio (TMR)

  • Internet Protocol TV (IPTV)

  • Radio Over IP (RoIP)

  • Information Broker (IB)

  • Airlines departure control system (DCS)

  • Airport Operational Database (AODB)

  • Beacon Infrastructure

  • Car Park Management System (CPMS)

  • Common User Passenger Processing System (CUPPS)

  • Common-Use Self Service (CUSS)

  • Electronic Gates

  • Electronic Point of Sales (EPoS)

  • Flight Information Display System (FIDS)

  • Information Kiosk (INK)

  • Interactive Voice Response System (IVRS)

  • Message Distribution System (MDS)

  • Preconditioned Air Units (PCA)

  • System Application Product (SAP)

  • Voice over IP (VoIP)

  • Visual Docking Guidance System (VDGS)


Design Considerations

Many Airports have physically segmented their networks to achieve consolidation while maintaining separation and isolation. Deploying multiple Virtual Networks is also an effective way to increase the Return on Investment (ROI) of assets and reduce the overall cost of implementation.

Airport Networks consists of Layer 3 Networks and Layer 2 networks.


Layer 3 Design

Virtual Networks (VNs) in the fabric are used to isolate IP reachability, or routing table visibility, between two segments or entities. VNs are essentially VRFs in traditional routing terminology: logically separated routing instances that, as of now, can reach another VN only via a fusion router. The fusion router is used to facilitate (leak routes for) communication between VNs (VRFs).

One of the important requirements for any Airport network is isolation between various departments and applications; this is taken care of using Virtual Networks (VNs). Based on business requirements, controlled communication between VNs can be allowed; this is done using a fusion router.

Below are some of the Virtual Networks (VNs) in an Airport Network:

  • AIRPORT_APP_VN

  • COMMUNICATION_VN

  • RETAILER_VN (L2 VN)

  • AIRLINE_VN (L2 VN)

  • INFRASTRUCTURE_VN

  • PHYSICAL_SECURITY_VN

  • PA_VN

  • WIRELESS_VN

Traffic flow within a VN is depicted below:

image6.png

 

Traffic flow between VNs via the firewall/fusion device is depicted below:

image7.png

 

Traffic flow for VNs with the gateway (firewall) outside the fabric is depicted below:

 

image8.png

 

 

 

 

All airport internal services/applications will use Layer 3 services.

System                                              | IP Pool        | Virtual Network (VN)
Modular Terminal Communications System (MTCS)       | 10.1.0.0/24    | COMMUNICATION_VN
Building Management System (BMS)                    | 10.2.0.0/24    | INFRASTRUCTURE_VN
Access Control System (ACS)                         | 10.3.0.0/24    | PHYSICAL_SECURITY_VN
Closed-Circuit TV (CCTV)                            | 10.4.0.0/24    | PHYSICAL_SECURITY_VN
Automated Flight Announcement System (AFAS)         | 10.5.0.0/24    | AIRPORT_APP_VN
Public Address System (PA)                          | 10.6.0.0/24    | PA_VN
Wi-Fi                                               | 10.7.0.0/24    | WIRELESS_VN
Videowall                                           | 10.8.0.0/24    | PHYSICAL_SECURITY_VN
Self-Baggage Drop (SBD)                             | 10.9.0.0/24    | AIRPORT_APP_VN
Baggage Handling System (BHS)                       | 10.10.0.0/24   | AIRPORT_APP_VN
Distributed Antenna System (DAS)                    | 10.11.0.0/24   | AIRPORT_APP_VN
Lighting Management System (LMS)                    | 10.12.0.0/24   | INFRASTRUCTURE_VN
Chiller Plant Manager                               | 10.13.0.0/24   | INFRASTRUCTURE_VN
SCADA                                               | 10.14.0.0/24   | AIRPORT_APP_VN
First Bag Last Bag (FBLB)                           | 10.15.0.0/24   | AIRPORT_APP_VN
Fire Alarm System (FAS)                             | 10.16.0.0/24   | INFRASTRUCTURE_VN
Vertical & Horizontal Transport Systems (VHT)       | 10.17.0.0/24   | AIRPORT_APP_VN
Passenger Boarding Bridge                           | 10.18.0.0/24   | AIRPORT_APP_VN
TETRA Mobile Radio (TMR)                            | 10.19.0.0/24   | AIRPORT_APP_VN
Internet Protocol TV (IPTV)                         | 10.20.0.0/24   | IPTV_VN
Radio Over IP (RoIP)                                | 10.21.0.0/24   | COMMUNICATION_VN
Information Broker (IB)                             | 10.22.0.0/24   | AIRPORT_APP_VN
Airlines Departure Control System (DCS)             | 10.23.0.0/24   | AIRPORT_APP_VN
Airport Operational Database (AODB)                 | 10.24.0.0/24   | AIRPORT_APP_VN
Beacon Infrastructure                               | 10.25.0.0/24   | COMMUNICATION_VN
Car Park Management System (CPMS)                   | 10.26.0.0/24   | INFRASTRUCTURE_VN
Common User Passenger Processing System (CUPPS)     | 10.27.0.0/24   | AIRPORT_APP_VN
Common-Use Self Service (CUSS)                      | 10.28.0.0/24   | AIRPORT_APP_VN
Electronic Gates                                    | 10.29.0.0/24   | AIRPORT_APP_VN
Electronic Point of Sales (EPoS)                    | 10.30.0.0/24   | AIRPORT_APP_VN
Flight Information Display System (FIDS)            | 10.31.0.0/24   | AIRPORT_APP_VN
Information Kiosk (INK)                             | 10.32.0.0/24   | INFRASTRUCTURE_VN
Interactive Voice Response System (IVRS)            | 10.33.0.0/24   | COMMUNICATION_VN
Message Distribution System (MDS)                   | 10.34.0.0/24   | COMMUNICATION_VN
Preconditioned Air Units (PCA)                      | 10.35.0.0/24   | INFRASTRUCTURE_VN
System Application Product (SAP)                    | 10.36.0.0/24   | INFRASTRUCTURE_VN
Voice over IP (VoIP)                                | 10.37.0.0/24   | COMMUNICATION_VN
Visual Docking Guidance System (VDGS)               | 10.38.0.0/24   | AIRPORT_APP_VN
Multicast Pool 1 (Loopbacks) – PHYSICAL_SECURITY_VN | 192.168.1.0/24 | -
Multicast Pool 2 (Loopbacks) – IPTV_VN              | 192.168.2.0/24 | -
Multicast Pool 3 (Loopbacks) – PA_VN                | 192.168.3.0/24 | -

 

Note: The multicast pools are used for loopback addresses on every fabric edge. The /24 considered here can support a maximum of 254 fabric edge nodes; if the network is larger, consider using a larger subnet.
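For illustration, a loopback drawn from one of these pools on a single fabric edge might look like the following. The interface number, description, and address are assumptions; in practice, Cisco DNA Center provisions these loopbacks through its multicast workflow.

```
! Hypothetical sketch: a multicast source loopback on one fabric edge,
! drawn from Multicast Pool 1 (values are placeholders)
interface Loopback4100
 description PIM source - PHYSICAL_SECURITY_VN
 vrf forwarding PHYSICAL_SECURITY_VN
 ip address 192.168.1.11 255.255.255.255
 ip pim sparse-mode
```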


Layer 2 Design

In an SD-Access environment, unlike traditional networks, the gateway of a particular VLAN is present right at the edge/access switch, not on the core/distribution switch. Because an anycast gateway is present on the edge, the Layer 2 boundary for that client ends at the access layer itself, eliminating the traditional problems with STP and allowing seamless roaming of end-client devices across the fabric.
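Conceptually, the anycast gateway is the same SVI, with the same IP address and virtual MAC, provisioned on every fabric edge. The sketch below is a hedged example only: the VLAN number, dynamic-EID name, and addresses are placeholders, and Cisco DNA Center generates this configuration automatically.

```
! Hypothetical sketch: an anycast gateway SVI as provisioned identically
! on every fabric edge (same IP and virtual MAC on each; placeholders)
interface Vlan1021
 description Configured from Cisco DNA-Center
 mac-address 0000.0c9f.f45c
 vrf forwarding AIRPORT_APP_VN
 ip address 10.5.0.1 255.255.255.0
 lisp mobility AFAS_Pool-IPV4
```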

In an Airport network, multiple customers are served by providing infrastructure for the appropriate services. In these cases, the IP space is not under the control of the Airport; only a VLAN is provided, and the gateway resides outside the Airport network/fabric. The traffic flow inside the fabric therefore has to be Layer 2 only.

Airline back-office networks and retailers are the major users of Layer 2-only networks in an Airport network. In most cases, the gateway for these resides on a firewall.
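As a hedged sketch, the Layer 2 handoff for such a tenant VLAN at the fabric border could look like the following; the interface and VLAN numbers are illustrative placeholders, and the firewall holding the gateway is outside the fabric.

```
! Hypothetical sketch: Layer 2 handoff at the fabric border, trunking an
! airline VLAN toward the firewall that holds its gateway (placeholders)
interface TenGigabitEthernet1/0/10
 description L2 handoff to enterprise firewall
 switchport mode trunk
 switchport trunk allowed vlan 101
```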

image8.png

 

Airline & Retail L2 Network details

 

Customer   | VLAN     | Mapped Virtual Network (VN)
Airline 1  | VLAN 101 | AIRLINE_VN
Airline 2  | VLAN 102 | AIRLINE_VN
Airline 3  | VLAN 103 | AIRLINE_VN
Airline 4  | VLAN 104 | AIRLINE_VN
Retailer 1 | VLAN 201 | RETAILER_VN
Retailer 2 | VLAN 202 | RETAILER_VN
Retailer 3 | VLAN 203 | RETAILER_VN

 

Wireless Design

The Airport wireless network is a highly mobile and dynamic environment. Wireless has become the primary access medium for a wide range of business-critical devices in an airport. A properly designed and deployed SD-Access wireless network is required, supporting the following wireless architectures:

  • CUWN Wireless Over The Top (OTT)

  • SD-Access Wireless (also referred to as Fabric Enabled Wireless)

  • Mixed-Mode (Fabric and Non-Fabric on the same WLC)

Wireless is a major service in an airport network and enhances the passenger experience in the Airport.


Wireless Over the Top (OTT) of SDA Fabric Architecture

image9.png

Wireless Over The Top (OTT)

  • Works with the existing WLC/AP IOS/AireOS.

  • Support for SD-Access incompatible devices.

  • Used for Migration scenarios.

  • CAPWAP for Control Plane and Data Plane

  • SDA Fabric is just a transport

  • Supported on any WLC/AP software and hardware

  • Only Centralized mode is supported

When to use Wireless Over The Top (OTT)

  • No SDA advantages for wireless

  • Migration step to full SD-Access

  • Customer wants/needs to first migrate wired (different Ops teams managing wired and wireless, getting familiar with fabric, different buying cycles, etc.) and leave wireless “as it is.”

  • Customer cannot migrate to fabric yet (Older AP, needs to certify the software, etc.)

With OTT-based wireless deployments, wireless control, management, and data plane traffic traverse the fabric using a CAPWAP tunnel between the APs and the WLC. This CAPWAP tunnel uses the Cisco SD-Access fabric as a transport network.


SD-Access Wireless

image10.png

Airports can integrate wireless natively into SD-Access using two additional components—fabric wireless controllers and fabric mode APs. Supported Cisco Wireless LAN Controllers are configured as fabric wireless controllers to communicate with the fabric control plane, registering L2 client MAC addresses, SGT, and L2 VNI information. The fabric mode APs are associated with the fabric wireless controller and configured with fabric-enabled SSIDs.

In SD-Access Wireless, the control plane is centralized: a CAPWAP tunnel is maintained between the APs and the WLC. The main difference is that the data plane is distributed using VXLAN directly from the fabric-enabled APs. The WLC and APs are integrated into the fabric, and the Access Points connect to the fabric overlay (endpoint ID space) network as “special” clients.

Some of these benefits are:

  • Centralized wireless control plane: the same innovative RF features that Cisco offers today in Cisco Unified Wireless Network (CUWN) deployments are leveraged in SD-Access Wireless as well. Wireless operations stay the same as with CUWN in terms of RRM, client onboarding, client mobility, and so on, which simplifies IT adoption.

  • Optimized Distributed Data Plane: The data plane is distributed at the edge switches for optimal performance and scalability without the hassles usually associated with distributing traffic (spanning VLANs, subnetting, large broadcast domains, etc.).

  • Seamless L2 Roaming everywhere: SD-Access Fabric allows clients to roam seamlessly across the campus while retaining the same IP address.

  • Simplified guest and mobility tunnelling: An anchor WLC is no longer needed; guest traffic can go directly to the DMZ without hopping through a foreign controller.

  • Policy simplification: SD-Access breaks the dependencies between policy and network constructs (IP addresses and VLANs), simplifying the way policies are defined and implemented for both wired and wireless clients.

  • Segmentation made easy: Segmentation is carried end to end in the Fabric and is hierarchical, based on Virtual Networks (VNs) and Scalable Group Tags (SGTs). The same segmentation policy is applied to both wired and wireless users.

The following are the key aspects of SD-Access Wireless to consider for airport deployments:

  • All SSIDs should be fabric-enabled SSIDs - Corporate, Guest, Passenger, and Tenant SSIDs

  • Fabric-enabled APs terminate VXLAN directly on the Fabric Edge

  • The fabric-enabled WLC needs to be physically located at the same site as its APs

  • For wireless, the client MAC address is used as the EID

  • The WLC interacts with the Host Tracking DB on the Control Plane node to register client MAC addresses with SGT and Virtual Network information

  • The WLC is responsible for updating the Host Tracking DB with roaming information for wireless clients

  • The VN information is mapped to a VLAN on the Fabric Edge nodes (FEs)

Note: Overlapping IP pools are not supported

All these advantages can be achieved at any airport using Fabric Enabled Wireless (FEW) with best-in-class, future-ready Wi-Fi 6 Access Points (APs) and Cisco Catalyst Wireless LAN Controllers.

Mixed-Mode (Fabric and Non-Fabric on same WLC)

Airports can also choose mixed-mode, in which fabric and non-fabric wireless co-exist on the same WLC.

  • Allows the use of both SD-Access fabric wireless and OTT on the same WLC/AP.

  • Can be a migration path to full SD-Access Wireless.

  • Supports the need for some SSIDs to remain CUWN OTT.

  • Supported for greenfield deployments.

image11.png


Security

Network segmentation is essential for protecting critical business assets. In SD-Access, security is integrated into the network using segmentation. Segmentation is a method used to separate specific groups of users or devices from other groups for security purposes. Segmentation within SD-Access is done using:

  • Macro Segmentation - Virtual Networks (VN)

  • Micro Segmentation – Scalable Group Tags (SGT)

Macro Segmentation

SD-Access macro segmentation can be described as the process of splitting a single large network with a single routing table into a number of smaller logical networks (segments/VNs), providing isolation between segments, minimizing the attack surface, and introducing enforcement points between segments. By providing two levels of segmentation, SD-Access makes a secure network deployment possible for airports while remaining the simplest approach for airport networks to understand, design, implement, and support. In the SD-Access Fabric, information identifying the virtual network is carried in the VXLAN network identifier (VNI) field, and the scalable group tag (SGT) is carried in the Group Policy ID field of the VXLAN-GPO header.

Macro segmentation logically separates a network topology into smaller virtual networks, using a unique network identifier and separate forwarding tables. This is instantiated as a Virtual Routing and Forwarding (VRF) instance on switches or routers and is referred to as a Virtual Network (VN) in Cisco DNA Center.

A Virtual Network (VN) is a logical network instance within the SD-Access fabric, providing Layer 2 or Layer 3 services and defining a Layer 3 routing domain. As described above, within the SD-Access fabric, information identifying the virtual network is carried in the VXLAN Network Identifier (VNI) field within the VXLAN header. The VXLAN VNI is used to provide both Layer 2 (L2 VNI) and Layer 3 (L3 VNI) segmentation.

Within the SD-Access fabric, LISP is used to provide control plane forwarding information. LISP instance ID provides a means of maintaining unique address spaces in the control plane and this is the capability in LISP that supports virtualization. External to the SD-Access fabric, at the SD-Access border, the virtual networks map directly to VRF instances, which may be extended beyond the fabric. The table below provides the terminology mapping across all three technologies.

Cisco DNA Center        Switch/Router Side       LISP
Virtual Network (VN)    VRF                      Instance ID
IP Pool                 VLAN/SVI                 EID Space
Underlay Network        Global Routing Table     Global Routing Table
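To illustrate how these three terms relate, the fragment below is a minimal sketch (placeholder names and IDs only, not a Cisco DNA Center-pushed configuration) showing a VN instantiated as a VRF, tied to a LISP instance ID, with an IP pool realized as a VLAN/SVI in the EID space:

[Fabric node]
! Cisco DNA Center VN "<vrf name>" is instantiated as a VRF
vrf definition <vrf name>
 address-family ipv4
! ...and mapped to a LISP instance ID in the control plane
router lisp
 instance-id <L3 VNI instance-id>
  service ipv4
   eid-table vrf <vrf name>
! The IP pool appears as a VLAN/SVI carrying the EID space
interface Vlan<vlan-id>
 vrf forwarding <vrf name>
 ip address <gateway ip> <subnet mask>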


Micro Segmentation

Cisco Group Based Policy balances the demands for agility and security without the operational complexity and difficulty of deploying into existing environments seen with traditional segmentation. Endpoints are classified into groups that can be used anywhere on the network in both fabric and non-fabric Cisco environments. This allows us to decouple the segmentation policies from the underlying network infrastructure.

By classifying systems using human-friendly logical groups, security rules can be defined using these groups, not IP addresses. Controls using these endpoint roles are more flexible and much easier to manage than using IP address based controls. Scalable Groups, aka Security Groups (SG), can indicate the role of the system or person, the type of application and server hosts, the purpose of an IoT device, or the threat-state of a system, which IP addresses alone cannot. These scalable groups can simplify firewall and next-gen firewall rules, Web Security Appliance policies and the access control lists used in switches, WLAN controllers, and routers. Group Based Policy is much easier to enable and manage than VLAN-based segmentation and avoids the associated processing impact on network devices.

image12.png

 

In SD-Access, some enhancements to the original VXLAN specifications have been added, most notably the use of scalable group tags (SGTs). This new VXLAN format is currently an IETF draft known as Group Policy Option (or VXLAN-GPO).

image13.png

 

Fusion Router / Firewall

Segmentation within SD-Access takes place at both a macro and a micro level through virtual networks and SGTs, respectively. Virtual networks are completely isolated from one another within the SD-Access fabric, providing macro-segmentation between endpoints within one VN from other VNs. By default, all endpoints within a virtual network can communicate with each other. Because each virtual network has its own routing instance, an external, non-fabric device known as a fusion router or firewall is required to provide the inter-VRF forwarding necessary for communications between the VNs.

image14.png
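As a sketch only, inter-VN communication on a fusion device is commonly achieved with route-target leaking between VRFs. The fragment below assumes two illustrative VRFs - AIRPORT_APP_VRF (following the examples in this document) and a hypothetical SHARED_SERVICES_VRF - and shows each VRF importing the other's routes; the actual fusion configuration varies by platform and design:

[Fusion device]
vrf definition SHARED_SERVICES_VRF
 rd 1:4099
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4104
! import shared services routes into the application VRF
vrf definition AIRPORT_APP_VRF
 rd 1:4104
 address-family ipv4
  route-target export 1:4104
  route-target import 1:4099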

Firewalls that use SGTs in their rules are called Scalable Group Firewalls (SGFWs). SGFWs receive only the group names and scalable group tag values from ISE; they do not receive the actual policies/rules. Unlike switches, where SGACLs are configured in Cisco DNA Center and deployed by ISE, SGT-based rule definition is performed locally on the SGFW through either the CLI or another management tool. Using an SGFW is optional, but when used it provides tight integration for end-to-end security and avoids the disadvantage of IP address-based policy, as IP addresses can change over time when DHCP is used.


Multicast

Multicast is supported in both the Overlay Virtual Networks and the physical Underlay networks in SD-Access.

The multicast source can reside either outside the Fabric or inside the Fabric Overlay. Multicast receivers are commonly directly connected to Edge Nodes or Extended Nodes; however, multicast receivers can also be outside of the Fabric Site if the source is in the Overlay. PIM Any-Source Multicast (PIM-ASM) and PIM Source-Specific Multicast (PIM-SSM) are supported.

Multicast Design options

SD-Access supports two different transport methods for forwarding multicast.

  • Head-End Replication - uses the Overlay

  • Native Multicast - uses the Underlay

Head-End Replication

Head-End Replication (or Ingress Replication) is performed either by the multicast first-hop router (FHR) when the multicast source is in the Fabric Overlay, or by the Border Nodes when the source is outside the Fabric Site.

Native Multicast

Native multicast does not require the ingress Fabric Node to perform unicast replication. Rather, the whole Underlay, including Intermediate Nodes, is used to do the replication. To support native multicast, the FHRs, Last-Hop Routers (LHRs), and all network infrastructure between them must be enabled for multicast.
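As an illustrative sketch (interface names are placeholders), native multicast requires multicast routing and PIM enabled in the global routing table on the Underlay nodes, with SSM for the underlay transport groups:

[Underlay node]
ip multicast-routing
ip pim ssm default
interface Loopback0
 ip pim sparse-mode
interface TenGigabitEthernet1/0/1
 ! placeholder underlay link
 ip pim sparse-mode

This aligns with the LISP core-group range shown later in this document, which falls within the default SSM range (232.0.0.0/8).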

Multicast at Scale in Airports

Due to technology changes and the proliferation of new capabilities in the client endpoints that users bring into the network, Cisco SD-Access must address multicast at scale in airport networks.

Native Multicast enables efficient delivery of multicast packets within the fabric and the workflow in Cisco DNAC makes it easier for the administrator to choose the right options and automate the intent.

In the airport network SD-Access Fabric, multicast forwarding uses an Any-Source Multicast (ASM) design with the Native Multicast option for scale. Within each Virtual Network, a Rendezvous Point (RP) outside the Fabric is selected as the RP the clients use to join the tree. Only the first multicast packet is forwarded to the RP; subsequent packets are forwarded to the destination using the shortest-path tree (SPT). The concepts of Shared Tree and Source Tree are still in operation.

Note:

  • Any-Source Multicast (ASM) is used when the multicast endpoints don’t support IGMPv3. Source Specific Multicast (SSM) can be used when IGMPv3 is supported.

  • The multicast IP pool is used to assign a loopback address to every Fabric Edge node. The /24 considered here can support a maximum of 255 fabric edge nodes; if the network is larger, consider using a larger subnet.


Deploy


Implementation Prerequisites

This document is a companion of the CVDs: Software-Defined Access & Cisco DNA Center Management Infrastructure, Software-Defined Access Medium and Large Site Fabric Provisioning, and Software-Defined Access for Distributed Campus. This document assumes the user is already familiar with the prescribed implementation and the following tasks are completed:

  1. Cisco DNA Center installation

  2. ISE nodes installation

  3. Integration of ISE with Cisco DNA Center

  4. Discovery and provision of network infrastructure

  5. Fabric domain creation

  6. Fabric domain role provision for border, control, and edge nodes

  7. WLC installation per fabric site

  8. Configuration of fabric Service Set Identifier (SSID) and wireless profiles

  9. Provisioning WLC with SSIDs and addition to the fabric

Tip: When implementing SD-Access Wireless, one WLC is required per fabric site.

For further information, refer to the following documents for implementation details:

CVD Software-Defined Access & Cisco DNA Center Management Infrastructure

CVD Software-Defined Access Medium and Large Site Fabric Provisioning

CVD Software-Defined Access for Distributed Campus


Deployment considerations


L2 only Networks - Tenants

In this deployment, the Fabric logically works as an L2 switch. In an airport network, tenants like airlines and concessionaires are independent networks and require the airport network to provide a transparent L2 service; in such scenarios, the airport network acts as an L2 service provider for these tenants. These tenants have their default gateways on an external firewall/router. Firewall traffic inspection is a common security and compliance requirement in many networks. Today, overlapping IP address pools are supported in the wired Fabric only; overlapping IP address pools are not supported in wireless (OTT/FEW).

image15.png

Figure 3 Connectivity for tenant network

NOTE: The L2 Border (L2B) can be a standalone switch, a StackWise Virtual pair, or a switch stack. Only one L2B per VNI is supported.

image16.png

 

The configurations below are pushed by Cisco DNA Center and are shown for reference only.

Procedure 1. L2 Only VLAN in the Virtual Network

Step 1. Add the L2 Only VLAN to the Virtual Network and deploy it using the workflow

Example Border/Edge Node Configuration

[Border]
router lisp
 eid-record instance-id <instance-id> any-mac

[Edge]
no ipv6 mld snooping vlan <vlan-id>
no ip igmp snooping vlan <vlan-id>
cts role-based enforcement vlan-list <vlan-id>
vlan <vlan-id>
 name <vlan-name>
exit
router lisp
 instance-id-range <instance-id> override
  service ethernet
   eid-table vlan <vlan-id>
   database-mapping mac locator-set <rloc>
 !
 instance-id <instance-id>
  service ethernet
   eid-table vlan <vlan-id>
   broadcast-underlay 239.0.17.1
   flood arp-nd
   flood unknown-unicast
  exit-service-ethernet
 exit-instance-id

Step 2. Add the L2 Only VLAN to the L2 Handoff Border

Example L2 Border Configuration

[L2 Handoff Border]
no ipv6 mld snooping vlan <vlan id>
no ip igmp snooping vlan <vlan id>
cts role-based enforcement vlan-list <vlan id>
vlan <vlan id>
 name <vlan name>
exit
router lisp
 instance-id <instance-id>
  service ethernet
   eid-table vlan <vlan id>
   broadcast-underlay 239.0.17.1
   flood arp-nd
   flood unknown-unicast
  exit-service-ethernet
 exit-instance-id

[L2 Handoff Border]
no ipv6 mld snooping vlan 100
no ip igmp snooping vlan 100
cts role-based enforcement vlan-list 100
vlan 100
 name CUSS
exit
router lisp
 instance-id 6186
  service ethernet
   eid-table vlan 100
   broadcast-underlay 239.0.17.1
   flood arp-nd
   flood unknown-unicast
  exit-service-ethernet
 exit-instance-id


Example Border/Edge Node Configuration

[Border]
router lisp
 eid-record instance-id 6186 any-mac

[Edge]
no ipv6 mld snooping vlan 100
no ip igmp snooping vlan 100
cts role-based enforcement vlan-list 100
vlan 100
 name CUSS
exit
router lisp
 instance-id-range 6186 override
  service ethernet
   eid-table vlan 100
   database-mapping mac locator-set rloc_c2600e62-d09f-4f05-b344-5ba8c24962e5
 !
 instance-id 6186
  service ethernet
   eid-table vlan 100
   broadcast-underlay 239.0.17.1
   flood arp-nd
   flood unknown-unicast
  exit-service-ethernet
 exit-instance-id


Segmentation:

Macro Segmentation: One of the preferred segmentation methods for airport networks is to isolate user endpoints (such as PCs, tablets, and IP phones) from the airport subsystems and enterprise IoT endpoints by placing them in different virtual routing and forwarding instances (VRFs). SD-Access offers the flexibility for macro-segmentation of endpoints in different VRFs, all of which can be provisioned in the network from Cisco DNA Center. The number of VRFs supported in the Fabric today is about 256; however, the recommendation is to keep this as small as required.

image17.png

 

Procedure 2. Create Virtual Network

Step 1. Create the Virtual Network (VN) using the workflow and save it.

image18.png

 

 

 

image19.png

 

 

 

Step 2. Assign the created VN to the Fabric.

Step 3. Add the VN to the Fabric under Host Onboarding

image20.png

 

 

 

Note: INFRA_VN is only for Access Points and Extended Nodes and is mapped to the Global Routing Table. Because the WLC sits outside the fabric, the border node is responsible for providing reachability between the WLC management interface subnet and the APs’ management IP pool, so that the CAPWAP tunnel can form and the AP can register to the WLC.

Step 4. Add the necessary IP Pools to the VN as required and deploy it.

image21.png

 

 

image22.png

 

 

 

Step 5. Border & Edge Configuration

Example Border/Edge Node Configuration

[Border]

vrf definition <vrf name>

rd 1:<L3 VNI instance-id>

address-family ipv4

route-target import 1:<L3 VNI instance-id>

route-target export 1:<L3 VNI instance-id>

interface Loopback<vlan-id>

description Loopback Border

vrf forwarding <vrf name>

ip address <Loopback ip> 255.255.255.255

exit

router bgp <AS number>

address-family ipv4 vrf <vrf name>

bgp aggregate-timer 0

aggregate-address <Subnet> <subnet mask> summary-only

redistribute lisp metric 10

network <Loopback ip> mask 255.255.255.255

router lisp

instance-id <L3 VNI instance-id>

service ipv4

eid-table vrf <vrf name>

route-export site-registrations

distance site-registrations 250

map-cache site-registration

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

site site_uci

authentication-key ******

eid-record instance-id <L3 VNI instance-id> 0.0.0.0/0 accept-more-specifics

eid-record instance-id <L3 VNI instance-id> <Subnet>/<mask> accept-more-specifics

eid-record instance-id <L2 VNI instance-id> any-mac

[Edge]

vrf definition <vrf name>

address-family ipv4

cts role-based enforcement vlan-list <vlan id>

vlan <vlan id>

name <vlan name>

interface Vlan<vlan id>

no lisp mobility liveness test

no ip redirects

mac-address 0000.0c9f.f64d

description Configured from Cisco DNA-Center

vrf forwarding <vrf name>

ip address <L3 Subnet ip> 255.255.255.0

ip helper-address vrf <vrf name> <DHCP IP>

ip route-cache same-interface

lisp mobility <vlan name>-IPV4

router lisp

instance-id-range <L2 VNI instance-id> override

remote-rloc-probe on-route-change

service ethernet

eid-table vlan <vlan id>

database-mapping mac locator-set <rloc>

instance-id <L2 VNI instance-id>

service ethernet

eid-table vlan <vlan id>

exit-service-ethernet

exit-instance-id

instance-id <L3 VNI instance-id>

dynamic-eid <vlan name>-IPV4

database-mapping <Subnet>/<mask> <rloc>

exit-dynamic-eid

service ipv4

eid-table vrf <vrf name>

map-cache 0.0.0.0/0 map-request

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

ip dhcp snooping vlan <vlan id>

!

Example Border/Edge Node Configuration

[Border]

vrf definition AIRPORT_APP_VRF

rd 1:4104

address-family ipv4

route-target import 1:4104

route-target export 1:4104

!

interface Loopback98

description Loopback Border

vrf forwarding AIRPORT_APP_VRF

ip address 10.9.14.1 255.255.255.255

exit

router bgp 65002

address-family ipv4 vrf AIRPORT_APP_VRF

bgp aggregate-timer 0

aggregate-address 10.9.14.0 255.255.255.0 summary-only

redistribute lisp metric 10

network 10.9.14.1 mask 255.255.255.255

router lisp

instance-id 4104

service ipv4

eid-table vrf AIRPORT_APP_VRF

route-export site-registrations

distance site-registrations 250

map-cache site-registration

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

site site_uci

authentication-key ******

eid-record instance-id 4104 0.0.0.0/0 accept-more-specifics

eid-record instance-id 4104 10.9.14.0/24 accept-more-specifics

eid-record instance-id 6186 any-mac

[Edge]

vrf definition AIRPORT_APP_VRF

address-family ipv4

cts role-based enforcement vlan-list 98

vlan 98

name BRS

exit

interface Vlan98

no lisp mobility liveness test

no ip redirects

mac-address 0000.0c9f.f64d

description Configured from Cisco DNA-Center

vrf forwarding AIRPORT_APP_VRF

ip address 10.9.14.1 255.255.255.0

ip helper-address vrf AIRPORT_APP_VRF 10.16.255.3

ip route-cache same-interface

lisp mobility BRS-IPV4

exit

router lisp

instance-id-range 6186 override

remote-rloc-probe on-route-change

service ethernet

eid-table vlan 98

database-mapping mac locator-set rloc_b4614cb7-c156-47f1-bc50-01733dbd2c52

instance-id 6186

service ethernet

eid-table vlan 98

exit-service-ethernet

exit-instance-id

instance-id 4104

dynamic-eid BRS-IPV4

database-mapping 10.9.14.0/24 locator-set rloc_b4614cb7-c156-47f1-bc50-01733dbd2c52

exit-dynamic-eid

service ipv4

eid-table vrf AIRPORT_APP_VRF

map-cache 0.0.0.0/0 map-request

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

ip dhcp snooping vlan 98

!

Micro-segmentation: Within the scope of a single VRF, further segmentation is required in many cases, such as:

  • Placing various departments in different groups, e.g., within AIRPORT_APP_VN (VRF)

  • Placing BRS and CUPPS in different groups in AIRPORT_APP_VRF

image23.png

 


image24.png

 

 

 

For such requirements, in the traditional network architecture, the only means to segment was by placing groups in different subnets enforced by IP ACLs. Cisco SD-Access, in addition to providing the flexibility of using different subnets, provides micro-segmentation, i.e., the ability to keep groups in the same subnet, in a more user- and endpoint-centric approach.

Referring to the BRS and CUPPS example above, each group can still be placed in the same subnet. However, by leveraging dynamic authorization, they can be assigned different Scalable Group Tags (SGTs) by ISE based on their authentication credentials. These SGT-based rules can then be enforced by Security Group Access Control Lists (SGACLs).
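For illustration only, the enforcement programmed onto fabric switches takes the form of SGACLs. The fragment below is a hand-written sketch using assumed tag values (17 for BRS, 18 for CUPPS) and an assumed contract that denies traffic between the two groups; in practice these policies are authored in Cisco DNA Center and downloaded from ISE rather than configured manually:

[Edge]
! assumed SGT values: BRS = 17, CUPPS = 18
ip access-list role-based DENY_BRS_CUPPS
 deny ip
cts role-based permissions from 17 to 18 ipv4 DENY_BRS_CUPPS
cts role-based enforcement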

Procedure 3. Create Scalable Group

Step 1. Create the Scalable Group (SG) using the workflow and save it.

image25.png

 

 

 

Step 2. Create Policies to control traffic.

image26.png

 

 

image27.png

 

 

 

Step 3. Select your Source Scalable Group to create Policy

Step 4. Select your Destination Scalable Group to create Policy

image28.png

 

 

 

 

image29.png

 

 

 

 

Step 5. Select your Contract to create Policy

Step 6. Save the Policy to control traffic

image30.png

 

 

Step 7. View the created Policy

image31.png

 

 

 


Multicast for CCTV / IPTV


Procedure 4. Configure Multicast

Step 1. Enable Multicast with Native Multicast

image32.png

 

Step 2. Select Virtual Networks for which multicast is to be enabled

 

image33.png

 

 

 

Step 3. Select the Multicast IP Pool for the Virtual Network

image34.png

 

 

 

Note: A Multicast IP Pool for each VN must be created as a prerequisite

Step 4. Select Multicast deployment type for the Virtual Network

image35.png

 

 

Note: The ASM multicast type is selected when the multicast endpoints don’t support IGMPv3

 

image36.png

 

 

 

Note: SSM Multicast type is selected where all multicast endpoints support IGMPv3.

 

Step 5. Select the RP type for the VN

 

image37.png

 

 

 

 

Step 5a. Select SSM Multicast IP group address for the VN

image38.png

 

Step 6. Select the RP device for the Virtual Network

image39.png

 

 

Step 7. Select the RP device to utilize for the Virtual Network

image40.png

 

Step 8. Save and Deploy Multicast for the Virtual Network

image41.png

 

image42.png

 

ASM

Example Border Node Configuration

[Border]

ip multicast-routing vrf <vrf name>

interface Loopback<L3 instance id>

vrf forwarding <vrf name>

ip address <Multicast Loopback ip> 255.255.255.255

ip pim sparse-mode

interface LISP0.<L3 instance id>

ip pim lisp transport multicast

ip pim lisp core-group-range 232.0.0.1 1000

ip pim vrf <vrf name> rp-address <Multicast Loopback ip>

ip pim vrf <vrf name> register-source Loopback<L3 instance id>

ip pim vrf <vrf name> ssm default

router bgp <BGP AS#>

address-family ipv4 vrf <vrf name>

aggregate-address <Multicast Subnet> <Subnet Mask> summary-only

network <Multicast Loopback ip> mask 255.255.255.255

router lisp

instance-id <L3 instance id>

service ipv4

eid-table vrf <vrf name>

route-export site-registrations

distance site-registrations 250

map-cache site-registration

database-mapping <Multicast Loopback ip>/32 locator-set <rloc>

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

site site_uci

authentication-key ******

eid-record instance-id <L3 instance id> <Multicast subnet>/<Mask> accept-more-specifics

[Border]

ip multicast-routing vrf AIRPORT_APP_VRF

interface Loopback4102

vrf forwarding AIRPORT_APP_VRF

ip address 172.16.1.1 255.255.255.255

ip pim sparse-mode

interface LISP0.4102

ip pim lisp transport multicast

ip pim lisp core-group-range 232.0.0.1 1000

ip pim vrf AIRPORT_APP_VRF rp-address 172.16.1.1

ip pim vrf AIRPORT_APP_VRF register-source Loopback4102

ip pim vrf AIRPORT_APP_VRF ssm default

router bgp 65001

address-family ipv4 vrf AIRPORT_APP_VRF

aggregate-address 172.16.1.0 255.255.255.0 summary-only

network 172.16.1.1 mask 255.255.255.255

router lisp

instance-id 4102

service ipv4

eid-table vrf AIRPORT_APP_VRF

route-export site-registrations

distance site-registrations 250

map-cache site-registration

database-mapping 172.16.1.1/32 locator-set rloc_6d2284a0-ba3c-469a-99b5-099ce544b7df

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

site site_uci

authentication-key ******

eid-record instance-id 4102 172.16.1.0/24 accept-more-specifics

!


SSM

Example Border/Edge Node Configuration

[Border]

ip access-list standard SSM_RANGE_<vrf name>

permit <multicast subnet> <wildcard mask>

vrf definition <vrf name>

rd 1:<L3 instance id>

address-family ipv4

route-target import 1:<L3 instance id>

route-target export 1:<L3 instance id>

ip multicast-routing vrf <vrf name>

interface Loopback<L3 instance id>

vrf forwarding <vrf name>

ip address <Multicast rloc ip> 255.255.255.255

ip pim sparse-mode

interface LISP0.<L3 instance id>

ip pim lisp transport multicast

ip pim lisp core-group-range 232.0.0.1 1000

ip pim vrf <vrf name> register-source Loopback<L3 instance id>

ip pim vrf <vrf name> ssm range SSM_RANGE_<vrf name>

router bgp <BGP AS#>

address-family ipv4 vrf <vrf name>

bgp aggregate-timer 0

aggregate-address <Multicast Subnet> <Subnet mask> summary-only

redistribute lisp metric 10

network <Multicast Loopback ip> mask 255.255.255.255

router lisp

instance-id <L3 instance id>

service ipv4

eid-table vrf <vrf name>

route-export site-registrations

distance site-registrations 250

map-cache site-registration

database-mapping <Multicast Loopback ip >/32 locator-set <rloc>

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id

site site_uci

authentication-key ******

eid-record instance-id <L3 instance id> <Multicast Subnet>/<mask> accept-more-specifics

!

[Edge]

ip access-list standard SSM_RANGE_<vrf name>

permit <multicast subnet> <wildcard mask>

vrf definition <vrf name>

address-family ipv4

route-target import 1:<L3 instance id>

route-target export 1:<L3 instance id>

ip multicast-routing vrf <vrf name>

interface Loopback<L3 instance id>

vrf forwarding <vrf name>

ip address <Multicast rloc ip> 255.255.255.255

ip pim sparse-mode

interface LISP0.<L3 instance id>

ip pim lisp transport multicast

ip pim lisp core-group-range 232.0.0.1 1000

ip pim vrf <vrf name> register-source Loopback<L3 instance id>

ip pim vrf <vrf name> ssm range SSM_RANGE_<vrf name>

router lisp

instance-id <L3 instance id>

service ipv4

eid-table vrf <vrf name>

map-cache 0.0.0.0/0 map-request

database-mapping <Multicast rloc ip>/32 locator-set rloc_416ae5cb-ead9-42d2-af2d-d7738dc8f0b2

exit-service-ipv4

remote-rloc-probe on-route-change

exit-instance-id



Silent Host

Certain IoT endpoints on airport networks exhibit unique communication behavior. They fall broadly into two categories based on their communication characteristics:

  1. Endpoints that, once onboarded onto the network, do not send any further packets or frames. These endpoints are essentially hibernating and only respond to specific frames or packets directly addressed to them. These devices may be DHCP-capable or statically addressed.

  2. Endpoints that are cabled but have never registered with the Fabric Overlay because of a static IP address.

These types of devices are referred to as Silent Hosts due to this unique communication behavior.

For the first category of endpoints, even though the endpoint is active, the Edge Node connected to that endpoint may have removed the endpoint from its IP Device Tracking (IPDT) host database. Since the endpoint is hibernating, the Edge Node receives no response to its periodic Internet Control Message Protocol (ICMP) probes, so the host is considered no longer present on the Edge Node. Any other endpoint in the network with a previous record of communicating with the silent host will continue to communicate using the silent host’s MAC address.

Cisco SD-Access supports handling unknown unicast frames by flooding them to every Edge Node within an IP Address Pool that is enabled for Layer 2 Flooding in the Fabric Site. While not responding to ICMP, hibernating devices will respond to specific payloads of traffic that are more directly relevant. As a result, the Silent Host will respond and re-join the Fabric network if the unknown unicast frames are relevant to it. Flooding to the endpoint will also happen if the endpoint is part of a Layer 2 VNI, which requires the Layer 2 Flooding feature to be enabled.

Once the endpoint is registered in the Control Plane Node, any further communication to the endpoint can continue without concern.
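When Layer 2 Flooding is enabled from Cisco DNA Center for an IP pool, the provisioned configuration corresponds to the flood commands shown in the L2 Only examples earlier in this document (values are placeholders):

[Edge]
router lisp
 instance-id <L2 VNI instance-id>
  service ethernet
   eid-table vlan <vlan-id>
   broadcast-underlay 239.0.17.1
   flood arp-nd
   flood unknown-unicast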

image43.png

 

Endpoints in the second category have never registered with the Control Plane Node. The only way the Fabric can even know of the presence of such an endpoint is by configuring a manual IP Device Tracking (IPDT) entry on the Edge Node to which the Silent Host is connected. This static mapping permanently ties the host’s MAC address to the device tracking database. Once configured, such hosts never age out of the Control Plane and can continue to attract traffic. The configuration below is recommended to be pushed using a Cisco DNA Center template.

Example Edge Node Configuration

[Edge]

device-tracking binding vlan 10 192.168.1.10 interface gigabitEthernet 1/0/1 <mac-address> reachable-lifetime infinite
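To verify that the static entry is present, the device tracking database can be checked on the Edge Node (a hedged example; the output format varies by software release):

[Edge]
show device-tracking database interface gigabitEthernet 1/0/1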

IP Directed Broadcast:

A Catalyst 9000 SD-Access border switch can convert an IP directed broadcast into an Ethernet broadcast and flood it to all endpoints in the destination VLAN. This feature can be used to address the silent host challenge.

image44.png

 

IP-directed broadcast requires the Layer 2 Flooding feature to be enabled, which in turn requires multicast in the Underlay.

IP-Directed Broadcasts are forwarded to wired hosts only, as the broadcast packets are not forwarded between the Edge Node and a Fabric AP.
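As an illustrative sketch only (the interface, VLAN, and RP address are assumptions, and in practice Cisco DNA Center automates most of this), enabling IP directed broadcast on a Border, along with the underlay multicast that Layer 2 Flooding depends on, might look like:

```
! On the Border: allow directed broadcasts on the Layer 3 interface
! facing the destination VLAN/subnet (values are placeholders)
interface Vlan1021
 ip directed-broadcast

! Underlay multicast, required by Layer 2 Flooding
ip multicast-routing
ip pim rp-address 10.255.0.1
interface Loopback0
 ip pim sparse-mode
```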


Wireless for Tenants & Passengers

Process 1: Preparing for Wireless

An initial configuration is required to be applied to 9800 WLCs before discovery and provisioning can occur.

The Cisco Catalyst 9800 Wireless Controller boots up like any other Cisco router or switch running IOS-XE software, which may already be familiar to the user.

https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-40/installation-guide/b-wlc-ig-9800-40/power-up-and-initial-configuration.html

After setting up the WLCs, ensure they are upgraded to the same SD-Access-compatible software version. The SD-Access compatibility matrix is available at the link below:

https://www.cisco.com/c/dam/en/us/td/docs/Website/enterprise/sda_compatibility_matrix/index.html


Procedure 1. Discover and Manage Wireless LAN Controllers

WLCs must be discovered by DNAC before they can be made fabric-enabled and configured through DNAC. You can discover devices using an IP address range, CDP, or LLDP.

This procedure shows you how to discover devices and hosts using an “IP address range”. We need to fill in both the Primary and Secondary IP addresses.

 

Table 1 Discovery

Discovery Name: WLC
Discovery Type: IP Address/Range
From IP: x.x.x.x
To IP: y.y.y.y
CLI User Name: dnac
SNMPv2 Read Community: Read Community
SNMPv2 Write Community: Write Community
SNMPv3 Username: -
Netconf Port: 830
 
  • Log in to DNAC and click the ‘tiles’ icon at the upper right corner of the screen. Click Tools → Discovery.

  • In the next page, click on + symbol for creating a new discovery.

  • Fill in the details as shown in the images below.

  • Click on the + symbol to add WLC specific credentials and save the config for each parameter.

  • Once all the details are filled in as shown below, start the discovery.

Step 1: From the DNA Center home page, click on Tools > Discovery.

Figure Add Discovery

image46.png

 

Step 2. In the Discovery Name field, enter a name.

Step 3. Expand the IP Address/Ranges area, if it is not already visible, and configure the following fields:

  1. For Discovery Type, click Range.

  2. In the IP Ranges field, enter the beginning and ending IP addresses (IP address range) for DNA Center to scan, and click +.

  3. You can enter a single IP address range or multiple IP addresses for the discovery scan.

Management IP to be used for Discovery

Cisco Wireless Controllers must be discovered using the Management IP address instead of the Service Port IP address. If not, the related wireless controller 360 and AP 360 pages will not display any data.

Step 4. Expand the Credentials area and configure the credentials to use for the Discovery job. Fill in the CLI, SNMPv2/SNMPv3, and Netconf Port (830) fields. Once each is saved as Global, expand Credentials and select the CLI, SNMP, and Netconf credentials just created.

Figure 1 Discovery Credentials

image47.png

Step 5. Netconf is mandatory for enabling wireless services on wireless-capable devices such as Catalyst 9800 controllers. The recommended port number is 830.

  • Run Discovery

Step 6. Select the protocol and click Start.

The Discovery window displays the results of your scan.

Figure 2 Run Discovery

image48.png

Once discovery is complete, go to Inventory and verify that both WLCs are present.


Procedure 2. Configure Fabric Wireless SSIDs

  • Configure Corporate SSID

Global wireless network settings include settings for Service Set Identifier (SSID), wireless interfaces, wireless radio frequency (RF), and sensors.

Perform the below steps to configure SSIDs for an enterprise wireless network.

Step 1: Navigate to Design → Network Settings → Wireless.

Step 2: Click + Add and choose Enterprise.

Figure 3 Add SSID

image49.png

Step 3: In the Wireless Network Name (SSID) text box, enter a unique name “Corporate_Airport” for the wireless network.

image50.png

Step 4: Under Type of Enterprise Network, click Voice and Data radio button. The selection type defines the quality of service that is provisioned on the wireless network.

Step 5: Configure wireless band preferences by selecting one of the Wireless Options:

  • Dual band operation (2.4 GHz and 5 GHz)—The WLAN is created for both 2.4 GHz and 5 GHz. The band select is disabled by default.

  • Dual band operation with band select—The WLAN is created for 2.4 GHz and 5 GHz and band select is enabled.

  • 5 GHz only—The WLAN is created for 5 GHz and band select is disabled.

  • 2.4 GHz only—The WLAN is created for 2.4 GHz and band select is disabled.

Step 6: Check the Fast Lane check box to enable fast lane capabilities on the network.

Step 7: Click the Admin Status button On, to Enable the admin status.

Step 8: Click the BROADCAST SSID button off, if you do not want the SSID to be visible to all wireless clients within the range.

image51.png

Step 9: Under Level of Security, set the encryption and authentication type for the network. The security options selected are:

  • WPA2 Enterprise—Provides a higher level of security using Extensible Authentication Protocol (EAP) (802.1x) to authenticate and authorize network users with a RADIUS server.

Figure 4 Advanced Settings

image52.png

Step 11: After clicking Next, In the Wireless Profiles window, click + Add to create a new wireless profile.

Step 12: Click on Add profile and add a new profile name.

Step 13: Click on Associate profile and click on Next and hit Save

image53.png

 

Table 2 SSID Parameters

 

Features Settings
Wireless Network Name (SSID) Corporate_Airport
Admin status Enabled
Broadcast SSID Enabled
Wireless Option 5GHz only
Level of Security Encryption: WPA/WPA2-AES; Authentication: 802.1x-FT
Mac Filtering Unchecked
Authentication Servers x.x.x.x
Type of Enterprise Network Voice and Data
WMM Policy Enabled
AAA Override Checked
Fastlane Checked
Fast Transition (802.11r) Enabled
Over the DS Un-Checked
Peer-2-Peer Blocking Disabled
Coverage Hole Detection Checked
Session timeout 86400
Client Exclusion 0
MFP Client Protection Optional
11k Neighbor List Checked
11v BSS Transition Support BSS Max Idle Service - Checked; Directed Multicast Service - Checked
Client Idle User Timeout Checked, 3600 Seconds
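For reference, a rough sketch of the WLAN configuration that DNA Center provisions on the Catalyst 9800 for an SSID with these settings is shown below. The WLAN ID and AAA method-list name are illustrative assumptions; the actual configuration is generated by DNA Center.

```
wlan Corporate_Airport 17 Corporate_Airport
 security ft                                   ! Fast Transition (802.11r)
 security wpa wpa2 ciphers aes                 ! WPA2-AES encryption
 security dot1x authentication-list dnac-aaa   ! 802.1X against the RADIUS server
 no shutdown                                   ! Admin status enabled
```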
  • Configure Passenger SSID

Step 1: Navigate to Design → Network Settings → Wireless.

Figure 5 Wireless Settings

 

image54.png

 

 

 

Step 2: Under Guest Wireless, click + Add and select Guest to create a new SSID.

Step 3: In the Wireless Network Name (SSID) text box, enter a unique name for the guest SSID that you are creating.

Figure 6 SSID CREATION

image55.png

Step 4: Under SSID STATE, configure the following:

  • Click the Admin Status button off, to disable the admin status.

  • Click the BROADCAST SSID button off, if you do not want the SSID to be visible to all wireless clients within the range.

Step 5: Under Level of Security, select the encryption and authentication type for this guest network: Web Policy and Open.

Step 6: The Open policy type provides no security. It allows any device to connect to the wireless network without any authentication.

image56.png

image57.png

Step 7: After clicking Next, In the Wireless Profiles window, click + Add to create a new wireless profile.

Step 8: Click on Add profile and add a new profile name.

Step 9: Click on Associate profile and click on Next and hit Save

Note: There is no authentication selected for the passenger SSID, as authentication is generally handled by the ISP and clients are authenticated using an OTP on their mobile phones. Traffic from this VLAN is passed directly to the ISP router or firewall. Rate limiting can be applied on this SSID if the customer requires it.

Table 3 SSID Parameters

Features Settings
Wireless Network Name (SSID) Passenger_WiFi
Admin status Enabled
Broadcast SSID Enabled
Wireless Option 2.4 GHz and 5 GHz
Level of Security None
Mac Filtering Unchecked
Authentication Servers x.x.x.x
Type of Enterprise Network Voice and Data
WMM Policy Enabled
AAA Override Checked
Fastlane Checked
Fast Transition (802.11r) Enabled
Over the DS Un-Checked
Peer-2-Peer Blocking Disabled
Coverage Hole Detection Checked
Session timeout 3600
Client Exclusion 0
MFP Client Protection Optional
11k Neighbor List Checked
11v BSS TRANSITION SUPPORT BSS Max Idle Service - Checked
  Directed Multicast Service – Checked
  Client Idle User Timeout Checked, 3600 Seconds
  • Configure Tenant/BRS SSID

Step 1: Navigate to Design → Network Settings → Wireless.

Figure 7 Wireless Settings

image54.png

Step 2: Under Enterprise Wireless, click + Add and select Enterprise to create a new SSID.

Step 3: In the Wireless Network Name (SSID) text box, enter a unique name for the enterprise SSID that you are creating.

image58.png

Step 4: Under SSID STATE, configure the following:

  • Click the Admin Status button off, to disable the admin status.

  • Click the BROADCAST SSID button off, if you do not want the SSID to be visible to all wireless clients within the range.

Step 5: Under Level of Security, select Personal as the L2 security and enter a pre-shared key (PSK). L3 security can be set to Open.

Note: iPSK can also be used if a common SSID is shared across multiple tenants. The client MAC addresses of these tenants should be present in the AAA server database.

Step 6: Associate SSID to Profile as in above steps.

 

Features Settings
Wireless Network Name (SSID) Tenants
Admin status Enabled
Broadcast SSID Enabled
Wireless Option 2.4 GHz and 5 GHz
Level of Security WPA2/WPA3
Mac Filtering Unchecked
Authentication Servers x.x.x.x
Type of Enterprise Network Voice and Data
WMM Policy Enabled
AAA Override Checked
Fastlane Checked
Fast Transition (802.11r) Enabled
Over the DS Un-Checked
Peer-2-Peer Blocking Disabled
Coverage Hole Detection Checked
Session timeout 3600
Client Exclusion 0
MFP Client Protection Optional
11k Neighbor List Checked
11v BSS TRANSITION SUPPORT BSS Max Idle Service - Checked
  Directed Multicast Service – Checked
  Client Idle User Timeout Checked, 3600 Seconds
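By way of comparison with the 802.1X SSID, a hedged sketch of the PSK (Personal) WLAN configuration the controller ends up with is shown below; the WLAN ID and key are placeholders, and the actual configuration is pushed by DNA Center.

```
wlan Tenants 18 Tenants
 no security wpa akm dot1x              ! disable 802.1X key management
 security wpa akm psk                   ! enable Personal (PSK)
 security wpa psk set-key ascii 0 <pre-shared-key>
 no shutdown
```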
 

Procedure 3. Create Network Profile

The following procedure describes how to configure a Network Profile for the wireless network. SSIDs will be mapped to the Network Profile at a later stage, after all the necessary SSID profiles have been created.

In the Cisco DNA Center GUI, click the Hamburger Menu icon and choose Design > Network Profiles > Wireless. Add a new profile named “Terminal” and click the Save button at the bottom right.

The screenshot below shows the final state after creating all the SSIDs.

Figure 8 Network Profile

image60.png

 

We will use the Network Profile created above for the separate set of SSIDs that may be needed in different sections of the airport.

Procedure 4. Create RF Profile

Step 1: Navigate to Design → Network Settings → Wireless.

Step 2: Choose Wireless Radio Frequency Profile → Add

Step 3: In the Profile Name text box, enter a unique name <Terminal_Outdoor_RFProfile>.

Step 4: Under 2.4GHz choose Parent Profile → Custom.

Step 5: Select DCA Channel as 1,6,11.

Step 6: Set Supported Data Rates and Mandatory Data Rates → <Value>.

Step 7: Set Power Level → Min <Value>dBm and Max <Value>dBm.

Step 8: Set TPC Power Threshold → <Value>dBm.

Step 9: Choose Auto from the RX-SOP drop-down list.

image61.png

Step 10: Under 5GHz choose Parent Profile → Custom.

Step 11: For Channel Width, select <Value> MHz from the drop-down list.

Step 12: Select DCA Channel as <allowed 5Ghz channels of that country>

Step 13: Set Supported Data Rates and Mandatory Data Rates → <Value>.

Step 14: Set Power Level → Min <Value>dBm and Max <Value>dBm.

Step 15: Set TPC Power Threshold → <Value>dBm.

Step 16: Choose Auto from the RX-SOP drop-down list.

image62.png

Refer to the table below for the values used to configure the remaining RF profiles.

Table 4 RF Profile Parameters

5 GHz Band (Corporate | Terminal_Indoor | Terminal_Outdoor)
DCA Channels: 36,40,44,48,52,56,60,64,100,104,108,112,116,120,124,149,153,157,161 (all three profiles)
Supported Rates (Mbps): 24,36,48,54 | 12,18,24,36,48,54 | 12,18,24,36,48,54
Mandatory Rates (Mbps): 24 | 12 | 12,24
TPC Power - Min, Max (dBm): 5,17 | 7,14 | 5,11
TPC Power Threshold (dBm): -60 | -67 | -60
Rx-SOP: Low | Medium | Medium
Channel Width (MHz): 40 | 40 | 40

2.4 GHz Band (Corporate | Terminal_Indoor | Terminal_Outdoor)
DCA Channels: 1,6,11 (all three profiles)
Supported Rates (Mbps): 12,18,24,36,48,54 | 24,36,48,54 | 12,18,24,36,48,54
Mandatory Rates (Mbps): 24 | 24 | 12,24
TPC Power - Min, Max (dBm): 7,Max | 6,13 | 5,11
TPC Power Threshold (dBm): -60 | -67 | -60
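As an illustrative sketch of what these RF profile values translate to on the Catalyst 9800 (using the Terminal_Indoor 5 GHz column; DNA Center generates the actual configuration, and exact command forms can vary by release):

```
ap dot11 5ghz rf-profile Terminal_Indoor
 rate RATE_12M mandatory        ! mandatory data rate (12 Mbps)
 rate RATE_18M supported        ! example supported data rate
 tx-power min 7                 ! TPC minimum (dBm)
 tx-power max 14                ! TPC maximum (dBm)
 tx-power-control-thresh -67    ! TPC power threshold (dBm)
 rx-sop-threshold medium        ! RX-SOP
 no shutdown
```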

Procedure 5. Add Wireless Controllers to Inventory

Perform the below steps to add WLC to Fabric Site.  

Step 1: Navigate to Provision →  Inventory, all the discovered devices are listed in this window.

Step 2: To view devices available in a particular site, expand the Global site in the left pane, and select the site, building, or floor where the WLC is added.

Step 3: All the devices available in the selected site are displayed in the Inventory window.

Step 4: From the Device Type list, click the WLCs tab, and from the Reachability list, click the Reachable tab to get the list of wireless controllers that are discovered and reachable.

Step 5: Check the check box next to the controller device name that you want to provision.

Step 6: From the Actions drop-down list, choose Provision → Provision Device.

Step 7: The Assign Site window appears. Click Choose a site to assign a site. In the Choose a site window, select a site, and click Save.

Step 8: Click Next.

Step 9: The Configuration window appears. Select a role for the wireless controller: Active Main WLC.

Step 10: Click Select Primary Managed AP Locations to select the managed AP location for the wireless controller.

Step 11: In the Managed AP Location window, check the check box adjacent to the site name. You can either select a parent site or the individual sites. If you select a parent site, the children under that parent site automatically get selected.

Info

Inheritance of managed AP locations allows you to automatically choose a site along with the buildings and floors under that site. One wireless controller can manage only one site.

Step 12: Click Save.

Step 13: The Status column in the Device Inventory window shows SUCCESS after a successful deployment.

Step 14: After provisioning, if you want to make any changes, click Design, change the site profile, and provision the wireless controller again.

Step 15: After the devices are deployed successfully, the Provision Status changes from Configuring to Success.

Step 16: In the Device Inventory window, click See Details in the Provision Status column to get more information about the network intent or to view a list of actions that you need to further take.

Procedure 6. Host Onboarding

 

Info

Before getting started on Host Onboarding, you will want to be sure that you have completed the ISE authorization profiles and policies for dynamic host onboarding first.

 

Step 1: Click on the Host Onboarding tab to switch to the host onboarding configuration page for the fabric site. Closed Authentication has been selected as an authentication template earlier.

Step 2: Assign AP IP pools to virtual network INFRA_VN:

  • Click on virtual network INFRA_VN under Virtual Networks section.

  • Click on “Add”

Figure 9 Edit Virtual Network – INFRA_VN

image63.png

  • Select previously created AP pool from the drop-down IP Pool menu.

  • Select AP as the Pool Type.

  • Click on Update at the bottom of the window to save the change.

  • Click Deploy to add pool to VN

 

Figure 10 Virtual Network – INFRA_VN

image64.png

  • Click on the “X” at the upper right-hand corner to return to the main Host Onboarding window.

 

Step 3: Assign enterprise wireless IP pools to the correct virtual network:

  • Click on virtual network under Virtual Networks section.

  • Click on Corp_VN

 

Figure 11 Edit Virtual Network – Corp_VN

image65.png

  • Select Enterprise Wireless Pool (Eg. Corp_Pool_BGL16_172.16.10.0/25) from the drop-down IP Pool menu.

  • Select Data from the drop-down Traffic Type list.

  • Layer-2 Flooding: Leave it unchecked for now.

  • Wireless Pool: Checked.

  • VLAN: Leave it blank.

  • VLAN Name: Check Auto Generated VLAN Name.

  • Click on Add at the bottom of the window to save the change.

  • Then click Deploy.

 

Figure 12 Edit Virtual Network – Corp_VN

image66.png

  • Click on the “X” at the upper right-hand corner to return to the main Host Onboarding

Note

Repeat same for other VNs and IP pools for respective VNs

Step 4: Under Wireless SSIDs section, assign address pools to wireless SSIDs:

  • Choose address for Enterprise SSID. For simplification, we will not assign SGT here.

  • Click on Deploy on the bottom right corner of the Wireless SSID section to save the configuration changes.

  • Repeat same for other SSID in that fabric.

Info

The IP Pools in the screenshots are not the same IP pools that will be used in WIR

Figure 13 Assign Address Pools to Wireless SSIDs

image67.png

Step 5: Configure AP connected port(s) under Port Assignment section. 

Because Closed Authentication is set as the default template, AP ports on edge switches will need to be set correctly. Use the following process to assign ports for APs to connect:

  • Select Edge Switch.

  • Select the AP port(s) and click Assign.

 

Figure 14 Assign Ports

image68.png

On the Port Assignments window:

  • Selected Interfaces: Pre-filled based on the port(s) selected earlier.

  • Connected Device Type: Choose Access Point (AP) from the drop-down menu.

  • Address Pool: AP Pool pre-selected and grayed out.

  • Authentication Template: No Authentication pre-selected.

  • Click on Update to save the assignment changes and return to the main Host Onboarding window.

 

Figure 15 Assign Ports

image69.png

  • Click on Deploy on the bottom right corner of Select Port Assignment section, and then Apply with Now selected in the Modify Interface Segment window to save the changes.

Figure 16 Assign Ports - Deploy

image70.png

Step 6: From here, connect a “test” AP to this port. The AP should get an IP address from the INFRA_VN and join its controller via the PnP procedure.

PnP Note

  • The PnP flow works by configuring DHCP Option 43 on the DHCP server for the AP pool; Option 43 carries the DNAC IP address. Once the AP is connected to a switch port configured as above, it receives a DHCP address along with the DNAC IP and starts PnP automatically.

  • The APs that are onboarded successfully via PnP will be found under Provision → Plug and Play.

  • Select the APs → Claim and Assign Sites → Floors to the respective APs based on Serial Number where they are installed physically.
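A hedged sketch of the Option 43 configuration on an IOS DHCP server for the AP pool follows; the subnet and DNA Center IP are placeholders, and the string follows the Cisco PnP format "5A1N;B2;K4;I<DNAC-IP>;J80".

```
ip dhcp pool AP_POOL
 network 172.16.20.0 255.255.255.128
 default-router 172.16.20.1
 ! Option 43 PnP string pointing APs at DNA Center (IP is a placeholder)
 option 43 ascii "5A1N;B2;K4;I10.10.10.10;J80"
```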

Info

Once the test AP is onboarded and provisioned successfully, repeat the same steps for all the APs.

 

From here, log into the WLC and ensure that the AP(s) reconnect without issue.

  • Log into WLC.

  • Click on Configuration from the menu and select Access Points under Wireless, you should see the APs listed

  • Click on the AP to view details.

 

Figure 17 WLC – AP Details

image71.png

Perform the below steps to on-board Access Points.

Step 1: Navigate to Provision →  Inventory.

Figure 18 Inventory

image72.png

Step 2: To view devices available in a particular site, expand the Global site in the left pane, and select the site, building, or floor that you are interested in.

Step 3: From the Device Type list, click the APs tab, and from the Reachability list, click the Reachable tab to get the list of APs that are discovered and reachable.

Step 4: Check the check box adjacent the AP device name that you want to provision.

Step 5: From the Action drop-down list, choose Provision → Provision.

Figure 19 Assign AP to Site

image73.png

Step 6: The Assign Site window appears.

Step 7: Click Choose a floor and assign an AP to the site.

Step 8: In the Choose a floor window, select the floor to which you want to associate the AP, and click Save.

Step 9: Click Next. The Configuration window appears.

Step 10: By default, the custom RF profile that you marked as default under Network Settings → Wireless → Wireless Radio Frequency Profile is chosen in the RF Profile drop-down list.

Figure 20 Assign RF Profile

image74.png

Step 11: You can change the default RF Profile value for an AP by selecting a value from the RF Profile drop-down list. The options are: High, Typical, and Low.

Step 12: The AP group is created based on the RF profile selected.

Step 13: Click Next.

Step 14: In the Summary window, review the device details, and click Deploy to provision the AP.

Figure 21 Deploy AP

image75.png


Assign Wireless Clients to VN and Enable Connectivity

In this section we will provision wireless client IP pools so the fabric supports the new endpoints as they join the network.

Step 1: Navigate to Provision → Fabric → [Fabric Domain] → [Site].

Step 2: Click on the Host Onboarding at the top of this screen to start enabling the IP pools for AP and client devices.

Step 3: From the Wireless SSID section, specify the wireless SSIDs within the network that the hosts can access.

Step 4: Click Choose Pool and select the IP pool IPP_CORP_USERS reserved for the corporate, passenger and tenant/BRS SSIDs

Step 5: From the Assign SGT drop-down list, choose a scalable group for the SSID.

Step 6: Check the Enable Wireless Multicast check box to enable wireless multicast on the SSIDs.

Procedure 7. Model Config and Template Editor

Model Configs and Templates are used for advanced SSID configurations and global controller configurations for which DNAC does not have automation workflows.

Before creating additional configurations, tag the provisioned WLC so that it can use Templates and Model Configs.

Figure 22 Device Tag

image76.png

Figure 23 WLC Tag - Wireless

image77.png

 

Model Configuration Creation

Step 1: Navigate to Tools → Model Config Editor → Advanced SSID Configuration and click Add at the top right corner.

  • Populate the details as shown below for creating a Model Config for Peer-to-Peer Blocking: Forward UP and enable DHCP Required - This Model Config is specific to Voice Profile.

  • Create another Model Config with Peer-to-Peer Blocking disabled, and one more with it enabled, to map to the other SSIDs.

Figure 24 Model Configuration - SSID specific

image78.png

  • Verify that the default CleanAir Model Configs are present.

Figure 25 Model Configuration - CleanAir

image79.png

Step 2: Select the Network Profile and add the Model Configs created as shown below.

  • Make sure to select correct SSIDs while adding Advanced SSID Configuration Model Config.

  • Under Applicability, add the same tag that is selected for WLC in above section.

Figure 26 Model Configuration - CleanAir Add to Profile

image80.png

Figure 27 Model Configuration - Advanced SSID Config to Profile

image81.png


Note

Repeat the same steps to create the respective Peer-to-Peer Blocking Model Configs per SSID based on the table below, and add the CleanAir Model Configs to all Network Profiles.


WLAN (Peer-to-Peer Client Support setting):
Corporate/Corporate BYOD: Forward upstream
Corporate Guest: Drop
Tenant SSIDs*: Disabled (may change based on the tenant’s requirements)
BRS SSID: Forward upstream
Passenger SSID: Drop

 Step 3: Verify the Model configs are correctly added.

Figure 28 Model Configuration - Verify

image82.png

Figure 29 Provision - Model Configuration - CleanAir

image83.png

Figure 30 Provision - Model Configuration - Advanced SSID Configuration

image84.png

Figure 31 Provision - Advanced Configuration - Template

image85.png

Template Editor

Step 1: Navigate to Tools → Template Editor and Click on Add on top right corner.

  • Click Add on top left corner and create a new Project and then create a new Template as shown below.

Figure 32 Template Editor

image86.png

Step 2: Create New Project

Figure 33 Template Editor

image87.png

Step 3: Create a new Template and add the required configuration prepared for Voice SSID.

  • While creating a new Template select Template Type: Regular Template

  • Template Language: Velocity

  • Project Name: Select from the list which is previously created.

  • Tag: wireless

  • Device Type: Click Edit, select Wireless Controller from the list, then click Back and Update Template at the top.

  • Software Type: IOS-XE

  • Then Click Add

Figure 34 Template Editor - Populate Details

image88.png

  • In the next window, add the configuration that needs to be pushed for voice optimization or any other additional configurations that are not possible to push via DNAC Automation.

Figure 35 Template Editor - Template Config

image89.png

  • Click Actions at the top and click Save and Commit every time there is a change in the template configuration.

Note

The above template is specific to Voice Network in JKH WIR, the template configuration is not final at the time of this document creation, and final template will be shared at the time of implementation.
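As a generic illustration of Velocity template syntax only (the variable and commands below are placeholders for a voice-oriented configuration, not the final template), a template body could look like:

```
! $policy_profile_name is a Velocity variable bound at deployment time
wireless profile policy $policy_profile_name
 shutdown
 service-policy input platinum-up    ! upstream voice QoS (AutoQoS metal policy)
 service-policy output platinum      ! downstream voice QoS
 call-snoop                          ! SIP call visibility
 no shutdown
```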

Step 4: Add the template created to the respective Network Profile. This Network Profile is created for the voice SSID, so add the template to the Network Profile where the WLC is provisioned.

  • Select Device Type: Wireless Controller.

  • Device Tag: wireless

  • Template: Select the one created previously.

Figure 36 Template Editor - Template Add to Profile

image90.png

 


IT-OT Deployment

With IT networks adopting newer technologies, the OT network has the opportunity to leverage the flexibility these enhancements in IT networking provide. Before delving into how the OT network can use this new functionality, let us look at the pain points from the perspective of both business and technical requirements.


Business & Technical Requirements

  • Critical Business Continuity, Resiliency, and High Availability

  • IT - OT Integration Considerations

  • Purdue Model Alignment

  • Secure Segmentation

  • Visibility

  • Silent Hosts

  • Ring Topologies


Shared IT-OT Fabric

The figure below depicts the logical topology of Virtual Networks in the IT-OT shared Fabric Site:

  • OT Virtual Networks (VN-OT#1 to VN-OT#n) in IT/OT Shared Fabric

  • IT Virtual Networks (VN-IT#1 To VN-IT#n) in IT/OT Shared Fabric

image91.png

 

As the figure below depicts, both IT and OT devices can connect to access ports of either Extended Nodes or Edge Nodes, which are mapped to their respective Virtual Networks. With SD-Access, a highly resilient and available network is possible.

image92.png

 

Extended Node

Extended Nodes are connected via a Layer 2 802.1Q trunk, which is also part of a port channel (even if the group has only one member/uplink). Extended Nodes must connect back to a single Edge Node. For better redundancy, an Extended Node should be connected to an Edge Node stack (either physical StackWise on certain Cisco Catalyst 9000 family members or StackWise Virtual capable switches). This helps maintain upstream connectivity even during the failure of an upstream switch or link.

image93.png

 

Extended Nodes – Ring Topology

A ring topology for IE switches can be automated in Cisco DNA Center, converting an RSTP ring to Resilient Ethernet Protocol (REP). This typically results in an approximately 50% improvement in Layer 2 recovery time in the event of a failure. It is also possible to spur a linear daisy chain off the ring.
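For context, the REP configuration that DNA Center automates on the ring members looks roughly like the sketch below; the segment ID and interfaces are illustrative, and DNA Center generates the actual values.

```
! Edge port closest to the fabric terminates the REP segment
interface GigabitEthernet1/1
 switchport mode trunk
 rep segment 1 edge primary

! Mid-ring ports simply join the segment
interface GigabitEthernet1/2
 switchport mode trunk
 rep segment 1
```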


Operations & Management

SWIM

SWIM (Software Image Management) is an automated upgrade tool that can upgrade expansive networks with hundreds of devices in hours. Upgrades can be planned with user-defined slots and a specified number of devices. The user can create auto batches for device upgrade, taking User Defined Selection Logic (UDSL) into consideration when selecting the devices that need an upgrade.


Architecture Diagram of SWIM

image94.png

 

Benefits of SWIM in an Airport Network

Automating upgrades in expansive networks has proven to reduce manual errors with minimal outage time. Usually devices are upgraded by installing an image file: copying the image file to the device, configuring the device, and reloading it. However, this leads to device downtime and impacts the end-user experience. Automation reduces this impact and minimizes device downtime. In addition, the process cannot be entirely automatic, as there are older devices in the network that may be exceptions to the current upgrade; these devices might undergo a manual upgrade process. SWIM notifies the executor of the upgrade results and raises an alarm when there is a process failure.

There are two major benefits of SWIM:

  • Minimizes human errors

  • Reduces service impact time


Minimizes Human Errors

The biggest advantage of automating the device upgrade process is that it reduces manually induced errors. Network errors and failures are largely due to human error, and with innumerable devices in a network, the manual image upgrade process becomes time-consuming and requires additional manpower. By automating this process, human errors can be avoided and operating costs reduced. Automation makes the process error-free and affordable.


Reducing Service Impact Time

Executing this automated process for a selected set of devices at one particular time brings down the service impact time. Only the selected devices go through the auto upgrade, while the rest of the devices continue to provide uninterrupted service to the end users.

  • The User can select a group of devices that need to be prioritized for upgrade.

  • The risks involved in the upgrade are distributed across locations. The locations are further classified as maintenance locations.

  • Selection of dependent devices can be done after upgrading the core devices that are not dependent on any other devices. The devices connected to the node should be selected only after upgrading the node. This reduces the Service Impact Time.

  • The upgrade process is made consistent and is performed in the same manner for all the devices.

  • Device upgrades scale much better when automated. The availability of physical resources is the main limitation impacting the computation process.


Configuration Compliance

Configuration Compliance helps identify intent deviations or out-of-band changes in the network, that is, changes injected or reconfigured outside of Cisco DNA Center. A network administrator can conveniently identify devices in Cisco DNA Center that do not meet compliance requirements for the different aspects of compliance, such as Software Image, PSIRT, and Network Profile. Compliance checks can be automated or performed on demand.

  • Automated compliance check: Uses the latest data collected from devices in Cisco DNA Center. This compliance check listens to traps and notifications from various services, such as inventory and SWIM, to assess data.
  • Manual compliance check: Enables the user to manually trigger compliance in Cisco DNA Center.
  • Scheduled compliance check: A scheduled compliance job is a weekly compliance check that runs every Saturday at 11 pm.

The following compliance checks can be performed by Cisco DNA Center:

  • Startup versus Running Configuration

  • Software Image

  • Critical Security Vulnerability

  • Network Profile

  • Fabric

  • Application Visibility

image95.png

 

Note: Network Profile, Fabric and Application Visibility are optional and are displayed only if the device is provisioned with the required data.

Startup Vs Running Configuration Compliance

The Startup vs Running Configuration compliance check helps identify whether the startup and running configurations of a device are in sync. If they are out of sync, compliance is triggered and a detailed report of the out-of-band changes is displayed. This compliance check is triggered within five minutes of any out-of-band change.
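On most Cisco IOS XE devices, the same comparison can also be made manually from the device CLI. A hedged sketch (command availability varies by platform and software release):

```
! compare the startup and running configurations on the device itself
show archive config differences nvram:startup-config system:running-config

! save the running configuration to return the device to a compliant state
copy running-config startup-config
```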

Software Image Compliance

The Software Image compliance check helps the network administrator see whether the image tagged as golden in Cisco DNA Center is running on the device. It shows the difference between the golden image and the running image for a device. When there is a change in the software image, the compliance check is triggered immediately.

Critical Security (PSIRT)

The PSIRT compliance check enables the network administrator to verify that network devices are not running with any known critical security vulnerabilities.

Network Profile

Cisco DNA Center allows you to define intent configuration via a Network Profile and pushes it to devices via provisioning. The intent must be running on the device. If any violations are found at any time due to out-of-band changes, compliance identifies, assesses, and flags them. The violations are shown to the user under Network Profiles on the compliance summary page. The automatic compliance check is scheduled to run after a period of 5 hours.

Note: Network profile compliance is only applicable for routers and wireless LAN controllers and not for switches.

Fabric (SDA Profile)

Fabric compliance helps to identify the fabric intent violations such as any out of band changes for fabric related configurations.


Assurance

Telemetry polls network devices and collects telemetry data according to the settings of the SNMP server, syslog server, NetFlow collector, or wired client. Cisco DNA Center automatically enables application telemetry on all applicable interfaces or WLANs, selected based on the automatic interface or WLAN selection algorithm. Application telemetry is pushed to WLANs that are provisioned through Cisco DNA Center.

image96.png

Note: The interface description should contain the "lan" keyword, which is case-sensitive.
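For illustration, a sketch of a wired interface description that the selection algorithm would match (the interface and description wording are assumptions; only the lowercase "lan" keyword matters):

```
interface GigabitEthernet1/0/10
 description lan-access-to-user-ports
```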

Fabric Assurance

Cisco SD-Access Assurance provides visibility into the underlay and overlay, and it enables reachability into critical network services in the fabric infrastructure. Fabric network devices are provisioned with model-driven streaming telemetry, which monitors the status of certain protocol states. Any change in a protocol state is reported by the network device to Cisco DNA Center. In the Assurance dashboard, suggested actions help network operators narrow down the issue domain and eventually remediate the issue.

SD-Access Assurance provides visibility into the health and state of the Fabric Site, Virtual Networks, and SD-Access Transit.

image97.png

AI End Point Analytics

Cisco AI Endpoint Analytics is a solution that detects and classifies endpoints and IoT devices into different categories based on Endpoint Type, Hardware Model, Manufacturer, and Operating System type. The Cisco AI Endpoint Analytics engine and user interface runs on Cisco DNA Center and assigns labels to endpoints by receiving telemetry from the network infrastructure.

image98.png

image99.png

This section provides steps to configure the fabric to achieve IP reachability between Cisco DNA Center and the devices it will manage. It then demonstrates the workflow to manage devices beginning with the DESIGN and POLICY applications and continuing with device discovery, software image management (SWIM), and LAN Automation.


Underlay Deployment

Process 1: Preparing for Network Management Automation

Get ready to deploy the network designs and policies by creating a functioning network underlay including device management connectivity. As part of the integration of ISE with Cisco DNA Center shown in the companion Software-Defined Access for Distributed Campus Prescriptive Deployment Guide, ISE is configured with TACACS infrastructure device administration support.

For TACACS configurations, Cisco DNA Center modifies discovered devices to use authentication and accounting services from ISE, with local failover, by default. ISE must be prepared to support the device administration configurations pushed to the devices during the discovery process.

Procedure 1. Configure underlay network device management using the Cisco IOS Command Line Interface

Use a loopback interface on each device for Cisco DNA Center in-band discovery and management. The following steps configure point-to-point Ethernet connectivity between devices using IS-IS as the routing protocol and SSHv2 for device configuration using the device loopback interfaces. The SNMP configuration is pushed in a later procedure as part of device discovery.

Do not add a configuration to any devices that you intend to discover and configure using LAN Automation as part of a later procedure. Devices with existing configurations cannot be configured using LAN Automation. This example shows a configuration using Cisco IOS XE on a Cisco Catalyst switch, which serves as a fabric border in this topology.

Step 1.

Use the device CLI to configure the hostname to make it easy to identify the device and disable unused services.

hostname [hostname]

no service config

Step 2.

Configure local login and password.

username dna privilege 15 algorithm-type scrypt secret [password]

! older software versions may not support scrypt (type 9)

! username dna privilege 15 secret [password]

enable secret [enable password]

service password-encryption

Step 3.

Configure Secure Shell (SSH) as the method for CLI management access.

ip domain-name ciscodna.net

! generate key with choice of modulus with a minimum of 2048 recommended

crypto key generate rsa modulus 2048

ip ssh version 2

line vty 0 15

login local

transport input ssh

transport preferred none

Step 4.

Configure the switch to support Ethernet jumbo frames. The MTU chosen allows for the extra fabric headers and for compatibility with the highest common value across most switches, and the round number is easy to remember when configuring and troubleshooting.

system mtu 9100

Tech tip
Underlay connectivity using Cisco IOS XE on routers requires the mtu command at the interface configuration level. Cisco Catalyst and Cisco Nexus® switches not using Cisco IOS XE use the system jumbo mtu command at the global configuration level.
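As an illustrative sketch of the two styles (the interface name is an assumption):

```
! Cisco IOS XE router: MTU is set per interface
interface TenGigabitEthernet0/1/0
 mtu 9100

! Catalyst switch running Cisco IOS XE: MTU is set globally
system mtu 9100
```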

Step 5. Configure the switch loopback address and assign SSH management to use it.

interface Loopback0

ip address [Device loopback IP address] 255.255.255.255

Tech tip

Devices operating in a fabric role must have a /32 subnet mask for the IP address of their Loopback 0 interface.

This /32 route must be present in the routing table of all fabric devices within the fabric site.
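A hedged verification sketch (the loopback address is illustrative): on any other fabric device in the site, confirm that the /32 host route is present in the routing table.

```
! verify that the /32 loopback of a fabric device is known in the underlay
show ip route 10.4.14.3
```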

Procedure 2. Configure underlay network links for routed access connectivity

If your underlay network is already configured using a routed access network deployment model, skip this procedure. Typical Layer 2 deployments require this procedure. If LAN Automation will be used to onboard all directly connected switches, you may also skip this procedure.

image101.png

Figure 4 Airport Network Layer 2 Underlay Topology

Step 1. Configure the switch connections within the underlay network infrastructure. Repeat this step for every link to a neighbor switch within the fabric underlay. If the underlay device will be provisioned as a fabric border node and the connection is to be used as a handoff from the fabric to the external infrastructure, then use the next procedure instead.

interface FortyGigabitEthernet1/0/1

no switchport

ip address [Point-to-point IP address] [netmask]

Step 2. Enable IP routing and enable the IS-IS routing protocol on the switch.

! ip routing is not enabled by default on some switches

ip routing

ip multicast-routing

ip pim register-source Loopback0

ip pim ssm default

router isis

net 49.0000.0100.0400.0001.00

domain-password [domain password]

metric-style wide

nsf ietf

log-adjacency-changes

bfd all-interfaces

Tech tip

If multicast communication is required outside of the border nodes towards the fusion routers (the multicast source is outside of the fabric site), the following multicast commands should be enabled on each border node in global configuration mode.

ip multicast-routing

ip pim register-source Loopback0

ip pim ssm default

And the following commands should be enabled at each interface or sub interface (for each virtual network):

ip pim sparse-mode

Step 3. Enable IS-IS routing on all configured infrastructure interfaces in the underlay, except for the border handoff interfaces, which are configured in the next procedure. The loopback interface is enabled to share the management IP address and the physical interfaces are enabled to share routing information with the connected infrastructure.

interface Loopback0

! ip address assigned in earlier step

ip router isis

ip pim sparse-mode

interface range Fo1/0/1, Fo1/0/2, Fo1/0/3

! routed ports with ip addresses assigned via earlier steps

ip router isis

isis network point-to-point

ip pim sparse-mode

logging event link-status

load-interval 30

bfd interval 100 min_rx 100 multiplier 3

no bfd echo

dampening

Procedure 3. Enable routing connectivity at border toward external router neighbor

If your underlay network is already configured as a routed access network and integrated with the rest of your network using BGP using an 802.1Q handoff as described in the Design section, skip this procedure.

The external device outside of the fabric site handling routing among multiple virtual networks and a global routing instance acts as a fusion router for those non-fabric networks. The separation of connectivity is maintained using VRFs connected by 802.1Q-tagged interfaces between the fusion router and border nodes, also known as VRF-lite. Establishing the underlay connectivity using BGP allows Cisco DNA Center to manage initial discovery and configuration using the link, and then to use the same physical link with additional 802.1Q tags and BGP sessions for overlay VN connectivity.

Configuring BGP to allow Cisco DNA Center IP reachability to the underlay network devices, while allowing further provisioning for virtual networks on the same interfaces, helps minimize disruption to network connectivity.

Common network services such as DNS, DHCP, WLCs, and Cisco DNA Center are generally deployed outside the fabric site. To connect the SD-Access network to these services, extend your existing enterprise network to the border nodes as shown in later procedures.

Step 1. For each border node, if you are configuring a switch supporting VLAN trunk interfaces such as Cisco Catalyst 9000, 3800, or 6800 Series switches, you must configure a trunk on the connected interface with a dedicated VLAN to establish underlay connectivity for route peering to the fusion router.

vlan 100

interface vlan100

ip address [IP address] [netmask]

ip pim sparse-mode

no shutdown

interface range Fo1/0/23, Fo1/0/24

switchport

switchport mode trunk

switchport trunk allowed vlan add 100

no shutdown

Tech tip

If your border nodes are ASR or ISR routers, use an alternative sub interface configuration rather than a switch trunk to establish underlay connectivity to the fusion router.

interface TenGigabitEthernet0/1/0

no shutdown

!

interface TenGigabitEthernet0/1/0.100

encapsulation dot1Q 100

ip address [IP address] [netmask]

ip pim sparse-mode

no shutdown

Step 2. Connect the redundant border nodes to each other with at least one routed interface for underlay communication and later IBGP peering. The configuration for integrating into the IS-IS protocol is shown. Repeat this step for each interface connecting redundant border nodes.

interface FortyGigabitEthernet1/0/20

no switchport

ip address [Point-to-point IP address] [netmask]

ip router isis

isis network point-to-point

ip pim sparse-mode

logging event link-status

load-interval 30

no shutdown

Step 3. On each border node, enable BGP routing to the fusion router for connectivity to networks external to the fabric. Repeat this step for each border node.

router bgp [underlay AS number]

bgp router-id [interface]

bgp log-neighbor-changes

! fusion router is an eBGP neighbor

neighbor [fusion interface IP address] remote-as [external AS number]

! redundant border is an iBGP neighbor

neighbor [redundant border Lo0 address] remote-as [underlay AS number]

neighbor [redundant border Lo0 address] update-source Loopback0

!

address-family ipv4

network [Lo0 IP address] mask 255.255.255.255

! advertise underlay IP network summary in global routing table

aggregate-address [underlay IP network summary] [netmask] summary-only

redistribute isis level-2

neighbor [fusion interface IP address] activate

neighbor [redundant border Lo0 address] activate

exit-address-family

Example Border Node Configuration

router bgp 65514

bgp router-id Loopback0

bgp log-neighbor-changes

! fusion router is an eBGP neighbor

neighbor 10.4.2.65 remote-as 65000

! redundant border is an iBGP neighbor

neighbor 10.4.14.4 remote-as 65514

neighbor 10.4.14.4 update-source Loopback0

!

address-family ipv4

network 10.4.14.3 mask 255.255.255.255

! advertise underlay IP network summary in global routing table

aggregate-address 10.4.14.0 255.255.255.0 summary-only

redistribute isis level-2

neighbor 10.4.2.65 activate

neighbor 10.4.14.4 activate

exit-address-family

Procedure 4. Redistribute shared services subnets into underlay IGP

A default route in the underlay cannot be used by the APs to reach the WLC. A more specific route (such as a /24 subnet or /32 host route) to the WLC IP address must exist in the global routing table at each node where the APs are physically connected. Permit the more specific routes for the WLC and DHCP shared services (examples: 10.4.174.0/24 and 10.4.48.0/21) into the underlay network by redistributing the shared services routes from BGP into the underlay IGP routing process at the border, using this procedure. The prefixes used should match prefixes in the BGP routing table.

Step 1.

Connect to each border node and add a prefix-list and route-map for subnets used for the shared services.

ip prefix-list SHARED-SERVICES seq 5 permit 10.4.48.0/21

ip prefix-list SHARED-SERVICES seq 10 permit 10.4.174.0/24

route-map GLOBAL-SHARED-SERVICES permit 10

match ip address prefix-list SHARED-SERVICES

Step 2. At each border node, redistribute the prefixes into your underlay routing protocol. This example assumes IS-IS.

router isis

redistribute bgp [underlay AS] route-map GLOBAL-SHARED-SERVICES metric-type external

Procedure 5. Enable connectivity at external fusion router towards border neighbor

The fusion routers connected to your fabric border routers require CLI configuration for underlay connectivity consistent with the previous procedures. Follow this procedure at each external fusion router device that is connected to a border node.

The example fusion router is configured with a VRF (VRF-GLOBAL-ROUTES) containing the enterprise-wide global routes. The fusion router uses this VRF to peer with the global routing table on the fabric border for the underlay reachability.

An alternative approach is to place the enterprise-wide routes in the global routing table on the fusion router and peer with the fabric border without using a VRF.
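A minimal sketch of that alternative, assuming the same sub interface numbering as the VRF example: the vrf forwarding statement is omitted, and the BGP neighbor is configured under the global IPv4 address family.

```
interface TenGigabitEthernet0/1/7.100
 encapsulation dot1Q 100
 ip address [IP address] [netmask]
!
router bgp [AS number]
 address-family ipv4
  neighbor [border IP address] remote-as [underlay AS number]
  neighbor [border IP address] activate
 exit-address-family
```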

Step 1. On each external fusion router, create the VRF, route distinguisher, and route targets for the initial management connectivity to the border.

vrf definition VRF-GLOBAL-ROUTES

rd 100:100

address-family ipv4

route-target export 100:100

route-target import 100:100

exit-address-family

Step 2. For each interface on the fusion router connected to a fabric border, enable the interface, VLAN-tagged sub interface, and IP addressing. This example uses 802.1Q VLAN tagging on a router with sub interfaces. For switches, which require trunk port configurations, match the corresponding configuration completed in previous procedures.

interface TenGigabitEthernet0/1/7

description Connected to 9500-1.ciscodna.net Border

mtu 9100

no ip address

no shutdown

interface TenGigabitEthernet0/1/7.100

encapsulation dot1Q 100

vrf forwarding VRF-GLOBAL-ROUTES

ip address [IP network] [netmask]

IP connectivity is now enabled for the VLAN (example: 100) on the 802.1Q tagged connection between the fusion router and the border node.
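A quick, hedged verification sketch from the fusion router: ping the border node's VLAN 100 address (configured in previous procedures) from within the VRF.

```
ping vrf VRF-GLOBAL-ROUTES [border IP address]
```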

Step 3. Create route maps to tag routes and avoid routing loops when redistributing between the IGP used within the rest of the network and BGP when connecting using multiple links. IGPs can vary—the example shown is for EIGRP, completing the routing connectivity from IS-IS to BGP to EIGRP.

route-map RM-BGP-TO-EIGRP permit 10

set tag 100

!

route-map RM-EIGRP-TO-BGP deny 10

match tag 100

route-map RM-EIGRP-TO-BGP permit 20

Step 4. Enable BGP peering from redundant fusion routers to the border nodes and redistribute the IGP that is used to reach the networks beyond the fusion routers.

router bgp [AS number]

bgp router-id [loopback IP address]

bgp log-neighbor-changes

address-family ipv4 vrf VRF-GLOBAL-ROUTES

redistribute eigrp 100 route-map RM-EIGRP-TO-BGP

neighbor [redundant fusion IP] remote-as [external AS number]

neighbor [redundant fusion IP] activate

neighbor [border IP address] remote-as [underlay AS number]

neighbor [border IP address] activate

default-information originate

exit-address-family

Example BGP Configuration on Fusion Router

router bgp 65500

bgp router-id 10.4.0.1

bgp log-neighbor-changes

!

address-family ipv4 vrf VRF-GLOBAL-ROUTES

redistribute eigrp 100 route-map RM-EIGRP-TO-BGP

neighbor 10.4.0.2 remote-as 65500

neighbor 10.4.0.2 activate

neighbor 172.16.172.29 remote-as 65514

neighbor 172.16.172.29 activate

default-information originate

exit-address-family

Step 5. Redistribute BGP into the IGP to enable reachability. IGPs can vary—the example shown is for EIGRP Named Mode.

    router eigrp [name]
                            address-family ipv4 unicast vrf [VRF]
                            autonomous-system [AS number]
                            topology base
                            redistribute bgp [AS number] metric 1000000 1 255 1 9100 route-map [route-map]
                            exit-af-topology
                            network [external IP network address] [netmask]
                            eigrp router-id [loopback IP address]
                            exit-address-family
                            

Example Redistribution Configuration on Fusion Router

    router eigrp LAN
                            address-family ipv4 unicast vrf VRF-GLOBAL-ROUTES
                            autonomous-system 100
                            topology base
                            redistribute bgp 65000 metric 1000000 1 255 1 9100 route-map RM-BGP-TO-EIGRP
                            exit-af-topology
                        network 10.4.0.0 0.0.255.255
                            eigrp router-id 10.4.0.1
                            exit-address-family
                            


Procedure 6. Configure MTU and routing on intermediate devices

Optional

It is an advantage to have Cisco DNA Center manage all devices in a fabric domain. Cisco DNA Center already manages fabric edge nodes and border nodes; however, if you have intermediate devices within the fabric site that will not be operating in a fabric role, then the devices must still meet the requirements for transporting SD-Access traffic. The primary requirements are that they:

  • Must be Layer 3 devices that actively participate in the routing topology with the other fabric underlay devices.

  • Must be able to transport the jumbo frames that are offered by the fabric encapsulation techniques.

For fabric intermediate node devices, you must set an appropriate MTU (example: 9100) and manually configure routing with the other devices in the underlay. Configuration guidance for this situation is device-specific and not discussed further in this guide.
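As an illustration only (the interface name, IS-IS NET, and addresses are assumptions, and exact commands vary by platform), an intermediate Catalyst switch might carry:

```
! jumbo MTU for fabric (VXLAN) encapsulated traffic
system mtu 9100
!
! participate in the underlay routing topology
router isis
 net 49.0000.0100.0400.0002.00
 metric-style wide
!
interface FortyGigabitEthernet1/0/5
 no switchport
 ip address [Point-to-point IP address] [netmask]
 ip router isis
 isis network point-to-point
```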

Do not add a configuration to any devices that you intend to discover and configure using LAN Automation as part of a later procedure. Devices with existing configurations cannot be configured using LAN Automation.

