ciscomoderator
Community Manager

This event took place on Thursday, April 30, 2020, at 10:00 a.m. PDT.

Introduction

Event slides

Featured Author
Victor Moreno is a Distinguished Engineer at Cisco Systems responsible for the definition of next generation network architectures. Victor has over 20 years of industry experience focused on enterprise and data center network design and architecture. A recognized expert in his field, Victor holds several patents that are at the foundation of the key protocols and networking technologies that have enabled the evolution of networking to its current state. He has worked directly on the designs of global enterprises and service providers and has done extensive research on the topic of network virtualization, being a driving force within Cisco and earlier Digital Equipment Corporation for new product definition and technological direction. Victor is the co-author of the Cisco Press titles Network Virtualization and The LISP Network. Victor has published a multitude of Internet drafts, technical papers, and articles on behalf of Cisco Systems. Victor holds a degree in electrical engineering from the Simón Bolívar University, as well as master's degrees and specializations from the Universities of York, Cambridge, and Stanford. Victor's current areas of focus include Virtual Networks, High Density Mobility Architectures, Network Automation and Modeling, and Software Defined Networking Architectures, and he is an active contributor to the definition, implementation, and standardization of the Locator/ID Separation Protocol (LISP).

You can download the slides of the presentation in PDF format here.

Live Questions

Q:In which networking area have you felt most comfortable? Switching? Routing? Virtualization?

A:I think that all three are part of the same tribe. I think my focus has been on applying routing to create these virtual services. Switching was my root, for sure, and understanding what worked well and what didn't work well in switching gave me a solid foundation to look into protocols and how we could best do things there.

Q:How does the next generation of data networks converge with CDN providers in the delivery of video to all?

A:Content delivery providers are at the heart of this whole notion of connecting users to information, and for us, the task has been "Can I give you a scalable way of connecting to those services?". Do the services have a specific requirement? Is it a multicast service, or do you need to connect at Layer 2? If you are uploading to the CDN, that's also the case. As for the CDN itself, we have done a lot of work with content providers in terms of how they handle multiple subscriptions and very broad multicast and broadcast streams. So, we also enable the CDN network itself for the distribution.

Q:Why use the Locator/ID Separation Protocol and not go down the path of industry vendors pushing EVPN as a control plane for VXLAN?

A:I've worked on and offer both. For me, the key is the pull paradigm. We can do things that we can't really do with the push paradigm, and even where the push paradigm can emulate that behavior, we found that scale stops at a very early stage. That early stage could be sufficient in many cases for now. The question is: how long do you sustain that effort? And do you continue to pay the price of the complexity that creates? Simplification, scale, performance, and extensibility: those are some of the reasons why we use LISP.

Q:Is LISP suitable for Inter-DC connection? How does this affect stretch VLANs for VMs?

A:LISP creates an overlay and can deliver both L2 and L3 services with mobility. Therefore it is suitable for the DCI application. Cisco and Microsoft Azure have collaborated on a series of deployments where LISP is used for the connectivity between on-premises DC facilities and Azure-cloud facilities leveraging the CSR-1000v as the LISP xTRs to provide a DC-interconnect. 

Q:Can we compare LISP to OMP in SDWAN?

A:These are both overlay control plane protocols. However, OMP is based largely on BGP which is a traditional push-based protocol, and LISP uses a pull/demand-based model. In the book, we touch upon the differences in detail. There are clear advantages to a demand-based protocol in overlay networking.
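
To make the push/pull contrast above concrete, here is a minimal Python sketch. It is purely illustrative (not OMP or LISP code): a push-style edge ends up holding every mapping it is told about, while a pull-style edge resolves and caches only the destinations it actually forwards to.

```python
# Illustrative sketch only: contrast a push model, where every edge receives
# every mapping up front, with a pull model, where an edge resolves on demand.

class PushEdge:
    def __init__(self):
        self.table = {}               # holds the full mapping table

    def receive_update(self, eid, rloc):
        self.table[eid] = rloc        # state grows with the whole network


class PullEdge:
    def __init__(self, mapping_system):
        self.mapping_system = mapping_system
        self.cache = {}               # holds only what this edge has needed

    def lookup(self, eid):
        if eid not in self.cache:     # cache miss: ask the mapping system
            self.cache[eid] = self.mapping_system[eid]
        return self.cache[eid]


# With 200 prefixes and only one active destination, the pull edge's state
# stays proportional to its traffic, not to the size of the network.
mapping_system = {f"10.0.{i}.0/24": f"rloc-{i}" for i in range(200)}

push = PushEdge()
for eid, rloc in mapping_system.items():
    push.receive_update(eid, rloc)    # push edge holds all 200 entries

pull = PullEdge(mapping_system)
pull.lookup("10.0.7.0/24")            # pull edge resolves just this one

print(len(push.table), len(pull.cache))   # 200 vs 1
```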

Q:Can the latency of LISP be compared to ARP?

A:That’s an interesting parallel. I guess the short answer is yes. Request, Reply, and Cache is the sequence followed by both protocols. Of course, there are optimizations in LISP by which traffic can be sent via a default path while this process completes so that no traffic is lost. The default path may be sub-optimal, so there would be a switch from sub-optimal to optimal. Also, in the case of mobility, updates to pre-existing caches are much faster than the procurement of a new cache. This is a function of the publisher-subscriber service by which notifications of location updates trigger an immediate refresh of the cache (this happens within a 50 ms budget in relatively large networks).
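
As a rough illustration of the request/reply/cache sequence and the default-path optimization described above, here is a hypothetical Python sketch. The names (resolve, DEFAULT_PATH) are made up for illustration and are not LISP APIs.

```python
# Hypothetical sketch of the request/reply/cache pattern, with a default path
# used while resolution completes so that no traffic is dropped.

DEFAULT_PATH = "proxy-path"           # sub-optimal but always available

cache = {}

def resolve(eid):
    # stands in for the request/reply exchange with the mapping system
    return f"rloc-for-{eid}"

def send(packet, via):
    print(f"sent {packet} via {via}")

def forward(eid, packet):
    rloc = cache.get(eid)
    if rloc is None:
        cache[eid] = resolve(eid)     # populate the cache (asynchronous in reality)
        rloc = DEFAULT_PATH           # meanwhile, forward via the default path
    send(packet, via=rloc)

forward("host-a", "pkt1")             # first packet: default (sub-optimal) path
forward("host-a", "pkt2")             # later packets: optimal cached location
```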

Q:I have seen that LISP is used in DNA Center for the control plane. Is it possible to use it as the data plane?

A:The data plane in DNAC is VXLAN with some extensions for End-point Group metadata. This is similar to the encapsulation used in ACI. The LISP data plane does not have those extensions, although technically, they could be added at the expense of the Nonce field. So although this is technically possible, it isn’t productized at this time.
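
For readers who want to picture where such metadata would go, here is a simplified byte-layout sketch of the two 8-byte headers being compared (LISP data plane per RFC 6830, VXLAN per RFC 7348). It is an illustration only, not a wire-format reference; the group-tag comment reflects the trade-off described above, not a shipping format.

```python
# Simplified byte-layout sketch of the two 8-byte encapsulation headers.
import struct

def lisp_header(nonce, instance_id):
    # Flags N=1 (nonce present) and I=1 (instance ID present), then a 24-bit
    # nonce; second word: 24-bit instance ID plus 8 locator-status bits.
    flags = 0b1000_1000
    word1 = (flags << 24) | (nonce & 0xFFFFFF)
    word2 = (instance_id & 0xFFFFFF) << 8
    return struct.pack("!II", word1, word2)

def vxlan_header(vni):
    # Flags with the I bit set, 24 reserved bits, 24-bit VNI, 8 reserved bits.
    # Group-policy extensions reuse part of the reserved space for a group tag,
    # which is the space the base LISP header spends on the nonce.
    word1 = 0x08 << 24
    word2 = (vni & 0xFFFFFF) << 8
    return struct.pack("!II", word1, word2)

print(lisp_header(nonce=0xABCDEF, instance_id=4100).hex())
print(vxlan_header(vni=4100).hex())
```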

Q:Why LISP and not go down the same path as industry vendors, pushing EVPN as the control plane for VXLAN?

A:There are many reasons for the choice of LISP. This is a long discussion; hopefully, you get a chance to read the arguments in the book we wrote. In a nutshell, the functionality required for a network overlay control plane is very different from what is required in traditional routing. EVPN is a BGP address family and therefore traditional routing. Although it may seem operationally friendly at first, it really has a lot of overhead that does not benefit the overlay. With LISP we have a lean, nimble, and targeted protocol and architecture that allows us to do more with much less machinery, at much faster rates, and at larger scale.

Q:How can LISP become useful in an Enterprise network environment?

A:The applications are endless. A couple of fundamental things to understand are that it can simplify networking significantly while reducing the footprint (memory, CPU) required on the access switches. So you get a simpler network, with rich services, at a lower cost. Some of the services we provide in LISP are unprecedented. Some examples include: Distributed NAT, Built-in Group Policies, Signal-Free Multicast (no more PIM, RPs, or uRPF checks), Built-in Crypto Security Association negotiation, plus the more traditional services such as L2 and L3 VPNs, etc.

Q:Is LISP more of a Cisco-centric network evolution, or is it a standard that can be adopted on non-Cisco devices?

A:LISP is an IETF Standards Track protocol. We have been very disciplined in keeping LISP open and in the standards environment at all times. It’s been a journey from the initial experimental phase at the IETF to the later Standards Track formalization. Many open-source projects on LISP exist, and there are many organizations outside of Cisco that have private implementations of the protocol. Some of our network vendor peers have completed LISP implementations, although they have not productized them at this time.

Q:Any latency issues with handling the database?

A:The database is rather simple and is structured as a set of Patricia tries; this follows standard lookup optimizations that are well understood for structuring the type of key-value data required in a routing table. So the database itself is very efficient and doesn’t add meaningful latency. The resolution of mappings does add latency of about 3/2 RTT (the RTT between the xTR and the Map-Server). In practice, this latency is not noticeable. Also keep in mind that the results of the lookup are cached for 24 hours, which means that frequently accessed destinations are rarely subject to this latency because they are already learned. There are optimizations in LISP by which traffic can be sent via a default path while the resolution process completes so that no traffic is lost, basically neutralizing the effect of latency. Also, in the case of mobility, updates to pre-existing caches are much faster than the procurement of a new cache. This is a function of the publisher-subscriber service, by which notifications of location updates trigger an immediate refresh of the cache.
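
As a sketch of the two structures mentioned above, the following Python models a longest-prefix-match database (a plain sorted list standing in for the Patricia trie) fronted by a map-cache using the 24-hour TTL from the answer; everything else is simplified and hypothetical, not the actual implementation.

```python
# Illustrative sketch: a longest-prefix-match database behind a TTL'd cache.
import ipaddress
import time

DATABASE = {
    ipaddress.ip_network("10.0.0.0/8"): "rloc-a",
    ipaddress.ip_network("10.1.0.0/16"): "rloc-b",
}

CACHE_TTL = 24 * 3600                 # cached mappings live for 24 hours
cache = {}                            # eid -> (rloc, expiry)

def database_lookup(eid):
    addr = ipaddress.ip_address(eid)
    # most-specific prefix first; a Patricia trie bounds this by the key length
    for net in sorted(DATABASE, key=lambda n: n.prefixlen, reverse=True):
        if addr in net:
            return DATABASE[net]
    return None

def map_cache_lookup(eid):
    entry = cache.get(eid)
    if entry and entry[1] > time.time():
        return entry[0]               # hit: no resolution latency at all
    rloc = database_lookup(eid)       # miss: roughly 3/2 RTT to the Map-Server in practice
    cache[eid] = (rloc, time.time() + CACHE_TTL)
    return rloc

print(map_cache_lookup("10.1.2.3"))   # rloc-b via longest-prefix match
print(map_cache_lookup("10.1.2.3"))   # served from the cache
```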

Q:What are your thoughts and key takeaways on why Cisco goes with BGP EVPN VXLAN in the data center, like other vendors, and pushes LISP, which I love by the way, for the campus network (SD-Access VXLAN)? What would be the major advantages of that decision/path?

A:Much of this is historical. But also, the mobility and scale problem in the Campus is very different from that of the Data Center, so the Campus presented the right set of challenges for us to make a stronger push for LISP in a very broad manner. 


Comments
Martin L
VIP

interesting; thanks for sharing!
