This event took place on Thursday, April 30, 2020 at 10:00 AM PDT.
You can download the presentation slides in PDF format here.
A:I think that all three are part of the same tribe. My focus has been on applying routing to create these virtual services. Switching was certainly part of my route, and understanding what worked well and what didn't work well in switching actually gave me a solid foundation to look into protocols and how we could best do things there.
A:Content delivery providers are at the heart of this whole notion of connecting users to information, and for us, the task has been "Can I give you a scalable way of connecting to those services?". Do the services have a specific requirement? Is it a multicast service, or do you need to connect at Layer 2? If you are uploading to the CDN, that is also the case. Within the CDN itself, we have done a lot of work with content providers on how they handle multiple subscriptions and very broad multicast and broadcast streams. So we also enable the CDN network itself for distribution.
A:I've worked with and offered both. For me, the key is the pull paradigm. With pull we can do things that we can't really do with the push paradigm, and even where the push paradigm can emulate that behavior, we found that scale stops at a very early stage. That early stage may be sufficient in many cases for now, but the question is: how long do you hold that effort, and do you continue to pay the price of the complexity it creates? Simplification, scale, performance, and extensibility: those are some of the reasons why we use LISP.
A:LISP creates an overlay and can deliver both L2 and L3 services with mobility. Therefore, it is well suited to the DCI application. Cisco and Microsoft Azure have collaborated on a series of deployments where LISP is used for the connectivity between on-premises DC facilities and Azure-cloud facilities, leveraging the CSR-1000v as the LISP xTRs to provide a DC interconnect.
A:These are both overlay control plane protocols. However, OMP is based largely on BGP, which is a traditional push-based protocol, while LISP uses a pull/demand-based model. In the book, we touch upon the differences in detail. There are clear advantages to a demand-based protocol in overlay networking.
A:That's an interesting parallel. I guess the short answer is yes: Request, Reply, and Cache is the sequence followed by both protocols. Of course, there are optimizations in LISP by which traffic can be sent via a default path while this process completes so that no traffic is lost. The default path may be sub-optimal, so there would be a switch from the sub-optimal to the optimal path. Also, in the case of mobility, updates in pre-existing caches are much faster than the procurement of a new cache. This is a function of the publisher-subscriber service by which notifications of location updates trigger an immediate refresh of the cache (this happens within a 50 ms budget in relatively large networks).
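The Request/Reply/Cache sequence, the default-path fallback, and the publisher-subscriber refresh described above can be sketched as a small simulation. This is a minimal illustration in Python; the class and method names (MapServer, XTR, map_request, publish_update) are illustrative stand-ins, not Cisco or RFC-defined APIs.

```python
class MapServer:
    """Authoritative store of EID-to-RLOC mappings (illustrative only)."""
    def __init__(self, mappings):
        self.mappings = mappings  # e.g. {"10.1.1.0/24": "192.0.2.1"}

    def map_request(self, eid_prefix):
        # The "Request/Reply" half of the pull model.
        return self.mappings.get(eid_prefix)


class XTR:
    """Ingress tunnel router with an on-demand map-cache."""
    def __init__(self, map_server, default_rloc):
        self.map_server = map_server
        self.cache = {}
        self.default_rloc = default_rloc  # sub-optimal path used while resolving

    def forward(self, eid_prefix):
        if eid_prefix in self.cache:
            return self.cache[eid_prefix]              # cache hit: no lookup latency
        rloc = self.map_server.map_request(eid_prefix)  # Request/Reply round trip
        if rloc is None:
            return self.default_rloc                   # fall back, no traffic lost
        self.cache[eid_prefix] = rloc                  # Cache for subsequent packets
        return rloc

    def publish_update(self, eid_prefix, new_rloc):
        # Publisher/subscriber notification: refresh a pre-existing cache
        # entry immediately instead of re-procuring it from scratch.
        if eid_prefix in self.cache:
            self.cache[eid_prefix] = new_rloc


ms = MapServer({"10.1.1.0/24": "192.0.2.1"})
xtr = XTR(ms, default_rloc="192.0.2.254")
print(xtr.forward("10.1.1.0/24"))   # first packet triggers Request/Reply
print(xtr.forward("10.1.1.0/24"))   # subsequent packets served from the cache
xtr.publish_update("10.1.1.0/24", "192.0.2.99")
print(xtr.forward("10.1.1.0/24"))   # mobility update already reflected
```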
A:The data plane in DNAC is VXLAN with some extensions for Endpoint Group metadata. This is similar to the encapsulation used in ACI. The LISP data plane does not have those extensions, although technically they could be added at the expense of the Nonce field. So, while technically possible, this isn't productized at this time.
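For concreteness, both encapsulations use an 8-byte shim header; the sketch below packs the two headers per RFC 6830 (LISP, N flag plus a 24-bit Nonce) and RFC 7348 (VXLAN, I flag plus a 24-bit VNI). It is a deliberate simplification: other flag combinations and the semantics of the second word (Instance ID, locator-status bits) are omitted.

```python
import struct

def lisp_header(nonce):
    """LISP data-plane header (RFC 6830) with the N bit set and a 24-bit nonce."""
    first_word = (0x80 << 24) | (nonce & 0xFFFFFF)  # N flag + nonce
    return struct.pack("!II", first_word, 0)        # 2nd word: Instance ID / LSBs

def vxlan_header(vni):
    """VXLAN header (RFC 7348): I flag set, 24-bit VNI above a reserved byte."""
    return struct.pack("!II", 0x08000000, (vni & 0xFFFFFF) << 8)

# Both shims are the same size; the Nonce occupies the bits that any
# metadata extension to the LISP header would have to consume.
assert len(lisp_header(0xABCDEF)) == len(vxlan_header(5000)) == 8
```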
A:There are many reasons for the choice of LISP. This is a long discussion; hopefully you get a chance to read the arguments in the book we wrote. In a nutshell, the functionality required for a network overlay control plane is very different from what is required in traditional routing. EVPN is a BGP address family and therefore traditional routing. Although it may seem operationally friendly at first, it carries a lot of overhead that does not benefit the overlay. With LISP we have a lean, nimble, and targeted protocol and architecture that allows us to do more with much less machinery, at much faster rates and larger scale.
A:The applications are endless. A couple of fundamental things to understand are that it can simplify networking significantly while reducing the footprint (memory, CPU) required on the access switches. So you get a simpler network, with rich services, at a lower cost. Some of the services we provide in LISP are unprecedented. Examples include: Distributed NAT, Built-in Group Policies, Signal-Free Multicast (no more PIM, RPs, or uRPF checks), Built-in Crypto Security Association negotiation, plus the more traditional services such as L2 and L3 VPNs.
A:LISP is an IETF Standards Track protocol. We have been very disciplined in keeping LISP open and in the standards environment at all times. It has been a journey from the initial experimental phase at the IETF to the later Standards Track formalization. Many open-source projects on LISP exist, and there are many organizations outside of Cisco that have private implementations of the protocol. Some of our network vendor peers have completed LISP implementations, although they have not productized them at this time.
A:The database itself is fairly simple: it is structured as a set of Patricia tries, following standard lookup optimizations that are well understood for structuring the kind of key-value data required in a routing table. So the database itself is very efficient and doesn't add meaningful latency. The resolution of mappings does add latency of about 3/2 of the RTT between the xTR and the Map-Server. In practice, this latency is not noticeable; also keep in mind that the results of the lookup are cached for 24 hours, which means that frequently accessed destinations are rarely subject to this latency because they are already learned. There are optimizations in LISP by which traffic can be sent via a default path while the resolution process completes so that no traffic is lost, basically neutralizing the effect of latency. Also, in the case of mobility, updates in pre-existing caches are much faster than the procurement of a new cache. This is a function of the publisher-subscriber service by which notifications of location updates trigger an immediate refresh of the cache.
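As a sketch of the kind of longest-prefix-match structure described above, here is a minimal binary trie in Python. A real Patricia trie compresses one-child paths for memory and lookup efficiency, but the lookup semantics are the same; this is purely illustrative, not the actual implementation.

```python
import ipaddress

class TrieNode:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = [None, None]  # one branch per bit
        self.value = None             # mapping stored at this prefix, if any

class PrefixTrie:
    """Uncompressed binary trie keyed by IPv4 prefix bits."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, cidr, value):
        net = ipaddress.ip_network(cidr)
        bits = int(net.network_address)
        node = self.root
        for i in range(net.prefixlen):       # walk one node per prefix bit
            b = (bits >> (31 - i)) & 1
            if node.children[b] is None:
                node.children[b] = TrieNode()
            node = node.children[b]
        node.value = value

    def lookup(self, addr):
        bits = int(ipaddress.ip_address(addr))
        node, best = self.root, None
        for i in range(32):
            if node.value is not None:       # remember the most specific match
                best = node.value
            nxt = node.children[(bits >> (31 - i)) & 1]
            if nxt is None:
                break
            node = nxt
        if node.value is not None:
            best = node.value
        return best

t = PrefixTrie()
t.insert("10.0.0.0/8", "rloc-A")
t.insert("10.1.0.0/16", "rloc-B")
print(t.lookup("10.1.2.3"))   # longest match wins: rloc-B
print(t.lookup("10.9.9.9"))   # falls back to the /8: rloc-A
```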
A:Much of this is historical. But also, the mobility and scale problem in the Campus is very different from that of the Data Center, so the Campus presented the right set of challenges for us to make a stronger push for LISP in a very broad manner.