
NSO and Network Management Datastore Architecture (NMDA)


NSO as a sample implementation of new technology

Cisco NSO is a product that implements RFC 6020 (YANG) faithfully, and we can use it to test new automation technology before implementing it ourselves. The author of the RFC is also a developer of the product, and it is one of the leading pieces of software in the industry.

From NSO 5.3, RFC 8342 Network Management Datastore Architecture (NMDA) has been implemented, and this article gives a rough introduction to it using NSO.

Not everything will be explained here, but the related RFCs are listed below.

  • YANG

    • RFC 6020 YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)
    • RFC 7950 The YANG 1.1 Data Modeling Language
    • RFC 8342 Network Management Datastore Architecture (NMDA)
  • NETCONF

    • RFC 6241 Network Configuration Protocol (NETCONF)
    • RFC 8526 NETCONF Extensions to Support the Network Management Datastore Architecture
  • RESTCONF

    • RFC 8040 RESTCONF Protocol
    • RFC 8527 RESTCONF Extensions to Support the Network Management Datastore Architecture

The IETF YANG models have also been updated to reflect NMDA. Here are the RFCs for them.

IETF YANG model updates for NMDA

  • RFC 8343 A YANG Data Model for Interface Management
  • RFC 8344 A YANG Data Model for IP Management
  • RFC 8345 A YANG Data Model for Network Topologies
  • RFC 8346 A YANG Data Model for Layer 3 Topologies
  • RFC 8347 A YANG Data Model for the Virtual Router Redundancy Protocol (VRRP)
  • RFC 8348 A YANG Data Model for Hardware Management
  • RFC 8349 A YANG Data Model for Routing Management (NMDA Version)

About Network Management Datastore Architecture

Network Management Datastore Architecture (NMDA) has been supported since NSO 5.3 and is described in RFC 8342. It introduces new ideas about the datastores we have been using, so it is worth understanding. New datastores are added, and the usage of existing datastores changes. NMDA is based on RFC 7950 (YANG 1.1), so reading RFC 8342 requires knowledge of that RFC as well.

RFC 8342 Section 5 shows the relations between the datastores in a diagram; the previous model is in Section 4.

     +-------------+                 +-----------+
     | <candidate> |                 | <startup> |
     |  (ct, rw)   |<---+       +--->| (ct, rw)  |
     +-------------+    |       |    +-----------+
            |           |       |           |
            |         +-----------+         |
            +-------->| <running> |<--------+
                      | (ct, rw)  |
                      +-----------+
                            |
                            |        // configuration transformations,
                            |        // e.g., removal of nodes marked as
                            |        // "inactive", expansion of
                            |        // templates
                            v
                      +------------+
                      | <intended> | // subject to validation
                      | (ct, ro)   |
                      +------------+
                            |        // changes applied, subject to
                            |        // local factors, e.g., missing
                            |        // resources, delays
                            |
       dynamic              |   +-------- learned configuration
       configuration        |   +-------- system configuration
       datastores -----+    |   +-------- default configuration
                       |    |   |
                       v    v   v
                    +---------------+
                    | <operational> | <-- system state
                    | (ct + cf, ro) |
                    +---------------+

     ct = config true; cf = config false
     rw = read-write; ro = read-only
     boxes denote named datastores

Datastores defined in RFC 6020 / 6241

RFC 6020/6241 defined datastores modeled on those used in network devices at the time. The datastores below are still actively used in many routers and switches today.

  • startup
  • candidate
  • running    

The startup datastore holds the configuration that is loaded when the device boots. The running datastore holds the configuration currently in use on the device. Each session can have its own candidate datastore in memory, and on commit its contents are finally copied to running.

Data falls into two categories: configuration data and operational data. A system reaches a specific state with specific configuration data, and always reaches the same state when the same configuration data is used. Operational data is generated at runtime, such as interface statistics or VRRP state, and is not meant to be configured by users.

In a YANG model, the "config" flag tells whether a node is configuration or operational data.

  • Configuration data
    • config true
  • Operational data
    • config false

Additions by RFC 8342 (NMDA)

The datastores below are added. The name "operational" may cause misunderstanding, but this datastore is not only for "config false" data.

Datastore          Read/Write  Tag
startup            R/W         <startup>
candidate          R/W         <candidate>
running            R/W         <running>
intended           R           <intended>
operational state  R           <operational>

startup/candidate/running

There are not many differences in how the startup, candidate and running datastores are used; however, the running datastore may no longer contain some configurations that previously had to be there. For example, default configuration is not included.

Configuration example        Description
inactive                     The inactive attribute can be added to a configuration; such config is not copied to the intended datastore.
template-mechanism-oriented  The config works like a macro; its actual content is expanded (configuration transformation) and set in the intended datastore.

 

inactive would be used when operators want to disable a configuration temporarily, for example: they don't want to delete it, just mark it as inactive.

In the template-mechanism-oriented example, the macro configuration alone should be enough in running, and the expanded content is redundant information. For instance, when config "A" is a template for configs "B, C, D", having "A, B, C, D" in running doesn't make sense: the system knows that "A" means "B, C, D" and can expand it itself, so operators should not need to provide "A, B, C, D".
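
As a sketch in Python (the template table and the config names A, B, C, D, X are purely illustrative, not an NSO mechanism), the running-to-intended transformation described above might look like:

```python
# Illustrative sketch of the running -> intended transformation
# described above; the template table and config names are hypothetical.

# Hypothetical template: config "A" expands to "B", "C", "D".
TEMPLATES = {"A": ["B", "C", "D"]}

def to_intended(running):
    """Expand templates and drop entries marked inactive."""
    intended = []
    for entry, inactive in running:
        if inactive:
            continue  # inactive config is not copied to <intended>
        # expand template entries, keep plain entries as-is
        intended.extend(TEMPLATES.get(entry, [entry]))
    return intended

# running holds the macro "A" plus a hypothetical inactive entry "X"
running = [("A", False), ("X", True)]
print(to_intended(running))  # -> ['B', 'C', 'D']
```

Only the expanded, active configuration reaches intended; running keeps the compact form the operator entered.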

Together with the <intended> datastore explained next, these datastores are called conventional datastores.

intended datastore

The intended datastore holds configurations derived from the startup/candidate/running datastores, and users cannot modify its contents directly. Depending on the configuration, the content of the intended datastore may be the same as the running datastore. When the device reboots, the content of the intended datastore is recalculated from the running datastore every time.

Validation can also happen at this point. For example, when the running datastore has a configuration marked inactive, it does not affect intended, so validation passes. When operators remove the inactive attribute, however, validation can then fail, and the configuration cannot enter the intended datastore.

operational state datastore

This datastore holds all the data required to put a device into exactly the same state. Because operational data (config false) is also included, when this datastore holds the same data as on another device, the two devices behave the same.

This datastore also includes other types of configuration, such as default configuration, learned configuration like BGP RIB contents, and system configuration like chassis serial numbers.

Config examples

RFC 8342 Appendix C.2 (BGP configuration)

The <running> datastore has the configuration below. BGP is configured, with local-as, peer-as and peers (neighbors) listed.

<bgp>
  <local-as>64501</local-as>
  <peer-as>64502</peer-as>
  <peer>
    <name>2001:db8::2:3</name>
  </peer>
</bgp>

We can confirm the whole system state in the <operational> datastore. When a piece of data does not come directly from <running>, origin attributes are set so that we can check where it comes from.

/bgp/peer/local-port is data that the operating system assigns when the BGP TCP session is created, so its origin is "system".

/bgp/peer/state is "config false" data and does not come from any configuration, so no origin attribute is attached.

<bgp xmlns:or="urn:ietf:params:xml:ns:yang:ietf-origin"
      or:origin="or:intended">
  <local-as>64501</local-as>
  <peer-as>64502</peer-as>
  <peer>
      <name>2001:db8::2:3</name>
      <local-as or:origin="or:default">64501</local-as>
      <peer-as or:origin="or:default">64502</peer-as>
      <local-port or:origin="or:system">60794</local-port>
      <remote-port or:origin="or:default">179</remote-port>
      <state>established</state>
    </peer>
</bgp>
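
Origin attributes in a reply like this can be inspected programmatically. Here is a sketch using only Python's standard xml.etree module on a trimmed version of the Appendix C.2 sample above:

```python
# Sketch: extract origin attributes from an <operational> reply
# using only the Python standard library.
import xml.etree.ElementTree as ET

# ElementTree exposes namespaced attributes as {namespace}localname
ORIGIN = "{urn:ietf:params:xml:ns:yang:ietf-origin}origin"

reply = """
<bgp xmlns:or="urn:ietf:params:xml:ns:yang:ietf-origin"
      or:origin="or:intended">
  <local-as>64501</local-as>
  <peer>
      <name>2001:db8::2:3</name>
      <local-port or:origin="or:system">60794</local-port>
      <remote-port or:origin="or:default">179</remote-port>
      <state>established</state>
  </peer>
</bgp>
"""

root = ET.fromstring(reply)
for elem in root.iter():
    origin = elem.attrib.get(ORIGIN)  # None when no origin attribute
    print(f"{elem.tag}: {origin}")
# local-port reports origin "or:system", remote-port "or:default",
# and state carries no origin attribute at all
```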

Configurations in <operational> are not updated immediately after users configure <running>. When a BGP peer is removed in <running>, it is also removed from <intended>, but not immediately from <operational>: BGP needs to shut down the peer, TCP needs to close the session, and that takes a while. In such cases, the change in <operational> happens with a delay.

Config examples

RFC 8342 Appendix C.3 (preconfig)

Physical interfaces on routers are added/removed by inserting/removing line cards. Even when <running> has configuration for a non-existing interface, it cannot be used by the system. However, by preparing it in <running> in advance of card insertion, that configuration automatically appears in <operational> when the card becomes available. This improves maintainability in enterprises.

<running> / <intended>

<interfaces>
  <interface>
    <name>et-0/0/0</name>
    <description>Test interface</description>
  </interface>
</interfaces>

<operational>

<interfaces xmlns:or="urn:ietf:params:xml:ns:yang:ietf-origin"
                 or:origin="or:intended">
  <interface>
    <name>et-0/0/0</name>
    <description>Test interface</description>
    <mtu or:origin="or:system">1500</mtu>
  </interface>
</interfaces>

New NETCONF operation

NMDA-capable NETCONF servers can process the new operations below.

<get-data>

Previously, <get-config> was used to get configuration data (config true) and <get> to get both configuration data (config true) and operational data (config false). The new <get-data> operation can retrieve data from any specified datastore, including <intended> and <operational>.

<edit-data>

It is used like the <edit-config> operation. With <edit-config>, the <config> node was defined as anyxml; with <edit-data>, the <config> node is anydata, so the data can be encoded in JSON, for example.
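
As a sketch of what a JSON-encoded body could look like (following the RFC 7951 JSON encoding of YANG data, and borrowing the test1 module used later in this article), built here with Python's standard json module:

```python
# Sketch: a JSON-encoded (RFC 7951 style) payload that could be carried
# inside <edit-data>; the module and app name come from the test1
# example later in this article.
import json

payload = {
    "test1:server": {          # module-qualified container name
        "app": [
            {"name": "app3"}   # one list entry keyed by "name"
        ]
    }
}

print(json.dumps(payload, indent=2))
```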

Usage of RESTCONF for NMDA

"ds" resource has been added to RESTCONF URI, and datastores can be specified as like the below.

For the running datastore, GET/POST/PUT/PATCH are available, while the intended datastore accepts only GET. The operational datastore is also read-only, but it additionally accepts POST for action invocation.

  • http://..../restconf/ds/ietf-datastores:running/
  • http://..../restconf/ds/ietf-datastores:intended/
  • http://..../restconf/ds/ietf-datastores:operational/
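
The per-datastore method rules above can be summarized in a small lookup table. This Python sketch is purely illustrative (is_allowed is not an NSO API):

```python
# Sketch of the per-datastore HTTP method rules described above.
# Datastore names follow the ietf-datastores module; the function
# itself is illustrative, not an NSO or RESTCONF API.

ALLOWED_METHODS = {
    "running": {"GET", "POST", "PUT", "PATCH"},
    "intended": {"GET"},
    # operational is read-only, but POST is allowed for action invocation
    "operational": {"GET", "POST"},
}

def is_allowed(datastore, method):
    return method in ALLOWED_METHODS.get(datastore, set())

print(is_allowed("intended", "PUT"))      # False: intended is read-only
print(is_allowed("operational", "POST"))  # True: action invocation
```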

NMDA with NSO

NMDA is supported from NSO 5.3, but the origin attribute is not supported currently. Let's try the feature.

NETCONF hello

The NSO NETCONF northbound interface responds to the <hello> exchange as below. The yang-library 1.1 capability is included.

$ netconf-console --hello
<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

//snip//

    <capability>urn:ietf:params:netconf:capability:yang-library:1.0?revision=2019-01-04&amp;module-set-id=3f1f4f98aba52a2290a6a29a1a8d9c0b</capability>
    <capability>urn:ietf:params:netconf:capability:yang-library:1.1?revision=2019-01-04&amp;content-id=3f1f4f98aba52a2290a6a29a1a8d9c0b</capability>
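
The revision and content-id parameters in those capability URIs can be pulled out with the Python standard library; a small sketch (the capability string is copied from the output above):

```python
# Sketch: splitting a yang-library capability URI advertised in <hello>
# into its URN and its query parameters.
from urllib.parse import parse_qs

cap = ("urn:ietf:params:netconf:capability:yang-library:1.1"
       "?revision=2019-01-04"
       "&content-id=3f1f4f98aba52a2290a6a29a1a8d9c0b")

urn, _, query = cap.partition("?")
params = parse_qs(query)

print(urn)                    # urn:ietf:params:netconf:capability:yang-library:1.1
print(params["revision"][0])  # 2019-01-04
print(params["content-id"][0])
```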

Sample NSO package for NMDA testing

The simple YANG model below was created for testing. The test1 module has a container "server" on which apps run. Each app has a "config false" leaf holding version information.

module test1 {
  namespace "http://com/example/test1";
  prefix test1;

  container server {
    list app {
      key name;
      leaf name {
        type string;
      }
      leaf version {
        config false;
        type string;
      }
    }
  }
}

app1, app2, app3 and app4 will be created in this example, but app2, app3 and app4 will be marked with the inactive attribute. On NSO, we can add the inactive annotation with the deactivate/activate commands from the CLI, while NETCONF/RESTCONF accept it as a normal attribute on nodes.

Creating app1 and app2 from CLI

admin@ncs# conf
Entering configuration mode terminal
admin@ncs(config)# server app app1
admin@ncs(config-app-app1)# server app app2
admin@ncs(config-app-app2)# exit
admin@ncs(config)# deactivate server app app2
admin@ncs(config)# commit dry-run outformat xml
result-xml {
    local-node {
        data <server xmlns="http://com/example/test1">
               <app>
                 <name>app1</name>
               </app>
               <app inactive="inactive">
                 <name>app2</name>
               </app>
             </server>
    }
}
admin@ncs(config)#
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)#

Creating app3 from NETCONF

First, here's the payload for the NETCONF <edit-data> operation. Note that the inactive attribute is included.

$ cat server-app3.xml
<server xmlns="http://com/example/test1">
  <app inactive="inactive">
    <name>app3</name>
  </app>
</server>

Then, netconf-console is used with the xml payload to create app3.

$ netconf-console --edit-data=server-app3.xml
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <ok/>
</rpc-reply>

Creating app4 from RESTCONF

In the same way, app4 is created with inactive attribute using RESTCONF.

$ cat server-app4.xml
<app inactive="inactive">
  <name>app4</name>
</app>
$ curl -X POST -u admin:admin -H 'Content-Type: application/yang-data+xml' \
http://localhost:8080/restconf/ds/ietf-datastores:running/test1:server \
-d @server-app4.xml
$

Set any data on /server/app/version

/server/app/version is a "config false" leaf. It is supposed to be set by the app itself, but in this example, let's set the data manually.

$ ncs_cmd -o -c 'set /server/app{app1}/version 1.0'
$ ncs_cmd -o -c 'set /server/app{app2}/version 1.0'
$ ncs_cmd -o -c 'set /server/app{app3}/version 1.0'
$ ncs_cmd -o -c 'set /server/app{app4}/version 1.0'

Status confirmation

So far, we have configured four apps in running; however, app2, app3 and app4 are in inactive status.

$ netconf-console --get-data --db running -x /server
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <data>
    <server xmlns="http://com/example/test1">
      <app>
        <name>app1</name>
      </app>
      <app inactive="inactive">
        <name>app2</name>
      </app>
      <app inactive="inactive">
        <name>app3</name>
      </app>
      <app inactive="inactive">
        <name>app4</name>
      </app>
    </server>
  </data>
</rpc-reply>

Next, let's check the intended datastore. As you can see, app2, app3 and app4 are not listed.

$ netconf-console --get-data --db intended -x /server
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <data>
    <server xmlns="http://com/example/test1">
      <app>
        <name>app1</name>
      </app>
    </server>
  </data>
</rpc-reply>

Then, let's check the operational datastore. Notice that the config false data is also shown.

$ netconf-console --get-data --db operational -x /server
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <data>
    <server xmlns="http://com/example/test1">
      <app>
        <name>app1</name>
        <version>1.0</version>
      </app>
    </server>
  </data>
</rpc-reply>

From the NCS CLI, we can see the running and operational datastores as below.

admin@ncs# show running-config server
server app app1
!
! Inactive
server app app2
!
! Inactive
server app app3
!
! Inactive
server app app4
!
admin@ncs#

To see the operational datastore, we look at the operational-state information. It is populated only when ncs.conf is configured to do so, as below.

<ncs-config xmlns="http://tail-f.com/yang/tailf-ncs-config">
  <cli>
    <nmda>
      <show-operational-state>true</show-operational-state>
    </nmda>
  </cli>
</ncs-config>

Now we can show it in the CLI.

admin@ncs# show operational-state server
server app app1
 version 1.0
!
admin@ncs#