1395 Views · 5 Helpful · 7 Replies
erdemk
Beginner

stacked services performance

Hello NSO community members,

I have 2 questions regarding stacked services. Any help is appreciated.

1/  This is related to the performance benefit of using stacked services.

The documentation says in several places that stacked services improve performance because the diff calculation is reduced. I will use the following thread as a reference for my question, as Jan Lindblad already gives an explanation there.

Jan Lindblad: That's right. When the parent service create() function runs, it will re-compute the input configs for the child services, and invoke each child's create() etc., rippling all the way down to the bottom of the dependency structure. Except -- and here's where the savings come in -- fastmap won't call the child's create() function if the input config data exactly matches the previously stored input config for that service. In this case, fastmap already knows what the child service will result in (same as last time, i.e. what's stored in the database), and just pulls that in. Since what's pulled in is exactly what's already stored in the database, it won't add anything to the diff that will need to be traversed in the end, so it saves (potentially a lot of) CPU cycles.
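As an illustration of the mechanism Jan describes, here is a conceptual Python sketch. This is NOT the NSO/fastmap API; all names are illustrative. It only models the idea that a child's create() is skipped when its input config matches what was stored on the previous run:

```python
# Conceptual sketch only -- NOT the NSO/fastmap API. All names here are
# illustrative. It models the idea that a child service's create() is
# skipped when its input config matches the previously stored inputs.

def refine_child(inputs, stored_inputs, stored_output, create):
    """Return the child's output config, calling create() only if the
    inputs differ from the previously stored inputs."""
    if inputs == stored_inputs:
        # Same inputs as last time: reuse the stored result. Since this
        # equals what is already in the database, it adds nothing to the
        # diff that must be traversed later.
        return stored_output
    return create(inputs)

calls = []
def create(inputs):
    calls.append(inputs)
    return {"device-config": inputs}

stored_in = {"logging1": "6.6.6.6", "logging2": "7.7.7.7"}
stored_out = {"device-config": stored_in}

# Unchanged inputs: create() is not called, the stored output is reused.
out = refine_child(dict(stored_in), stored_in, stored_out, create)
assert out is stored_out and calls == []

# Changed inputs: create() runs.
out = refine_child({"logging1": "8.8.8.8"}, stored_in, stored_out, create)
assert calls == [{"logging1": "8.8.8.8"}]
```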
 

So, in order to try this, I created a simple parent (fff) service and a simple child (ccc) service.
The parent (fff) service has inputs like this:
 
module fff {
  namespace "http://example.com/fff";
  prefix fff;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list fff {
    description "This is an RFS skeleton service";
    key hostname;
    leaf hostname {
      tailf:info "Device name is also service unique name";
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
    uses ncs:service-data;
    ncs:servicepoint fff-servicepoint;
    leaf logging1 { type string; }
    leaf logging2 { type string; }
  }
}

and pushes config to the child (ccc) service via this template:
 
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <ccc xmlns="http://example.com/ccc">
    <hostname>{/hostname}</hostname>
    <logging1>{/logging1}</logging1>
    <logging2>{/logging2}</logging2>
  </ccc>
</config-template>

The child service's YANG file looks like this:
 
module ccc {
  namespace "http://example.com/ccc";
  prefix ccc;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list ccc {
    description "This is an RFS skeleton service";
    key hostname;
    leaf hostname {
      tailf:info "Device name is also service unique name";
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
    uses ncs:service-data;
    ncs:servicepoint ccc-servicepoint;
    leaf logging1 { type string; }
    leaf logging2 { type string; }
  }
}

and the child service's template is:
 
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/hostname}</name>
      <config>
          <logging xmlns="urn:ios">
              <host tags="replace">
                  <ipv4>
                      <host>{/logging1}</host>
                  </ipv4>
                  <ipv4>
                      <host>{/logging2}</host>
                  </ipv4>
              </host>
          </logging>
      </config>
    </device>
  </devices>
</config-template>

So, before I commit the parent service, the child service is already committed with the SAME loggingx values, but at the device-manager level one of the loggingx values has been deleted.
When I do commit dry-run, the child service calculates the diff for the deleted logging entry:
 
admin@ncs(config-fff-Cisco_Access_1)# commit dry-run
cli {
    local-node {
        data +fff Cisco_Access_1 {
             +    logging1 6.6.6.6;
             +    logging2 7.7.7.7;
             +}
              devices {
                  device Cisco_Access_1 {
                      config {
                          logging {
                              host {
             +                    ipv4 7.7.7.7 {
             +                    }
                              }
                          }
                      }
                  }
              }
             +bbb Cisco_Access_1 {
             +}
    }
}
Going further, I commit. Then I again delete one of the loggingx entries via the device manager, do re-deploy dry-run, and it again shows me the diff:
admin@ncs(config-fff-Cisco_Access_1)# re-deploy dry-run 
cli {
    local-node {
        data  devices {
                   device Cisco_Access_1 {
                       config {
                           logging {
                               host {
              +                    ipv4 6.6.6.6 {
              +                    }
                               }
                           }
                       }
                   }
               }
              
    }
}
So, can someone explain how we save on diff calculation with stacked services? I was expecting that, for this child (ccc) service, diff calculation would not take place during commit, or at least not during re-deploy, as per Jan Lindblad's explanation above.


2/  When I commit service fff and then do deep-check-sync, I get the following in return:
 
Error: No forward diff found for this service. Either /services/global-settings/collect-forward-diff is false, or the forward diff has become invalid. A re-deploy of the service will correct the latter.
 
I checked that collect-forward-diff is set to true, and I did a re-deploy. The deep-check-sync result is again the same.
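For reference, the setting named in the error message lives at /services/global-settings/collect-forward-diff and can be inspected from the NSO CLI roughly like this (illustrative; exact command syntax may vary between NSO versions):

```
admin@ncs# show running-config services global-settings collect-forward-diff
```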

Can someone tell me why I cannot do deep-check-sync? Am I doing something wrong, or is this expected behavior?

Many thanks in advance.
7 Replies
erdemk
Beginner

Any input is appreciated.

Thanks

rogaglia
Cisco Employee

Imagine the common situation where an NSO service modifies 10 devices, and let's say each device has 1000 lines of config to be modified (10,000 lines in total in the get-modifications output).

When you update the service in a way that touches only one device, NSO would need to calculate the diff for all 10,000 lines, even though for 9 of the 10 devices there are no changes.

Now let's say you reorganize your code and use stacked services, where the changes for a given device are isolated in a child service instance with 10 input parameters. For any update (or re-deploy) call that affects one device, for the 9 other devices you will only be evaluating 9x10 = 90 leaves instead of 9x1000 = 9000 lines.

There are many similar examples, but the idea is always the same: isolate things that do not typically change together. The benefit is in service updates/re-deploys rather than in create or delete.
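The arithmetic above can be checked with a quick back-of-the-envelope calculation (numbers taken directly from the example):

```python
# Rough comparison of how much must be re-evaluated on an update that
# touches one device out of ten, using the numbers from the example above.

devices = 10
lines_per_device = 1000   # monolithic service: full device config inline
inputs_per_child = 10     # stacked: each child service has ~10 input leaves

# Monolithic: the 9 untouched devices' config is re-evaluated in full.
monolithic = (devices - 1) * lines_per_device

# Stacked: only the 9 untouched children's input leaves are compared.
stacked = (devices - 1) * inputs_per_child

print(monolithic)  # 9000
print(stacked)     # 90
```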

Hello Roque,

Thank you very much for your reply.

I understand and appreciate the idea. My question is how to do it. I couldn't figure out how to arrange things so that, when I re-deploy the upper service, the lower service is only diff-checked when the lower service's input parameters are modified. I tried to explain my attempt above.

Or maybe I am missing something very obvious here.

Thanks and regards.

rogaglia
Cisco Employee

I believe your problem is that you have a 1:1 relationship between top service instances and lower service instances. You need to make that relationship 1:N, so that one top service creates many lower services and you can play with the different combinations.

 

Maybe something like:

```

module fff {
  namespace "http://example.com/fff";
  prefix fff;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list fff {
    description "This is an RFS skeleton service";
    key hostname;
    leaf hostname {
      type string;
    }
    list devices {
      key name;
      leaf name {
        tailf:info "Device name is also service unique name";
        type leafref {
          path "/ncs:devices/ncs:device/ncs:name";
        }
      }
      leaf log { type string; }
    }
    uses ncs:service-data;
    ncs:servicepoint fff-servicepoint;
  }
}
```

Hello Roque,

Thanks for your response.

I have tried the two scenarios below to manage multiple lower service instances via the upper service.
I still couldn't get the expected behavior: whenever I modify the lower service's resulting config via the device manager (the service instance itself is unchanged) and re-deploy the upper service instance, the lower service instance is corrected.

On the other hand, there is this explanation in the Development Manual:

"Also the reactive-re-deploy will make a "shallow" re-deploy in the sense that underlying stacked services will not be re-deployed. This "shallow" feature is important when stacked services are used for performance optimization reasons."

I tried with the shallow option, and it does what we expect in both scenarios:

- When the lower service instance is not modified but the resulting config is modified at the device-manager level, a shallow re-deploy of the upper service does not correct the device manager.
- When the lower service instance is modified (so the resulting config changes accordingly), a shallow re-deploy of the upper service corrects the lower service instance (and hence the device manager).

So, to get the desired behavior, do we need to use shallow re-deploy? Or is there still a way, or a specific scenario, to achieve it without shallow re-deploy that I am missing?


Scenario 1: simulation of an L2/L3 VPN service.

Upper service:

module fff2 {
  // namespace/prefix assumed here (omitted in the original post)
  namespace "http://example.com/fff2";
  prefix fff2;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list fff2 {
    key name;
    leaf name {
      tailf:info "Unique service id";
      tailf:cli-allow-range;
      type string;
    }
    uses ncs:service-data;
    ncs:servicepoint fff2-servicepoint;
    list devices {
      key name;
      leaf name {
        tailf:info "Device name is also service unique name";
        type leafref {
          path "/ncs:devices/ncs:device/ncs:name";
        }
      }
      leaf ntp1 { type string; }
      leaf ntp2 { type string; }
    }
  }
}

import ncs
from ncs.application import Service

class ServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        # Apply the template once per device in the service's device list
        for dvc in service.devices:
            vars = ncs.template.Variables()
            vars.add('DEVICE', dvc.name)
            vars.add('NTP1', dvc.ntp1)
            vars.add('NTP2', dvc.ntp2)
            template = ncs.template.Template(service)
            template.apply('fff2-template', vars)

<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <bbb xmlns="http://example.com/bbb">
    <hostname>{$DEVICE}</hostname>
    <ntp1>{$NTP1}</ntp1>
    <ntp2>{$NTP2}</ntp2>
  </bbb>
</config-template>


Scenario 2: simulation of a device management service.

Upper service:

module fff {
  namespace "http://example.com/fff";
  prefix fff;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list fff {
    key hostname;
    leaf hostname {
      tailf:info "Device name is also service unique name";
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
    uses ncs:service-data;
    ncs:servicepoint fff-servicepoint;
    leaf ntp1 { type string; }
    leaf ntp2 { type string; }
    leaf logging1 { type string; }
    leaf logging2 { type string; }
  }
}

import ncs
from ncs.application import Service

class ServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        vars = ncs.template.Variables()
        vars.add('DUMMY', '127.0.0.1')
        template = ncs.template.Template(service)
        template.apply('fff-template', vars)

<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <bbb xmlns="http://example.com/bbb">
    <hostname>{/hostname}</hostname>
    <ntp1>{/ntp1}</ntp1>
    <ntp2>{/ntp2}</ntp2>
  </bbb>
  <ccc xmlns="http://example.com/ccc">
    <hostname>{/hostname}</hostname>
    <logging1>{/logging1}</logging1>
    <logging2>{/logging2}</logging2>
  </ccc>
</config-template>

Lower services for both scenarios:

module bbb {
  namespace "http://example.com/bbb";
  prefix bbb;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list bbb {
    description "This is an RFS skeleton service";
    key hostname;
    leaf hostname {
      tailf:info "Device name is also service unique name";
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
    uses ncs:service-data;
    ncs:servicepoint bbb-servicepoint;
    leaf ntp1 { type string; }
    leaf ntp2 { type string; }
  }
}
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/hostname}</name>
      <config>
          <ntp xmlns="urn:ios">
            <server tags="replace">
              <peer-list>
                <name>{/ntp1}</name>
                <prefer/>
              </peer-list>
              <peer-list>
                <name>{/ntp2}</name>
              </peer-list>
            </server>
          </ntp>
      </config>
    </device>
  </devices>
</config-template>

module ccc {
  namespace "http://example.com/ccc";
  prefix ccc;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  list ccc {
    key hostname;
    leaf hostname {
      tailf:info "Device name is also service unique name";
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
    uses ncs:service-data;
    ncs:servicepoint ccc-servicepoint;
    leaf logging1 { type string; }
    leaf logging2 { type string; }
  }
}
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/hostname}</name>
      <config>
          <logging xmlns="urn:ios">
              <host tags="replace">
                  <ipv4>
                      <host>{/logging1}</host>
                  </ipv4>
                  <ipv4>
                      <host>{/logging2}</host>
                  </ipv4>
              </host>
          </logging>
      </config>
    </device>
  </devices>
</config-template>

Thanks and regards,

The important point is whether you were able to create the relationship where a single top instance has many lower instances (what I called 1:N). If so, you will see that when you update the top service, only the create() code of the instances that are modified is actually called.

When dealing with re-deploy, there are two behaviors:

- deep: re-deploys all the service instances involved (N+1), because you want to make sure the full tree is in sync all the way down to the device configuration.

- shallow: only the top service is re-deployed, not the lower services. However, if the re-deploy implies changes in some of the lower services, the create() code of those services is run as well (to capture all the effects the re-deploy will have). That is why you are seeing the changes being applied. Note that with the shallow option you may still be out of sync with the device config, since the instances with no modifications were not checked.

Of course, shallow should be more performant than deep.
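For reference, both variants can be invoked from the NSO CLI roughly like this, using the fff2 service from the scenarios above (illustrative; exact syntax may vary between NSO versions):

```
admin@ncs(config)# fff2 vpn1 re-deploy dry-run
admin@ncs(config)# fff2 vpn1 re-deploy shallow dry-run
```

The first form is the default deep re-deploy of the whole stack; the second adds the shallow option so only the top service is re-deployed.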

 

BTW, if possible, you want to avoid that situation and always drive modifications from top to bottom, avoiding out-of-sync states between service instances.
Hello Roque,

Thank you very much for your explanation.

Regards