KJ Rossavik
Cisco Employee

In part I and part II of this blog we discussed scale and performance requirements. In this third instalment we look at baselining, profiling, optimising and monitoring scale and performance.

 

Baselining Scale and Performance  

You have defined your requirements, and you have estimated an NSO server size. How do you verify that your automation solution can support your requirements? 

Before the automation solution is deployed, it needs to be tested to ensure that it can deliver the performance needed at the scale required. Note that although it is useful to test the scale and performance of individual components, this is not a substitute for end-to-end solution testing. The end-to-end solution is likely to include other systems, such as order entry systems, resource management systems (e.g. IPAM) and workflow management systems. 

You will probably not have the same size of network in the lab as in the production environment, so most of the network will need to be simulated. Remember to take into consideration the difference in behaviour between real and simulated devices, e.g. with respect to response time. 
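As an illustration, the following sketch uses the ncs-netsim tool (shipped with NSO) to create and start a set of simulated devices and load them into NSO. The NED package path, device count and prefix are assumptions; adapt them to your own packages and target scale.

```python
# Sketch: build and start a simulated lab network with ncs-netsim,
# then load the generated device entries into NSO.
import subprocess

NED_PACKAGE = "./packages/router-nc-1.0"  # assumption: replace with your NED package
DEVICE_COUNT = 250                        # assumption: size towards your target scale
DEVICE_PREFIX = "lab-rtr"                 # devices become lab-rtr0, lab-rtr1, ...

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create and start the simulated network.
run(["ncs-netsim", "create-network", NED_PACKAGE, str(DEVICE_COUNT), DEVICE_PREFIX])
run(["ncs-netsim", "start"])

# Generate device entries for the simulated network and merge them into CDB.
with open("netsim-devices.xml", "w") as f:
    subprocess.run(["ncs-netsim", "ncs-xml-init"], stdout=f, check=True)
run(["ncs_load", "-l", "-m", "netsim-devices.xml"])
```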

Another point is to carry out scale and performance testing with negative scenarios. How does the solution deal with, for example, a device being unreachable, or a device having unexpected configuration? 
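As a sketch of one such negative test in a netsim-based lab, you can stop a simulated device to make it unreachable, run the operation under test, and then bring the device back. The device name and the operation placeholder below are assumptions.

```python
# Sketch: make a simulated device unreachable during a test by stopping
# its netsim instance, then restore it afterwards.
import subprocess

DEVICE = "lab-rtr0"  # assumption: one of the simulated devices created earlier

def run_operation_under_test():
    # Placeholder: drive the operation under test here, e.g. a service
    # deployment through your order entry system or NSO's northbound APIs.
    pass

subprocess.run(["ncs-netsim", "stop", DEVICE], check=True)
try:
    run_operation_under_test()  # observe how the solution handles the unreachable device
finally:
    subprocess.run(["ncs-netsim", "start", DEVICE], check=True)
```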

 

System Profiling and Optimisation 

If you see an operation that is not performing as required, you want to find out where time is spent so that you can decide where to look for optimisation opportunities. It is important to do this across the entire solution, not only for individual components. Individual components can each be performant, yet other performance issues may arise when they are assembled into a solution. 

NSO offers a lot of visibility into where time is spent in the system. This is discussed in detail in Kristian Larsson's document. 
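For example, when the NSO progress trace is written in CSV format, a short script can summarise where time is spent. The file name and the DURATION/MESSAGE column names below are assumptions; check them against the header of the trace file produced by your NSO version.

```python
# Sketch: summarise an NSO progress trace (CSV format) by message,
# to see which events accumulate the most time.
import csv
from collections import defaultdict

TRACE_FILE = "progress-trace.csv"  # assumption: path to the exported progress trace

totals = defaultdict(float)  # accumulated duration per trace message
with open(TRACE_FILE, newline="") as f:
    for row in csv.DictReader(f):
        duration = (row.get("DURATION") or "").strip()
        if duration:  # only rows that carry a duration value
            totals[row.get("MESSAGE", "unknown")] += float(duration)

# Print the ten most time-consuming messages (units as reported in the trace).
for message, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{total:12.3f}  {message}")
```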


 

Monitoring System Performance 

The solution deployed in the production environment needs to be monitored for performance so that you get early warning when the system approaches the point where it can no longer meet the scale and performance requirements. This involves monitoring the performance of the end-to-end solution as well as of individual components such as an NSO server and its applications. 

End-to-end performance needs to be monitored from the top-level system, e.g. an order entry system. Here you can monitor throughput and operation duration, and plot them against scale to check for non-linear behaviour. 
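As a simple illustration of that check, the sketch below compares how operation duration grows relative to scale; the measurements are made-up placeholders for numbers you would collect from your own top-level system.

```python
# Sketch: flag super-linear growth of operation duration as scale increases.
# The (scale, average duration in seconds) pairs are illustrative only.
measurements = [
    (1_000, 2.1),
    (5_000, 2.6),
    (10_000, 3.4),
    (50_000, 120.0),
]

base_scale, base_duration = measurements[0]
for scale, duration in measurements[1:]:
    scale_factor = scale / base_scale
    duration_factor = duration / base_duration
    note = "worse than linear" if duration_factor > scale_factor else "ok"
    print(f"scale x{scale_factor:>4.0f}: duration x{duration_factor:>5.1f}  ({note})")
```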

The NSO server needs to be monitored at multiple levels:  

  • Operating system – monitor CPU, memory and disk usage (see the sketch after this list). 
  • NSO platform – NSO exposes metrics that can be monitored; these are specified in the NSO Administrator Guide, in the System Monitoring chapter. Many service applications are now implemented as nano services, and there is a lot of data to be mined from the NSO plan. There is also a lot of activity in the area of observability, so watch this space for increased monitoring capabilities. 
  • Application-specific monitoring – the NSO service application or function pack may have application-specific attributes that can be monitored. The NSO plan now supports setting a threshold for how long the plan should take to execute and raising an alarm if the threshold is breached, which can be used as a mechanism for monitoring service application performance. 
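
As a minimal sketch of the operating-system level (first bullet above), the snippet below samples CPU, memory and disk usage with the third-party psutil library. The thresholds and sampling interval are assumptions; in practice you would feed these values into your existing monitoring stack rather than print them.

```python
# Sketch: periodically sample OS-level health of an NSO server host with psutil.
# Thresholds and interval are assumptions; adapt or export to your monitoring stack.
import time

import psutil  # third-party: pip install psutil

CPU_WARN, MEM_WARN, DISK_WARN = 80.0, 85.0, 90.0  # warning thresholds in percent
INTERVAL = 30                                     # seconds between samples

while True:
    cpu = psutil.cpu_percent(interval=1)          # CPU utilisation over a 1 s window
    mem = psutil.virtual_memory().percent         # RAM usage
    disk = psutil.disk_usage("/").percent         # disk usage of the root filesystem
    warnings = []
    if cpu > CPU_WARN:
        warnings.append(f"CPU high ({cpu:.0f}%)")
    if mem > MEM_WARN:
        warnings.append(f"memory high ({mem:.0f}%)")
    if disk > DISK_WARN:
        warnings.append(f"disk high ({disk:.0f}%)")
    status = "WARNING: " + ", ".join(warnings) if warnings else "ok"
    print(f"cpu={cpu:.0f}% mem={mem:.0f}% disk={disk:.0f}%  {status}")
    time.sleep(INTERVAL)
```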

See also: 

  • NSO Administrator Guide, System Monitoring chapter 

In part IV of this blog we will look at some architectures to consider if your solution needs further gains in scale and/or performance.
