I know there is some great documentation here from the NSO and Cisco AS teams on their development workflows, frameworks and CI/CD processes, but I was curious to ask what everyone in the community is doing.
From a tactical NSO package development team level, what is your workflow? What tools are your teams using? What is working and what is not? What do you really like, and what do you need to improve? How mature do you think your process is?
Wanted to ask and see what approaches the many different teams here are taking!
For my team, we are beginning our journey into more formalized NSO development and have been working from a GitLab-based workflow.
Projects are created, issues are entered, and development follows a gitflow branching strategy.
GitLab CI pipelines load the packages and run tests inside NSO dev & stage server VMs.
Tests are usually Python unittests covering both functional checks and service creation, modification and removal.
For unittesting we typically use the Python ncs library to open transactions and perform the service modifications (along with functional tests) against both netsims and physical lab devices.
For our prod deployment we leverage 'push on green' and gating to push package changes to prod VMs during appropriate hours.
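To give a feel for the pipeline shape (stage names, script paths and runner tags below are illustrative assumptions, not our actual config):

```yaml
# Sketch of a .gitlab-ci.yml for the workflow described above.
stages:
  - build
  - test
  - deploy

build-packages:
  stage: build
  script:
    - make -C packages all        # compile YANG models and package code

test-netsim:
  stage: test
  script:
    - make start-netsim           # spin up netsim devices
    - python -m unittest discover tests/
  tags:
    - nso-dev                     # runner on the NSO dev/stage VM

deploy-prod:
  stage: deploy
  when: manual                    # the gate: a human pushes on green
  only:
    - master
  script:
    - ./scripts/push-packages.sh  # reload packages on the prod VMs
```

The `when: manual` gate is what implements "push on green" for us: the deploy job only becomes runnable once the test stage passes, and someone clicks it during the approved window.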
Our process is certainly not mature yet and we are constantly learning and improving.
Curious to see other teams work flows, best practices and challenges!
That question requires a meeting or two
We are developing service models with NSO too; we use tools like Jenkins, GitHub (multibranch plugin) and Ansible.
(Initial code) -> develop locally -> push to Git -> build & test -> commit to repo -> trigger Jenkins build
We are using 'Lux' for unit testing: https://github.com/hawk/lux
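For anyone who hasn't seen Lux: it drives one or more shells and matches their output, so an NSO test can script the CLI directly. A minimal sketch (the `myservice` model and device name are made-up placeholders):

```
[doc Create a hypothetical 'myservice' instance via the NSO CLI]

[shell cli]
    !ncs_cli -C -u admin
    ???admin@ncs#
    !config
    !myservice svc-test1 device netsim-0
    !commit
    ???Commit complete
    !end

[cleanup]
    !config
    !no myservice svc-test1
    !commit
```

Lines starting with `!` are sent to the shell; `???` lines wait for that verbatim output before continuing, and `[cleanup]` runs even if a match fails.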
Be very careful with NetSim, it will not fit all your needs, so please use real devices too.
Lux makes it easy to write tests but hard to do a smooth CI/CD process.
Please send me an email and I'll share some testing workflows inside Cisco.
When you say "NetSim, it breaks a lot", could you be more specific? Isn't the problem with NetSim that it follows the YANG perfectly? That it's "too perfect"? ;-) Or did you see something else?
I'm working on the NSO Function Packs, e.g. NSO PnP Server, tailf-hcc, etc.
We're using Jenkins Blue Ocean with Pipelines and Jenkinsfiles. Using Jenkinsfiles, we get test jobs for every branch that is created which is very nice.
We test on every commit to the master branch and on every commit to pull requests. This is also very helpful for reviewing pull requests, because we've set up Jenkins and our Bitbucket server so that you can see the test results for every commit in the pull request.
When we are ready for release we update CHANGES and versions, before tagging the master branch with the release version. This triggers build and test. Delivery needs to be signed off by an actual person with access to the delivery servers; this is done using Jenkins jobs.
We're in the process of moving from another way of doing this to Pipelines and Jenkinsfiles, we're not all the way there yet but it's getting closer!
We test against every NSO release that is supported, and we also re-run the tests when a new NSO version is released (even if nothing has changed in the function pack).
Before an NSO release comes out we also test against NSO pre-release builds.
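Testing one function pack against several NSO releases maps nicely onto a declarative Pipeline matrix. A sketch (stage names and the version list are assumptions, not our actual Jenkinsfile):

```groovy
// Sketch of a Jenkinsfile testing across multiple NSO versions.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make all' }
        }
        stage('Test') {
            matrix {
                axes {
                    axis {
                        name 'NSO_VERSION'
                        values '5.7', '5.8', '6.1'   // every supported NSO release
                    }
                }
                stages {
                    stage('Run tests') {
                        steps { sh "make test NSO_VERSION=${NSO_VERSION}" }
                    }
                }
            }
        }
    }
    post {
        always { junit 'reports/**/*.xml' }   // surfaces results per commit/PR
    }
}
```

Because the Jenkinsfile lives in the repo, every branch and pull request automatically gets the same matrix of jobs.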
edit: Added information about testing against all types of NSO releases.
Some transparency to the process is useful too, in the form of a dashboard / summary of status to understand that a particular release was tested, approved and deployed "days" prior to the end-customer having an issue.
We're tracking the current status on a big TV in our team area. The screen shows the status of tests against master branches, tests against NSO pre-releases, and how the jobs building and signing our own function packs have gone.
Since many teams that use the function packs tweak the code and/or find problems in use cases we haven't thought of ourselves, we (try to) encourage contributions.
We do this by allowing pull requests from anyone with access to the tail-f repo, following guidelines listed on our team wiki (not sure if that is a good place): https://confluence.tail-f.com/display/TAILF/Contributing
Also, the Jenkins instance should be readable for everyone, so you can check test results: https://jenkins5-stg.lab.tail-f.com/
We try to release every two weeks, maybe we should use the NSO Field Portal more to announce when there's a new release?