07-06-2017 12:03 PM - edited 03-01-2019 03:54 AM
I believe I have heard mention of using Ansible to manage NSO development and runtime environments, and I am interested in capturing any learnings from that. This would include tasks like installing and upgrading the NSO runtime environment and packages, and perhaps even using Ansible to manage the lifecycle of clusters or LSO deployments across servers.
07-06-2017 05:08 PM
As part of branchInfra we have a basic flow where we use Ansible for NSO and package installation. It is just a basic framework.
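A minimal sketch of that kind of flow might look like the following (the installer file name, paths, and directory layout are assumptions for illustration, not the actual branchInfra playbook):

- name: Install NSO and service packages (illustrative sketch only)
  hosts: nso_servers
  vars:
    nso_installer: nso-4.4.linux.x86_64.installer.bin   # assumed installer file name
    nso_install_dir: /opt/nso/current                   # assumed install directory
  tasks:
    - name: Copy the NSO installer to the target
      copy:
        src: "files/{{ nso_installer }}"
        dest: "/tmp/{{ nso_installer }}"
        mode: "0755"

    - name: Run the NSO installer if it has not been run yet
      command: "/tmp/{{ nso_installer }} --local-install {{ nso_install_dir }}"
      args:
        creates: "{{ nso_install_dir }}/ncsrc"

    - name: Copy service packages into the packages directory (assumed layout)
      copy:
        src: packages/
        dest: "{{ nso_install_dir }}/packages/"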
07-06-2017 11:35 PM
Very cool. Do you have any playbooks that you would be willing to share?
07-06-2017 07:03 PM
We have used Ansible for deployment of NSO on several projects. We started to use it because a customer was on an NCS version that did not yet fully support nct, so we built up a set of playbooks that did everything from installing the packages on the target server, to changing the permissions and groups on the NSO files on the system, to reloading the packages on a running system.
Some customers, especially those on the latest NSO releases, will upgrade NSO with each new release, so Ansible is great for that. Those who have a longer validation cycle for NSO do not want the playbook that upgrades NSO, because it might be run by accident.
We use it to install NSO in customer environments and on local VMs for testing. It is as simple as changing the inventory file that lists the hosts that should get the deployment.
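For example, a trivial inventory could be nothing more than this (hostnames are made up):

# Example inventory file; swapping this file (or passing a different one with -i
# to ansible-playbook) points the same playbooks at lab VMs or a customer environment.
[nso_servers]
nso-lab-1.example.com
nso-lab-2.example.com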
Because Ansible is essentially a wrapper around scripts with some special helpers, it can be adapted to do just about anything, and frequently you will find that the Ansible folks have built in a module so that you don't have to do it with pure scripting. For example, here is a playbook for uploading packages to the NSO server. The comments document the behavior, and you can see that Ansible has built-in rsync, unzip, and (somewhat odd syntax) file-delete capabilities.
---
- name: Copy NSO and supporting packages and resources to the target
  hosts: all
  tasks:
    #
    # Synchronize the local distribution dir with the remote distribution dir.
    # This will pick up everything in distribution (including packages) and
    # make sure it is on the target. The synchronize command is a wrapper
    # around rsync so it is pretty efficient about only sending when things change.
    #
    - name: Synchronize the target distribution directory with the local
      synchronize: src={{pkgsrc}} dest={{pkgdst}} recursive=yes delete=yes

    - name: Copy the resources to the target and expand
      unarchive: src={{pkgsrc}}/resources.tar.gz dest={{pkgdst}}/distribution

    - name: Delete the .tar.gz on the target
      file: path={{pkgdst}}/distribution/resources.tar.gz state=absent
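A rough sketch of the permission and package-reload steps mentioned above could look something like this (the user, group, paths, and reload approach are assumptions about a typical setup, not our actual playbooks):

- name: Fix ownership and reload packages on a running NSO (sketch only)
  hosts: all
  tasks:
    - name: Make sure the NSO runtime files are owned by the NSO user and group
      file:
        path: /opt/nso-run              # assumed runtime directory
        owner: nsoadmin                 # assumed NSO user
        group: ncsadmin                 # assumed NSO group
        state: directory
        recurse: yes

    - name: Reload packages on the running system
      shell: echo "packages reload" | /opt/nso/current/bin/ncs_cli -C -u admin
      become: true
      become_user: nsoadmin             # assumed NSO user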
07-06-2017 11:39 PM
Scott,
Very cool, this is exactly what I was looking for. Would you be interested in contributing some of what you have to a DevNet GitHub repo so we can see whether we can build a reusable, shareable set?
07-07-2017 12:39 AM
love it
07-10-2017 06:38 PM
We can contribute what we have as a base. I'll make it a bit more generic and submit it to GitHub within a week or so. Just like the other examples we have seen so far, Ansible is very flexible and can do just about anything in several different ways.
07-24-2017 04:01 AM
Sorry for the late reply, but: great!
07-07-2017 06:00 AM
NGENA also uses Ansible for deployments. I think it would be great to standardize on some basic playbooks, especially if someone who is better at Ansible than I am can make them fairly modular. It sounds like Scott's work is a good start.
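One way to get that modularity would be to split the playbooks into Ansible roles behind a thin top-level playbook, roughly like this (the role names are purely hypothetical):

# site.yml -- hypothetical top-level playbook assembled from roles
- name: Deploy and configure NSO
  hosts: nso_servers
  roles:
    - nso_install      # install or upgrade the NSO runtime
    - nso_packages     # sync and reload service packages
    - nso_ha           # optional HA / Quagga setup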
07-10-2017 05:56 PM
After deploying NSO in a fresh install via Ansible, I needed to run some NSO commands to set up Quagga and NSO HA components.
To do this, I wrote all the commands out to a file using the Ansible blockinfile module with variables, then executed ncs_cli as the nsoadmin (ncsadmin) user, which ran the commands.
Example as follows:
- name: create NSO command file
  blockinfile:
    path: "{{ nso_commands_file }}"
    create: yes
    mode: "0600"
    owner: "{{ nso_user }}"
    marker: "! {mark} Ansible section"
    block: |
      packages reload
      conf t
      devices template hcc-master config quagga-bgp:hostname MASTER
      devices template hcc-slave config quagga-bgp:hostname SLAVE
      devices template hcc-failover-master config quagga-bgp:hostname MASTER
      devices template hcc-none config quagga-bgp:hostname {{ nso_ha_role }}
      commit
      !
      ha bgp anycast-path-min 2 anycast-prefix {{ nso_ha_anycast_ip }}/{{ nso_ha_anycast_ip_bits }}
      ha token {{ rr_bgp_pass }} local-user {{ nso_user }}
      commit
      !
      ha failure-limit 10
      ha interval 4
      ha member {{ nso_ha_a_hostname }} address {{ nso_ha_a_mgt_ip }} default-ha-role master quagga-device nso1-quagga cluster-manager true
      ha member {{ nso_ha_b_hostname }} address {{ nso_ha_b_mgt_ip }} default-ha-role slave quagga-device nso2-quagga failover-master true
      commit
      !
      exit
      exit
      ha commands activate

# You can use shell to run other executables to perform actions inline
- name: Run ncs_cli and load in devices
  command: /opt/nso/current/bin/ncs_cli -C {{ nso_commands_file }}
  become_user: "{{ nso_user }}"
  become: true

# clean up temp file
- name: Remove NSO temp file
  file: path="{{ nso_commands_file }}" state=absent
  when: debug == 0