
Implementing Layered Service Architecture with NSO Official Docker Image (Single Version)

[Figure: Architecture]

YouTube Tutorial: LSA with Containerized NSO

Basic Architecture:

At an abstract level, the containerized NSO workflow can be divided into two parts, i.e. production and development.

Production containers are used for running NSO instances, while development containers are used solely for developing and building the service packages.

In this example, our NSO setup contains four nodes:

  • nso_upper : Top-level CFS node that propagates the upper service logic to the RFS nodes under it. It is of type Production.
  • nso_lower_1 : Low-level RFS node that handles service configuration for devices in the "ex" namespace. It is of type Production.
  • nso_lower_2 : Low-level RFS node that handles service configuration for devices in the "fx" namespace. It is of type Production.
  • nso_dev : A container with volume mounts of the service packages of all the production containers listed above, but without any running NSO instance (it is not intended to run NSO). It is of type Development and is therefore used only to develop and build service packages.

Instructions for the self-demonstrating kit:

Ensure that the Docker engine and its associated services are running.
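
To quickly confirm this, you can check that the engine responds and that Compose is installed (the make targets drive docker-compose, as the make start output further below shows):

docker info --format '{{.ServerVersion}}'   # succeeds only if the engine is running
docker-compose version                      # Compose is used by the make targets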

Clone the repo and change into the repo folder on your system.

In the images directory, store the NSO docker images of both production and development types for your desired NSO version, available from the Cisco software download center. Please note that the production and development images must be of the same NSO version. The router package supplied in package-store is available from NSO-installation-directory/examples.ncs/getting-started/developing-with-ncs/packages, and the cfs-vlan and rfs-vlan packages are available in NSO-installation-directory/examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment/package-store.
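
As a rough sketch, the working tree before building might look as follows. The image file names are taken from the build output below; the copy commands assume a local NSO installation referenced by an NCS_DIR variable and are only needed if your clone's package-store does not already contain the packages:

# Expected contents of the images directory (for VER=6.2.3, ARCH=x86_64)
ls images/
# nso-6.2.3.container-image-dev.linux.x86_64.tar.gz
# nso-6.2.3.container-image-prod.linux.x86_64.tar.gz

# Populate package-store from a local NSO installation, if needed
cp -r $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/packages/router package-store/
cp -r $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment/package-store/cfs-vlan package-store/
cp -r $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment/package-store/rfs-vlan package-store/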

Run make build [VER=<NSO version> (6.2.3 is default)] [ARCH=<your CPU architecture> (x86_64 is default)]. You will see output like the following; the build takes a while to complete:

docker load -i ./images/nso-6.2.3.container-image-dev.linux.x86_64.tar.gz
Loaded image: cisco-nso-dev:6.2.3
docker load -i ./images/nso-6.2.3.container-image-prod.linux.x86_64.tar.gz
Loaded image: cisco-nso-prod:6.2.3
docker build -t mod-nso-prod:6.2.3  --no-cache --network=host --build-arg type="prod"  --build-arg ver=6.2.3  --file Dockerfile .
...
Sending build context to Docker daemon  1.389GB
Step 1/6 : ARG  type ver
     ..... truncated for brevity

The build target starts by loading the docker images, creates containers for each node, and builds the file structure. Using the dev container and the rfs-vlan package in the package-store directory, it creates the LSA NETCONF NED package rfs-vlan-ned for the production nodes to set up the LSA network, with the command: ncs-make-package --no-netsim --no-java --no-python --lsa-netconf-ned /nso/UPPER/packages/rfs-vlan/src/yang --dest /nso/UPPER/packages/rfs-vlan-ned --build rfs-vlan-ned. Finally, it compiles all the packages.

It also copies the configurations required for the nodes and devices during CDB boot.
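
Before starting the containers, you can optionally confirm that the modified images were built (image names as printed by the build above):

docker images | grep mod-nso
# should list mod-nso-prod:6.2.3 and mod-nso-dev:6.2.3 (or your chosen VER)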

After it's done, run make start

% make start
export VER=6.2.3 ; docker-compose up UPPER LOWER-1 LOWER-2 BUILD-NSO-PKGS -d
[+] Running 3/1
 ⠙ Network docker_lsa_new_NSO-net                                                                                                                         Created0.2s 
 ⠋ Container nso_lower_2                                                                                                                                  Created0.1s 
 ⠋ Container nso_upper                                                                                                                                    Created0.1s 
 ⠋ Container nso-dev                                                                             [+] Running 4/8                                          Creating0.1s 
 ........

This starts all containers and runs the startup script init.sh in each container. Additionally, on the lower NSO nodes it creates a set of 3 devices per node using the router NED and starts them.

Now that the environment setup is complete, we will verify that it has been set up successfully.

List the running containers using:

docker ps | grep nso

Example:

% docker ps --format 'table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Status}}' | grep nso
26b8a3dac60d   nso-dev                  mod-nso-dev:6.2.3            Up 28 minutes
a7f597e1a39e   nso_lower_2              mod-nso-prod:6.2.3           Up 28 minutes (healthy)
ab522156a46b   nso_lower_1              mod-nso-prod:6.2.3           Up 28 minutes (healthy)
b56dfe7c9aae   nso_upper                mod-nso-prod:6.2.3           Up 28 minutes (healthy)

Open three terminals in the current folder and run the following commands, one per terminal:

make cli-c_nso_upper   (for the Juniper-style CLI: make cli-j_nso_upper)
make cli-c_nso_lower_1 (for the Juniper-style CLI: make cli-j_nso_lower_1)
make cli-c_nso_lower_2 (for the Juniper-style CLI: make cli-j_nso_lower_2)
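
These make targets are thin convenience wrappers; a roughly equivalent manual invocation, assuming the container names shown by docker ps above, would be:

# C-style CLI on the upper node; use -J instead of -C for the Juniper-style CLI
docker exec -it nso_upper ncs_cli -C -u admin
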
While in the NSO CLI in operational mode on each terminal, check the status of the packages on each node by entering the command show packages package oper-status:

[Figure: status check]

On all nodes, check that the cluster connection and the devices are up, using the following commands (J-style CLI):
show cluster connection -> on the upper node terminal.
request devices connect -> on the terminals of both lower nodes.
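
If you prefer to run these checks from the host, you can also pipe the commands into the CLI. A sketch, assuming the C-style CLI (where the device check is devices connect) and the container names shown above:

echo "show cluster connection" | docker exec -i nso_upper ncs_cli -C -u admin
echo "devices connect" | docker exec -i nso_lower_1 ncs_cli -C -u admin
echo "devices connect" | docker exec -i nso_lower_2 ncs_cli -C -u admin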

[Figure: status check]

You can see that the connection between the upper and lower nodes is up, and that communication between the lower nodes and their devices is also established.

This completes the environment setup and verification.

Running this example:

Now we will see how a service is deployed layer by layer, from the upper node down through the lower nodes:

Get into the upper node terminal and enter configuration mode by typing config. Supply the command set cfs-vlan v1 a-router ex0 z-router fx0 iface eth3 unit 3 vid 77 to deploy the service:

admin@upper-nso% set cfs-vlan v1 a-router ex0 z-router fx0 iface eth3 unit 3 vid 77
[ok][2024-06-13 19:42:33]
[edit]
admin@upper-nso% commit dry-run 
cli {
    local-node {
        data  ncs:devices {
                  device lower-nso-1 {
                      config {
                          services {
             +                vlan v1 {
             +                    router ex0;
             +                    iface eth3;
             +                    unit 3;
             +                    vid 77;
             +                    description "Interface owned by CFS: v1";
             +                }
                          }
                      }
                  }
                  device lower-nso-2 {
                      config {
                          services {
             +                vlan v1 {
             +                    router fx0;
             +                    iface eth3;
             +                    unit 3;
             +                    vid 77;
             +                    description "Interface owned by CFS: v1";
             +                }
                          }
                      }
                  }
              }
             +cfs-vlan v1 {
             +    a-router ex0;
             +    z-router fx0;
             +    iface eth3;
             +    unit 3;
             +    vid 77;
             +}
    }
    lsa-node {
        name lower-nso-1
        data  devices {
                  device ex0 {
                      config {
                          sys {
                              interfaces {
             +                    interface eth3 {
             +                        enabled;
             +                        unit 3 {
             +                            enabled;
             +                            description "Interface owned by CFS: v1";
             +                            vlan-id 77;
             +                        }
             +                    }
                              }
                          }
                      }
                  }
              }
              rfs-vlan:services {
             +    vlan v1 {
             +        router ex0;
             +        iface eth3;
             +        unit 3;
             +        vid 77;
             +        description "Interface owned by CFS: v1";
             +    }
              }
    }
    lsa-node {
        name lower-nso-2
        data  devices {
                  device fx0 {
                      config {
                          sys {
                              interfaces {
             +                    interface eth3 {
             +                        enabled;
             +                        unit 3 {
             +                            enabled;
             +                            description "Interface owned by CFS: v1";
             +                            vlan-id 77;
             +                        }
             +                    }
                              }
                          }
                      }
                  }
              }
              rfs-vlan:services {
             +    vlan v1 {
             +        router fx0;
             +        iface eth3;
             +        unit 3;
             +        vid 77;
             +        description "Interface owned by CFS: v1";
             +    }
              }
    }
}
[ok][2024-06-13 19:43:31]

On the upper node, the service is deployed through the template ./upper-nso/packages/cfs-vlan/templates/cfs-vlan-template.xml.

On the lower nodes, the following templates are used: ./lower-nso-1/packages/rfs-vlan/templates/rfs-vlan-template.xml for the ex0 device and ./lower-nso-2/packages/rfs-vlan/templates/rfs-vlan-template.xml for the fx0 device.
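
If you want to inspect the service mapping, the templates can be viewed directly from the repo folder, for example:

cat upper-nso/packages/cfs-vlan/templates/cfs-vlan-template.xml
cat lower-nso-1/packages/rfs-vlan/templates/rfs-vlan-template.xml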

After you commit the service on the upper node, the terminals of the lower nodes display the following messages:

On lower node 1:
admin@lower-nso-1# 
System message at 2024-06-13 19:43:31...
Commit performed by admin via ssh using netconf.
On lower node 2:
admin@lower-nso-2# 
System message at 2024-06-13 19:43:31...
Commit performed by admin via ssh using netconf.
Check the forward diff-set of the service on the upper node:

Go back to operational mode with exit on the upper node terminal and supply the command request cfs-vlan v1 get-modifications:

admin@upper-nso% request cfs-vlan v1 get-modifications
cli {
    local-node {
        data  ncs:devices {
                   device lower-nso-1 {
                       config {
                           services {
              +                vlan v1 {
              +                    router ex0;
              +                    iface eth3;
              +                    unit 3;
              +                    vid 77;
              +                    description "Interface owned by CFS: v1";
              +                }
                           }
                       }
                   }
                   device lower-nso-2 {
                       config {
                           services {
              +                vlan v1 {
              +                    router fx0;
              +                    iface eth3;
              +                    unit 3;
              +                    vid 77;
              +                    description "Interface owned by CFS: v1";
              +                }
                           }
                       }
                   }
               }
              
    }
}

All changed data, both on the local node and on the remote lower nodes, is displayed.
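
You can also verify the result from a lower node itself. A sketch, assuming the J-style CLI on lower node 1 and the configuration paths seen in the dry-run output above:

echo "show configuration services vlan v1" | docker exec -i nso_lower_1 ncs_cli -J -u admin
echo "show configuration devices device ex0 config" | docker exec -i nso_lower_1 ncs_cli -J -u admin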

After you are done, you can stop the containers:

make stop

To clean cdb:

make clean_cdb

To clean the containers:

make clean_containers

To deep clean the environment, including the images built:

make deep_clean VER=<Your NSO Image version>
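
For reference, a complete teardown of the demo (assuming the default image version) could look like:

make stop
make clean_cdb
make clean_containers
make deep_clean VER=6.2.3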

Copyright and License

Copyright (c) 2024 Cisco and/or its affiliates.

This software is licensed to you under the terms of the Cisco Sample
Code License, Version 1.1 (the "License"). You may obtain a copy of the
License at

               https://developer.cisco.com/docs/licenses

All use of the material herein must be in accordance with the terms of
the License. All rights not expressly granted by the License are
reserved. Unless required by applicable law or agreed to separately in
writing, software distributed under the License is distributed on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied.
