
ONAP

The Open Network Automation Platform (ONAP) is an open-source software platform that enables the design, creation, and orchestration of networking services. ONAP was formed as a result of a merger of the Linux Foundation's OPEN-Orchestrator (OPEN-O) and the AT&T ECOMP (Enhanced Control, Orchestration, Management and Policy) projects.

The ONAP architecture consists of two major frameworks:

  • design-time environment
  • run-time environment

Both of these environments consist of numerous separate subsystems.

This tutorial covers a simplified deployment: no OpenStack, only Kubernetes, and a basic set of ONAP subsystems. The example uses up to four machines (one Kubernetes controller and three workers), though a minimal two-node cluster suffices.

Prerequisites

In order to access the testbed, create a reservation and have it approved by the reservation service. Access to the resources is granted after the reservation is confirmed. Please follow the process shown on the COSMOS workflow page to get started.

Resources required

At least 2 nodes are needed to run this example: one node for the Kubernetes controller and one node for the Kubernetes worker.

Execution

  1. Determine the set of nodes that you are using. If you have a topology assigned, it will be given in the form "system:topo:group-nogroup", for example.
  2. Run onaptutorial-min.rb -g group-nogroup to image the nodes.
  3. Test whether the nodes are up with omf stat -t system:topo:group-nogroup.
  4. If they are not in state POWERON, run omf tell -a on -t system:topo:group-nogroup.
  5. Once they are up, they will respond to ping, e.g. ping node13-8.
  6. Download the cluster setup script: wget -O create-kube-cluster.sh https://wiki.cosmos-lab.org/raw-attachment/wiki/tutorials/orchestration-example/create-kube-cluster.sh
  7. Run the cluster setup script on the prepared nodes: ./create-kube-cluster.sh -c "node14-7" -w "node13-8"
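
Taken together, a typical console session for these steps looks like the following sketch (the group and node names are the examples used in this tutorial; substitute your own):

  # image the nodes and power them on
  onaptutorial-min.rb -g group-nogroup

  # confirm that both nodes reached state POWERON
  omf stat -t system:topo:group-nogroup

  # if a node is not in state POWERON, power it on and re-check
  omf tell -a on -t system:topo:group-nogroup

  # verify reachability before running the setup script
  ping -c 3 node13-8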

Imaging the nodes

  onaptutorial-min.rb -g group-nogroup

will load the controller and worker images (latest-onap-control.ndz and latest-onap-worker.ndz) onto the two nodes in the group (the controller image on the first node and the worker image on the second) and turn them on:

msherman@console:~$ onaptutorial-min.rb -g group-black
Imaging:
  Controller:latest-onap-control.ndz -> [node14-7.grid.orbit-lab.org]
      Worker:latest-onap-worker.ndz -> [node13-8.grid.orbit-lab.org]
2019-10-21 14:22:02 -0400 Telling set group-black: offh
2019-10-21 14:22:23 -0400 Loading image: latest-onap-control.ndz for node(s) node14-7.grid.orbit-lab.org
2019-10-21 14:22:25 -0400 Loading image: latest-onap-worker.ndz for node(s) node13-8.grid.orbit-lab.org
2019-10-21 14:26:21 -0400 Telling set group-black: on

Please check for error messages; if any appear, you will need to repeat the command.

Please note which node is the controller and which is the worker, since you will need this information for the setup script.

Check the status of the nodes with omf stat -t system:topo:group-nogroup

msherman@console:~$ omf stat -t system:topo:group-black

 INFO NodeHandler: OMF Experiment Controller 5.4 (git 861d645)
 INFO NodeHandler: Slice ID: default_slice (default)
 INFO NodeHandler: Experiment ID: default_slice-2019-10-21t14.26.36.553-04.00
 INFO NodeHandler: Message authentication is disabled
 INFO property.resetDelay: resetDelay = 300 (Fixnum)
 INFO property.resetTries: resetTries = 1 (Fixnum)
 INFO property.nodes: nodes = "system:topo:group-black" (String)
 INFO property.summary: summary = false (FalseClass)
 INFO Topology: Loaded topology 'system:topo:group-black'.

Talking to the CMC service, please wait
-----------------------------------------------
 Node: node13-8.grid.orbit-lab.org       State: POWERON
 Node: node14-7.grid.orbit-lab.org       State: POWERON
-----------------------------------------------

 INFO EXPERIMENT_DONE: Event triggered. Starting the associated tasks.
 INFO NodeHandler:
 INFO NodeHandler: Shutting down experiment, please wait...
 INFO NodeHandler:
 INFO run: Experiment default_slice-2019-10-21t14.26.36.553-04.00 finished after 0:5

Setting up physical server cluster

In order to create the Kubernetes cluster that will run ONAP, download and execute the cluster creation script on the console of the domain where your nodes are located (note that you will need to use the corresponding node names in the script arguments):

mkdir onap-test && cd onap-test
wget https://wiki.cosmos-lab.org/raw-attachment/wiki/tutorials/orchestration-example/create-kube-cluster.sh
chmod 755 create-kube-cluster.sh
./create-kube-cluster.sh -c "node1" -w "node2"
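
As the usage message in the script indicates, both -c and -w accept a space-separated list of hostnames enclosed in quotes. A larger cluster, e.g. one controller and three workers (the extra worker names here are hypothetical), would therefore be created with:

  ./create-kube-cluster.sh -c "node14-7" -w "node13-8 node13-9 node13-10"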

create-kube-cluster.sh

create-kube-cluster.sh

#!/bin/bash

rm -f cluster.yml
rm -f kube_config_cluster.yml
rm -f rke
rm -f config

omf tell -t all -a offh
sleep 60

wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
mv rke_linux-amd64 rke
chmod 754 rke


usage () {
  echo "Usage:"
  echo "   ./$(basename $0) -c \"cNode1 cNode2 ... cnodeN\" -w \"wNode1 wnode2 ... wnodeN\""
  echo "Note: controllers hostnames and workers hostnames are to be enclosed in \"\""
  exit 0
}

# show usage when invoked with --help or -h
if [[ "$1" == "--help" || "$1" == "-h" ]]; then
  usage
fi

if [ "$#" -lt 2 ]; then
  echo "Missing Kubernetes control and worker nodes"
  usage
fi

echo "# An example of an HA Kubernetes cluster for ONAP" >> cluster.yml
echo "nodes:" >> cluster.yml

while getopts c:w: option
do
case "${option}"
in
c) CONTROLLERS=${OPTARG};;
w) WORKERS=${OPTARG};;
esac
done

IFS=' ' read -ra C <<< "$CONTROLLERS"
IFS=' ' read -ra W <<< "$WORKERS"

echo "Testing node availability. This might take some time"
for i in "${C[@]}"; do
while ! ping -c 1 -n -w 1 $i &> /dev/null
do
    printf "%c" "."
done

echo "127.0.0.1 localhost" > hosts
echo "`ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'` ${i}" >> hosts
scp hosts root@$i:/etc/hosts
done

for i in "${W[@]}"; do
while ! ping -c 1 -n -w 1 $i &> /dev/null
do
    printf "%c" "."
done
echo "127.0.0.1 localhost" > hosts
echo "`ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'` ${i}" >> hosts
scp hosts root@$i:/etc/hosts
done
echo "Availability check successful"

for i in "${C[@]}"; do
   echo "- address: `ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'`" >> cluster.yml
   echo '  port: "22"' >> cluster.yml
   echo "  role:" >> cluster.yml
   echo "  - controlplane" >> cluster.yml
   echo "  - etcd" >> cluster.yml
   echo "  hostname_override: `ping $i -c 1 | grep 'PING' | awk '{print $2}' | awk -F . '{print $1}'`" >> cluster.yml
   echo "  user: root" >> cluster.yml
   echo "  ssh_key_path: '~/.ssh/id_rsa'" >> cluster.yml
done

echo "# worker nodes start " >> cluster.yml

for i in "${W[@]}"; do
   echo "- address: `ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'`" >> cluster.yml
   echo '  port: "22"' >> cluster.yml
   echo "  role:" >> cluster.yml
   echo "  - worker" >> cluster.yml
   echo "  hostname_override: `ping $i -c 1 | grep 'PING' | awk '{print $2}' | awk -F . '{print $1}'`" >> cluster.yml
   echo "  user: root" >> cluster.yml
   echo "  ssh_key_path: '~/.ssh/id_rsa'" >> cluster.yml
done

echo 'services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/id_rsa"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.13.5-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""' >> cluster.yml

./rke up

for i in "${C[@]}"; do
scp kube_config_cluster.yml root@$i:~/.kube/config
done

exit 0
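
For reference, with the two example nodes above the script generates a cluster.yml whose nodes section looks roughly like the sketch below (the addresses are illustrative; the services, network, and authentication settings appended after it are exactly the fixed block shown in the script):

  nodes:
  - address: 10.50.1.1          # illustrative IP resolved for node14-7
    port: "22"
    role:
    - controlplane
    - etcd
    hostname_override: node14-7
    user: root
    ssh_key_path: '~/.ssh/id_rsa'
  # worker nodes start
  - address: 10.50.1.2          # illustrative IP resolved for node13-8
    port: "22"
    role:
    - worker
    hostname_override: node13-8
    user: root
    ssh_key_path: '~/.ssh/id_rsa'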

Perform the following steps on the Kubernetes control node:

  1. Set the default namespace of the current Kubernetes context to onap:
kubectl config set-context --current --namespace=onap
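
To confirm that the namespace change took effect, you can inspect the current context (a quick sanity check; the output format varies slightly across kubectl versions):

kubectl config view --minify | grep namespace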
  2. Verify the Kubernetes cluster:
kubectl get nodes -o=wide
  3. Initialize the Kubernetes cluster for use by Helm:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
kubectl -n kube-system rollout status deploy/tiller-deploy
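
Once the rollout completes, a quick way to confirm that Tiller is up is to query both the client and server versions (with Helm 2, helm version contacts the Tiller pod; if the server section times out, Tiller is not ready yet):

helm version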
  4. Enable only the required ONAP components:
cd overrides/
# edit the onap-all.yaml file: set unneeded ONAP components to false and the required ones to true
nano onap-all.yaml

In our example, mariadb-galera and portal are enabled:

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###################################################################
# This override file enables helm charts for all ONAP applications.
###################################################################
cassandra:
  enabled: false
mariadb-galera:
  enabled: true

aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
cds:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
contrib:
  enabled: false
dcaegen2:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
nbi:
  enabled: false
policy:
  enabled: false
pomba:
  enabled: false
portal:
  enabled: true
robot:
  enabled: false
sdc:
  enabled: false
sdnc:
  enabled: false
so:
  enabled: false
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
modeling:
  enabled: false
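
After editing, a quick way to double-check which components remain enabled is to search the override file:

grep -B1 'enabled: true' onap-all.yaml

For the file above, this should list only mariadb-galera and portal.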
  5. Start the Helm server and add the local Helm repository:
# start the server
helm serve &
# hit enter if the process is running, to let it run in the background

# add the repository
helm repo add local http://127.0.0.1:8879
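
You can verify that the repository was registered and that the chart server is answering (helm repo list shows the configured repositories; the curl probe of the local server is an optional extra check):

helm repo list
curl -s http://127.0.0.1:8879 | head -n 5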
  6. Make the ONAP Helm charts available in the local Helm repository:
cd ~/oom/kubernetes
make all; make onap

The output should look like this:

root@node:~/oom/kubernetes# make all; make onap
[common]
make[1]: Entering directory '/root/oom/kubernetes'
make[2]: Entering directory '/root/oom/kubernetes/common'

[common]
make[3]: Entering directory '/root/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended

...

Update Complete. ⎈Happy Helming!⎈
Saving 34 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading cassandra from repo http://127.0.0.1:8879
Downloading cds from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading contrib from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading dmaap from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading sniro-emulator from repo http://127.0.0.1:8879
Downloading mariadb-galera from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading nbi from repo http://127.0.0.1:8879
Downloading nfs-provisioner from repo http://127.0.0.1:8879
Downloading pnda from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading pomba from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading oof from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Downloading modeling from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /root/oom/kubernetes/dist/packages/onap-5.0.0.tgz
make[1]: Leaving directory '/root/oom/kubernetes'
root@node:~/oom/kubernetes# 

This make command takes quite some time to finish (12+ minutes), so please be patient.

  7. Deploy ONAP:
helm deploy demo local/onap --namespace onap -f ~/overrides/onap-all.yaml
root@node:~/oom/kubernetes# helm deploy demo local/onap --namespace onap -f ~/overrides/onap-all.yaml
fetching local/onap
release "demo" deployed
release "demo-cassandra" deployed
release "demo-mariadb-galera" deployed
root@node:~/oom/kubernetes# 
  8. Verify the deployment:
helm ls

resulting in:

NAME               	REVISION	UPDATED                 	STATUS  	CHART               	APP VERSION	NAMESPACE
demo               	1       	Sun Oct 20 02:14:55 2019	DEPLOYED	onap-5.0.0          	El Alto    	onap     
demo-cassandra     	1       	Sun Oct 20 02:14:55 2019	DEPLOYED	cassandra-5.0.0     	           	onap     
demo-mariadb-galera	1       	Sun Oct 20 02:14:56 2019	DEPLOYED	mariadb-galera-5.0.0	           	onap     
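
In addition to helm ls, you can watch the actual pods come up in the onap namespace; the enabled components are ready once their pods report Running (or Completed for one-shot jobs):

kubectl get pods -n onap -o wide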

Notes:

  • OpenStack installed
  • VM cloud image: Ubuntu 18.04 server image
