wiki:Tutorials/Cloud/ONAP


Orchestration Example

ONAP

The Open Network Automation Platform (ONAP) is an open-source software platform that enables the design, creation, and orchestration of networking services. ONAP was formed by merging the Linux Foundation's OPEN-Orchestrator (OPEN-O) project and AT&T's ECOMP (Enhanced Control, Orchestration, Management and Policy) project.

The ONAP architecture consists of two major frameworks:

  • design-time environment
  • run-time environment

Both of these environments consist of numerous separate subsystems.

This tutorial presents a simplified deployment: no OpenStack, only Kubernetes, using 4 machines (one controller and three workers) and a basic set of ONAP subsystems.

— Running a performant physical server cluster

On the ORBIT testbed, execute the script below from any of the console nodes as follows: ./create-kube-cluster.sh -c "cNode1 cNode2 … cNodeM" -w "wNode1 wNode2 … wNodeN"
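
For example, assuming a hypothetical reservation of four nodes named node1-1 through node1-4, a one-controller, three-worker invocation would look like this:

# example invocation with hypothetical node names; substitute the nodes from your own reservation
./create-kube-cluster.sh -c "node1-1" -w "node1-2 node1-3 node1-4"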

create-kube-cluster.sh

#!/bin/bash

# Clean up artifacts from any previous run
rm -f cluster.yml
rm -f kube_config_cluster.yml
rm -f rke
rm -f config

# Power off all nodes before imaging
omf tell -t all -a offh
sleep 60

# Download the RKE (Rancher Kubernetes Engine) binary
wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
mv rke_linux-amd64 rke
chmod 754 rke


usage () {
  echo "Usage:"
  echo "   ./$(basename $0) -c \"cNode1 cNode2 ... cnodeN\" -w \"wNode1 wnode2 ... wnodeN\""
  echo "Note: controllers hostnames and workers hostnames are to be enclosed in \"\""
  exit 0
}

# Show usage when help is requested
if [[ "$1" == "--help" || "$1" == "-h" ]]; then
  usage
fi

if [ "$#" -lt 2 ]; then
  echo "Missing Kubernetes control and worker nodes"
  usage
fi

echo "# An example of an HA Kubernetes cluster for ONAP" >> cluster.yml
echo "nodes:" >> cluster.yml

# Parse controller (-c) and worker (-w) node lists
while getopts c:w: option; do
  case "${option}" in
    c) CONTROLLERS=${OPTARG};;
    w) WORKERS=${OPTARG};;
  esac
done

# Image the controller and worker nodes, then power everything on
omf load -i latest-onap-control.ndz -t ${CONTROLLERS// /,} -r 60
sleep 300

omf load -i latest-onap-worker.ndz -t ${WORKERS// /,} -r 60
sleep 300

omf tell -a on -t ${CONTROLLERS// /,},${WORKERS// /,}
sleep 300

# Split the space-separated node lists into arrays
IFS=' ' read -ra C <<< "$CONTROLLERS"
IFS=' ' read -ra W <<< "$WORKERS"
echo "Testing node availability. This might take some time"
for i in "${C[@]}"; do
while ! ping -c 1 -n -w 1 $i &> /dev/null
do
    printf "%c" "."
done

echo "127.0.0.1 localhost" > hosts
echo "`ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'` ${i}" >> hosts
scp hosts root@$i:/etc/hosts
done

for i in "${W[@]}"; do
while ! ping -c 1 -n -w 1 $i &> /dev/null
do
    printf "%c" "."
done
echo "127.0.0.1 localhost" > hosts
echo "`ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'` ${i}" >> hosts
scp hosts root@$i:/etc/hosts
done
echo "Availability check successful"

for i in "${C[@]}"; do
   echo "- address: `ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'`" >> cluster.yml
   echo '  port: "22"' >> cluster.yml
   echo "  role:" >> cluster.yml
   echo "  - controlplane" >> cluster.yml
   echo "  - etcd" >> cluster.yml
   echo "  hostname_override: `ping $i -c 1 | grep 'PING' | awk '{print $2}' | awk -F . '{print $1}'`" >> cluster.yml
   echo "  user: root" >> cluster.yml
   echo "  ssh_key_path: '~/.ssh/id_rsa'" >> cluster.yml
done

echo "# worker nodes start " >> cluster.yml

for i in "${W[@]}"; do
   echo "- address: `ping $i -c 1 | grep "PING" | grep '('|awk '{gsub(/[()]/,""); print $3}'`" >> cluster.yml
   echo '  port: "22"' >> cluster.yml
   echo "  role:" >> cluster.yml
   echo "  - worker" >> cluster.yml
   echo "  hostname_override: `ping $i -c 1 | grep 'PING' | awk '{print $2}' | awk -F . '{print $1}'`" >> cluster.yml
   echo "  user: root" >> cluster.yml
   echo "  ssh_key_path: '~/.ssh/id_rsa'" >> cluster.yml
done

echo 'services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/id_rsa"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.13.5-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""' >> cluster.yml

# Bring up the Kubernetes cluster with RKE
./rke up

for i in "${C[@]}"; do
scp kube_config_cluster.yml root@$i:~/.kube/config
done

exit 0
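
For reference, each controller entry the script appends to cluster.yml follows the RKE node format; a sketch of one generated entry, with a placeholder address and hostname, looks like this. Worker entries are identical except that the role list contains only "- worker".

# illustrative entry only; the script fills in the node's real IP address and hostname
- address: 10.0.0.1
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: node1-1
  user: root
  ssh_key_path: '~/.ssh/id_rsa'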

Perform the following steps on the Kubernetes control node:

  1. Set the default namespace of the current Kubernetes context to onap:
kubectl config set-context --current --namespace=onap
  2. Verify the Kubernetes cluster:


kubectl get nodes -o=wide
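
Optionally, the system pods can be listed as well to confirm that the cluster networking and DNS components came up:

kubectl get pods --all-namespaces -o wide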
  3. Initialize the Kubernetes cluster for use by Helm:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
kubectl -n kube-system  rollout status deploy/tiller-deploy
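
Once the rollout completes, a quick way to confirm that the Helm client can reach the Tiller server is:

helm version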
  4. Set only the required ONAP components to true:
cd overrides/
# edit the onap-all.yaml file and enable only the required ONAP components
nano onap-all.yaml

In our example, mariadb-galera and portal are enabled:

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###################################################################
# This override file enables helm charts for all ONAP applications.
###################################################################
cassandra:
  enabled: false
mariadb-galera:
  enabled: true

aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
cds:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
contrib:
  enabled: false
dcaegen2:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
nbi:
  enabled: false
policy:
  enabled: false
pomba:
  enabled: false
portal:
  enabled: true
robot:
  enabled: false
sdc:
  enabled: false
sdnc:
  enabled: false
so:
  enabled: false
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
modeling:
  enabled: false
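
Before deploying, the enabled components can be double-checked with a simple grep (a quick sketch; it prints each component name followed by its enabled flag):

grep -B1 "enabled: true" onap-all.yaml
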
  5. Start the Helm server and add the local Helm repository:
# start the local chart server
helm serve &
# press Enter to return to the shell; the server keeps running in the background

# add the local repository
helm repo add local http://127.0.0.1:8879
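
To verify the registration, list the configured repositories; the local entry should point at http://127.0.0.1:8879:

helm repo list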
  6. Make the ONAP Helm charts available in the local Helm repository:
cd ~/oom/kubernetes
make all; make onap

# this make command takes some time to finish
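
Once the build completes, the charts should be searchable in the local repository, for example:

# optional check that the onap charts are now available locally
helm search onap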

  7. Deploy ONAP:
helm deploy demo local/onap --namespace onap -f ~/overrides/onap-all.yaml
  8. Verify the deployment:
helm ls
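
Since the ONAP pods can take a while to become ready, their status can also be followed directly with kubectl (assuming the onap namespace set earlier):

kubectl get pods -n onap -o wide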

— OpenStack installed

— VM cloud image ubuntu 18.04-server-image
