Open Source MANO (OSM) Tutorial
Overview
A full deployment consists of 3 components:
- OSM as the NFV Orchestrator
- OpenStack, OpenVIM, OpenNebula, or similar as the virtual infrastructure manager (VIM)
- Some number of compute nodes, where the VIM will run VMs under OSM's instruction
We will use the following components:
- OSM on Ubuntu 18.04
- DevStack on Ubuntu 18.04, in a single-node configuration (this can easily be extended by following the DevStack guides)
- This DevStack configuration combines the VIM node and the compute node on one machine. OSM communicates with the OpenStack identity service to discover the rest of the configuration.
This tutorial implements the getting-started guide from here, adds support for ORBIT hardware, and bases the image on Ubuntu 18.04.
To save time, run the OSM and VIM steps in separate SSH terminals, as they can be done at the same time.
Prerequisites:
- SSH keys set up for the testbed
- Ability to set up ssh tunneling to access the web interface pages
- A reservation for a domain with at least two compute resources
- For convenience, 3 terminal windows.
OSM Node Set Up
- Log into the console with your first terminal
- Load the tutorial-osm.ndz image onto the first node, and resize to ≥ 60 GB. Example:
omf load -t srv1-lg1.sb1.cosmos-lab.org -i tutorial-osm.ndz -r 60
- Wait for the imaging process to complete. Work in the other terminals in the meantime.
  - You should see:
INFO exp: 1 node successfully imaged ...
- Turn the node on:
omf tell -a on -t srv1-lg1.sb1.cosmos-lab.org
- Wait for the node to come up (up to ~3 minutes)
- You can ping the node; it will respond once booted
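The wait-and-ping step above can be sketched as a small polling loop; `wait_until` is a hypothetical helper (not part of the testbed tooling), and the node name in the usage comment is the one used in this tutorial:

```shell
# Poll until a command succeeds or a timeout (in seconds) elapses.
# wait_until is a hypothetical helper, not part of the testbed tooling.
wait_until() {
  cmd=$1; timeout=${2:-180}; elapsed=0
  until $cmd >/dev/null 2>&1; do
    sleep 2
    elapsed=$((elapsed + 2))
    [ "$elapsed" -ge "$timeout" ] && return 1
  done
  return 0
}

# e.g. wait for the imaged node to answer ping, then log in:
#   wait_until "ping -c1 srv1-lg1.sb1.cosmos-lab.org" 180 && \
#       ssh native@srv1-lg1.sb1.cosmos-lab.org
```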
From Scratch
Dependencies:
- 40 GB disk space
- User:
  - non-root user
  - with a non-empty password
  - member of the sudo, docker, and lxd groups
- Packages:
  - net-tools
- IPv6 disabled
- Prepare the node
  - Load the image:
omf load -t srv3-lg1.sb1.cosmos-lab.org -i baseline_1804.ndz -r 60
  - Turn the node on:
omf tell -a on -t srv3-lg1.sb1.cosmos-lab.org
  - Log in as root:
ssh root@srv3-lg1.sb1.cosmos-lab.org
  - Set a password for the non-root user:
echo native:native | chpasswd
  - Add the user to the required groups:
sudo groupadd lxd && sudo groupadd docker && sudo usermod -a -G lxd,docker native
  - Log out:
exit
  - Log in as the user "native":
ssh native@srv3-lg1.sb1.cosmos-lab.org
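Group changes only take effect in a new login session, so it is worth verifying membership after logging back in. A hedged sketch; `has_groups` is a hypothetical helper:

```shell
# has_groups USER GROUP... : succeed only if USER belongs to every listed group
has_groups() {
  user=$1; shift
  groups=$(id -nG "$user" 2>/dev/null) || return 1
  for g in "$@"; do
    case " $groups " in
      *" $g "*) ;;     # group present, keep checking
      *) return 1 ;;   # group missing
    esac
  done
  return 0
}

# e.g. after re-logging in as "native":
#   has_groups native sudo docker lxd && echo "groups OK"
```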
- Set up OSM
- Install "net-tools"
sudo apt install net-tools
- Download script:
wget https://osm-download.etsi.org/ftp/osm-6.0-six/install_osm.sh
- Make it executable:
chmod +x install_osm.sh
- Run it:
./install_osm.sh 2>&1 | tee osm_install_log.txt
  - Enter "y" when prompted
  - If the installer crashes, run:
lxd init
    - Choose all defaults, except answer "none" for IPv6
    - If it failed, rerun install_osm.sh
- Connect a browser
  - Via SSH tunnel
  - Via VPN (TODO)
  - Navigate the browser to the node's control IP, and enter admin/admin as credentials
Run these commands to clean up the old configuration:
docker stack rm osm && sleep 60  # the sleep ensures the stack removal finishes before redeploying
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
Save image
VIM Node Set up:
OSM must have a Virtual Infrastructure Manager (VIM) to control. We will use OpenStack on a single-node DevStack deployment for this tutorial, following this guide.
- Log into the console with your second terminal
- Load the tutorial-devstack.ndz image onto the second node, and resize to ≥ 60 GB. Example:
omf load -t srv2-lg1.sb1.cosmos-lab.org -i tutorial-devstack.ndz -r 60
- Wait for the imaging process to complete. Work in the other terminals in the meantime.
  - You should see:
INFO exp: 1 node successfully imaged ...
- Turn the node on:
omf tell -a on -t srv2-lg1.sb1.cosmos-lab.org
- Wait for the node to come up (up to ~3 minutes)
- You can ping the node; it will respond once booted
- SSH into the node
ssh native@srv2-lg1.sb1.cosmos-lab.org
- Change to the Devstack directory:
cd ~/devstack
- Run the installation script:
./stack.sh
Note: we use username "native", password "native", because OpenStack needs a non-root user.
The commands will run for a while (about 10 minutes). If successful, the script prints the credentials and address for logging in via the web UI, looking like the following:
=========================
DevStack Component Timing
 (times are in seconds)
=========================
run_process           16
test_with_retry        2
apt-get-update         5
osc                  115
wait_for_service      10
git_timed            107
dbsync                65
pip_install          201
apt-get              166
-------------------------
Unaccounted time     345
=========================
Total runtime       1032

This is your host IP address: 10.19.1.2
This is your host IPv6 address: ::1
Horizon is now available at http://10.19.1.2/dashboard
Keystone is serving at http://10.19.1.2/identity/
The default users are: admin and demo
The password: native
You will need the Host IP address, Horizon and Keystone URLs, and user and password in the following steps.
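Since later steps reuse the host IP reported by stack.sh, it can be pulled out of the saved output mechanically. A sketch, assuming the output was captured to a log; the sample line below stands in for the output shown above:

```shell
# extract the host IP from stack.sh output; the sample line stands in for a
# captured log (e.g. from ./stack.sh 2>&1 | tee stack_log.txt)
sample='This is your host IP address: 10.19.1.2'
HOST_IP=$(printf '%s\n' "$sample" | sed -n 's/^This is your host IP address: //p')
echo "$HOST_IP"
```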
If you want to customize your devstack installation, you can modify the file local.conf prior to running stack.sh.
Devstack Customization
[[local|localrc]]
ADMIN_PASSWORD=native
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.19.1.2
SERVICE_HOST=$HOST_IP
MYSQL_HOST=$HOST_IP
RABBIT_HOST=$HOST_IP
#Check out latest commits
RECLONE=True
#upgrade python
PIP_UPGRADE=True
##Uncomment to customize VNC
#NOVA_VNC_ENABLED=True
#VNCSERVER_LISTEN=0.0.0.0
#VNCSERVER_PROXYCLIENT_ADDRESS=$SERVICE_HOST
##use for ssh forwarding
#NOVNCPROXY_BASE_URL=http://127.0.0.1:6080/vnc_auto.html
##Uncomment to enable Spice, comment VNC section
#NOVA_VNC_ENABLED=False
#NOVA_SPICE_ENABLED=True
#SPICEAGENT_ENABLED=True
#enable_service n-spice
#disable_service n-novnc
#html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html
To make virtual machines directly accessible from the console, add these snippets:
Information in this example is taken from the node's existing network configuration:
root@node1-2:~# ip r
default via 10.19.0.1 dev enp134s0 proto dhcp src 10.19.1.2 metric 100
10.19.0.0/16 dev enp134s0 proto kernel scope link src 10.19.1.2
10.19.0.1 dev enp134s0 proto dhcp scope link src 10.19.1.2 metric 100
172.24.4.0/24 dev br-ex proto kernel scope link src 172.24.4.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
root@node1-2:~# ip addr show enp134s0
2: enp134s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 68:05:ca:1e:e5:98 brd ff:ff:ff:ff:ff:ff
    inet 10.19.1.2/16 brd 10.19.255.255 scope global dynamic enp134s0
       valid_lft 2297sec preferred_lft 2297sec
    inet6 fe80::6a05:caff:fe1e:e598/64 scope link
       valid_lft forever preferred_lft forever
#set to control interface name
PUBLIC_INTERFACE=enp134s0
#set to control interface IP
HOST_IP=10.19.1.2
#set to control interface subnet
FLOATING_RANGE=10.19.0.0/16
#set to control interface gateway
PUBLIC_NETWORK_GATEWAY=10.19.0.1
#choose a range of IPs in the control subnet to assign to VMs
Q_FLOATING_ALLOCATION_POOL=start=10.19.100.1,end=10.19.100.254
#set the IP range for private networks so it does not conflict with testbed nets
IPV4_ADDRS_SAFE_TO_USE=172.16.100.0/24
This snippet will cause OpenStack to override the DHCP configuration on the control interface, so run the following before running stack.sh:
sudo ip r add default via 10.19.0.1
echo "DNS=10.50.0.8 10.50.0.9" | sudo tee -a /etc/systemd/resolved.conf
echo "Domains=~." | sudo tee -a /etc/systemd/resolved.conf
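Note that `tee -a` appends a duplicate line each time it is rerun. A guarded variant (a sketch; `append_once` is a hypothetical helper) appends only when the line is missing:

```shell
# append_once LINE FILE: append LINE to FILE only if not already present,
# so rerunning the setup does not accumulate duplicate entries
append_once() {
  line=$1; file=$2
  grep -qxF "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

# e.g. (would need sudo to write to /etc in practice):
#   append_once "DNS=10.50.0.8 10.50.0.9" /etc/systemd/resolved.conf
#   append_once "Domains=~." /etc/systemd/resolved.conf
```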
After stack.sh, run this:
openstack security group rule create --proto icmp --dst-port 0 default
openstack security group rule create --proto tcp --dst-port 22 default
From Scratch
Follow https://docs.openstack.org/devstack/latest/guides/single-machine.html
https://docs.openstack.org/devstack/latest/networking.html
If you don't specify any configuration you will get the following:
- neutron (including l3 with openvswitch)
- private project networks for each openstack project
- a floating ip range of 172.24.4.0/24 with the gateway of 172.24.4.1
- the demo project configured with fixed ips on a subnet allocated from the 10.0.0.0/22 range
- a br-ex interface controlled by neutron for all its networking (this is not connected to any physical interfaces)
- DNS resolution for guests based on the resolv.conf for your host
- an ip masq rule that allows created guests to route out
- PUBLIC_INTERFACE=eth1 (this connects to the br-ex bridge)
https://docs.openstack.org/devstack/latest/_sources/guides/neutron.rst.txt
- Change the IP without rerunning stack.sh:
  - Run unstack.sh
  - In the keystone database, run: update endpoint set url = REPLACE(url, '[old IP address]', '[new IP address]');
  - Update all IPs in the /etc folder: grep -rl '[old IP address]' /etc | xargs sed -i 's/[old IP address]/[new IP address]/g'
  - Update all IPs in the /opt/stack folder: grep -rl '[old IP address]' /opt/stack | xargs sed -i 's/[old IP address]/[new IP address]/g'
  - Restart the apache2 server
  - Run rejoin-stack.sh
  - Restart Keystone: keystone-all
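The two grep/sed steps above can be wrapped in one helper; `replace_ip` is a hypothetical function, and the dots in the IP are escaped so they match literally rather than as regex wildcards:

```shell
# replace_ip DIR OLD NEW: rewrite every occurrence of OLD with NEW under DIR
replace_ip() {
  dir=$1; old=$2; new=$3
  # escape dots so "10.19.1.2" is matched literally, not as a regex wildcard
  old_re=$(printf '%s' "$old" | sed 's/\./\\./g')
  grep -rl "$old_re" "$dir" 2>/dev/null | xargs -r sed -i "s/$old_re/$new/g"
}

# e.g. run it against both trees mentioned above:
#   replace_ip /etc 10.19.1.2 10.19.1.3
#   replace_ip /opt/stack 10.19.1.2 10.19.1.3
```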
Connecting to the Web Interfaces
Use your third terminal, or whatever ssh program you are using, to forward a different local port to port 80 on each of the OSM and VIM machines.
For more information, follow This Guide
For example, using a Linux shell, and forwarding: 9980 → srv1-lg1:80, 9981 → srv2-lg1:80
ssh testbeduser@sb1.cosmos-lab.org -N \
    -L 9980:srv1-lg1:80 \
    -L 9981:srv2-lg1:80
The OSM Web UI will likely be ready much sooner than the Openstack UI, due to the time taken for installation.
- OSM Web UI
  - Credentials are: admin / admin
  - Use your browser to navigate to the first forwarded port, here localhost:9980
- If the webui doesn't come up:
- SSH to the node with the ssh credentials: native / native
- Check to see that all containers are running
docker stack ps osm
- Try re-installing OSM by running
./install_osm.sh
- Openstack Web UI
  - Credentials are: admin / native, unless different in the stack.sh output.
  - Use your browser to navigate to the second forwarded port, here localhost:9981
  - The web UI won't be ready until stack.sh completes successfully
- The following steps are executed in the respective web interfaces
Connecting OSM to the VIM
- Log into the OSM with admin/admin
- Click "VIM accounts" on the left bar
- Delete existing accounts with the trash can icon
- Add the new account for your openstack with the following information:
  - URL: http://$DEVSTACK_IP/identity/v3/
  - User: admin
  - Password: native
  - Tenant: admin
  - Name: openstack-site
  - Type: openstack
- Click create
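The URL field is Keystone's v3 endpoint from the stack.sh output; with the example devstack host used in this tutorial it expands as follows (a sketch, with the IP taken from the sample output above):

```shell
# build the VIM URL from the devstack host IP reported by stack.sh
DEVSTACK_IP=10.19.1.2
echo "http://$DEVSTACK_IP/identity/v3/"
```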
Optional: CLI usage
SSH to the node where you installed OSM, then run:
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
osm vim-create --name openstack-standalone --user admin --password native \
    --auth_url http://10.19.1.2:5000/v2.0 --tenant admin --account_type openstack
Adding a disk image to openstack
- download the attachment cirros-0.3.4-x86_64-disk.img from this page, by clicking attachments, then clicking the small download icon.
- in a new window, go to the openstack webui
- click images
- click create image, and use the following properties (important)
  - name: cirros034
  - image source: browse for the file you downloaded from the wiki
  - format: qcow2
- click create image
Creating a VNF
- Go back to the OSM webui
- VNF package onboarding
- go to packages → vnf packages
- download the attachment cirros_vnf.tar.gz
- drag the downloaded file to the window
- NS package onboarding
- go to packages → ns packages
- download the attachment cirros_2vnf_ns.tar.gz
- drag it to the window
- Instantiate the NS
  - click the launch icon and enter information
    - name
    - description
    - nsd_id: cirros_2vnf_ns
    - vim account id: openstack-site
Interacting with the VNF
- In the Openstack UI:
- Select the project "admin" from the dropdown in the top left
- Click instances:
- you should now see a pair of cirros instances running.
- open a console on one, and try pinging the other
Changelog
08/20/2020 Updated to OSM release EIGHT, Openstack release Train
10/21/2019 Updated to OSM release SIX
Attachments (3)
- cirros_vnf.tar.gz (5.1 KB) - added 5 years ago. cirros_vnf
- cirros_2vnf_ns.tar.gz (18.4 KB) - added 5 years ago. cirros_2vnf_ns
- cirros-0.3.4-x86_64-disk.img (12.7 MB) - added 5 years ago. cirros034