From 8adc0397dabc92f3e5f2faa54b10372f5e0c41ae Mon Sep 17 00:00:00 2001 From: tseed Date: Wed, 26 Oct 2022 17:58:48 +0100 Subject: [PATCH] initial commit --- 1) Openstack Network and Access.md | 133 ++ 10) Outstanding issues.md | 64 + 2) Undercloud Deployment.md | 558 +++++ 3) Overcloud Node Import.md | 512 +++++ 4) Ceph Cluster Setup.md | 1079 ++++++++++ 5) Overcloud Deployment.md | 1851 +++++++++++++++++ 6) Multi-tenancy.md | 1207 +++++++++++ 7) Example Project.md | 360 ++++ 8) Testing.md | 91 + ... the external HTTPS endpoint(s) TLS cer.md | 330 +++ README.md | 14 + university_Network.drawio.png | Bin 0 -> 70018 bytes 12 files changed, 6199 insertions(+) create mode 100755 1) Openstack Network and Access.md create mode 100755 10) Outstanding issues.md create mode 100755 2) Undercloud Deployment.md create mode 100755 3) Overcloud Node Import.md create mode 100755 4) Ceph Cluster Setup.md create mode 100755 5) Overcloud Deployment.md create mode 100755 6) Multi-tenancy.md create mode 100755 7) Example Project.md create mode 100755 8) Testing.md create mode 100755 9) Updating the external HTTPS endpoint(s) TLS cer.md create mode 100644 README.md create mode 100755 university_Network.drawio.png diff --git a/1) Openstack Network and Access.md b/1) Openstack Network and Access.md new file mode 100755 index 0000000..728f124 --- /dev/null +++ b/1) Openstack Network and Access.md @@ -0,0 +1,133 @@ +# Access to university Openstack + +``` +edit local ~/.ssh/config and include the following entries + +###### university + Host university-jump + HostName 144.173.114.20 + ProxyJump nemesis + IdentityFile ~/.ssh/id_rsa + Port 22 + User root + + Host university-proxmox + Hostname 10.121.4.5 + Proxyjump university-jump + #PreferredAuthentications password + IdentityFile ~/.ssh/id_rsa + Port 22 + User root + + Host university-proxmox-dashboard + Hostname 10.121.4.5 + Proxyjump university-jump + #PreferredAuthentications password + IdentityFile ~/.ssh/id_rsa + Port 22 + User root + DynamicForward 8888 + + Host university-undercloud + Hostname 10.121.4.25 + Proxyjump university-jump + IdentityFile ~/.ssh/id_rsa + Port 22 + User stack + ServerAliveInterval 100 + ServerAliveCountMax 2 + + Host university-ceph1 + Hostname 10.121.4.7 + Proxyjump university-jump + IdentityFile ~/.ssh/id_rsa + Port 22 + User root + + Host university-ceph2 + Hostname 10.121.4.8 + Proxyjump university-jump + IdentityFile ~/.ssh/id_rsa + Port 22 + User root + + Host university-ceph3 + Hostname 10.121.4.9 + Proxyjump university-jump + IdentityFile ~/.ssh/id_rsa + Port 22 + User root +``` + +# Logins + +## Switches + +| IP/Login | Password | Type | Notes | +| --- | --- | --- | --- | +| cumulus@10.122.0.250 | Password0 | 100G switch | 2x CLAG bond between 100G switches, 2x Peerlink CLAG across 100G switches to university Juniper core switches | +| cumulus@10.122.0.251 | Password0 | 100G switch | 2x CLAG bond between 100G switches, 2x Peerlink CLAG across 100G switches to university Juniper core switches | +| cumulus@10.122.0.252 | Password0 | 1G switch | 2x SFP+ 10G LAG bond between management switches, 1G ethernet uplink from each 100G switch for access | +| cumulus@10.122.0.253 | Password0 | 1G switch | 2x SFP+ 10G LAG bond between management switches | + +## Node OOB (IPMI / XClarity web) + +| IP | Login | Password | +| --- | --- | --- | +| 10.122.1.5(proxmox) 10.122.1.10-12(controller) 10.122.1.20-21(networker) 10.122.1.30-77(compute) 10.122.1.90-92(ceph) | USERID | Password0 | + +## Node Operating System + +| IP | Login | Password 
| +| --- | --- | --- | +| 10.121.4.5 (proxmox hypervisor) | root | Password0 | +| 10.121.4.25 (undercloud VM) | stack OR root | Password0 | +| 10.122.0.30-32(controller) 10.122.0.40-41(networker) 10.122.0.50-103(compute) | root OR heat-admin | Password0 | + +## Dashboards + +| Dashboard | IP / URL | Login | Password | Notes | +| --- | --- | --- | --- | --- | +| Proxmox | https://10.121.4.5:8006/ | root | Password0 | | +| Ceph | https://10.122.10.7:8443/ | admin | Password0 | 10.122.10.7,8,9 will redirect to live dashboard | +| Ceph Grafana | https://10.121.4.7:3000/ | | | many useful dashboards for capacity and throughput | +| Ceph Alertmanager | http://10.121.4.7:9093/ | | | check ceph alerts | +| Ceph Prometheus | http://10.121.4.7:9095/ | | | check if promethus is monitoring ceph | +| Openstack Horizon | https://stack.university.ac.uk/dashboard | admin | Password0 | domain: default (for AD login the domain is 'ldap')
floating ip 10.121.4.14
find password on undercloud `grep OS_PASSWORD ~/overcloudrc \\\\\| awk -F "=" '{print $2}'` | + +# Networking + +![university_Network.drawio.png](university_Network.drawio.png) + +## Openstack control networks + +- These networks reside on the primary 1G ethernet adapter. +- The IPMI network is usually only used by the undercloud, however to facilitate IPMI fencing for Instance-HA the Openstack controller nodes will have a logical interface + +| Network | VLAN | IP Range | | +| --- | --- | --- | --- | +| ControlPlane | 1 Native | 10.122.0.0/24 | | +| IPMI | 2 | 10.122.1.0/24 | | + +## Openstack service networks + +- The logical networks reside upon an OVS bridge across an LACP bond on the 2x Mellanox 25G ethernet adapters in each node. +- The 2x Mellanox 25G ethernet adapters are cabled to 100G switch1 and 100G switch2 respectively, the switch handles the LACP bond as one logical entity across switches with a CLAG. + +| Network | VLAN | IP Range | | +| --- | --- | --- | --- | +| Storage Mgmt | 14 | 10.122.12.0/24 | | +| Storage | 13 | 10.122.10.0/24 | | +| InternalApi | 12 | 10.122.6.0/24 | | +| Tenant | 11 | 10.122.8.0/24 | | +| External | 1214 | 10.121.4.0/24 Gateway 10.121.4.1 | | + +## Ceph service networks + +Use Openstack "Storage Mgmt" for the Ceph public network. + +| Network | VLAN | IP Range | | +| --- | --- | --- | --- | +| Cluster Network | 15 | 10.122.14.0/24 | | +| Public Network (Openstack Storage) | 13 | 10.122.10.0/24 | | +| Management (Openstack Storage Mgmt) | 14 | 10.122.12.0/24 | | \ No newline at end of file diff --git a/10) Outstanding issues.md b/10) Outstanding issues.md new file mode 100755 index 0000000..7c7583b --- /dev/null +++ b/10) Outstanding issues.md @@ -0,0 +1,64 @@ +# Nodes + +``` +10.122.1.5 proxmox/undercloud +10.122.1.10 controller +10.122.1.11 controller +10.122.1.12 controller +10.122.1.20 networker +10.122.1.21 networker +10.122.1.30 compute SR630 +10.122.1.31 +10.122.1.32 +10.122.1.33 faulty PSU +10.122.1.34 lost mellanox adapter +10.122.1.35 +10.122.1.36 +10.122.1.37 lost mellanox adapter +10.122.1.38 +10.122.1.39 +10.122.1.40 +10.122.1.41 +10.122.1.42 +10.122.1.43 +10.122.1.44 +10.122.1.45 +10.122.1.46 +10.122.1.47 +10.122.1.48 +10.122.1.49 +10.122.1.50 +10.122.1.51 +10.122.1.52 +10.122.1.53 +10.122.1.54 compute SR630v2 - expansion +10.122.1.55 +10.122.1.56 +10.122.1.57 +10.122.1.58 +10.122.1.59 +10.122.1.60 +10.122.1.61 +10.122.1.62 +10.122.1.63 +10.122.1.64 +10.122.1.65 +10.122.1.66 faulty PSU +10.122.1.67 +10.122.1.68 +10.122.1.69 +10.122.1.70 +10.122.1.71 +10.122.1.72 +10.122.1.73 +10.122.1.74 +10.122.1.75 +10.122.1.76 +10.122.1.77 +10.122.1.90 ceph1 +10.122.1.91 ceph2 +10.122.1.92 ceph3 +``` + + + diff --git a/2) Undercloud Deployment.md b/2) Undercloud Deployment.md new file mode 100755 index 0000000..6057fa5 --- /dev/null +++ b/2) Undercloud Deployment.md @@ -0,0 +1,558 @@ +# Proxmox installation + +Proxmox hosts the undercloud node, this enables snapshots to assist in Update/DR/Rebuild scenarios, primarily this will allow a point in time capture of working heat-templates and containers. 
+ +> https://pve.proxmox.com/wiki/Installation + +| setting | value | +| --- | --- | +| filesystem | xfs | +| swapsize | 8GB | +| maxroot | 50GB | +| country | United Kingdom | +| time zone | Europe/London | +| keyboard layout | United Kingdom | +| password | Password0 | +| email | user@university.ac.uk (this can be changed in the web console @ datacenter/users/root) | +| management interface | eno1 | +| hostname | pve.local | +| ip address | 10.122.0.5/24 | +| gateway | 10.122.0.1 (placeholder, there is no gateway on this range) | +| dns | 144.173.6.71 | + +- Install from a standard version 7.2 ISO, use settings listed as above. +- Create a bridge on the 1G management interface, this is VLAN 1 native on the 'ctlplane' network with VLAN 2 tagged for IPMI traffic. +- Ensure the 25G interfaces are setup as an LACP bond, create a bridge on the bond with the 'tenant', 'storage', 'internal-api' and 'external' VLANs as tagged (the 'external' range has the default gateway). +- Proxmox host has VLAN interfaces into each openstack network for introspection/debug, nmap is installed. + +```sh +cat /etc/network/interfaces + +# network interface settings; autogenerated +# Please do NOT modify this file directly, unless you know what +# you're doing. +# +# If you want to manage parts of the network configuration manually, +# please utilize the 'source' or 'source-directory' directives to do +# so. +# PVE will preserve these directives, but will NOT read its network +# configuration from sourced files, so do not attempt to move any of +# the PVE managed interfaces into external files! + +auto lo +iface lo inet loopback + +iface eno1 inet manual + +iface eno2 inet manual + +iface eno3 inet manual + +iface eno4 inet manual + +iface enx3a68dd4a4c5f inet manual + +auto ens2f0np0 +iface ens2f0np0 inet manual + +auto ens2f1np1 +iface ens2f1np1 inet manual + +auto bond0 +iface bond0 inet manual + bond-slaves ens2f0np0 ens2f1np1 + bond-miimon 100 + bond-mode 802.3ad + +auto vmbr0 +iface vmbr0 inet static + address 10.122.0.5/24 + bridge-ports eno1 + bridge-stp off + bridge-fd 0 + bridge-vlan-aware yes + bridge-vids 2-4094 +#vlan 1(native) 2 (tagged) ControlPlane + +auto vmbr1 +iface vmbr1 inet manual + bridge-ports bond0 + bridge-stp off + bridge-fd 0 + bridge-vlan-aware yes + bridge-vids 2-4094 +#vlan 1(native) 11 12 13 1214 (tagged) + +auto vlan2 +iface vlan2 inet static + address 10.122.1.5/24 + vlan-raw-device vmbr0 +#IPMI + +auto vlan13 +iface vlan13 inet static + address 10.122.10.5/24 + vlan-raw-device vmbr1 +#Storage + +auto vlan1214 +iface vlan1214 inet static + address 10.121.4.5/24 + gateway 10.121.4.1 + vlan-raw-device vmbr1 +#External + +auto vlan12 +iface vlan12 inet static + address 10.122.6.5/24 + vlan-raw-device vmbr1 +#InternalApi + +auto vlan11 +iface vlan11 inet static + address 10.122.8.5/24 + vlan-raw-device vmbr1 +#Tenant +``` + +Setup the no-subscription repository. 
+ +```sh +# comment/disable enterprise repo +nano -cw /etc/apt/sources.list.d/pve-enterprise.list + +#deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise + +# insert pve-no-subscription repo +nano -cw /etc/apt/sources.list + +deb http://ftp.uk.debian.org/debian bullseye main contrib +deb http://ftp.uk.debian.org/debian bullseye-updates main contrib +# security updates +deb http://security.debian.org bullseye-security main contrib +# pve-no-subscription +deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription + +# update +apt-get update +apt-get upgrade -y +reboot +``` + +Download some LXC containers. + +- LXC is not used in production, but during build LXC containers with network interfaces in all ranges (last octet suffix .6) was used to debug IP connectivity, switch configuration and serve linux boot images over NFS for XClarity. + +```sh +pveam update +pveam available --section system +pveam download local almalinux-8-default_20210928_amd64.tar.xz +pveam download local rockylinux-8-default_20210929_amd64.tar.xz +pveam download local ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz +pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst +pveam list local + +NAME SIZE +local:vztmpl/almalinux-8-default_20210928_amd64.tar.xz 109.08MB +local:vztmpl/rockylinux-8-default_20210929_amd64.tar.xz 107.34MB +local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz 203.54MB +local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst 123.81MB +``` + +# Undercloud VM instance + +## Download RHEL 8.4 full DVD image + +Select the RHEL8.4 image, choose the full image rather than the boot image, this will allow installation without registering the system during the installer, you may then attach the system to a license via the `subscription-manager` tool after the host is built. + +## Install spec + +- RHEL8 (RHEL 8.4 specifically) +- 1 socket, 16 core (must use HOST cpu type for nested virtualization) +- 24GB ram +- 100GB disk (/root 89GiB lvm, /boot 1024MiB, swap 10GiB lvm) +- ControlPlane network interface on vmbr0, no/native vlan, 10.122.0.25/24, ens18 +- IPMI network interface on vmbr0, vlan2 (vlan assigned in proxmox not OS), 10.122.1.25/24, ens19 +- External/Routable network interface on vmbr1, vlan 1214 (vlan assigned in proxmox not OS), 10.121.4.25/24, gateway 10.121.4.1, dns 144.173.6.7,1 1.1.1.1, ens20 +- ensure all network interfaces do not have the firewall enabled in proxmox or OS (mac spoofing will be required and should be allowed in the firewall if used) +- root:Password0 +- undercloud.local +- minimal install with QEMU guest agents +- will require registering with redhat subscription service + +## OCF partner subscription entitlement + +Register for a partner product entitlement. + +> https://partnercenter.redhat.com/NFRPageLayout +> Product: Red Hat OpenStack Platform, Standard Support (4 Sockets, NFR, Partner Only) - 25.0 Units + +Once the customer has purchased the entitlement, this should be present in their own RedHat portal to consume on the production nodes. 
+ +## Register undercloud node with the require software repositories + +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.2/html/director\_installation\_and\_usage/assembly_preparing-for-director-installation#enabling-repositories-for-the-undercloud](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_preparing-for-director-installation#enabling-repositories-for-the-undercloud) + +Browse to: + +> https://access.redhat.com/management/systems/create + +Create a new system with the following attributes. + +- Virtual System +- Name: university_test +- Architecture: x86_64 +- Number of vCPUs: 16 +- Red Hat Enterprise Linux Version: 8 +- Create + +Attach the following initial subscription: 'Red Hat Enterprise Linux, Self-Support (128 Sockets, NFR, Partner Only)' +Note the name and UUID of the system. + +Register the system. + +```sh +sudo su - +[root@undercloud ~]# subscription-manager register --name=university_test --consumerid=f870ae18-6664-4206-9a89-21f24f312866 --username=tseed@ocf.co.uk +Registering to: subscription.rhsm.redhat.com:443/subscription +Password: +The system has been registered with ID: a1b24b8a-933b-4ce8-8244-1a7e16ff51a3 +The registered system name is: university_test + +#[root@undercloud ~]# subscription-manager refresh +[root@undercloud ~]# subscription-manager list ++-------------------------------------------+ + Installed Product Status ++-------------------------------------------+ +Product Name: Red Hat Enterprise Linux for x86_64 +Product ID: 479 +Version: 8.4 +Arch: x86_64 +Status: Subscribed +Status Details: +Starts: 06/13/2022 +Ends: 06/13/2023 + +[root@undercloud ~]# subscription-manager list ++-------------------------------------------+ + Installed Product Status ++-------------------------------------------+ +Product Name: Red Hat Enterprise Linux for x86_64 +Product ID: 479 +Version: 8.4 +Arch: x86_64 +Status: Subscribed +Status Details: +Starts: 06/13/2022 +Ends: 06/13/2023 + +[root@undercloud ~]# subscription-manager identity +system identity: f870ae18-6664-4206-9a89-21f24f312866 +name: university_test +org name: 4110881 +org ID: 4110881 +``` + +Add an entitlement to the license system. + +```sh +# Check the entitlement/purchased-products portal +# you will find the SKU under a contract - this will help to identify the openstack entitlement if you have multiple +# find a suitable entitlement pool ID for Red Hat OpenStack Director Deployment Tools +subscription-manager list --available --all +subscription-manager list --available --all --matches="*OpenStack*" + +Subscription Name: Red Hat OpenStack Platform, Standard Support (4 Sockets, NFR, Partner Only) +SKU: SER0505 +Contract: 13256907 +Pool ID: 8a82c68d812ba3c301815c6f842f5ecf + +# attach to the entitlement pool ID +subscription-manager attach --pool=8a82c68d812ba3c301815c6f842f5ecf + +Successfully attached a subscription for: Red Hat OpenStack Platform, Standard Support (4 Sockets, NFR, Partner Only) +1 local certificate has been deleted. + +# set release version statically +subscription-manager release --set=8.4 +``` + +Enable repositories, set version of container-tools, update packages. 
+ +```sh +subscription-manager repos --disable=* ;\ +subscription-manager repos \ +--enable=rhel-8-for-x86_64-baseos-eus-rpms \ +--enable=rhel-8-for-x86_64-appstream-eus-rpms \ +--enable=rhel-8-for-x86_64-highavailability-eus-rpms \ +--enable=ansible-2.9-for-rhel-8-x86_64-rpms \ +--enable=openstack-16.2-for-rhel-8-x86_64-rpms \ +--enable=fast-datapath-for-rhel-8-x86_64-rpms ;\ +dnf module disable -y container-tools:rhel8 ;\ +dnf module enable -y container-tools:3.0 ;\ +dnf update -y + +reboot +``` + +## Install Tripleo client + +```sh +# install tripleoclient for install of the undercloud +dnf install -y python3-tripleoclient + +# these packages are advised for the TLS everywhere functionality, probably not required for external TLS endpoint but wont hurt +dnf install -y python3-ipalib python3-ipaclient krb5-devel python3-novajoin +``` + +Install Ceph-Ansible packages, even if you are not initially using Ceph it cannot hurt to have an undercloud capable of deploying Ceph, to use external Ceph (as in not deployed by tripleo) you will need the following package. + +There are different packages for different versions of Ceph, this is especially relevant when using external Ceph. + +> https://access.redhat.com/solutions/2045583 + +- Redhat Ceph 4.1 = Nautilus release +- Redhat Ceph 5.1 = Pacific release + +```sh +subscription-manager repos | grep -i ceph + +# Nautilus (default version in use with Tripleo deployed Ceph) +#subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms + +# Pacific (if you are using external Ceph from the opensource repos you will likely be using this version) +#dnf remove -y ceph-ansible +#subscription-manager repos --disable=rhceph-4-tools-for-rhel-8-x86_64-rpms +subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms + +# install +dnf info ceph-ansible +dnf install -y ceph-ansible +``` + +# Configure and deploy the Tripleo undercloud + +## Prepare host + +Disable firewalld. + +```sh +systemctl disable firewalld +systemctl stop firewalld +``` + +Create user/sudoers, push ssh key. Sudoers required for the tripleo installer. + +```sh +groupadd -r -g 1001 stack && useradd -r -u 1001 -g 1001 -m -s /bin/bash stack +echo "%stack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/stack +chmod 0440 /etc/sudoers.d/stack +passwd stack # password is Password0 +exit + +ssh-copy-id -i ~/.ssh/id_rsa.pub stack@university-new-undercloud +``` + +Local ssh config setup. + +```sh +nano -cw ~/.ssh/config + +Host undercloud + Hostname 10.121.4.25 + User stack + IdentityFile ~/.ssh/id_rsa +``` + +Set hostname, disable firewall (leave SElinux enabled, RHOSP tripleo requires it), install packages. 
+ +```sh +ssh undercloud +sudo su - + +timedatectl set-timezone Europe/London +dnf install chrony nano -y + +# replace server/pool entries with PHC (high precision clock device) entry to use the hypervisors hardware clock (which in turn is sync'd from online ntp pool), this should be the most acurate time for a VM +# the LXC container running ntp (192.168.101.43) does actually use the hypervisor hardware clock, the LXC container and VM should be on the same hypervisor if this is used + +nano -cw /etc/chrony.conf + +#server 192.168.101.43 iburst +#pool 2.centos.pool.ntp.org iburst +refclock PHC /dev/ptp0 poll 2 + +systemctl enable chronyd +echo ptp_kvm > /etc/modules-load.d/ptp_kvm.conf + +# the undercloud installer should set the hostname based on the 'undercloud_hostname' entry in the undercloud.conf config file +# you can set it before deployment with the following, the Opensource tripleo documentation advises to allow the undercloud installer to set it +hostnamectl set-hostname undercloud.local +hostnamectl set-hostname --transient undercloud.local + +# RHOSP hosts entry +nano -cw /etc/hosts +10.121.4.25 undercloud.local undercloud + +reboot +hostname -A +hostname -s + +# install some useful tools +sudo su - +dnf update -y +dnf install qemu-guest-agent nano tree lvm2 chrony telnet traceroute net-tools bind-utils python3 yum-utils mlocate ipmitool tmux wget -y + +# need to shutdown for qemu-guest tools to function, ensure the VM profile on the hypervisor has guest agents enabled +shutdown -h now +``` + +## Build the undercloud config file + +The first interface (enp6s18 on the proxmox VM instance) will be on the ControlPlane range. + +- Controller nodes are in all networks but cannot install nmap, can find hosts in ranges with `for ip in 10.122.6.{1..254}; do ping -c 1 -t 1 $ip > /dev/null && echo "${ip} is up"; done`. +- Proxmox has interfaces in every network and nmap installed `nmap -sn 10.122.6.0/24` to assist with debug. 
+ +| Node | IPMI VLAN2 | Ctrl_plane VLAN1 | External VLAN1214 | Internal_api VLAN12 | Storage VLAN13 | Tenant VLAN11 | +| --- | --- | --- | --- | --- | --- | --- | +| Proxmox | 10.122.1.54 (IPMI) (Proxmox interface 10.122.1.5) | 10.122.0.5 | 10.121.4.5 | 10.122.6.5 | 10.122.10.5 | 10.122.8.5 | +| Undercloud | 10.122.1.25 | 10.121.0.25-27 (br-ctlplane) | 10.121.4.25 (Undercloud VM) | NA | NA | NA | +| Temporary Storage Nodes | 10.122.1.55-57 | NA | 10.121.4.7-9 | NA | 10.122.10.7-9 | NA | +| Overcloud Controllers | 10.122.1.10-12 (Instance-HA 10.122.1.80-82 | 10.122.0.30-32 | 10.121.4.30-32 | 10.122.6.30-32 | 10.122.10.30-32 | 10.122.8.30-32 | +| Overcloud Networkers | 10.122.1.20-21 | 10.122.0.40-41 | NA (reserved 10.121.4.23-24) | 10.122.6.40-41 | NA | 10.122.8.40-41 | +| Overcloud Compute | 10.122.1.30-53/54,58-77 | 10.122.0.50-103 | NA | 10.122.6.50-103 | 10.122.10.50-103 | 10.122.8.50-103 | + +```sh +sudo su - stack +nano -cw /home/stack/undercloud.conf + +[DEFAULT] +certificate_generation_ca = local +clean_nodes = true +cleanup = true +container_cli = podman +container_images_file = containers-prepare-parameter.yaml +discovery_default_driver = ipmi +enable_ironic = true +enable_ironic_inspector = true +enable_nova = true +enabled_hardware_types = ipmi +generate_service_certificate = true +inspection_extras = true +inspection_interface = br-ctlplane +ipxe_enabled = true +ironic_default_network_interface = flat +ironic_enabled_network_interfaces = flat +local_interface = enp6s18 +local_ip = 10.122.0.25/24 +local_mtu = 1500 +local_subnet = ctlplane-subnet +overcloud_domain_name = university.ac.uk +subnets = ctlplane-subnet +undercloud_admin_host = 10.122.0.27 +undercloud_debug = true +undercloud_hostname = undercloud.local +undercloud_nameservers = 144.173.6.71,1.1.1.1 +undercloud_ntp_servers = ntp.university.ac.uk,0.pool.ntp.org +undercloud_public_host = 10.122.0.26 +[ctlplane-subnet] +cidr = 10.122.0.0/24 +#dhcp_end = 10.122.0.140 +#dhcp_start = 10.122.0.80 +dhcp_end = 10.122.0.194 +dhcp_start = 10.122.0.140 +#dns_nameservers = +gateway = 10.122.0.25 +#inspection_iprange = 10.122.0.141,10.122.0.201 +inspection_iprange = 10.122.0.195,10.122.0.249 +masquerade = true +``` + +## RHEL Tripleo container preparation + +Generate the `/home/stack/containers-prepare-parameter.yaml` config file using the default method for a local registry on the undercloud. + +```sh +sudo su - stack +openstack tripleo container image prepare default \ +--local-push-destination \ +--output-env-file containers-prepare-parameter.yaml +``` + +Add the API key to download containers from RHEL Quay public registry. + +RHEL requires containers to be pulled from Quay.io using a valid API token (unique to your RHEL account), containers-prepare-parameters.yaml must be modified to include the API key. +The following opensource tripleo sections explain the containers-prepare-parameters.yaml in more detail, for a quick deployment use the following instructions. + +> https://access.redhat.com/RegistryAuthentication + +Edit `containers-prepare-parameter.yaml` to include the Redhat Quay bearer token. 
+ +```sh +nano -cw /home/stack/containers-prepare-parameter.yaml + +parameter_defaults: + ContainerImagePrepare: + - push_destination: true + set: + <....settings....> + tag_from_label: '{version}-{release}' + ContainerImageRegistryLogin: true + ContainerImageRegistryCredentials: + registry.redhat.io: + 4110881|osp16-undercloud: long-bearer-token-here +``` + +## Deploy the undercloud + +Shutdown the Undercloud VM instance and take a snapshot in Proxmox, call it 'pre\_undercloud\_deploy'. + +```sh +openstack undercloud install --dry-run +time openstack undercloud install +#time openstack undercloud install --verbose # if there are failing tasks + +########################################################## + +The Undercloud has been successfully installed. + +Useful files: + +Password file is at /home/stack/undercloud-passwords.conf +The stackrc file is at ~/stackrc + +Use these files to interact with OpenStack services, and +ensure they are secured. + +########################################################## + + +real 31m11.191s +user 13m28.211s +sys 3m15.817s +``` + +> If you need to change any configuration in the undercloud.conf you can rerun the install over the top and the node **should** reconfigure itself (network changes likely necessitate redeployment, changinf ipxe/inspection ranges seems to require redeployment of VM). + +```sh +# update undercloud configuration, forcing regeneration of passwords 'undercloud-passwords.conf' +openstack undercloud install --force-stack-update +``` + +## Output + +- undercloud-passwords.conf - A list of all passwords for the director services. +- stackrc - A set of initialisation variables to help you access the director command line tools. + +Load env vars specific to the undercloud for the openstack cli tool. + +```sh +source ~/stackrc +``` + +Check openstack undercloud endpoints, after a reboot always check the endpoints are up before performing actions. + +```sh +openstack endpoint list +``` \ No newline at end of file diff --git a/3) Overcloud Node Import.md b/3) Overcloud Node Import.md new file mode 100755 index 0000000..0cda691 --- /dev/null +++ b/3) Overcloud Node Import.md @@ -0,0 +1,512 @@ +## Obtain images for overcloud nodes RHEL/RHOSP Tripleo + +> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_installing-director-on-the-undercloud#proc_single-cpu-architecture-overcloud-images_overcloud-images + +Download images direct from Redhat and upload to undercloud swift API. +```sh +sudo su - stack +source ~/stackrc +sudo dnf install -y rhosp-director-images-ipa-x86_64 rhosp-director-images-x86_64 +mkdir ~/images +cd ~/images +for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.2.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.2.tar; do tar -xvf $i; done +openstack overcloud image upload --image-path /home/stack/images/ +openstack image list +ll /var/lib/ironic/httpboot # look for inspector ipxe config and the kernel and initramfs files +``` + +## Import bare metal nodes + +### Build node definition list + +This is commonly refered to as the `instackenv.json` file, Redhat references this as the node definition template nodes.json. + +> the schema reference for this file: +> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/environments/baremetal.html#instackenv + +Gather all IP addresses for the IPMI interfaces. +- `.[].ports.address` is the MAC address for iPXE boot, typically eth0. 
+- `.[].pm_addr` is the IP address of the IPMI adapter. +- If the IPMI interface is shared with the eth0 control plane interface the MAC address will be used for iPXE boot. +- If the IPMI interface and eth0 interface are not shared (have different MAC address) you may have a tedious task ahead of you searching through the XClarity out of band adapters or looking through the switch MAC table and then correlating the switch port to the node to enumerate the MAC address. +- University nodes do share a single interface for IPMI and iPXE but the MAC addresses are different. + +```sh +# METHOD 1 - will not work for University SR630 servers +# where the IPMI and PXE interfaces share the same MAC address ( NOTE this is not the case for the Lenovo SR630 with OCP network adapter working to bridge the XClarity/IPMI) + +# Scan the IPMI port of all hosts. +sudo dnf install nmap -y +nmap -p 623 10.122.1.0/24 + +# Query the arp table to return the MAC addresses of the IPMI(thus PXE) interfaces. +ip neigh show dev enp6s19 + +# controller 10-12, 20-21 networker, 30-77 compute, (54 temporary proxmox, 55-57 temporary storage nodes - remove from compute range) +#ipmitool -N 1 -R 0 -I lanplus -H 10.122.1.10 -U USERID -P Password0 lan print +for i in {10..80}; do j=10.122.1.$i ; ip --json neigh show dev enp6s19 | jq -r " .[] | select(.dst==\"$j\") | \"\(.dst) \(.lladdr)\""; done | grep -v null + +10.122.1.10 38:68:dd:4a:56:3c +10.122.1.11 38:68:dd:4a:55:94 +10.122.1.12 38:68:dd:4a:42:4c +10.122.1.20 38:68:dd:4a:4a:34 +10.122.1.21 38:68:dd:4a:52:1c +10.122.1.30 38:68:dd:4c:17:ec +10.122.1.31 38:68:dd:4c:17:b4 +10.122.1.32 38:68:dd:4d:1e:84 +10.122.1.33 38:68:dd:4d:0f:f4 +10.122.1.34 38:68:dd:4d:26:ac +10.122.1.35 38:68:dd:4d:1b:f4 +10.122.1.36 38:68:dd:4a:46:4c +10.122.1.37 38:68:dd:4d:16:7c +10.122.1.38 38:68:dd:4d:15:8c +10.122.1.39 38:68:dd:4d:1a:4c +10.122.1.40 38:68:dd:4a:75:94 +10.122.1.41 38:68:dd:4d:1c:fc +10.122.1.42 38:68:dd:4d:19:0c +10.122.1.43 38:68:dd:4a:43:ec +10.122.1.44 38:68:dd:4a:41:4c +10.122.1.45 38:68:dd:4d:14:24 +10.122.1.46 38:68:dd:4d:18:c4 +10.122.1.47 38:68:dd:4d:18:cc +10.122.1.48 38:68:dd:4a:41:8c +10.122.1.49 38:68:dd:4c:17:8c +10.122.1.50 38:68:dd:4c:17:2c +10.122.1.51 38:68:dd:4d:1d:cc +10.122.1.52 38:68:dd:4c:17:e4 +10.122.1.53 38:68:dd:4c:17:5c +10.122.1.54 38:68:dd:70:a8:e8 +10.122.1.55 38:68:dd:70:a0:84 +10.122.1.56 38:68:dd:70:a4:cc +10.122.1.57 38:68:dd:70:aa:cc +10.122.1.58 38:68:dd:70:a8:88 +10.122.1.59 38:68:dd:70:a5:bc +10.122.1.60 38:68:dd:70:a5:54 +10.122.1.61 38:68:dd:70:a2:e0 +10.122.1.62 38:68:dd:70:a2:b8 +10.122.1.63 38:68:dd:70:a7:10 +10.122.1.64 38:68:dd:70:a2:0c +10.122.1.65 38:68:dd:70:9f:38 +10.122.1.66 38:68:dd:70:a8:74 +10.122.1.67 38:68:dd:70:a2:ac +10.122.1.68 38:68:dd:70:a5:18 +10.122.1.69 38:68:dd:70:a7:88 +10.122.1.70 38:68:dd:70:a4:d8 +10.122.1.71 38:68:dd:70:a6:b0 +10.122.1.72 38:68:dd:70:aa:c4 +10.122.1.73 38:68:dd:70:9e:e0 +10.122.1.74 38:68:dd:70:a3:40 +10.122.1.75 38:68:dd:70:a2:08 +10.122.1.76 38:68:dd:70:a4:a0 +10.122.1.77 38:68:dd:70:a1:6c + +# METHOD 2 - used for University SR630 servers +# where the IPMI interface and eth0 interface are not shared (or have different MAC addresses) + +## install XClarity CLI +mkdir onecli +cd onecli +curl -o lnvgy_utl_lxce_onecli02a-3.5.0_rhel_x86-64.tgz https://download.lenovo.com/servers/mig/2022/06/01/55726/lnvgy_utl_lxce_onecli02a-3.5.0_rhel_x86-64.tgz +tar -xvzf lnvgy_utl_lxce_onecli02a-3.5.0_rhel_x86-64.tgz + +## XClarity CLI - find the MAC of the eth0 device +### find all config items +./onecli config 
show all --bmc USERID:Password0@10.122.1.10 --never-check-trust --nolog +### find specific item +./onecli config show IMM.HostIPAddress1 --bmc USERID:Password0@10.122.1.10 --never-check-trust --nolog --quiet +./onecli config show IntelREthernetConnectionX722for1GbE--OnboardLAN1PhysicalPort1LogicalPort1.MACAddress --never-check-trust --nolog --quiet + +### find MAC address for eth0 (assuming eth0 is connected) +#### for the origional SR630 University nodes +for i in {10..53}; do j=10.122.1.$i ; echo $j $(sudo ./onecli config show IntelREthernetConnectionX722for1GbE--OnboardLAN1PhysicalPort1LogicalPort1.MACAddress --bmc USERID:Password0@$j --never-check-trust --nolog --quiet | grep IntelREthernetConnectionX722for1GbE--OnboardLAN1PhysicalPort1LogicalPort1.MACAddress | awk -F '=' '{print $2}' | tr '[:upper:]' '[:lower:]'); done + +## SR630 +# controllers +10.122.1.10 38:68:dd:4a:56:38 +10.122.1.11 38:68:dd:4a:55:90 +10.122.1.12 38:68:dd:4a:42:48 +# networkers +10.122.1.20 38:68:dd:4a:4a:30 +10.122.1.21 38:68:dd:4a:52:18 +# compute +10.122.1.30 38:68:dd:4c:17:e8 +10.122.1.31 38:68:dd:4c:17:b0 +10.122.1.32 38:68:dd:4d:1e:80 +10.122.1.33 38:68:dd:4d:0f:f0 +10.122.1.34 38:68:dd:4d:26:a8 +10.122.1.35 38:68:dd:4d:1b:f0 +10.122.1.36 38:68:dd:4a:46:48 +10.122.1.37 38:68:dd:4d:16:78 +10.122.1.38 38:68:dd:4d:15:88 +10.122.1.39 38:68:dd:4d:1a:48 +10.122.1.40 38:68:dd:4a:75:90 +10.122.1.41 38:68:dd:4d:1c:f8 +10.122.1.42 38:68:dd:4d:19:08 +10.122.1.43 38:68:dd:4a:43:e8 +10.122.1.44 38:68:dd:4a:41:48 +10.122.1.45 38:68:dd:4d:14:20 +10.122.1.46 38:68:dd:4d:18:c0 +10.122.1.47 38:68:dd:4d:18:c8 +10.122.1.48 38:68:dd:4a:41:88 +10.122.1.49 38:68:dd:4c:17:88 +10.122.1.50 38:68:dd:4c:17:28 +10.122.1.51 38:68:dd:4d:1d:c8 +10.122.1.52 38:68:dd:4c:17:e0 +10.122.1.53 38:68:dd:4c:17:58 + +## SR630v2 node have a different OCP network adapter +for i in {54..77}; do j=10.122.1.$i ; echo $j $(sudo ./onecli config show IntelREthernetNetworkAdapterI350-T4forOCPNIC30--Slot4PhysicalPort1LogicalPort1.MACAddress --bmc USERID:Password0@$j --never-check-trust --nolog --quiet | grep IntelREthernetNetworkAdapterI350-T4forOCPNIC30--Slot4PhysicalPort1LogicalPort1.MACAddress | awk -F '=' '{print $2}' | tr '[:upper:]' '[:lower:]'); done + +10.122.1.54 6c:fe:54:32:b8:60 +10.122.1.55 6c:fe:54:33:4f:3c +10.122.1.56 6c:fe:54:33:55:74 +10.122.1.57 6c:fe:54:33:4b:5c +10.122.1.58 6c:fe:54:33:4f:d2 +10.122.1.59 6c:fe:54:33:53:ae +10.122.1.60 6c:fe:54:33:4f:7e +10.122.1.61 6c:fe:54:33:97:46 +10.122.1.62 6c:fe:54:33:57:18 +10.122.1.63 6c:fe:54:33:4e:fa +10.122.1.64 6c:fe:54:33:53:ea +10.122.1.65 6c:fe:54:33:4d:f8 +10.122.1.66 6c:fe:54:33:4d:2c +10.122.1.67 6c:fe:54:32:e8:4e +10.122.1.68 6c:fe:54:33:55:fe +10.122.1.69 6c:fe:54:33:4b:86 +10.122.1.70 6c:fe:54:33:55:56 +10.122.1.71 6c:fe:54:33:4e:b2 +10.122.1.72 6c:fe:54:33:57:12 +10.122.1.73 6c:fe:54:33:4e:d6 +10.122.1.74 6c:fe:54:33:51:98 +10.122.1.75 6c:fe:54:33:4d:62 +10.122.1.76 6c:fe:54:33:55:50 +10.122.1.77 6c:fe:54:32:f0:2a +``` + +Create each node configuration in the "nodes" list `/home/stack/instackenv.json`. 
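+Whichever method was used, check the gathered MAC addresses for duplicates before building the node definition file; a minimal sketch, assuming the `IP MAC` pairs above have been saved to a file such as `macs.txt` (a hypothetical filename):
+
+```sh
+# any output here means two nodes reported the same PXE MAC address
+awk '{print $2}' macs.txt | sort | uniq -d
+
+# confirm the expected number of entries were collected
+wc -l < macs.txt
+```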
+ +```json +{ + "nodes": [ + { + "ports": [ + { + "address": "38:68:dd:4a:42:4c", + "physical_network": "ctlplane" + } + ], + "name": "osctl0", + "cpu": "4", + "memory": "6144", + "disk": "120", + "arch": "x86_64", + "pm_type": "ipmi", + "pm_user": "USERID", + "pm_password": "Password0", + "pm_addr": "10.122.1.10", + "capabilities": "profile:baremetal,boot_option:local", + "_comment": "rack - openstack - location - u5" + }, + { + "ports": [ + { + "address": "38:68:dd:4a:4a:34", + "physical_network": "ctlplane" + } + ], + "name": "osnet1", + "cpu": "4", + "memory": "6144", + "disk": "120", + "arch": "x86_64", + "pm_type": "ipmi", + "pm_user": "USERID", + "pm_password": "Password0", + "pm_addr": "10.122.1.21", + "capabilities": "profile:baremetal,boot_option:local", + "_comment": "rack - openstack - location - u9" + }, + { + "ports": [ + { + "address": "38:68:dd:4c:17:e4", + "physical_network": "ctlplane" + } + ], + "name": "oscomp1", + "cpu": "4", + "memory": "6144", + "disk": "120", + "arch": "x86_64", + "pm_type": "ipmi", + "pm_user": "USERID", + "pm_password": "Password0", + "pm_addr": "10.122.1.31", + "capabilities": "profile:baremetal,boot_option:local", + "_comment": "rack - openstack - location - u11" + } + ] +} +``` + +- Do not have to include capabilities, we later add these for the overcloud deployment. +- The capabilities 'profile:flavour' and 'boot_option:local' are good defaults, more capabilities will be automatically added during introspection and manually added when binding a node to a role. + +## Setup RAID + Legacy BIOS boot mode + +> IMPORTANT: UEFI boot does work on the SR650 as expected, however it can take a very long time to cycle through the interfaces to the PXE boot interface. +> On large deployments you may reach the timeout on the DHCP server entry, BIOS mode is quicker to get to the PXE rom. + +Use `/home/stack/instackenv.json` to start each node, login to each nodes XClarity web interface and setup a RAID1 array of the boot disks. + +```sh +# check nodes power state +for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis status | grep ^System;done + +# start all nodes +for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis power on ;done +for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis status | grep ^System;done + +# get IP login to XClarity web console +# configure RAID1 array on each node +# set boot option from UEFI to LEGACY/BIOS boot mode +for i in `jq -r .nodes[].pm_addr instackenv.json`; do echo $i ;done + +# stop all nodes +for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis power off ;done +``` + +## Import nodes into the undercloud + +> WARNING: the capabilities field keypair value 'node:compute-0, node:compute-1, node:compute-N' value must be contiguous, the University has a node with broken hardware 'oscomp9' that is not in the `instackenv.json` file. +> WARNING: Each capability keypair 'node:\-#' must be in sequence, with oscomp9 removed from the `instackenv.json` we add the keypairs as so: `oscomp8 = computeA-8 AND oscomp10 = computeA-9`. 
+ +**Notice the Univerity cluster has 2 different server hardware types, with different network interface mappings, the node capabilities (computeA-0 VS node:computeB-0) will be used in the `scheduler_hints.yaml` to bind nodes to roles, there need to be 2 roles for the compute nodes to allow each server type to have a different 'associated' network interface mapping schemes.** + +```sh +# load credentials +source ~/stackrc + +# remove nodes if not first run +#for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node manage $i;done +#for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node delete $i;done + +# ping all nodes to update the arp cache +#for i in `jq -r .nodes[].pm_addr instackenv.json`; do sudo ping -c 3 -W 5 $i ;done +nmap -p 623 10.122.1.0/24 + +# import nodes +openstack overcloud node import instackenv.json + +# set nodes to use BIOS boot mode for overcloud installation +for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node set --property capabilities="boot_mode:bios,$(openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities | sed "s/boot_mode:[^,]*,//g")" $i; done + +# set nodes for baremetal profile for the schedule_hints.yaml to select the nodes as candidates +for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node set --property capabilities="profile:baremetal,$(openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities | sed "s/profile:baremetal[^,]*,//g")" $i; done + +## where some nodes cannot deploy +# oscomp4, oscomp7 have been removed from the instackenv.json owing to network card issues +# owing to the way we are setting the node capability using a loop index we will see that the oscomp8 will be named in openstack as computeA-6 +# +# openstack baremetal node show oscomp8 -f json -c properties | jq .properties.capabilities +# "node:computeA-6,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal" +# +# if you do not have a full compliment of nodes ensure templates/scheduler_hints_env.yaml has the correct amount of nodes, in this case 22 computeA nodes +# ControllerCount: 3 +# NetworkerCount: 2 +# #2 nodes removed owing to network card issues +# #ComputeACount: 24 +# ComputeACount: 22 +# ComputeBCount: 24 + +# set 'node:name' capability to allow scheduler_hints.yaml to match roles to nodes +## set capability for controller and networker nodes +openstack baremetal node set --property capabilities="node:controller-0,$(openstack baremetal node show osctl0 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osctl0 ;\ +openstack baremetal node set --property capabilities="node:controller-1,$(openstack baremetal node show osctl1 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osctl1 ;\ +openstack baremetal node set --property capabilities="node:controller-2,$(openstack baremetal node show osctl2 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osctl2 ;\ +openstack baremetal node set --property capabilities="node:networker-0,$(openstack baremetal node show osnet0 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osnet0 ;\ +openstack baremetal node set --property capabilities="node:networker-1,$(openstack baremetal node show osnet1 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osnet1 
+ +## capability for compute nodes +index=0 ; for i in {0..23}; do openstack baremetal node set --property capabilities="node:computeA-$index,$(openstack baremetal node show oscomp$i -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" oscomp$i && index=$((index + 1)) ;done + +## capability for *NEW* compute nodes (oscomp-24..27 are being used for temporary proxmox and ceph thus removed from the instackenv.json) - CHECK +index=0 ; for i in {24..47}; do openstack baremetal node set --property capabilities="node:computeB-$index,$(openstack baremetal node show oscomp$i -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" oscomp$i && index=$((index + 1)) ;done + +# check capabilities are set for all nodes +#for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do echo $i && openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities; done +for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities; done + +# output, notice the order of the nodes +#node:controller-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:controller-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:controller-2,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:networker-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:networker-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeA-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeA-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeA-2,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeA-3,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#... +#node:computeB-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeB-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeB-2,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#node:computeB-3,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal +#... 
+#node:computeB-23,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal + +# all in one command for inspection and provisioning +#openstack overcloud node introspect --all-manageable --provide + +# inspect all nodes hardware +for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node inspect $i;done + +# if a node fails inspection +openstack baremetal node maintenance unset oscomp9 +openstack baremetal node manage oscomp9 +openstack baremetal node power off oscomp9 # wait for node to power off +openstack baremetal node inspect oscomp9 + +# wait until all nodes are in a 'managable' state to continue, this may take around 15 minutes +openstack baremetal node list + +# set nodes to provide state and invokes node cleaning (uses the overcloud image) +for i in `openstack baremetal node list -f json | jq -r ' .[] | select(."Provisioning State" == "manageable") | .Name'`; do openstack baremetal node provide $i;done + +# if a node fails provision +openstack baremetal node maintenance unset osnet1 +openstack baremetal node manage osnet1 +openstack baremetal node provide osnet1 + +# wait until all nodes are in an 'available' state to deploy the overcloud +baremetal node list + +# set all nodes back to 'manage' state to rerun introspection/provide +# for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node manage $i;done +``` + +## Checking networking via inspection data + +Once the node inspections complete, we can check the list of network adapters in a chassis to assist with the network configuration in the deployment configuration files. + +```sh +# load credentials +source ~/stackrc + +# find the UUID of a sample node +openstack baremetal node list -f json | jq . + +# check collected metadata, commands will show all interfaces and if they have carrier signal +#openstack baremetal node show f409dad9-1c1e-4ca0-b8af-7eab1b7f878d -f json | jq -r . +#openstack baremetal introspection data save f409dad9-1c1e-4ca0-b8af-7eab1b7f878d | jq .inventory.interfaces +#openstack baremetal introspection data save f409dad9-1c1e-4ca0-b8af-7eab1b7f878d | jq .all_interfaces +#openstack baremetal introspection data save f409dad9-1c1e-4ca0-b8af-7eab1b7f878d | jq '.all_interfaces | keys[]' + +# origional server hardware SR630 (faedafa5-5fa4-432e-b3aa-85f7f30f10fb | oscomp23) +(undercloud) [stack@undercloud ~]$ openstack baremetal introspection data save faedafa5-5fa4-432e-b3aa-85f7f30f10fb | jq '.all_interfaces | keys[]' +"eno1" +"eno2" +"eno3" +"eno4" +"enp0s20f0u1u6" +"ens2f0" +"ens2f1" + +# new server hardware SR630v2 (b239f8b7-3b97-47f8-a057-4542ca6c7ab7 | oscomp28) +(undercloud) [stack@undercloud ~]$ openstack baremetal introspection data save b239f8b7-3b97-47f8-a057-4542ca6c7ab7 | jq '.all_interfaces | keys[]' +"enp0s20f0u1u6" +"ens2f0" +"ens2f1" +"ens4f0" +"ens4f1" +"ens4f2" +"ens4f3" +``` + +Interfaces are shown in the order that they are seen on the PCI bus, modern linux OS' have an interafce naming scheme triggered by udev. 
+ +This naming scheme is often described as: +- Predictable Network Interface Names +- Consistent Network Device Naming +- Persistent names (https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/) + +```sh +# example interface naming scheme +enp0s10: +| | | +v | | --> virtual (qemu) +en| | --> ethernet + v | + p0| --> bus number (0) + v + s10 --> slot number (10) + f0 --> function (multiport card) +``` + +Openstack adopts a interface mapping scheme to help identify the network interfaces by the notation by notation 'nic1, nic2, nicN'. +Only interfaces with a carrier signal (connected to switch) will be participate in the interface mapping scheme. +For the University nodes we the following Openstack mapping scheme is created. + +Server classA: + +| mapping | interface | purpose | +| --- | --- | --- | +| nic1 | eno1 | Control Plane | +| nic2 | enp0s20f0u1u6 | USB ethernet, likely from the XClarity controller | +| nic3 | ens2f0 | LACP bond, guest/storage | +| nic4 | ens2f1 | LACP bond, guest/storage | + +Server classB: + +| mapping | interface | purpose | +| --- | --- | --- | +| nic1 | enp0s20f0u1u6 | USB ethernet, likely from the XClarity controller | +| nic2 | ens2f0 | Control Plane | +| nic3 | ens2f1 | LACP bond, guest/storage | +| nic4 | ens4f0 | LACP bond, guest/storage | + +The 'Server classA' nodes will be used for roles 'controller', 'networker' and 'compute'. the Server classB' hardware will be used for roles 'compute'. +The mapping 'nic1' is not consistent for 'Control Plane' network across both classes of server hardware, necessitating multiple roles (thus multiple network interface templates) for the compute nodes. + +You may notice some LLDP information (Cumulus switch must be running the LLDP service), this is very helpful to determine the switch port that the network interface is connected to and verify your point-to-point list. +Owing to the name of the switch we can quickly see this is the 100G cumulus switch. + +``` + "ens2f0": { + "ip": "fe80::d57c:2432:d78d:e15d", + "mac": "10:70:fd:24:62:e0", + "client_id": null, + "pxe": false, + "lldp_processed": { + "switch_chassis_id": "b8:ce:f6:18:c3:4a", + "switch_port_id": "swp9s0", + "switch_system_name": "sw100g0", + "switch_system_description": "Cumulus Linux version 4.2.0 running on Mellanox Technologies Ltd. 
MSN3700C", + "switch_capabilities_support": [ + "Bridge", + "Router" + ], + "switch_capabilities_enabled": [ + "Bridge", + "Router" + ], + "switch_mgmt_addresses": [ + "172.31.31.11", + "fe80::bace:f6ff:fe18:c34a" + ], + "switch_port_description": "swp9s0", + "switch_port_link_aggregation_enabled": false, + "switch_port_link_aggregation_support": true, + "switch_port_link_aggregation_id": 0, + "switch_port_autonegotiation_enabled": true, + "switch_port_autonegotiation_support": true, + "switch_port_physical_capabilities": [ + "1000BASE-T fdx", + "PAUSE fdx" + ], + "switch_port_mau_type": "Unknown" + } + }, +``` + + diff --git a/4) Ceph Cluster Setup.md b/4) Ceph Cluster Setup.md new file mode 100755 index 0000000..a63b7b2 --- /dev/null +++ b/4) Ceph Cluster Setup.md @@ -0,0 +1,1079 @@ +> RHOSP tripleo can also deploy Ceph +> To separate the storage deployment from the Openstack deployment to simplify any DR/Recovery/Redeployment we will create a stand-alone Ceph cluster and integrate with Openstack overcloud +> Opensource Ceph can be installed for further cost saving +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.1/html-single/integrating\_an\_overcloud\_with\_an\_existing\_red\_hat\_ceph\_cluster/index](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/integrating_an_overcloud_with_an_existing_red_hat_ceph_cluster/index) + +# Ceph pacific setup + +## Get access to ceph nodes + +- Rocky linux +- 3 Physical Nodes +- 1G Ethernet for Openstack control plane management network +- 2 x 25G Ethernet LACP bond for all service networks +- 4 disk per node, 2 OS RAID1 in BIOS, 2 Ceph 960GB +- Setup OS disk as LVM boot 1GB, root 240GB, swap 4GB +- Credentials - root:Password0 + +# Ceph architecture + +Ceph services: +| \-\-\- | +| 3 monitors | +| 3 managers | +| 6 osd | +| 3 mds (2 standby) - not being used | +| 3 rgw (2 standby - fronted by LBL) - not being used | + +Networks: + +- 'Ceph public network' (Ceph services) VLAN13, this is the same network as the 'Openstack storage network'. +- 'Ceph cluster network' (OSD replication+services) VLAN15. +- 'Openstack storage management network' VLAN14, this network is a prerequisite of the Openstack Tripleo installer, it may not be used in with an External Ceph installation, it is added to cover all bases. +- 'Openstack control plane network' VLAN1(native), this network will serve as the main ingress to the Ceph cluster nodes. +- 'Openstack external network' VLAN4, this network has an externally routable gateway. + +| Network | VLAN | Interface | IP Range | Gateway | DNS | +| --- | --- | --- | --- | --- | --- | +| Ceph public
(Openstack storage) | 13 | bond0 | 10.122.10.0/24 | NA | NA | +| Ceph cluster | 15 | bond0 | 10.122.14.0/24 | NA | NA | +| Openstack storage management | 14 | bond0 | 10.122.12.0/24 | NA | NA | +| Openstack control plane | 1(native) | ens4f0 | 10.122.0.0/24 | NA | NA | +| Openstack external | 1214 | bond0 | 10.121.4.0/24 | 10.121.4.1 | 144.173.6.71
1.1.1.1 |
+
+IP allocation:
+
+> For all ranges, addresses 7-13 in the last octet are reserved for Ceph; there are 3 spare IPs either for additional nodes or RGW/LoadBalancer services.
+
+| Node | ceph1 | ceph2 | ceph3 |
+| --- | --- | --- | --- |
+| Ceph public&#10;
(Openstack storage) | 10.122.10.7 | 10.122.10.8 | 10.122.10.9 | +| Ceph cluster | 10.122.14.7 | 10.122.14.8 | 10.122.14.9 | +| Openstack storage management | 10.122.12.7 | 10.122.12.8 | 10.122.12.9 | +| Openstack control plane | 10.122.0.7 | 10.122.0.8 | 10.122.0.9 | +| Openstack external | 10.122.4.7 | 10.122.4.8 | 10.122.4.9 | + +# Configure OS + +> Perform all actions on all nodes unless specified. +> Substitute IPs and hostnames appropriatley. + +## Configure networking + +Configure networking with the nmcli method. Connect to the console of the out of band interface and configure the management interface. + +``` +# likely have NetworkManager enabled on RHEL8 based OS +systemctl list-unit-files --state=enabled | grep -i NetworkManager + +# create management interface +# nmcli con add type ethernet ifname ens4f0 con-name openstack-ctlplane connection.autoconnect yes ip4 10.122.0.7/24 +nmcli con add type ethernet ifname ens9f0 con-name openstack-ctlplane connection.autoconnect yes ip4 10.122.0.7/24 +``` + +Connect via SSH to configure the bond and VLANS. + +``` +# create bond interface and add slave interfaces +nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad, miimon=100, downdelay=0, updelay=0" connection.autoconnect yes ipv4.method disabled ipv6.method ignore +# nmcli con add type ethernet ifname ens2f0 master bond0 +# nmcli con add type ethernet ifname ens2f1 master bond0 +nmcli con add type ethernet ifname ens3f0 master bond0 +nmcli con add type ethernet ifname ens3f1 master bond0 +nmcli device status + +# create vlan interfaces +nmcli con add type vlan ifname bond0.13 con-name ceph-public id 13 dev bond0 connection.autoconnect yes ip4 10.122.10.7/24 +nmcli con add type vlan ifname bond0.15 con-name ceph-cluster id 15 dev bond0 connection.autoconnect yes ip4 10.122.14.7/24 +nmcli con add type vlan ifname bond0.14 con-name openstack-storage_mgmt id 14 dev bond0 connection.autoconnect yes ip4 10.122.12.7/24 +nmcli con add type vlan ifname bond0.1214 con-name openstack-external id 1214 dev bond0 connection.autoconnect yes ip4 10.121.4.7/24 gw4 10.121.4.1 ipv4.dns 144.173.6.71,1.1.1.1 ipv4.dns-search local + +# check all devices are up +nmcli device status +nmcli con show +nmcli con show bond0 + +# check LACP settings +cat /proc/net/bonding/bond0 + +# remove connection profiles +nmcli con show +nmcli con del openstack-ctlplane +nmcli con del ceph-public +nmcli con del ceph-cluster +nmcli con del openstack-storage_mgmt +nmcli con del openstack-external +nmcli con del bond-slave-ens2f0 +nmcli con del bond-slave-ens2f1 +nmcli con del bond0 +nmcli con show +nmcli device status +``` + +## Install useful tools and enable Podman + +```sh +dnf update -y ;\ +dnf install nano lvm2 chrony telnet traceroute wget tar nmap tmux bind-utils net-tools podman python3 mlocate ipmitool tmux wget yum-utils -y ;\ +systemctl enable podman ;\ +systemctl start podman +``` + +## Setup hostnames + +Cephadm install tool specific setup, Ceph prefers to talk to its peers using IP (FQDN requires more setup and is not recommended in the documentation). 
+ +```sh +echo "10.122.10.7 ceph1" | tee -a /etc/hosts ;\ +echo "10.122.10.8 ceph2" | tee -a /etc/hosts ;\ +echo "10.122.10.9 ceph3" | tee -a /etc/hosts + +hostnamectl set-hostname ceph1 # this should not be an FQDN such as ceph1.local (as recommended in ceph documentation) +hostnamectl set-hostname --transient ceph1 +``` + +## Setup NTP + +``` +dnf install chrony -y +timedatectl set-timezone Europe/London +nano -cw /etc/chrony.conf + +server ntp.university.ac.uk iburst +pool 2.cloudlinux.pool.ntp.org iburst + +systemctl enable chronyd +systemctl start chronyd +``` + +## Disable annoyances + +``` +systemctl disable firewalld +systemctl stop firewalld + +# DO NOT DISABLE SELINUX - now a requirement of Ceph, containers will not start without SELINUX enforcing +#sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux +#getenforce +#setenforce 0 +#getenforce +``` + +## Reboot + +```sh +reboot +``` + +# Ceph install + +## Download cephadm deployment tool + +``` +#curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm +curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm +chmod +x cephadm +``` + +## Add the Ceph yum repo and install the cephadm tool to the system, then remove the installer. + +``` +# this may not be required with the pacific version of cephadm +# add rockylinux / almalinux to the accepted distributions in the installer +nano -cw cephadm + +class YumDnf(Packager): + DISTRO_NAMES = { + 'rocky' : ('centos', 'el'), + 'almalinux': ('centos', 'el'), + 'centos': ('centos', 'el'), + 'rhel': ('centos', 'el'), + 'scientific': ('centos', 'el'), + 'fedora': ('fedora', 'fc'), + } + +./cephadm add-repo --release pacific +./cephadm install +which cephadm +rm ./cephadm +``` + +## Bootstrap the first mon node + +> This action should be performed ONLY on ceph1. + +- Bootstrap the mon daemon on this node, using the mon network interface (referred to as the public network in ceph documentation). +- Bootstrap will pull the correct docker image and setup the host config files and systemd scripts (to start daemon containers). +- The /etc/ceph/ceph.conf config is populated with a unique cluster fsid ID and mon0 host connection profile. + +``` +mkdir -p /etc/ceph +cephadm bootstrap --mon-ip 10.122.10.7 --skip-mon-network --cluster-network 10.122.14.0/24 + +# copy the output of the command to file + +Ceph Dashboard is now available at: + + URL: https://ceph1:8443/ + User: admin + Password: Password0 + +Enabling client.admin keyring and conf on hosts with "admin" label +Enabling autotune for osd_memory_target +You can access the Ceph CLI as following in case of multi-cluster or non-default config: + + sudo /usr/sbin/cephadm shell --fsid 5b99e574-4577-11ed-b70e-e43d1a63e590 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring + +Or, if you are only running a single cluster on this host: + + sudo /usr/sbin/cephadm shell + +cat /etc/ceph/ceph.conf + +# minimal ceph.conf for 5b99e574-4577-11ed-b70e-e43d1a63e590 +[global] + fsid = 5b99e574-4577-11ed-b70e-e43d1a63e590 + mon_host = [v2:10.122.10.7:3300/0,v1:10.122.10.7:6789/0] +``` + +## Install the ceph cli on the first mon node + +> This action should be performed on ceph1. + +The cli can also be used via a container shell without installation, the cephadm installation method configures the cli tool to target the container daemons. 
+ +``` +cephadm install ceph-common +ceph -v + +ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific (stable) + +# ceph status +ceph -s + + cluster: + id: 5b99e574-4577-11ed-b70e-e43d1a63e590 + health: HEALTH_WARN + OSD count 0 < osd_pool_default_size 3 + + services: + mon: 1 daemons, quorum ceph1 (age 2m) + mgr: ceph1.virprg(active, since 46s) + osd: 0 osds: 0 up, 0 in + + data: + pools: 0 pools, 0 pgs + objects: 0 objects, 0 B + usage: 0 B used, 0 B / 0 B avail + pgs: +``` + +## Push ceph ssh pub key to other ceph nodes + +> This action should be performed on ceph1. + +``` +ceph cephadm get-pub-key > ~/ceph.pub +for i in {2..3};do ssh-copy-id -f -i ~/ceph.pub root@ceph$i;done +``` + +Test connectivity of the ceph key. + +``` +ceph config-key get mgr/cephadm/ssh_identity_key > ~/ceph.pvt +chmod 0600 ~/ceph.pvt +ssh -i ceph.pvt root@ceph2 +ssh -i ceph.pvt root@ceph3 +``` + +## Add more mon nodes + +> This action should be performed on ceph1. +> `_admin` label populates the /etc/ceph config files to allow cli usage on each host. + +``` +ceph orch host add ceph2 10.122.10.8 --labels _admin +ceph orch host add ceph3 10.122.10.9 --labels _admin +``` + +## Install the ceph cli on the remaining nodes + +``` +ssh -i ceph.pvt root@ceph2 +cephadm install ceph-common +ceph -s +exit + +ssh -i ceph.pvt root@ceph3 +cephadm install ceph-common +ceph -s +exit +``` + +## Set the operating networks, the cluster network and public network are in the same network. + +> This action should be performed on ceph1. + +``` +ceph config set global public_network 10.122.10.0/24 +ceph config set global cluster_network 10.122.14.0/24 +ceph config dump +``` + +## Add all labels to the node + +> This action should be performed on ceph1. + +These are arbitrary label values to assist with service placement, however there are special labels with functionality such as '_admin'. + +> https://docs.ceph.com/en/latest/cephadm/host-management/ + +``` +ceph orch host label add ceph1 mon ;\ +ceph orch host label add ceph1 osd ;\ +ceph orch host label add ceph1 mgr ;\ +ceph orch host label add ceph1 mds ;\ +ceph orch host label add ceph1 rgw ;\ +ceph orch host label add ceph2 mon ;\ +ceph orch host label add ceph2 osd ;\ +ceph orch host label add ceph2 mgr ;\ +ceph orch host label add ceph2 mds ;\ +ceph orch host label add ceph2 rgw ;\ +ceph orch host label add ceph3 mon ;\ +ceph orch host label add ceph3 osd ;\ +ceph orch host label add ceph3 mgr ;\ +ceph orch host label add ceph3 mds ;\ +ceph orch host label add ceph3 rgw ;\ +ceph orch host ls + +HOST ADDR LABELS STATUS +ceph1 10.122.10.7 _admin mon osd mgr mds rgw +ceph2 10.122.10.8 _admin mon osd mgr mds rgw +ceph3 10.122.10.9 _admin mon osd mgr mds rgw +3 hosts in cluster +``` + +## Deploy core daemons to hosts + +> This action should be performed on ceph1. +> More daemons will be applied as they are added. 
+> https://docs.ceph.com/en/latest/cephadm/services/#orchestrator-cli-placement-spec + +``` +#ceph orch apply mon --placement="label:mon" --dry-run +ceph orch apply mon --placement="label:mon" +ceph orch apply mgr --placement="label:mgr" +ceph orch ls # keep checking until all services are up, should be <1 minute + +NAME PORTS RUNNING REFRESHED AGE PLACEMENT +alertmanager ?:9093,9094 1/1 25s ago 36m count:1 +crash 3/3 111s ago 36m * +grafana ?:3000 1/1 25s ago 36m count:1 +mgr 3/3 111s ago 43s label:mgr +mon 3/3 111s ago 50s label:mon +node-exporter ?:9100 3/3 111s ago 36m * +prometheus ?:9095 1/1 25s ago 36m count:1 +``` + +## Setup the mgr dashboard to listen on a specific IP (the only range in this case) + +> This action should be performed on ceph1. +> https://docs.ceph.com/en/latest/mgr/dashboard/ + +When adding multiple dashboards only one node will be the active dashboard and the others will be in standby status, should you connect to another hosts @https:8443 you will be redirected to the active dashboard node. + +```sh +# dashboard is not being run on the public_network, instead on the routable network, we also put the Openstack dashboard here +ceph config set mgr mgr/dashboard/ceph1/server_addr 10.121.4.7 ;\ +ceph config set mgr mgr/dashboard/ceph2/server_addr 10.121.4.8 ;\ +ceph config set mgr mgr/dashboard/ceph3/server_addr 10.121.4.9 + +# stop/start ceph +systemctl stop ceph.target;sleep 5;systemctl start ceph.target + +# check service endpoints, likely the mgr service is running on ceph1 with ceph2/3 acting as standby +ceph mgr services + +{ + "dashboard": "https://10.122.10.7:8443/", + "prometheus": "http://10.122.10.7:9283/" +} + +# the dashboard seems to listen on any interface +ss -taln | grep 8443 + +LISTEN 0 5 *:8443 *:* + +# config confims dashboard listening address +ceph config dump | grep "mgr/dashboard/ceph1/server_addr" + + mgr advanced mgr/dashboard/ceph1/server_addr 10.121.4.7 +``` + +Reset dashboard admin user password. + +``` +ceph dashboard ac-user-show +["admin"] + +echo 'Password0' > password.txt +ceph dashboard ac-user-set-password admin -i password.txt +rm -f password.txt +``` + +Netstat shows graphana is also listening on ceph1. + +> https://ceph1:8443/ Dashboard +> https://ceph1:3000/ Graphana +> http://ceph1:9283/ Prometheus + +## Ceph OSD + +#### Add OSD + +> drive-groups method is a new way to specify which disk is to be made an OSD, (types - data, db, wal), you can select disks by cluster node, by path, by serial number, by model or by size - this is useful for large estates and very fast. +> https://docs.ceph.com/en/latest/cephadm/services/osd/#drivegroups +> https://docs.ceph.com/en/pacific/rados/configuration/bluestore-config-ref/ +> https://docs.ceph.com/en/octopus/cephadm/drivegroups + +These instructions are fairly new but will work with OSDs nested on LVM volumes and full disks, as will probably be the standard in future. + +- Perform any disk prep if required +- Enter container shell. +- Seed keyring with OSD credential. +- Prepare OSD (import into mon map with keys etc). +- Signal to the host to create OSD daemon containers. + +For the Production cluster build each node will a create logical volume on each of the 8 spinning disks, the SSD disk will be carved into 8 logical volumes with each volume acting as the wal/db device for a spinning disk. 
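+
+For comparison only, the drive-groups filters mentioned above could achieve a similar layout without pre-creating LVM volumes, letting cephadm pick data and db devices by disk characteristics; a minimal sketch (not used for this build, the rotational filters are assumptions about the hardware):
+
+```
+service_type: osd
+service_id: auto_hdd_with_ssd_db
+placement:
+  host_pattern: 'ceph*'
+spec:
+  data_devices:
+    rotational: 1        # spinning disks become OSD data devices
+  db_devices:
+    rotational: 0        # the SSD is carved up automatically for db/wal
+    limit: 1
+```
+
+The explicit LVM layout below is used instead so that the SSD db slices have a fixed, predictable size.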
+ +Create the logical volumes on each node: + +``` +# find OSD disks +lsblk + +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +sda 8:0 0 223.5G 0 disk +├─sda1 8:1 0 600M 0 part /boot/efi +├─sda2 8:2 0 1G 0 part /boot +└─sda3 8:3 0 221.9G 0 part + ├─rl-root 253:0 0 217.9G 0 lvm / + └─rl-swap 253:1 0 4G 0 lvm [SWAP] +sdb 8:16 0 1.5T 0 disk +sdc 8:32 0 12.8T 0 disk +sdd 8:48 0 12.8T 0 disk +sde 8:64 0 12.8T 0 disk +sdf 8:80 0 12.8T 0 disk +sdg 8:96 0 12.8T 0 disk +sdh 8:112 0 12.8T 0 disk +sdi 8:128 0 12.8T 0 disk +sdj 8:144 0 12.8T 0 disk + +# create volume groups on each disk +vgcreate ceph-block-0 /dev/sdc ;\ +vgcreate ceph-block-1 /dev/sdd ;\ +vgcreate ceph-block-2 /dev/sde ;\ +vgcreate ceph-block-3 /dev/sdf ;\ +vgcreate ceph-block-4 /dev/sdg ;\ +vgcreate ceph-block-5 /dev/sdh ;\ +vgcreate ceph-block-6 /dev/sdi ;\ +vgcreate ceph-block-7 /dev/sdj + +# create logical volumes on each volume group +lvcreate -l 100%FREE -n block-0 ceph-block-0 ;\ +lvcreate -l 100%FREE -n block-1 ceph-block-1 ;\ +lvcreate -l 100%FREE -n block-2 ceph-block-2 ;\ +lvcreate -l 100%FREE -n block-3 ceph-block-3 ;\ +lvcreate -l 100%FREE -n block-4 ceph-block-4 ;\ +lvcreate -l 100%FREE -n block-5 ceph-block-5 ;\ +lvcreate -l 100%FREE -n block-6 ceph-block-6 ;\ +lvcreate -l 100%FREE -n block-7 ceph-block-7 + +# create volume groups on the SSD disk +vgcreate ceph-db-0 /dev/sdb + +# divide the SSD disk into 8 logical volumes to provide a DB device +lvcreate -L 180GB -n db-0 ceph-db-0 ;\ +lvcreate -L 180GB -n db-1 ceph-db-0 ;\ +lvcreate -L 180GB -n db-2 ceph-db-0 ;\ +lvcreate -L 180GB -n db-3 ceph-db-0 ;\ +lvcreate -L 180GB -n db-4 ceph-db-0 ;\ +lvcreate -L 180GB -n db-5 ceph-db-0 ;\ +lvcreate -L 180GB -n db-6 ceph-db-0 ;\ +lvcreate -L 180GB -n db-7 ceph-db-0 +``` + +Write the OSD service spec file and apply, this should only be run on a single _admin node, Ceph1. 
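+
+Before applying the spec it is worth confirming that every node presents the same volume group and logical volume names, since the spec below references them by path; a quick check (a sketch, run on each ceph node):
+
+```sh
+# expect ceph-block-0..7 and ceph-db-0 volume groups on every node
+vgs --noheadings -o vg_name | grep ceph
+
+# expect block-0..7 and db-0..7 logical volumes
+lvs --noheadings -o vg_name,lv_name | grep ceph
+```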
+ +``` +# enter into a container with the toolchain and keys +cephadm shell -m /var/lib/ceph + +# pull credentials from the database to a file for the ceph-volume tool +ceph auth get-or-create client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring + +# If there is an issue ingesting an disk to an OSD, all partition structures can be destroyed with the following command +#ceph-volume lvm zap /dev/sdb +#sgdisk --zap-all /dev/sdb +# there are a few methods to rescan disk and have for kernel address, often a reboot is the quickest way to get OSDs recognised after schedule for ingestion +# exit +# reboot + +# Example methods of provisioning disk as OSD via the cli, **use the service spec yaml method** + +## for LVM +#ceph-volume lvm prepare --data /dev/almalinux/osd0 --no-systemd +#ceph cephadm osd activate ceph1 # magic command that creates the systemd unit file(s) on the host to bring up an OSD daemon container +#ceph-volume lvm list + +## for whole disk, manual method, this is probably a legacy method but is reliable +#ceph orch daemon add osd ceph1:/dev/sda +#ceph orch daemon add osd ceph1:/dev/sdb + +# **Prefered method of provision using service specification** + +## service spec method +## for whole disk or LVM, new drive-groups method with a single configuration and one-shot command +# only needs to be performed on one node, ceph1 +# you can perform this on the native operating system, this will help put the osd_spec.yml file in source control +# for LVM partitions on whole disk in University this was done in the cephadm shell (cephadm shell -m /var/lib/ceph) as theis is where the ceph orch command seemd to work + +# for use of any kind of discovery based auto selection of the disk you can query a disk to get traits, this should work on whole disk and LVMs alike +# ceph-volume inventory /dev/ceph-block-0/block-0 +# +# ====== Device report /dev/ceph-db-0/db-0 ====== +# +# path /dev/ceph-db-0/db-0 +# lsm data {} +# available False +# rejected reasons Device type is not acceptable. 
It should be raw device or partition +# device id +# --- Logical Volume --- +# name db-0 +# comment not used by ceph + +# create the service spec file, this will include multiple yaml documents delimited by ---, +vi osd_spec.yml + +--- +service_type: osd +service_id: block-0 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-0/block-0 + db_devices: + paths: + - /dev/ceph-db-0/db-0 +--- +service_type: osd +service_id: block-1 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-1/block-1 + db_devices: + paths: + - /dev/ceph-db-0/db-1 +--- +service_type: osd +service_id: block-2 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-2/block-2 + db_devices: + paths: + - /dev/ceph-db-0/db-2 +--- +service_type: osd +service_id: block-3 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-3/block-3 + db_devices: + paths: + - /dev/ceph-db-0/db-3 +--- +service_type: osd +service_id: block-4 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-4/block-4 + db_devices: + paths: + - /dev/ceph-db-0/db-4 +--- +service_type: osd +service_id: block-5 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-5/block-5 + db_devices: + paths: + - /dev/ceph-db-0/db-5 +--- +service_type: osd +service_id: block-6 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-6/block-6 + db_devices: + paths: + - /dev/ceph-db-0/db-6 +--- +service_type: osd +service_id: block-7 +placement: + hosts: + - ceph1 + - ceph2 + - ceph3 +spec: + data_devices: + paths: + - /dev/ceph-block-7/block-7 + db_devices: + paths: + - /dev/ceph-db-0/db-7 + +ceph orch apply -i osd_spec.yml # creates the systemd unit file(s) on the host to bring up OSD daemon containers (1 container per OSD) + +# exit the container + +# wait whilst OSDs are created, you will see a container per OSD +podman ps -a +ceph status + + cluster: + id: 5b99e574-4577-11ed-b70e-e43d1a63e590 + health: HEALTH_OK + + services: + mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 75m) + mgr: ceph1.fgnquq(active, since 75m), standbys: ceph2.whhrir, ceph3.mxipmg + osd: 24 osds: 24 up (since 2m), 24 in (since 3m) + + data: + pools: 1 pools, 1 pgs + objects: 0 objects, 0 B + usage: 4.2 TiB used, 306 TiB / 310 TiB avail + pgs: 1 active+clean + +# check OSD tree +ceph osd df tree + +ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME +-1 309.82068 - 310 TiB 4.2 TiB 19 MiB 0 B 348 MiB 306 TiB 1.36 1.00 - root default +-3 103.27356 - 103 TiB 1.4 TiB 6.3 MiB 0 B 116 MiB 102 TiB 1.36 1.00 - host ceph1 + 0 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.0 + 4 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.4 + 8 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.8 +11 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.11 +12 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.12 +16 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.16 +18 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.18 +23 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 1 up osd.23 +-5 103.27356 - 103 TiB 1.4 
TiB 6.3 MiB 0 B 116 MiB 102 TiB 1.36 1.00 - host ceph2 + 1 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 1 up osd.1 + 3 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.3 + 6 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.6 + 9 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.9 +14 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.14 +15 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.15 +19 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.19 +22 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.22 +-7 103.27356 - 103 TiB 1.4 TiB 6.3 MiB 0 B 116 MiB 102 TiB 1.36 1.00 - host ceph3 + 2 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.2 + 5 hdd 12.90919 1.00000 13 TiB 180 GiB 804 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.5 + 7 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.7 +10 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 15 MiB 13 TiB 1.36 1.00 0 up osd.10 +13 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.13 +17 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.17 +20 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 1 up osd.20 +21 hdd 12.90919 1.00000 13 TiB 180 GiB 808 KiB 0 B 14 MiB 13 TiB 1.36 1.00 0 up osd.21 + TOTAL 310 TiB 4.2 TiB 19 MiB 0 B 348 MiB 306 TiB 1.36 +MIN/MAX VAR: 1.00/1.00 STDDEV: 0 +``` + +Deleting OSDs, at least one OSD should be left for metrics/config pools to function, removing all OSDs will tank an install and is only useful to remove a ceph cluster, usually you would rebuild fresh. + +``` +# remove all OSDs, this is only useful if you intend to destroy the ceph cluster - DANGEROUS +# doesnt really work when all OSDs are removed as key operating pools are destroyed not just degraded + +#!/bin/bash +for i in {0..12} +do + ceph osd out osd.$i + ceph osd down osd.$i + ceph osd rm osd.$i + ceph osd crush rm osd.$i + ceph auth del osd.$i + ceph osd destroy $i --yes-i-really-mean-it + ceph orch daemon rm osd.$i --force + ceph osd df tree +done +ceph osd crush rm ceph1 +ceph osd crush rm ceph2 +ceph osd crush rm ceph3 +``` + +## Enable autotune memory usage on OSD nodes + +> This action should be performed on ceph1. + +``` +ceph config set osd osd_memory_target_autotune true +ceph config get osd osd_memory_target_autotune +``` + +## Enable placement group autoscaling for any pool subsequently added + +> This action should be performed on ceph1. + +``` +ceph config set global osd_pool_default_pg_autoscale_mode on +ceph osd pool autoscale-status +``` + +# Erasure coding + +## Understanding EC + +The ruleset for EC is not so clear especially for small clusters, the following explanation/rules should be followed for a small 3 node Ceph cluster. In fact you have only one available scheme in reality, K=2, M=1. + +- K the number of chunks origional data is divided into +- M the extra codes (basically parity) used with the data +- N the number of chunks created for each piece of data K+M +- Crush failure domain - can be OSD, RACK, HOST (and a few more if listed in crushmap such as PDU, DATACENTRE), basically this dictates the dispersal of the M data (i would guess K data also, to allow for larger schemes RACK). 
+- Failure domains - OSD is really only suitable for testing, HOST is the most typical choice, RACK is very sensible but requires many nodes spread across racks.
+- **What is not clearly documented is that there must be at least as many hosts as K+M when using the HOST failure domain for resiliency.**
+- A 3 node cluster can therefore only support K=2,M=1.
+- In a RACK failure domain with, say, 4 racks (each with an equal number of nodes and OSDs), you could use K=3,M=1, allowing for the loss of one whole rack.
+- EC originally supported RGW object storage only. RBD pools are now supported (using ec_overwrites), but the pool metadata must still reside on a replicated pool; Openstack has a largely undocumented setting to use separate metadata/data pools.
+
+3 node configuration, OSD vs HOST, to illustrate the failure domain differences:
+
+- Using K=2,M=1 and an OSD failure domain could mean host1 gets K=1,M=1 and host2 gets K=1. If host1 goes down the data cannot be reconstructed.
+- Using K=2,M=1 and a HOST failure domain means host1 gets K=1, host2 gets K=1, host3 gets M=1 - each node holds a K or M chunk, data and parity are dispersed equally and one full node failure can be tolerated.
+
+Ceph accepts many different K,M schemes; this does not mean they work or offer the protection you want, and in some cases pool creation will stall where the scheme is inadvisable.
+It is recommended never to use more than 80% of the storage capacity: above 80% there are performance penalties as data is shuffled about, and at 100% the cluster goes read only and will probably damage in-flight data, as with any filesystem.
+
+Redhat state that where K=4,M=2 is supported you may also use K=8,M=4 for greater resiliency; they do not state that 12 nodes would realistically be required for this with a HOST failure domain.
+K=4,M=2 on a 12 node cluster with a HOST failure domain would work just fine and would use less CPU/RAM when writing the data chunks to disk, though a client may get less read performance on a busy cluster as it would only pull from 50% of the cluster nodes.
+Where K+M is an odd number and the node count is even (or vice versa), data will not be distributed equally across the cluster; with large data files such as VM images the disparity may be noticeable even after automatic re-balancing.
+The same applies to plain replication: say there are 3 nodes and 2-way replication in the crush map, large files may be written to two nodes and fill them to capacity, leaving considerable free space shown as available but effectively unusable; re-balancing will not help.
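+
+Whichever scheme is chosen, the failure domain it actually lands on can be confirmed from the profile and the generated crush rule; a minimal sketch (the `ec-21-profile` name matches the profile created later in this document):
+
+```sh
+# show the K, M and crush-failure-domain of a profile
+ceph osd erasure-code-profile get ec-21-profile
+
+# list crush rules and inspect the one created for the EC pool;
+# the "type" of the chooseleaf step should be "host", not "osd"
+ceph osd crush rule ls
+ceph osd crush rule dump
+```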
+ +Redhat supports the following schemes with the jerasure EC plugin (this is the default algorithum): + +- K=8,M=3 (minimum 11 nodes using HOST failure domain) +- K=8,M=4 (minimum 12 nodes using HOST failure domain) +- K=4,M=2 (minimum 6 nodes using HOST failure domain) + +## EC usable space + +### Example 1 + +For illustration each node has 4 disks (OSDs) of 12TB thus 48TB raw disk, take the following example: + +- minimum 3 nodes K=2,M=1 - 144TB raw disk - (12 OSD * (2 K / ( 2 K + 1 M)) * 12TB OSD Size * 0.8 (80% capacity) ) - 76TB usable disk VS 3way replication ((144TB / 3) * 0.8) 38.4TB +- minimum 4 nodes K=3,M=1 - 192TB raw disk - (16 OSD * (3 K / (3 K + 1 M)) * 12TB OSD Size * 0.8) - 115TB usable disk VS 3way replication ((192TB / 3) * 0.8) 51.2TB +- minimum 12 nodes K=9,M=3 - 576TB raw disk - (48 OSD * (9 K / (9 K + 3 M)) * 12TB OSD Size * 0.8) - 345TB usable disk VS 3way replication ((576TB / 3) * 0.8) 153.6 + +### University Openstack + +3 nodes, 8 disks per node (excluding SSD db/wal), 14TB disks thus 336TB raw disk. +All possible storage schemes only allow for 1 failed HOST. + +- In a 3 way replication we have 336/3 = 112 * 0.8 = 89.6TB usable space +- In a 2 way replication (more prone to bitrot) we have 336/2 = 168 * 0.8 = 134.4TB usable space +- In a EC scheme of K=2,M=1 we have 24 * (2 / (2+1)) * 14 * 0.8 = 179TB usable space + +# Openstack RBD storage + +> CephFS/RGW are not being used on this cluster, it is purely to be used for VM image storage. +> For further Openstack CephFS/RGW integration see the OCF LAB notes, these are a much more involved Openstack deployment. + +- For RHOSP 16 the controller role must contain all of the ceph services for use with an Openstack provisioned or externally provisioned ceph cluster. +- The Roles created for the University deployment already contain the Ceph services. + +## Undercloud Ceph packages + +Ensure that your undercloud has the right version of `ceph-ansible` before any deployment. + +Get Ceph packages. + +> https://access.redhat.com/solutions/2045583 + +- Redhat Ceph 4.1 = Nautilus release +- Redhat Ceph 5.1 = Pacific release + +```sh +sudo subscription-manager repos | grep -i ceph + +# Nautilus +sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms + +# Pacific (if you are using external Ceph from the opensource repos you will likely be using this) +#sudo dnf remove -y ceph-ansible +#sudo subscription-manager repos --disable=rhceph-4-tools-for-rhel-8-x86_64-rpms +sudo subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms + +# install +sudo dnf info ceph-ansible +sudo dnf install -y ceph-ansible +``` + +## Create Openstack pools - University uses EC pools, skip to the next section + +Listed are the recomended PG allocation using redhat defaults, this isnt very tuned and assumed 100PGs per OSD on a 3 node cluster with 9 disks, opensource Ceph is now up to 250PGs per OSD. + +As PG autoscaling is enabled, and as this release is later than Nautilus we can avoid specifying PGs, meaning each pool will be initially allocated 32PGs and scale from there. + +RBD pools. 
+ +```sh +# Storage for OpenStack Block Storage (cinder) +#ceph osd pool create volumes 2048 +ceph osd pool create volumes + +# Storage for OpenStack Image Storage (glance) +#ceph osd pool create images 128 +ceph osd pool create images + +# Storage for instances +#ceph osd pool create vms 256 +ceph osd pool create vms + +# Storage for OpenStack Block Storage Backup (cinder-backup) +#ceph osd pool create backups 512 +ceph osd pool create backups + +# Storage for OpenStack Telemetry Metrics (gnocchi) +#ceph osd pool create metrics 128 +ceph osd pool create metrics + +# Check pools +ceph osd pool ls + +device_health_metrics +volumes +images +vms +backups +metrics +``` + +## Create Erasure Coded Openstack pools + +1. create EC profile (https://docs.ceph.com/en/latest/rados/operations/erasure-code/) +2. create metadata pools with normal 3 way replication (default replication rule in the crushmap) +3. create EC pools K=2,M=1,failure domain HOST + +metadata pool - replicated pool +data pool - EC pool (with ec_overwrites) + +| Metadata Pool | Data Pool | Usage | +| --- | --- | --- | +| volumes | volumes_data | Storage for OpenStack Block Storage (cinder) | +| images | images_data | Storage for OpenStack Image Storage (glance) | +| vms | vms_data | Storage for VM/Instance disk | +| backups | backups_data | Storage for OpenStack Block Storage Backup (cinder-backup) | +| metrics | metrics_data | Storage for OpenStack Telemetry Metrics (gnocchi) | + +Create pool example: + +```sh +# if you need to remove a pool, remember change back to false state after deletion +#ceph config set mon mon_allow_pool_delete true + +# create new erasure code profile (default will exist) +ceph osd erasure-code-profile set ec-21-profile k=2 m=1 crush-failure-domain=host +ceph osd erasure-code-profile ls +ceph osd erasure-code-profile get ec-21-profile + +crush-device-class= +crush-failure-domain=host +crush-root=default +jerasure-per-chunk-alignment=false +k=2 +m=1 +plugin=jerasure +technique=reed_sol_van +w=8 + +# delete an EC profile +#ceph osd erasure-code-profile rm ec-21-profile + +# create pool this will host metadata only after issuing the rbd_default_data_pool, by default the crushmap rule will set as replicated, include the parameter to illustrate the metadata must replicated not erasure coded +ceph osd pool create volumes replicated +ceph osd pool application enable volumes rbd +ceph osd pool application get volumes + +# create erasure code enabled data pool +ceph osd pool create volumes_data erasure ec-21-profile +ceph osd pool set volumes_data allow_ec_overwrites true # this must be set for RBD pools to make changes for constantly opened disk file +ceph osd pool application enable volumes_data rbd # Openstack will usually ensure the pool is RBD application enabled, when specifying a data disk we must explicitly set the usage/application mode +ceph osd pool application get volumes_data + +# set an EC data pool for the replicated pool, 'volumes' will subsequently only host metadata - THIS is a magic command not documented until 2022, typically in non-RHOSP each service has its own client. 
user and EC data pool override +rbd config pool set volumes rbd_default_data_pool volumes_data + +# If using CephFS with manilla the pool creation is the same, however dictation usage of the data pool is a little simpler and specified in the volume creation, allow_ec_overwrites must also be set for CephFS +#ceph fs new cephfs cephfs_metadata cephfs_data force + +# Check pools, notice the 3way replicated pool would consume a total of 97TB where EC efficienciy could now consume a total of 193TB, around 179TB usable at max performance according to the EC calculation previously explained in this document +ceph osd pool ls + +device_health_metrics +volumes +images +vms +backups +metrics + +ceph df +--- RAW STORAGE --- +CLASS SIZE AVAIL USED RAW USED %RAW USED +hdd 310 TiB 306 TiB 4.2 TiB 4.2 TiB 1.36 +TOTAL 310 TiB 306 TiB 4.2 TiB 4.2 TiB 1.36 + +--- POOLS --- +POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL +device_health_metrics 1 1 0 B 30 0 B 0 97 TiB +volumes_data 7 32 0 B 0 0 B 0 193 TiB +volumes 8 32 0 B 1 0 B 0 97 TiB +images 9 32 0 B 1 0 B 0 97 TiB +vms 10 32 0 B 1 0 B 0 97 TiB +backups 11 32 0 B 1 0 B 0 97 TiB +metrics 12 32 0 B 1 0 B 0 97 TiB +images_data 13 32 0 B 0 0 B 0 193 TiB +vms_data 14 32 0 B 0 0 B 0 193 TiB +backups_data 15 32 0 B 0 0 B 0 193 TiB +metrics_data 16 32 0 B 0 0 B 0 193 TiB +``` + +Once Openstack starts to consume disk the EC scheme is apparent. + +```sh +# we have created a single 10GB VM Instance, the 10GB is thin provisioned, this Instance uses 1.2GB of space +[Universityops@test ~]$ df -Th +Filesystem Type Size Used Avail Use% Mounted on +devtmpfs devtmpfs 959M 0 959M 0% /dev +tmpfs tmpfs 987M 0 987M 0% /dev/shm +tmpfs tmpfs 987M 8.5M 978M 1% /run +tmpfs tmpfs 987M 0 987M 0% /sys/fs/cgroup +/dev/vda2 xfs 10G 1.2G 8.9G 12% / +tmpfs tmpfs 198M 0 198M 0% /run/user/1001 + +# ceph shows some metadata usage (for the RBD disk image) and 1.3GB of data used in volumes_data, note under an EC scheme we see 2.0GB of consumed disk VS 3.9GB under a 3way replication scheme +[root@ceph1 ~]# ceph df +--- RAW STORAGE --- +CLASS SIZE AVAIL USED RAW USED %RAW USED +hdd 310 TiB 306 TiB 4.2 TiB 4.2 TiB 1.36 +TOTAL 310 TiB 306 TiB 4.2 TiB 4.2 TiB 1.36 + +--- POOLS --- +POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL +device_health_metrics 1 1 0 B 30 0 B 0 97 TiB +volumes_data 7 32 1.3 GiB 363 2.0 GiB 0 193 TiB +volumes 8 32 691 B 6 24 KiB 0 97 TiB +images 9 32 452 B 18 144 KiB 0 97 TiB +vms 10 32 0 B 1 0 B 0 97 TiB +backups 11 32 0 B 1 0 B 0 97 TiB +metrics 12 32 0 B 1 0 B 0 97 TiB +images_data 13 32 1.7 GiB 220 2.5 GiB 0 193 TiB +vms_data 14 32 0 B 0 0 B 0 193 TiB +backups_data 15 32 0 B 0 0 B 0 193 TiB +metrics_data 16 32 0 B 0 0 B 0 193 TiB + + +``` + +## Create RBD user for Openstack, assign capabilities and retrieve access token + +Openstack needs credentials to access disk. +Use method 3, generally this is the way Ceph administration is going. + +```sh +# 1. Redhat CLI method, one-shot command +# +ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics' + +# 2. 
Manual method, you can update caps this way however all caps must be added at once, they cannot be apended +# +ceph auth get-or-create client.openstack +ceph auth caps client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics' + +# Tighter mgr access, this should be fine but not tested with Openstack (official documentation does not cover tighter security model) +# +#ceph auth caps client.openstack mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics' mgr 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics' + +# 3. Config Method easier to script and backup/source-control +# +# 1) generate a keyring with no caps +# 2) add caps +# 3) import user +ceph-authtool --create-keyring ceph.client.openstack.keyring --gen-key -n client.openstack + +# NON EC profile +nano -cw ceph.client.openstack.keyring + +[client.openstack] + key = AQCC5z5jtOmJARAAiFaC2HB4f2pBYfMKWzkkkQ== + caps mon = 'profile rbd' + caps osd = 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics' + caps mgr = 'allow *' + +# EC profile + +[client.openstack] + key = AQCC5z5jtOmJARAAiFaC2HB4f2pBYfMKWzkkkQ== + caps mon = 'profile rbd' + caps osd = 'profile rbd pool=volumes, profile rbd pool=volumes_data, profile rbd pool=vms, profile rbd pool=vms_data, profile rbd pool=images, profile rbd pool=images_data, profile rbd pool=backups, profile rbd pool=backups_data, profile rbd pool=metrics, profile rbd pool=metrics_data' + caps mgr = 'allow *' + +ceph auth import -i ceph.client.openstack.keyring +ceph auth ls +``` \ No newline at end of file diff --git a/5) Overcloud Deployment.md b/5) Overcloud Deployment.md new file mode 100755 index 0000000..812dd23 --- /dev/null +++ b/5) Overcloud Deployment.md @@ -0,0 +1,1851 @@ +# Network isolation and `network_data.yaml` + +By default all openstack services will all run on the provisioning network, to separate out the various service types to their own networks (recommended), Openstack introduces the concept of network isolation. +To enable network isolation the deployment command must include the following templates. These templates require no modification. + +- `/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml` +- `/usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml` + +To assign IPs/VLANs to the various operational networks edit the `network_data.yaml` file. + +- The overcloud installer command accepts a parameter `--networks-file` to reference the `network_data.yaml` for definitions of the networks, their ip allocation pools and vlan ID. +- The default reference template is located @ `/usr/share/openstack-tripleo-heat-templates/network_data.yaml`, if the installer command is run without the `--networks-file` parameter then the IP/VLAN scheme in this file will be used. + +Create the configuration, allocate the IP ranges and VLANs for the intended design. + +- This template references the IP ranges and VLANs for the University deployment. Whilst all IPs will be statically defined the installer requires the allocation_pools entries (used in flavours method). 
+- There is a default/standard storage management network included for Ceph integration, this is not used in an external Ceph configuration but required for the installer to complete. +- An additional non-standard network is included for Compute Instance-HA, this is IPMI network. + +```sh +mkdir /home/stack/templates +nano -cw /home/stack/templates/network_data.yaml + +- name: Storage + enabled: true + vip: true + vlan: 13 + name_lower: storage + ip_subnet: '10.122.10.0/24' + allocation_pools: [{'start': '10.122.10.30', 'end': '10.122.10.249'}] + mtu: 1500 +- name: StorageMgmt + name_lower: storage_mgmt + enabled: true + vip: true + vlan: 14 + ip_subnet: '10.122.12.0/24' + allocation_pools: [{'start': '10.122.12.30', 'end': '10.122.12.249'}] + mtu: 1500 +- name: InternalApi + name_lower: internal_api + enabled: true + vip: true + vlan: 12 + ip_subnet: '10.122.6.0/24' + allocation_pools: [{'start': '10.122.6.30', 'end': '10.122.6.249'}] + mtu: 1500 +- name: Tenant + name_lower: tenant + enabled: true + vip: false + vlan: 11 + ip_subnet: '10.122.8.0/24' + allocation_pools: [{'start': '10.122.8.30', 'end': '10.122.8.249'}] + mtu: 1500 +- name: External + name_lower: external + vip: true + vlan: 1214 + ip_subnet: '10.121.4.0/24' + gateway_ip: '10.121.4.1' + allocation_pools: [{'start': '10.121.4.30', 'end': '10.121.4.249'}] + mtu: 1500 +- name: IpmiNetwork + name_lower: ipmi_network + vip: false + vlan: 2 + ip_subnet: '10.122.1.0/24' + allocation_pools: [{'start': '10.122.1.80', 'end': '10.122.1.249'}] + mtu: 1500 +``` + +# Create custom roles + +The following custom roles will be created. + +- Controller role without the networker functions which are to be provided by the networker role. +- Controller role with additional network for IPMI fencing. +- Compute role for server hardware A. (this will include Instance-HA capability) +- Compute role for server hardware B. (this will include Instance-HA capability) + +## Controller role + +Find services required for a controller role without networker services. The `ControllerOpenstack.yaml` role contains only the controller core services, thus missing database and messenger/queue services, this is the base role to build upon. + +```sh +grep 'OS::TripleO::Services::' /usr/share/openstack-tripleo-heat-templates/roles/ControllerOpenstack.yaml > ~/ControllerOpenstack.txt ;\ +grep 'OS::TripleO::Services::' /usr/share/openstack-tripleo-heat-templates/roles/Database.yaml > ~/Database.txt ;\ +grep 'OS::TripleO::Services::' /usr/share/openstack-tripleo-heat-templates/roles/Messaging.yaml > ~/Messaging.txt ;\ +grep 'OS::TripleO::Services::' /usr/share/openstack-tripleo-heat-templates/roles/Networker.yaml > ~/Networker.txt ;\ +grep 'OS::TripleO::Services::' /usr/share/openstack-tripleo-heat-templates/roles/ControllerNoCeph.yaml > ~/ControllerNoCeph.txt +``` + +Find services required for Database, these are to be added to the custom Controller role. + +```sh +diff <(sort ~/ControllerOpenstack.txt) <(sort ~/Database.txt) | grep \> + +> - OS::TripleO::Services::Clustercheck +> - OS::TripleO::Services::MySQL +``` + +Find services required for Messaging, these are to be added to the custom Controller role. + +```sh +diff <(sort ~/ControllerOpenstack.txt) <(sort ~/Messaging.txt) | grep \> + +> - OS::TripleO::Services::OsloMessagingNotify +> - OS::TripleO::Services::OsloMessagingRpc +``` + +Find services required for Ceph storage, these are to be added to the custom Controller role forCeph deployments (specifically External Ceph integration). 
+(NOTE: ControllerOpenstack.txt and ControllerNoCeph.txt both contain Networker services) + +```sh +diff <(sort ~/ControllerNoCeph.txt) <(sort ~/ControllerOpenstack.txt) | grep \> + +> - OS::TripleO::Services::CephGrafana +> - OS::TripleO::Services::CephMds +> - OS::TripleO::Services::CephMgr +> - OS::TripleO::Services::CephMon +> - OS::TripleO::Services::CephRbdMirror +> - OS::TripleO::Services::CephRgw +``` + +We keep/add the client services to use external Ceph. +With RHOSP 16 when using external Ceph all of the Ceph services are still required, not just the following client services. + +```sh +< - OS::TripleO::Services::CephClient +< - OS::TripleO::Services::CephExternal +``` + +Find services required for Networker, these are to be removed from the custom Controller role, if you are using ControllerOpenstack.yaml as the base template these will not require removal (they are not present). + +```sh +diff <(sort ~/ControllerOpenstack.txt) <(sort ~/Networker.txt) | grep \> + +> - OS::TripleO::Services::IronicNeutronAgent +> - OS::TripleO::Services::NeutronDhcpAgent +> - OS::TripleO::Services::NeutronL2gwAgent +> - OS::TripleO::Services::NeutronL3Agent +> - OS::TripleO::Services::NeutronMetadataAgent +> - OS::TripleO::Services::NeutronML2FujitsuCfab +> - OS::TripleO::Services::NeutronML2FujitsuFossw +> - OS::TripleO::Services::NeutronOvsAgent +> - OS::TripleO::Services::NeutronVppAgent +> - OS::TripleO::Services::OctaviaHealthManager +> - OS::TripleO::Services::OctaviaHousekeeping +> - OS::TripleO::Services::OctaviaWorker +``` + +Create a custom roles directory, copy the default roles to the directory, these will be used as a base for generating the customised roles. + +```sh +mkdir /home/stack/templates/roles +cp -r /usr/share/openstack-tripleo-heat-templates/roles /home/stack/templates +mv /home/stack/templates/roles/Controller.yaml /home/stack/templates/roles/Controller.yaml.orig +cp /home/stack/templates/roles/ControllerOpenstack.yaml /home/stack/templates/roles/Controller.yaml +``` + +Create the new controller role with the services that are to be added/removed. +The 'name:' key (Controller) is referenced in the `scheduler_hints_env.yaml` by the entry '<role name>SchedulerHints' (ControllerSchedulerHints), this binds the role to the host. + +```sh +# change the 'Role:' description, the 'name:' and append/remove services listed + +nano -cw /home/stack/templates/roles/Controller.yaml + +############################################################################### +# Role: ControllerNoNetworkExtCeph # +############################################################################### +- name: Controller + description: | + Controller role that does not contain the networking + components. 
+ +# add to role +> - OS::TripleO::Services::Clustercheck +> - OS::TripleO::Services::MySQL +# add to role +> - OS::TripleO::Services::OsloMessagingNotify +> - OS::TripleO::Services::OsloMessagingRpc +# check present/add to role +> - OS::TripleO::Services::CephGrafana +> - OS::TripleO::Services::CephMds +> - OS::TripleO::Services::CephMgr +> - OS::TripleO::Services::CephMon +> - OS::TripleO::Services::CephRbdMirror +> - OS::TripleO::Services::CephRgw +# check present/add to role +< - OS::TripleO::Services::CephClient +< - OS::TripleO::Services::CephExternal +# check present/remove from role +> - OS::TripleO::Services::IronicNeutronAgent +> - OS::TripleO::Services::NeutronDhcpAgent +> - OS::TripleO::Services::NeutronL2gwAgent +> - OS::TripleO::Services::NeutronL3Agent +> - OS::TripleO::Services::NeutronMetadataAgent +> - OS::TripleO::Services::NeutronML2FujitsuCfab +> - OS::TripleO::Services::NeutronML2FujitsuFossw +> - OS::TripleO::Services::NeutronOvsAgent +> - OS::TripleO::Services::NeutronVppAgent +> - OS::TripleO::Services::OctaviaHealthManager +> - OS::TripleO::Services::OctaviaHousekeeping +> - OS::TripleO::Services::OctaviaWorker +``` + +## Compute role + +- No customisation of services for the role is required. +- For an instance-HA deployment copy `ComputeInstanceHA.yaml` role to `computeA.yaml / computeB.yaml`. +- For a standard deployment copy `Compute.yaml` role to `computeA.yaml / computeB.yaml`. +- The 'name:' key (ComputeA) is referenced in the `scheduler_hints_env.yaml` by the entry '<role name>SchedulerHints' (ComputeASchedulerHints), this binds the role to the host. + +Instance-HA compute role. + +- Using the instance-HA compute roles without any of the environment files to enable the capability on the controllers seems to work fine, enabling instance-HA is covered further on in the document. 
+ +```sh +cp /home/stack/templates/roles/ComputeInstanceHA.yaml /home/stack/templates/roles/ComputeA.yaml +cp /home/stack/templates/roles/ComputeInstanceHA.yaml /home/stack/templates/roles/ComputeB.yaml + +# edit the role +# 1) change the 'name:' key to match the scheduler hints +# 2) change the 'HostnameFormatDefault:' key to ensure hostnames do not clash for the compute instances +nano -cw /home/stack/templates/roles/ComputeA.yaml + +############################################################################### +# Role: ComputeInstanceHA # +############################################################################### +- name: ComputeA + description: | + Compute Instance HA Node role to be used with -e environments/compute-instanceha.yaml + CountDefault: 1 + networks: + InternalApi: + subnet: internal_api_subnet + Tenant: + subnet: tenant_subnet + Storage: + subnet: storage_subnet + #HostnameFormatDefault: '%stackname%-novacomputeiha-%index%' + HostnameFormatDefault: '%stackname%-computeA-%index%' + +# edit the role to change the 'name:' key to match the scheduler hints +nano -cw /home/stack/templates/roles/ComputeB.yaml + +############################################################################### +# Role: ComputeInstanceHA # +############################################################################### +- name: ComputeB + description: | + Compute Instance HA Node role to be used with -e environments/compute-instanceha.yaml + CountDefault: 1 + networks: + InternalApi: + subnet: internal_api_subnet + Tenant: + subnet: tenant_subnet + Storage: + subnet: storage_subnet + #HostnameFormatDefault: '%stackname%-novacomputeiha-%index%' + HostnameFormatDefault: '%stackname%-computeB-%index%' +``` + +Vanilla compute role. + +- Where you do not want instance-HA. There are some entries in the config files that can be ommited - see the Instance-HA section further on in this document. 
+ +```sh +cp /home/stack/templates/roles/Compute.yaml /home/stack/templates/roles/ComputeA.yaml +cp /home/stack/templates/roles/Compute.yaml /home/stack/templates/roles/ComputeB.yaml + +# edit the role +# 1) change the 'name:' key to match the scheduler hints +# 2) change the 'HostnameFormatDefault:' key to ensure hostnames do not clash for the compute instances +nano -cw /home/stack/templates/roles/ComputeA.yaml + +############################################################################### +# Role: Compute # +############################################################################### +- name: ComputeA + description: | + Basic Compute Node role + CountDefault: 1 + # Create external Neutron bridge (unset if using ML2/OVS without DVR) + tags: + - external_bridge + networks: + InternalApi: + subnet: internal_api_subnet + Tenant: + subnet: tenant_subnet + Storage: + subnet: storage_subnet + #HostnameFormatDefault: '%stackname%-novacompute-%index%' + HostnameFormatDefault: '%stackname%-computeA-%index%' + +# edit the role to change the 'name:' key to match the scheduler hints +nano -cw /home/stack/templates/roles/ComputeB.yaml + +############################################################################### +# Role: Compute # +############################################################################### +- name: ComputeB + description: | + Basic Compute Node role + CountDefault: 1 + # Create external Neutron bridge (unset if using ML2/OVS without DVR) + tags: + - external_bridge + networks: + InternalApi: + subnet: internal_api_subnet + Tenant: + subnet: tenant_subnet + Storage: + subnet: storage_subnet + #HostnameFormatDefault: '%stackname%-novacompute-%index%' + HostnameFormatDefault: '%stackname%-computeB-%index%' +``` + +## Create custom `roles_data.yaml` + +Create the new `roles_data.yaml` with the updated controller and compute roles. This file is simply a concatenation of the role files (that we have just edited). +Note the command `openstack overcloud roles generate` references parameters `Controller Networker ComputeA ComputeB`, each of these refers to a roles file such as `/home/stack/templates/roles/ComputeB.yaml`. +After the `roles_data.yaml` has been generated it is safe to remove `/home/stack/templates/roles`, you will likely want to keep it until you get a successful deployment. + +```sh +# generate '/home/stack/templates/roles_data.yaml' +openstack overcloud roles generate \ + --roles-path /home/stack/templates/roles \ + -o /home/stack/templates/roles_data.yaml \ + Controller Networker ComputeA ComputeB + +# you can remove the /home/stack/templates/roles now, it is not required for the deployment command +``` + +Edit the new `/home/stack/templates/roles_data.yaml` to include a new IPMI service network to the controller role, this network will be required for instance-HA later in this document. + +- Instance-HA simply detects if an Openstack compute node is dead and migrates the VM instances to another compute node, this is a poor mans version of HA/DRS. +- For instance-HA the controller nodes must be able to communicate with the IPMI interfaces of the compute nodes (to check power status), we add an additional IPMI service network to only the controllers for this purpose. 
+- The controllers will use the IPMI interface of the compute nodes to assist with reboot and fencing, once a node is fenced (no VMs can be scheduled on the compute node) the 'active' controller node will send IPMI power commands to compute nodes and determine after reboot if they can re-join the cluster and then be un-fenced. +- The entry for the new IPMI network in the role file follows the naming convention defined in the `network_data.yaml`. (note the network cannot be named just 'IPMI', this is used as a functional variable in the heat templates and causes non diagnosable issues!) + +```sh +nano -cw /home/stack/templates/roles_data.yaml + +############################################################################### +# Role: ControllerNoNetwork # +############################################################################### +- name: Controller + description: | + Controller role that does not contain the networking + roles. + tags: + - primary + - controller + networks: + External: + subnet: external_subnet + InternalApi: + subnet: internal_api_subnet + Storage: + subnet: storage_subnet + StorageMgmt: + subnet: storage_mgmt_subnet + Tenant: + subnet: tenant_subnet + IpmiNetwork: + subnet: ipmi_network_subnet + default_route_networks: ['External'] +``` + +# Predictive IPs + +Using 'controlling node placement' method each node must have and IP for each defined network that it participates in. +Using 'controlling node placement', note that the `network_data.yaml` still requires IP ranges per defined per network for the installer to run even though IPs are statically assigned, in the 'flavours' method IPs from the various range would be dynamically allocated. +The addition of the ipmi_network for the controller nodes is for VM instance-ha later in this document. 
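+
+The long per-role address lists in the file below are tedious to type by hand; a small helper to generate each block (a sketch, assuming the ranges shown in the file that follows - adjust the indentation to match the YAML):
+
+```sh
+# print 24 computeA internal_api entries ready to paste under the role
+for i in $(seq 50 73); do echo "    - 10.122.6.$i"; done
+
+# same pattern for the other networks, changing the third octet and range
+for i in $(seq 50 73); do echo "    - 10.122.10.$i"; done   # storage
+for i in $(seq 50 73); do echo "    - 10.122.8.$i"; done    # tenant
+for i in $(seq 50 73); do echo "    - 10.122.0.$i"; done    # ctlplane
+```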
+ +```sh +nano -cw /home/stack/templates/predictive_ips.yaml + +# There are 24 nodes available for the computeA role, 1 node has a TPMS issue and has not been imported +#ComputeAIPs +#10.122.1.39 38:68:dd:4a:41:48 # compute node cannot boot due to tpms issue + +# There are 24 nodes available for the computeB role, 3 nodes are being used whilst we await the Ceph nodes to be delivered +#ComputeBIPs +#10.122.1.55 6c:fe:54:33:4f:3c # temporary ceph1 +#10.122.1.56 6c:fe:54:33:55:74 # temporary ceph2 +#10.122.1.57 6c:fe:54:33:4b:5c # temporary ceph3 +# the IPs listed will cover the broken/repurposed nodes ready to intergrated into the cluster + +nano -cw /home/stack/templates/predictive_ips.yaml + +parameter_defaults: + ControllerIPs: + ipmi_network: + - 10.122.1.80 + - 10.122.1.81 + - 10.122.1.82 + external: + - 10.121.4.20 + - 10.121.4.21 + - 10.121.4.22 + internal_api: + - 10.122.6.30 + - 10.122.6.31 + - 10.122.6.32 + storage: + - 10.122.10.30 + - 10.122.10.31 + - 10.122.10.32 + tenant: + - 10.122.8.30 + - 10.122.8.31 + - 10.122.8.32 + ctlplane: + - 10.122.0.30 + - 10.122.0.31 + - 10.122.0.32 + NetworkerIPs: + internal_api: + - 10.122.6.40 + - 10.122.6.41 + tenant: + - 10.122.8.40 + - 10.122.8.41 + ctlplane: + - 10.122.0.40 + - 10.122.0.41 + ComputeAIPs: + internal_api: + - 10.122.6.50 + - 10.122.6.51 + - 10.122.6.52 + - 10.122.6.53 + - 10.122.6.54 + - 10.122.6.55 + - 10.122.6.56 + - 10.122.6.57 + - 10.122.6.58 + - 10.122.6.59 + - 10.122.6.60 + - 10.122.6.61 + - 10.122.6.62 + - 10.122.6.63 + - 10.122.6.64 + - 10.122.6.65 + - 10.122.6.66 + - 10.122.6.67 + - 10.122.6.68 + - 10.122.6.69 + - 10.122.6.70 + - 10.122.6.71 + - 10.122.6.72 + - 10.122.6.73 + storage: + - 10.122.10.50 + - 10.122.10.51 + - 10.122.10.52 + - 10.122.10.53 + - 10.122.10.54 + - 10.122.10.55 + - 10.122.10.56 + - 10.122.10.57 + - 10.122.10.58 + - 10.122.10.59 + - 10.122.10.60 + - 10.122.10.61 + - 10.122.10.62 + - 10.122.10.63 + - 10.122.10.64 + - 10.122.10.65 + - 10.122.10.66 + - 10.122.10.67 + - 10.122.10.68 + - 10.122.10.69 + - 10.122.10.70 + - 10.122.10.71 + - 10.122.10.72 + - 10.122.10.73 + tenant: + - 10.122.8.50 + - 10.122.8.51 + - 10.122.8.52 + - 10.122.8.53 + - 10.122.8.54 + - 10.122.8.55 + - 10.122.8.56 + - 10.122.8.57 + - 10.122.8.58 + - 10.122.8.59 + - 10.122.8.60 + - 10.122.8.61 + - 10.122.8.62 + - 10.122.8.63 + - 10.122.8.64 + - 10.122.8.65 + - 10.122.8.66 + - 10.122.8.67 + - 10.122.8.68 + - 10.122.8.69 + - 10.122.8.70 + - 10.122.8.71 + - 10.122.8.72 + - 10.122.8.73 + ctlplane: + - 10.122.0.50 + - 10.122.0.51 + - 10.122.0.52 + - 10.122.0.53 + - 10.122.0.54 + - 10.122.0.55 + - 10.122.0.56 + - 10.122.0.57 + - 10.122.0.58 + - 10.122.0.59 + - 10.122.0.60 + - 10.122.0.61 + - 10.122.0.62 + - 10.122.0.63 + - 10.122.0.64 + - 10.122.0.65 + - 10.122.0.66 + - 10.122.0.67 + - 10.122.0.68 + - 10.122.0.69 + - 10.122.0.70 + - 10.122.0.71 + - 10.122.0.72 + - 10.122.0.73 + ComputeBIPs: + internal_api: + - 10.122.6.80 + - 10.122.6.81 + - 10.122.6.82 + - 10.122.6.83 + - 10.122.6.84 + - 10.122.6.85 + - 10.122.6.86 + - 10.122.6.87 + - 10.122.6.88 + - 10.122.6.89 + - 10.122.6.90 + - 10.122.6.91 + - 10.122.6.92 + - 10.122.6.93 + - 10.122.6.94 + - 10.122.6.95 + - 10.122.6.96 + - 10.122.6.97 + - 10.122.6.98 + - 10.122.6.99 + - 10.122.6.100 + - 10.122.6.101 + - 10.122.6.102 + - 10.122.6.103 + storage: + - 10.122.10.80 + - 10.122.10.81 + - 10.122.10.82 + - 10.122.10.83 + - 10.122.10.84 + - 10.122.10.85 + - 10.122.10.86 + - 10.122.10.87 + - 10.122.10.88 + - 10.122.10.89 + - 10.122.10.90 + - 10.122.10.91 + - 10.122.10.92 + - 10.122.10.93 + - 
10.122.10.94 + - 10.122.10.95 + - 10.122.10.96 + - 10.122.10.97 + - 10.122.10.98 + - 10.122.10.99 + - 10.122.10.100 + - 10.122.10.101 + - 10.122.10.102 + - 10.122.10.103 + tenant: + - 10.122.8.80 + - 10.122.8.81 + - 10.122.8.82 + - 10.122.8.83 + - 10.122.8.84 + - 10.122.8.85 + - 10.122.8.86 + - 10.122.8.87 + - 10.122.8.88 + - 10.122.8.89 + - 10.122.8.90 + - 10.122.8.91 + - 10.122.8.92 + - 10.122.8.93 + - 10.122.8.94 + - 10.122.8.95 + - 10.122.8.96 + - 10.122.8.97 + - 10.122.8.98 + - 10.122.8.99 + - 10.122.8.100 + - 10.122.8.101 + - 10.122.8.102 + - 10.122.8.103 + ctlplane: + - 10.122.0.80 + - 10.122.0.81 + - 10.122.0.82 + - 10.122.0.83 + - 10.122.0.84 + - 10.122.0.85 + - 10.122.0.86 + - 10.122.0.87 + - 10.122.0.88 + - 10.122.0.89 + - 10.122.0.90 + - 10.122.0.91 + - 10.122.0.92 + - 10.122.0.93 + - 10.122.0.94 + - 10.122.0.95 + - 10.122.0.96 + - 10.122.0.97 + - 10.122.0.98 + - 10.122.0.99 + - 10.122.0.100 + - 10.122.0.101 + - 10.122.0.102 + - 10.122.0.103 +``` + +# VIPs + +PublicVirtualFixedIPs - very significant for TLS and external access + +```sh +nano -cw /home/stack/templates/vips.yaml + +parameter_defaults: + ControlFixedIPs: [{'ip_address':'10.122.0.14'}] + PublicVirtualFixedIPs: [{'ip_address':'10.121.4.14'}] + InternalApiVirtualFixedIPs: [{'ip_address':'10.122.6.14'}] + RedisVirtualFixedIPs: [{'ip_address':'10.122.6.15'}] + OVNDBsVirtualFixedIPs: [{'ip_address':'10.122.6.16'}] + StorageVirtualFixedIPs: [{'ip_address':'10.122.10.14'}] +``` + +# Scheduler hints + +Using 'controlling node placement' method the various node type 'counts' must match the number servers, there must be >= IPs available for each server in the `predictive_ips.yaml`. + +```sh +# view the capabilities key/value pair for a node on the undercloud +source ~/stackrc +openstack baremetal node show osctl0 -f json -c properties | jq -r .properties.capabilities + +node:controller-0,profile:baremetal,cpu_vt:true,cpu_aes:true,cpu_hugepages:true,cpu_hugepages_1g:true,cpu_txt:true + +# view node name as hint to match, will need this list for the hostname map override +for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities | awk -F "," '{sub(/node:/,"",$1);print $1}'; done + +#controller-0 +#controller-1 +#controller-2 +#networker-0 +#networker-1 +#computeA-0 +#..... +#computeA-22 +#computeB-0 +#..... +#computeB-19 +``` + +The `OvercloudFlavor:` entry relates to the undercloud node 'capabilities' key/value pair `profile:baremetal`. +The `SchedulerHints:` entry relates to the undercloud node 'capabilities' key/value pair `node:controller-0`. +The `SchedulerHints:` entry inteligently maps the name of the role to be used in the `roles_data.yaml` using the entry `- name: `. + +CAUTION: +If you start renaming your roles the heat templates get in a mess quickly. (the 'composable role' documentation will not take you further than a simple example and not explain that heat templates are full of functional variable names that can clash) + +- For example if you name your role `- name: ControllerNoNetworkingNoCeph`. +- Necessitating `ControllerNoNetworkingNoCephSchedulerHints:` and `OvercloudControllerNoNetworkingNoCephFlavor:`. +- The Heat templates may not correctly attribute the role to a node if it gets too complicated. +- The named ComputeA / ComputeB roles although basic do not cause issue. 
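+
+Before committing to the counts below it can help to confirm how many imported nodes actually carry each scheduler-hint prefix; a minimal sketch using the same capability query as above:
+
+```sh
+source ~/stackrc
+
+# tally nodes per hint prefix (controller / networker / computeA / computeB)
+for i in $(openstack baremetal node list -f value -c Name); do
+  openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities | grep -o 'node:[^,]*' | sed 's/-[0-9]*$//'
+done | sort | uniq -c
+```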
+ +```sh +nano -cw /home/stack/templates/scheduler_hints_env.yaml + +parameter_defaults: + ControllerSchedulerHints: + 'capabilities:node': 'controller-%index%' + NetworkerSchedulerHints: + 'capabilities:node': 'networker-%index%' + ComputeASchedulerHints: + 'capabilities:node': 'computeA-%index%' + ComputeBSchedulerHints: + 'capabilities:node': 'computeB-%index%' + OvercloudControllerFlavor: baremetal + OvercloudNetworkerFlavor: baremetal + OvercloudComputeAFlavor: baremetal + OvercloudComputeBFlavor: baremetal + ControllerCount: 3 + NetworkerCount: 2 + ComputeACount: 22 + ComputeBCount: 21 + +# UPDATE THIS 24 + 24 nodes on final build +``` + +# Node root password set + +During the deployment each node will setup the OS then the network then bootstrap all the various service containers. +After network setup stage the undercloud node will push its own public SSH key to the nodes for user `ssh heat-admin@`. +The hostnames/IPs for the control plane interfaces are writen to the undercloud `/etc/hosts`. + +When building the cluster often it is useful to get onto a node for debug via the out of band management adapter (XClarity remote console for University), this is especially useful when using custom network interfaces (that maybe failing), luckily the password is set before the interface customisation commences. + +```sh +nano -cw /home/stack/templates/userdata_root_password.yaml + +resource_registry: + OS::TripleO::NodeUserData: /usr/share/openstack-tripleo-heat-templates/firstboot/userdata_root_password.yaml + +parameter_defaults: + NodeRootPassword: 'Password0' +``` + +Update the deployment command to include `-e /home/stack/templates/userdata_root_password.yaml`. + +# Custom network interface templates + +From checking the undercloud inspection data we worked out the following network scheme will be used in the templates. + +Server classA: (controller, networker and computeA) + +| mapping | interface | purpose | +| --- | --- | --- | +| nic1 | eno1 | Control Plane - VLAN1 native, IPMI - VLAN2 | +| nic2 | enp0s20f0u1u6 | USB ethernet, likely from the XClarity controller | +| nic3 | ens2f0 | LACP bond, guest/storage | +| nic4 | ens2f1 | LACP bond, guest/storage | + +Server classB: (computeB) + +| mapping | interface | purpose | +| --- | --- | --- | +| nic1 | enp0s20f0u1u6 | USB ethernet, likely from the XClarity controller | +| nic2 | ens2f0 | Control Plane - VLAN1 native, IPMI - VLAN2 | +| nic3 | ens2f1 | LACP bond, guest/storage | +| nic4 | ens4f0 | LACP bond, guest/storage | + +> Set the interface name instead of the mapping in the network interface templates, this is to assist with the two different server types and the LACP bond configuration which can be unreliable without carrier signal on both ports. + +Custom network interface templates are required for the following reasons in the University deployment. + +1. IPMI network interface (type VLAN) on the Controller nodes for Instance-HA fencing. +2. Specifying the 25G Ethernet interfaces for the LACP bond to host the majority of the VLAN interfaces for the various Openstack networks. +3. two classes of server hardware - where the 'nic1, nicN' interface mappings are not consistent for different server hardware interface enumeration. + +Render and edit custom network interface templates for os-net-config runtime of node deployment, these will be included in the 'openstack overcloud deploy' command via environment file `custom-network-configuration.yaml`. 
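+
+The interface names and NIC numbering above came from the undercloud inspection data; they can be re-checked per node before editing the templates (a sketch, assuming introspection was run during node import, using the `osctl0` node name seen earlier):
+
+```sh
+source ~/stackrc
+
+# list the NICs ironic-inspector recorded for a node, with MACs and link state
+openstack baremetal introspection interface list osctl0
+
+# or pull the raw inventory for just the interface names
+openstack baremetal introspection data save osctl0 | jq -r '.inventory.interfaces[].name'
+```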
+ +```sh +# render all heat templates, you can cherry pick the custom-nics files that would usually be dynamically rendered on deployment +cd /usr/share/openstack-tripleo-heat-templates +./tools/process-templates.py -o /home/stack/openstack-tripleo-heat-templates-rendered -n /home/stack/templates/network_data.yaml -r /home/stack/templates/roles_data.yaml + +# create custom nics directory, copy the rendered custom-nics config files into place +# we are use the 'single-nic-vlans' template in the LAB and the 'bond-with-vlans' template as a basis for University +mkdir /home/stack/templates/custom-nics ;\ +cp /home/stack/openstack-tripleo-heat-templates-rendered/network/config/bond-with-vlans/controller.yaml /home/stack/templates/custom-nics/ ;\ +cp /home/stack/openstack-tripleo-heat-templates-rendered/network/config/bond-with-vlans/networker.yaml /home/stack/templates/custom-nics/ ;\ +cp /home/stack/openstack-tripleo-heat-templates-rendered/network/config/bond-with-vlans/computea.yaml /home/stack/templates/custom-nics/computeA.yaml ;\ +cp /home/stack/openstack-tripleo-heat-templates-rendered/network/config/bond-with-vlans/computeb.yaml /home/stack/templates/custom-nics/computeB.yaml + +# check that the controller custom network interface config includes the new IPMI service network in the controller.yaml +# remove the IPMI VLAN interface from the ovs_bridge and put directly under the single 1G network interface used for the control plane traffic +nano -cw /home/stack/templates/custom-nics/controller.yaml + + - type: interface + #name: nic1 + name: eno1 + mtu: + get_param: ControlPlaneMtu + use_dhcp: false + addresses: + - ip_netmask: + list_join: + - / + - - get_param: ControlPlaneIp + - get_param: ControlPlaneSubnetCidr + routes: + list_concat_unique: + - get_param: ControlPlaneStaticRoutes + - type: vlan + mtu: + get_param: IpmiNetworkMtu + vlan_id: + get_param: IpmiNetworkNetworkVlanID + #device: nic1 + device: eno1 + addresses: + - ip_netmask: + get_param: IpmiNetworkIpSubnet + routes: + list_concat_unique: + - get_param: IpmiNetworkInterfaceRoutes + +# set the interface name scheme and LACP bond options for 'controller', 'networker' and 'computeA' +# eno1 is a single physical interface with an IP on the native/untagged VLAN1 for control plane traffic +# ens2f0/1 are in an ovs bond (LACP) attached to an ovs bridge (br-ex once named by the installer process) +# bond options dont seem to set correctly in the parameters section of the template (BondInterfaceOvsOptions), instead set directly under 'ovs_options:' +nano -cw /home/stack/templates/custom-nics/controller.yaml +nano -cw /home/stack/templates/custom-nics/networker.yaml +nano -cw /home/stack/templates/custom-nics/computeA.yaml + + - type: interface + #name: nic1 + name: eno1 + mtu: + get_param: ControlPlaneMtu + use_dhcp: false + addresses: + - ip_netmask: + list_join: + - / + - - get_param: ControlPlaneIp + - get_param: ControlPlaneSubnetCidr + routes: + list_concat_unique: + - get_param: ControlPlaneStaticRoutes + + + - type: ovs_bridge + name: bridge_name + dns_servers: + get_param: DnsServers + domain: + get_param: DnsSearchDomains + members: + - type: ovs_bond + name: bond1 + mtu: + get_attr: [MinViableMtu, value] + ovs_options: + #get_param: BondInterfaceOvsOptions + "bond_mode=balance-slb lacp=active other-config:lacp-fallback-ab=true other_config:lacp-time=fast other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100 other_config:bond_updelay=1000 other_config:bond-rebalance-interval=10000" + members: + - 
type: interface + #name: nic3 + name: ens2f0 + mtu: + get_attr: [MinViableMtu, value] + primary: true + - type: interface + #name: nic4 + name: ens2f1 + mtu: + get_attr: [MinViableMtu, value] + +# set the interface name scheme and LACP bond options for 'computeB' +# ens4f0 is a single physical interface with an IP on the native/untagged VLAN1 for control plane traffic +# ens2f0/1 are in an ovs bond (LACP) attached to an ovs bridge (br-ex once named by the installer process) +# bond options dont seem to set correctly in the parameters section of the template (BondInterfaceOvsOptions), instead set directly under 'ovs_options:' +nano -cw /home/stack/templates/custom-nics/computeB.yaml + + - type: interface + #name: nic4 + name: ens4f0 + mtu: + get_param: ControlPlaneMtu + use_dhcp: false + addresses: + - ip_netmask: + list_join: + - / + - - get_param: ControlPlaneIp + - get_param: ControlPlaneSubnetCidr + routes: + list_concat_unique: + - get_param: ControlPlaneStaticRoutes + + + - type: ovs_bridge + name: bridge_name + dns_servers: + get_param: DnsServers + domain: + get_param: DnsSearchDomains + members: + - type: ovs_bond + name: bond1 + mtu: + get_attr: [MinViableMtu, value] + ovs_options: + #get_param: BondInterfaceOvsOptions + "bond_mode=balance-slb lacp=active other-config:lacp-fallback-ab=true other_config:lacp-time=fast other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100 other_config:bond_updelay=1000 other_config:bond-rebalance-interval=10000" + members: + - type: interface + #name: nic2 + name: ens2f0 + mtu: + get_attr: [MinViableMtu, value] + primary: true + - type: interface + #name: nic3 + name: ens2f1 + mtu: + get_attr: [MinViableMtu, value] + +# set the path for the net-os-config script that gets pushed to the nodes and subsequently provisions the network config +nano -cw /home/stack/templates/custom-nics/controller.yaml +nano -cw /home/stack/templates/custom-nics/networker.yaml +nano -cw /home/stack/templates/custom-nics/computeA.yaml +nano -cw /home/stack/templates/custom-nics/computeB.yaml + OsNetConfigImpl: + type: OS::Heat::SoftwareConfig + properties: + group: script + config: + str_replace: + template: + #get_file: ../../scripts/run-os-net-config.sh + get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh + +# create an environment file referencing the custom nics config files for inclusion +nano -cw /home/stack/templates/custom-network-configuration.yaml + +resource_registry: + OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/custom-nics/controller.yaml + OS::TripleO::Networker::Net::SoftwareConfig: /home/stack/templates/custom-nics/networker.yaml + OS::TripleO::ComputeA::Net::SoftwareConfig: /home/stack/templates/custom-nics/computeA.yaml + OS::TripleO::ComputeB::Net::SoftwareConfig: /home/stack/templates/custom-nics/computeB.yaml + +# deployment with new environment file +# +# new environment file to include in the deployment command +#-e /home/stack/templates/custom-network-configuration.yaml +# +# omit the environment file referencing the dynamically created network interface configs +#-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml +``` + +# All functional config files to this point + +``` +. 
+├── containers-prepare-parameter.yaml
+├── instackenv.json
+├── templates
+│   ├── custom-network-configuration.yaml
+│   ├── custom-nics
+│   │   ├── computeA.yaml
+│   │   ├── computeB.yaml
+│   │   ├── controller.yaml
+│   │   └── networker.yaml
+│   ├── network_data.yaml
+│   ├── predictive_ips.yaml
+│   ├── roles_data.yaml
+│   ├── scheduler_hints_env.yaml
+│   ├── userdata_root_password.yaml
+│   └── vips.yaml
+└── undercloud.conf
+```
+
+# Deployment command to this point
+
+```sh
+# ensure you are in the stack home directory
+cd ~/
+source ~/stackrc
+time openstack overcloud deploy --templates \
+--networks-file /home/stack/templates/network_data.yaml \
+-e /home/stack/templates/scheduler_hints_env.yaml \
+-e /home/stack/templates/predictive_ips.yaml \
+-e /home/stack/templates/vips.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
+-e /home/stack/templates/custom-network-configuration.yaml \
+-e /home/stack/containers-prepare-parameter.yaml \
+-e /home/stack/templates/userdata_root_password.yaml \
+--roles-file /home/stack/templates/roles_data.yaml
+
+# if a new stack fails to deploy (not due to a configuration issue) the deployment command can be run again to finish off the provision
+
+# check deployment completed
+openstack overcloud status
+
+# remove a failed deployment
+openstack stack list
+openstack stack delete overcloud
+
+# check all nodes are back to 'available' state before trying another deployment
+openstack baremetal node list
+
+# if not updating the stack but deploying a 'failing' fresh stack, you may need to tidy up:
+# - remove the '~/overcloudrc' file
+# - remove the overcloud node entries from the /etc/hosts file '# START_HOST_ENTRIES_FOR_STACK: overcloud'
+# - do not remove the undercloud host entries '# START_HOST_ENTRIES_FOR_STACK: undercloud'
+```
+
+# Deployment problems
+
+## Logging
+
+- Logging in tripleo Openstack is not very clear; undercloud and overcloud deployment logging is hit and miss.
+- Deployment failures with RHOSP are generally configuration or role related; with opensource tripleo the heat templates themselves may be broken or containers may have failed QA.
+- Once you have a running overcloud you often need to start testing services and creating client networks; the logs for the services are generated inside podman containers and exported to the host with the `k8s-file` log driver.
+- Container logs can be found under `/var/log/containers/`; run `podman ps` to determine the name of the container for the service.
+- Most services run on the controller nodes; as these are pacemaker controlled (VIP service API endpoint) a service is often active on only one controller at a time, so you may have to search all 3 controllers to find the active/realtime log for any given service (see the sketch below).
+- There are heat template parameters to extend the logging of core services and enable debug; the parameters file can be included in the deploy command with an environment configuration file `-e /home/stack/templates/debug.yaml`.
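+
+A rough sketch of hunting down the active service log across the controllers, assuming the default `overcloud-controller-N` hostnames resolve from the undercloud `/etc/hosts` (log paths vary per service; cinder is used purely as an example):
+
+```sh
+# check which controller is currently writing the cinder logs
+for i in 0 1 2 ; do
+  echo "== overcloud-controller-$i =="
+  ssh heat-admin@overcloud-controller-$i \
+    "sudo podman ps --format '{{.Names}}' | grep -i cinder ; sudo ls -l /var/log/containers/cinder/"
+done
+```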
+ +## Enabling debug + +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.0/html/advanced\_overcloud\_customization/chap-debug\_modes](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/chap-debug_modes) +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.0/html/overcloud\_parameters/debug-parameters](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/overcloud_parameters/debug-parameters) + +Create the heat template environment (parameters) file. + +> Note we are enabling debug on only one service, debug on all services can be enabled, log files will be much larger but container logs do get log-rotated and compressed. + +```sh +# set all of the following to 'False', 'CinderDebug: True' is for illustration +nano -cw /home/stack/templates/debug.yaml + +parameter_defaults: + # Enable debugging on all services + Debug: False + # Run configuration management (e.g. Puppet) in debug mode + ConfigDebug: False + # Enable debug on individual services + BarbicanDebug: False + CinderDebug: True + ConfigDebug: False + GlanceDebug: False + HeatDebug: False + HorizonDebug: False + IronicDebug: False + KeystoneDebug: False + ManilaDebug: False + NeutronDebug: False + NovaDebug: False + SaharaDebug: False +``` + +It is prudent to include the debug environment file with all debug set to 'False' in the deployment command to aide any future update that may require debug enabling. +Debug can be applied during initial deployment OR as an update. +When updating an existing overcloud you must run the **EXACT** same deployment command as before with the addition of this file. + +## Updating configuration + +As mentioned when enabling debug you can update the configuration without a redeployment, you must run the **EXACT** same deployment command, `openstack stack status` will show an `UPDATED` status. +The caveat with this is that physical network changes (physical->logical service network maping) will not apply in many cases, there are instructions to add tags to parameters to force redeployment `[UPDATE], [CREATE]` of the network bridges and configuration but you are taking a risk at this point and should have tested identical hardware in the old->new configuration states before running these on a production system. +Network changes are not recommended on a production customer system, try and steer the action towards a redeployment to minimise outages. + +# Ceph config + +## Openstack configuration for (external) Ceph RBD storage + +The deployment command requires some additional heat templates, these set overrides for various storage backends. + +```sh + -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \ + -e /home/stack/templates/ceph-config.yaml \ +``` + +Install the ceph-ansible package on the undercloud/director node. + +```sh +# The correct version has already been installed in the 'Undercloud Deployment' document +#sudo dnf install -y ceph-ansible +``` + +Create a custom environments file `ceph-config.yaml` and provide parameters unique to your external ceph cluster. + +- Find the value for 'CephClusterFSID' from the command `ceph status`. +- The 'openstack' user (with Capabilities) has already been created, use the command `ceph auth get client.openstack` to find the 'CephClientKey'. +- The 'CephExternalMonHost' host IPs are the cluster 'public' network IPs for each ceph cluster node. 
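+
+The values referenced above can be gathered from the external Ceph cluster before editing the file; a quick sketch, run from one of the ceph nodes (e.g. `university-ceph1`):
+
+```sh
+# cluster FSID for the CephClusterFSID parameter
+ceph fsid
+
+# key for the CephClientKey parameter (the pre-created 'openstack' client user)
+ceph auth get-key client.openstack
+
+# monitor IPs on the 'public' network for the CephExternalMonHost parameter
+ceph mon dump | grep mon.
+```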
+ +```sh +nano -cw /home/stack/templates/ceph-config.yaml + +parameter_defaults: + CinderEnableIscsiBackend: false + CinderEnableRbdBackend: true + CinderEnableNfsBackend: false + NovaEnableRbdBackend: true + GlanceBackend: rbd + CinderRbdPoolName: 'volumes' + NovaRbdPoolName: 'vms' + GlanceRbdPoolName: 'images' + CinderBackupRbdPoolName: 'backups' + GnocchiRbdPoolName: 'metrics' + CephClusterFSID: 5b99e574-4577-11ed-b70e-e43d1a63e590 + CephExternalMonHost: 10.122.10.7,10.122.10.8,10.122.10.9 + CephClientKey: 'AQCC5z5jtOmJARAAiFaC2HB4f2pBYfMKWzkkkQ==' + CephClientUserName: 'openstack' + ExtraConfig: + ceph::profile::params::rbd_default_features: '1' +``` + +## Openstack deployment command with Ceph RBD + +```sh +source ~/stackrc +time openstack overcloud deploy --templates \ +--networks-file /home/stack/templates/network_data.yaml \ +-e /home/stack/templates/scheduler_hints_env.yaml \ +-e /home/stack/templates/predictive_ips.yaml \ +-e /home/stack/templates/vips.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \ +-e /home/stack/templates/custom-network-configuration.yaml \ +-e /home/stack/containers-prepare-parameter.yaml \ +-e /home/stack/templates/userdata_root_password.yaml \ +-e /home/stack/templates/debug.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \ +-e /home/stack/templates/ceph-config.yaml \ +--roles-file /home/stack/templates/roles_data.yaml + +# check deployment completed +openstack overcloud status +``` + +# VM Instance HA fencing configuration + +All configuration to this point has included changes to support instance-ha. + +- `templates/network_data.yaml` = additional IPMI network defined +- `templates/roles_data.yaml` = include Instance-ha roles for ComputeA / ComputeB +- `templates/predictive_ips.yaml` = includes IPMI range for controllers +- `templates/custom-nics/controller.yaml` = has VLAN interface for IPMI + To set the functionality active, additional environment files must be included in the deployment command to ensure 'watcher/fencing/migration' processes start. + +## Create the VM instance HA fencing configuration file + +- The parameter `tripleo::instanceha::no_shared_storage` must be set to 'true' if local controller backend storage is used, such as controllers presenting (non shared storage) LVM based disk over iscsi to compute nodes. The LAB is using Ceph so set to 'false'. The documentation is fairly confusing, the parameter is set 'true' by default. +- This config actually references the controller, networker and compute nodes, documentation states that all nodes should be added even if not used in fencing (controller/networker nodes do not participate in Instance HA). + +The `tripleo::instanceha::no_shared_storage` is a seeming simple parameter but can cause a lot of hassle whilst trying to debug failing HA, digging through the puppet module you will find the default value to be 'true'. + +A Ceph RBD backend (much like NFS) is considered shared storage, Cinder is configured for the Ceph back end, the documentation is a little confusing and you may incorrectly consider Cinder as a non shared resource with a shared back end. For Ceph explicitly set the heat template parameter `tripleo::instanceha::no_shared_storage: false`. 
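+
+To confirm which value actually landed on the overcloud, the applied hieradata can be checked on a controller after deployment; a sketch, assuming the standard TripleO hieradata location:
+
+```sh
+# run on a controller (or via ssh heat-admin@<controller> from the undercloud)
+sudo grep -r no_shared_storage /etc/puppet/hieradata/
+# for a Ceph RBD backend expect the value to be false
+```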
+ +Beware the following confusing statement from Redhat, **Ceph IS a shared storage backend for Cinder**: + +*However, if all your instances are configured to boot from an OpenStack Block Storage (cinder) volume, you do not need to configure shared storage for the disk image of the instances, and you can evacuate all instances using the no-shared-storage option.* + +The following link shows how the migration works, essentially a pacemaker resource configuration runs a script that loops checking for compute nodes with a non responsive libvirt daemon, this script is configured with parameters such as `--no_shared_storage=true` which are used in the messages/commands issued to the nova API endpoint on the control nodes. When a non-responsive libvirt daemon is detected a call is made to determine which VM Instances reside on the broken hypervisor, another call is made to determine which other hypervisors have capacity for each VM Instance, then a `nova evacuate ` command is issued for each VM Instance. + +To quickly accertain if you have a shared storage issue run `openstack server show test-failover -f json` and look for the error '\[Error: Invalid state of instance files on shared storage\]', you may also find this on the controller with the external API endpoint ip (192.168.101.190 in the LAB) when checking the nova container logs `/var/log/containers/`. + +> https://access.redhat.com/solutions/2022873 + +### RHOSP helper script method + +RHOSP includes a nice script to build the fencing configuration directly from the `instackenv.json`, this seems to include more fields than listed in the documentation and has proven to be the working configuration. + +```sh +cd +source ~/stackrc +openstack overcloud generate fencing --ipmi-lanplus --ipmi-level administrator --output /home/stack/templates/fencing.yaml /home/stack/instackenv.json + +# add the no_shared_storage parameter under the parameters statement. +nano -cw /home/stack/templates/fencing.yaml + +parameter_defaults: + ExtraConfig: + tripleo::instanceha::no_shared_storage: false + EnableFencing: true + FencingConfig: + devices: + - agent: fence_ipmilan + host_mac: 38:68:dd:4a:42:48 + params: + ipaddr: 10.122.1.10 + lanplus: true + login: USERID + passwd: Password0 + privlvl: administrator + - agent: fence_ipmilan + host_mac: 38:68:dd:4a:55:90 + params: + ipaddr: 10.122.1.11 + lanplus: true + login: USERID + passwd: Password0 + privlvl: administrator +... +``` + +### Deployment command + +Additional environment files to be included: + +- INCLUDE `-e /home/stack/templates/fencing.yaml`. +- INCLUDE `-e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml`. +- INCLUDE `-e /home/stack/templates/custom-network-configuration.yaml` This includes the IPMI VLAN network interface for the controller nodes and is already part of the current deployment command above. 
+ +```sh +source ~/stackrc +time openstack overcloud deploy --templates \ +--networks-file /home/stack/templates/network_data.yaml \ +-e /home/stack/templates/scheduler_hints_env.yaml \ +-e /home/stack/templates/predictive_ips.yaml \ +-e /home/stack/templates/vips.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \ +-e /home/stack/templates/custom-network-configuration.yaml \ +-e /home/stack/containers-prepare-parameter.yaml \ +-e /home/stack/templates/userdata_root_password.yaml \ +-e /home/stack/templates/debug.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \ +-e /home/stack/templates/ceph-config.yaml \ +-e /home/stack/templates/fencing.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml \ +--roles-file /home/stack/templates/roles_data.yaml + +# check deployment completed +openstack overcloud status +``` + +# TLS endpoint (Dashboard/API) configuration + +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.2/html-single/advanced\_overcloud\_customization/index#sect-Enabling\_SSLTLS\_on\_the_Overcloud](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/advanced_overcloud_customization/index#sect-Enabling_SSLTLS_on_the_Overcloud) +> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html + +The general steps for TLS cert creation follow: + +- Create a certificate authority. (this is a a basic internal certificate with the CA signing 'usage') +- Create a certificate/key combination for the External endpoint. (this will technically be a SAN certificate, it will validate the DNS FQDN and IP of the endpoint, thus will include common\_name (CN) and subject alt\_names (SAN)) +- The common name will be whatever you want to include in the estate wide DNS server. `stack.university.ac.uk`, this maps to the parameter `CloudName:` in the `/home/stack/templates/custom-domain.yaml` template +- The alt_name is the IP (you could have N entries for more IPs or FQDN if you required some further integration/legacy reasons) listed in `templates/predictive_ips.yaml` entry `PublicVirtualFixedIPs`. +- Create a certificate signing request, sign the certificate with the CA that has been created. + +## Set the DNS and overcloud name attributes + +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.2/html-single/advanced\_overcloud\_customization/index#configuring\_dns_endpoints](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/advanced_overcloud_customization/index#configuring_dns_endpoints) + +The University external domain is 'university.ac.uk'. + +> NOTE: If HostnameMap is used in the `/home/stack/templates/scheduler_hints_env.yaml` configuration, ensure any override of the node names fit the following endpoint hostname scheme. + +The `CloudName: stack.university.ac.uk` key relates to the public VIP pointing to the external API endpoint `PublicVirtualFixedIPs`, if using estate wide DNS (i.e laptops need to get to the overcloud console) an A record for this IP/NAME combo should be set. + +```sh +dig stack.university.ac.uk @144.173.6.71 + +;; ANSWER SECTION: +stack.university.ac.uk. 
86400 IN A 10.121.4.14 + +# Prefereably a PTR record should be in place, University do not have this set +dig -x 10.121.4.14 @144.173.6.71 +``` + +Create the override for the endpoint naming scheme: + +```sh +cp /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml /home/stack/templates/ +nano -cw /home/stack/templates/custom-domain.yaml + +parameter_defaults: + # The DNS domain used for the hosts. This must match the overcloud_domain_name configured on the undercloud. + CloudDomain: university.ac.uk + + # The DNS name of this cloud. E.g. ci-overcloud.tripleo.org + CloudName: stack.university.ac.uk + + # The DNS name of this cloud's provisioning network endpoint. E.g. 'ci-overcloud.ctlplane.tripleo.org'. + CloudNameCtlplane: stack.ctlplane.university.ac.uk + + # The DNS name of this cloud's internal_api endpoint. E.g. 'ci-overcloud.internalapi.tripleo.org'. + CloudNameInternal: stack.internalapi.university.ac.uk + + # The DNS name of this cloud's storage endpoint. E.g. 'ci-overcloud.storage.tripleo.org'. + CloudNameStorage: stack.storage.university.ac.uk + + # The DNS name of this cloud's storage_mgmt endpoint. E.g. 'ci-overcloud.storagemgmt.tripleo.org'. + CloudNameStorageManagement: stack.storagemgmt.university.ac.uk + + DnsServers: ["144.173.6.71", "1.1.1.1"] +``` + +## Create Certificate Authority + +Rather than creating a self signed cert we can create a CA to sign any generated certificates. +This simplifies client validation of *any* certs signed by this CA cert, the CA cert can be imported onto Linux client machines @ /etc/pki/tls/certs/ca-bundle.crt or the trust store on MS machines. +Alternatively you could generate a CSR, submit to a public/verified CA (via the University security department) to then receive a certificate for the cluster, the certificate may require building into a PEM format (with any passphrases removed) depending on what is returned and the config files will likely need the full trust chain including Intermiatory CA certs. + +### Install cfssl + +``` +sudo curl -s -L -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 +sudo curl -s -L -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 +sudo curl -s -L -o /usr/local/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 +sudo chmod +x /usr/local/bin/cfssl* +``` + +### Generate CA + +``` +cd +mkdir -p ~/CA/config +mkdir -p ~/CA/out +nano -cw ~/CA/config/ca-csr.json + +{ + "CA": { + "expiry": "87600h", + "pathlen": 0 + }, + "CN": "University Openstack CA", + "key": { + "algo": "rsa", + "size": 4096 + }, + "names": [ + { + "C": "GB", + "O": "UOE", + "OU": "Cloud", + "L": "University", + "ST": "England" + } + ] +} + +cfssl gencert -initca ~/CA/config/ca-csr.json | cfssljson -bare ~/CA/out/ca - +``` + +### Generate the external Dashboard/API endpoint certificate. + +Use the same `cfssl-profile.json` to configure a new certificate. +The undercloud host may already have the CA certificate imported if a Quay registry was setup as per the LAB setup. 
+ +``` +cd ~/CA/config + +# create a cfssl configuration profile with a 10 year expiry that allows for certificates with multiple usage +nano -cw cfssl-profile.json + +{ + "signing": { + "default": { + "expiry": "87600h" + }, + "profiles": { + "server": { + "usages": ["signing", "digital signing", "key encipherment", "server auth"], + "expiry": "87600h" + } + } + } +} + +# create a certificate CSR profile for the overcloud.local endpoint +# Openstack is a bit picky around using both a CN (context name) and a SAN (subject alternate name), populate both CN and 'hosts' entries +# use the VIP for PublicVirtualFixedIPs as part of the SAN +nano -cw overcloud-csr.json + +{ + "CN": "stack.university.ac.uk", + "hosts": [ + "stack.university.ac.uk", + "10.121.4.14" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [ + { + "C": "GB", + "L": "University", + "ST": "England" + } + ] +} + +# generate certificate signed by the CA, there no need to create a certificate chain with an intermediatory certificate in this scenario +cfssl gencert -ca ../out/ca.pem -ca-key ../out/ca-key.pem -config ./cfssl-profile.json -profile=server ./overcloud-csr.json | cfssljson -bare ../out/overcloud + +# you will get an error message '[WARNING] This certificate lacks a "hosts" field.' owing to the CA signing cert not being able to certify a website, this is not an issue. + +# check cert +cfssl-certinfo -cert ../out/overcloud.pem +{ + "subject": { + "common_name": "stack.university.ac.uk", + "country": "GB", + "locality": "University", + "province": "England", + "names": [ + "GB", + "England", + "University", + "stack.university.ac.uk" + ] + }, + "issuer": { + "common_name": "University Openstack CA", + "country": "GB", + "organization": "UOE", + "organizational_unit": "Cloud", + "locality": "University", + "province": "England", + "names": [ + "GB", + "England", + "University", + "Cloud", + "University Openstack CA" + ] + }, + "serial_number": "63583601022960320621656457322685669356580690922", + "sans": [ + "stack.university.ac.uk", + "10.121.4.14" + ], + "not_before": "2022-07-14T11:19:00Z", + "not_after": "2032-07-11T11:19:00Z", + "sigalg": "SHA512WithRSA", + "authority_key_id": "58:5F:BC:63:BF:22:34:5C:D1:FE:3F:61:DF:7C:FC:E6:C8:34:2D:45", + "subject_key_id": "4D:75:7D:60:CE:11:9:46:7D:6E:69:1E:96:4D:4C:5A:92:36:D7:E3", + "pem": "-----BEGIN CERTIFICATE----- + -----END CERTIFICATE-----\n" +} + +# check cert, query the pem directly with openssl toolchain +openssl x509 -in ../out/overcloud.pem -text -noout + + Subject: C = GB, ST = England, L = University, CN = stack.university.ac.uk + + X509v3 Subject Alternative Name: + DNS:stack.university.ac.uk, IP Address:10.121.4.14 + +# list generated certificate/key pair +ll ../out/ + +-rw-r--r--. 1 stack stack 1704 Jun 27 18:47 ca.csr +-rw-------. 1 stack stack 3243 Jun 27 18:47 ca-key.pem +-rw-rw-r--. 1 stack stack 2069 Jun 27 18:47 ca.pem +-rw-r--r--. 1 stack stack 1013 Jun 27 18:51 overcloud.csr +-rw-------. 1 stack stack 1679 Jun 27 18:51 overcloud-key.pem +-rw-rw-r--. 1 stack stack 1728 Jun 27 18:51 overcloud.pem +``` + +## Configure the undercloud to be able to validate the `PublicVirtualFixedIPs` endpoint + +Ensure the DNS server (144.173.6.71) has the following A record (preferably also with PTR record). 
`stack.university.ac.uk -> 10.121.4.14` + +``` +# add host entry where DNS is not available +# update the deployment hard codes this entry automatically +#echo "10.121.4.14 stack.university.ac.uk" >> /etc/hosts +``` + +Import the certificate authority to the undercloud, when deploying the overcloud the undercloud will check (at the end of deployment) if the public endpoint is up, if it cannot validate the SSL certificate the installer will fail (you will not know if just the endpoint was not validated or if there were other deployment issues). + +``` +sudo cp /home/stack/CA/out/ca.pem /etc/pki/ca-trust/source/anchors/ +sudo update-ca-trust extract + +trust list | grep label | wc -l +147 + +trust list | grep label | grep -i university + label: University Openstack CA +``` + +## Set the config files for TLS + +Openstack external API endpoint certificate configuration file. + +```sh +cp /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml /home/stack/templates/ +# edit /home/stack/templates/enable-tls.yaml +# insert the contents of /home/stack/CA/out/overcloud.pem to the SSLCertificate section entry (4 space indent) +# insert the content of the /home/stack/CA/out/overcloud-key.pem to the SSLKey section entry (4 space indent) +# set PublicTLSCAFile to the path of the CA cert /etc/pki/ca-trust/source/anchors/ca.pem +# ensure the DeployedSSLCertificatePath is set to /etc/pki/tls/private/overcloud_endpoint.pem, this will be populated/updated on deployment + +nano -cw /home/stack/templates/enable-tls.yaml + +parameter_defaults: + HorizonSecureCookies: True + + PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/ca.pem' + + SSLCertificate: | + -----BEGIN CERTIFICATE----- + MIIE4zCCAsugAwIBAgIUZUNyk+eV4aidYikN21GRWbsndJ0wDQYJKoZIhvcNAQEN + ................................................................ + N/FgTMHNQ4qylQCRwdchkBADyjIh+dC7mwnBEY4XLaMcCh3F0dgDdp/VZX0mk9UW + jpJD93nbqA== + -----END CERTIFICATE----- + + SSLIntermediateCertificate: '' + + SSLKey: | + -----BEGIN RSA PRIVATE KEY----- + MIIEowIBAAKCAQEA0JacewbcVu37MGpAopX9pRakBMp+6xFPUSDEWASFx50V6VJF + ................................................................ + VcBZsDDVEvzWQIc7d3fkRxO+r/QeSIw8IJ6aPRS7xegAEMNwD8ZXzFjEXOdN/LsM + oUgYstUl1OwL/uupELwFpR5LdtjRszd3BoprI5ZdW0WuYmGm+YPw + -----END RSA PRIVATE KEY----- + + DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem +``` + +Certificate authority configuration file. + +```sh +cp /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor-hiera.yaml /home/stack/templates/ +# edit /home/stack/templates/inject-trust-anchor-hiera.yaml +# insert the contents of the /home/stack/CA/out/ca.pem to the CAMap key, multiple CAs can be added, in this case only a single CA is used +# the certificate key name under CAMap is arbritrary, by default these are named 'first-ca-name', 'second-ca-name' for ease (8 space indent) + +nano -cw /home/stack/templates/inject-trust-anchor-hiera.yaml + +parameter_defaults: + CAMap: + first-ca-name: + content: | + -----BEGIN CERTIFICATE----- + MIIFzDCCA7SgAwIBAgIUXS9uFGJSbVPt1Tj0Oc82XwlmfQMwDQYJKoZIhvcNAQEN + ................................................................ 
+        FkExys4JyWK3bFz3KAzYKfNb/forqoXPVEtE+v+Io3Da8yf207VchE5iOdxgNJiH
+        -----END CERTIFICATE-----
+```
+
+## Configure the public API endpoint to accept inbound connections by IP or DNS
+
+> DNS entry and valid Certificate (with SAN entries) will allow a browser to use the dashboard by FQDN 'https://stack.university.ac.uk:443/dashboard' or by IP 'https://10.121.4.14:443/dashboard'.
+
+Include one of the following:
+
+- If you use a DNS name for accessing the public endpoints, use `/usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml`
+- If you use *only* an IP address for accessing the public endpoints, use `/usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml`
+
+A client connecting to the overcloud (such as a laptop) will need to have the CA certificate in its local trust store and make DNS requests to a server with an A record entry for `stack.university.ac.uk`.
+To avoid this, use a University-wide certificate authority (frequently all University machines will have their own CA certificate pushed via group policy) or use a public certificate authority that has its CA/intermediate certificate chains distributed with the OS/browser by default.
+
+## Deployment command to this point
+
+Additional environment files to be included:
+
+- INCLUDE `-e /home/stack/templates/custom-domain.yaml`.
+- INCLUDE `-e /home/stack/templates/enable-tls.yaml`.
+- INCLUDE `-e /home/stack/templates/inject-trust-anchor-hiera.yaml`.
+- INCLUDE `-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml`.
+
+```sh
+time openstack overcloud deploy --templates \
+--networks-file /home/stack/templates/network_data.yaml \
+-e /home/stack/templates/scheduler_hints_env.yaml \
+-e /home/stack/templates/predictive_ips.yaml \
+-e /home/stack/templates/vips.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
+-e /home/stack/templates/custom-network-configuration.yaml \
+-e /home/stack/containers-prepare-parameter.yaml \
+-e /home/stack/templates/userdata_root_password.yaml \
+-e /home/stack/templates/debug.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
+-e /home/stack/templates/ceph-config.yaml \
+-e /home/stack/templates/fencing.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml \
+-e /home/stack/templates/custom-domain.yaml \
+-e /home/stack/templates/enable-tls.yaml \
+-e /home/stack/templates/inject-trust-anchor-hiera.yaml \
+-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml \
+--roles-file /home/stack/templates/roles_data.yaml
+
+# check deployment completed
+openstack overcloud status
+```
+
+# LDAP
+
+> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.2/html-single/integrate\_openstack\_identity\_with\_external\_user\_management\_services](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/integrate_openstack_identity_with_external_user_management_services)
+
+The University AD servers use SSL (not TLS) and present a certificate signed by a public CA that already exists in the trust store of a vanilla RedHat/CentOS installation; for this reason no certificates need to be imported onto the hosts running the keystone services (the controllers).
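+
+The AD server certificate chain can be sanity checked from the undercloud (or any controller) before deploying the keystone LDAP configuration; a quick sketch using the openssl toolchain:
+
+```sh
+# show issuer/subject/expiry of the LDAPS certificate presented by the AD server
+openssl s_client -connect secureprodad.university.ac.uk:636 -showcerts </dev/null 2>/dev/null \
+  | openssl x509 -noout -issuer -subject -dates
+
+# 'Verify return code: 0 (ok)' confirms the signing CA is already present in the OS trust store
+openssl s_client -connect secureprodad.university.ac.uk:636 </dev/null 2>/dev/null | grep "Verify return code"
+```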
+ +## Check University domain connectivity + +```sh +# find all groups +ldapsearch -LLL -o ldif-wrap=no -x \ +-w "Password0" \ +-b "OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk" \ +-D "svc_iscalookup@university.ac.uk" \ +-H "ldaps://secureprodad.university.ac.uk" \ +"(objectClass=group)" \ +cn distinguishedName name sAMAccountName objectClass + +# find all members of the openstack group +ldapsearch -LLL -o ldif-wrap=no -x \ +-w "Password0" \ +-b "OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk" \ +-D "svc_iscalookup@university.ac.uk" \ +-H "ldaps://secureprodad.university.ac.uk" \ +"(&(objectClass=group)(cn=ISCA-Openstack-Users))" \ +member + +# number of openstack users +ldapsearch -LLL -o ldif-wrap=no -x \ +-w "Password0" \ +-b "OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk" \ +-D "svc_iscalookup@university.ac.uk" \ +-H "ldaps://secureprodad.university.ac.uk" \ +"(&(objectClass=group)(cn=ISCA-Openstack-Users))" \ +member | grep -v ^dn: | wc -l + +# find simon/ocf account (assuming there was an account created at project inception) +ldapsearch -LLL -o ldif-wrap=no -x -w "Password0" -b "OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk" -D "svc_iscalookup@university.ac.uk" -H "ldaps://secureprodad.university.ac.uk" "(&(objectClass=group)(cn=ISCA-Openstack-Users))" member | grep -v ^dn: | sed 's/member: //' | sed '/^$/d' > search.txt +while read i; do ldapsearch -LLL -o ldif-wrap=no -x -w "Password0" -b "$i" -D "svc_iscalookup@university.ac.uk" -H "ldaps://secureprodad.university.ac.uk" "(objectClass=user)" cn displayName mail uid ;done < search.txt > search1.txt +grep -i simon search1.txt +grep -i ocf search1.txt +rm -f search*txt +# no account was created +``` + +## Create config file + +```sh +cp /usr/share/openstack-tripleo-heat-templates/environments/services/keystone_domain_specific_ldap_backend.yaml /home/stack/templates/ +nano -cw /home/stack/templates/keystone_domain_specific_ldap_backend.yaml + +parameter_defaults: + KeystoneLDAPDomainEnable: true + KeystoneLDAPBackendConfigs: + ldap: + # AD domain + url: ldaps://secureprodad.university.ac.uk:636 + user: CN=svc_iscalookup,OU=Machine Accounts,OU=Service Accounts,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk + password: Password0 + suffix: DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk + # scope can be set to one OR sub, one level down the tree or the entire subtree + # the University directory has many tiers and many thousands of objects requiring defined tree locations for users and groups, without these targets objects will not be returned (timeout) and performance is poor + query_scope: sub + # user lookup + user_tree_dn: OU=People,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk + user_filter: (memberOf=CN=ISCA-Openstack-Users,OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk) + user_objectclass: person + user_id_attribute: sAMAccountName + #user_name_attribute: sAMAccountName + user_name_attribute: cn + user_mail_attribute: mail + # enable user enable/disable from LDAP field + user_enabled_attribute: userAccountControl + user_enabled_mask: 2 + user_enabled_default: 512 + # keystone attributes to ignore on create/update, (tenant ~ project) + # when a user is autocreated on login the typical keystone fields listed below will not be populated, for example password is provided/passthrough by LDAP + user_attribute_ignore: password,tenant_id,tenants 
+ # group lookup + group_tree_dn: OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk + group_objectclass: group + group_id_attribute: sAMAccountName + group_name_attribute: cn + group_member_attribute: member + group_desc_attribute: cn + # The University LDAPS connection is using SSL (not TLS) with a public CA cert that already exists in the OS trust store + use_tls: False + tls_cacertfile: "" +``` + +## All functional config files to this point + +``` +. +├── CA +│   ├── config +│   │   ├── ca-csr.json +│   │   ├── cfssl-profile.json +│   │   └── overcloud-csr.json +│   └── out +│   ├── ca.csr +│   ├── ca-key.pem +│   ├── ca.pem +│   ├── overcloud.csr +│   ├── overcloud-key.pem +│   └── overcloud.pem +├── containers-prepare-parameter.yaml +├── instackenv.json +├── templates +│   ├── ceph-config.yaml +│   ├── custom-domain.yaml +│   ├── custom-network-configuration.yaml +│   ├── custom-nics +│   │   ├── computeA.yaml +│   │   ├── computeB.yaml +│   │   ├── controller.yaml +│   │   └── networker.yaml +│   ├── debug.yaml +│   ├── enable-tls.yaml +│   ├── fencing.yaml +│   ├── inject-trust-anchor-hiera.yaml +│   ├── keystone_domain_specific_ldap_backend.yaml +│   ├── network_data.yaml +│   ├── predictive_ips.yaml +│   ├── roles_data.yaml +│   ├── scheduler_hints_env.yaml +│   ├── userdata_root_password.yaml +│   └── vips.yaml +└── undercloud.conf +``` + +## Deployment command to this point + +Additional environment files to be included: + +- INCLUDE `-e /home/stack/templates/keystone_domain_specific_ldap_backend.yaml`. +- Wrap the command in a script, every time the deployment command changes include in the 'deploy.sh' script to ensure a record is kept. + +```sh +# ensure you are the stack user in the $HOME directory to run the deployment command +cd +touch overcloud-deploy.sh +chmod +x overcloud-deploy.sh +nano -cw overcloud-deploy.sh + +source /home/stack/stackrc +time openstack overcloud deploy --templates \ +--networks-file /home/stack/templates/network_data.yaml \ +-e /home/stack/templates/scheduler_hints_env.yaml \ +-e /home/stack/templates/predictive_ips.yaml \ +-e /home/stack/templates/vips.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \ +-e /home/stack/templates/custom-network-configuration.yaml \ +-e /home/stack/containers-prepare-parameter.yaml \ +-e /home/stack/templates/userdata_root_password.yaml \ +-e /home/stack/templates/debug.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \ +-e /home/stack/templates/ceph-config.yaml \ +-e /home/stack/templates/fencing.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml \ +-e /home/stack/templates/custom-domain.yaml \ +-e /home/stack/templates/enable-tls.yaml \ +-e /home/stack/templates/inject-trust-anchor-hiera.yaml \ +-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml \ +-e /home/stack/templates/keystone_domain_specific_ldap_backend.yaml \ +--roles-file /home/stack/templates/roles_data.yaml +``` + +## Check LDAP connectivity + +```sh +# load environment/credentials for the CLI to interact with the Overcloud API +source ~/overcloudrc + +# check for the creation of the 'ldap' domain (present i) +openstack domain list ++----------------------------------+------------+---------+--------------------+ +| ID | Name | Enabled | Description | 
++----------------------------------+------------+---------+--------------------+ +| 70ddca13588744a9ab3af718abaf70dc | heat_stack | True | | +| acbc81b55cc242e198648230625fbc0b | ldap | True | | +| default | Default | True | The default domain | ++----------------------------------+------------+---------+--------------------+ + +# get ID of the 'ldap' openstack domain +openstack domain show ldap -f json | jq -r .id +acbc81b55cc242e198648230625fbc0b + +# get ID of default admin user (this is in the default openstack domain) +openstack user list --domain default -f json | jq -r '.[] | select(."Name" == "admin") | .ID' +1050084350ed4b55b43a929d29e64ac1 + +# get the ID of the admin role +openstack role list -f json | jq -r '.[] | select(."Name" == "admin") | .ID' +db4388c489dd4afc97dbbfea0b1dd0ac + +# setup 'admin' access to the Overcloud/Cluster +# bind the (default) Openstack admin user to the admin role for the 'ldap' domain +openstack role add --domain acbc81b55cc242e198648230625fbc0b --user 1050084350ed4b55b43a929d29e64ac1 db4388c489dd4afc97dbbfea0b1dd0ac +``` + +Query Keystone for AD users/groups: + +```sh +openstack user list --domain ldap | head -n 10 ++------------------------------------------------------------------+----------+ +| ID | Name | ++------------------------------------------------------------------+----------+ +| 06bb55f37d07e62a1309cfa5bf86feec8b0af5e1c28fb64e789629fb901b485b | ptfrost | +| 1c3629955a3d5d6e90005dea89aee86970a912ed95a6a0e6c6f6eabbdf0bfdec | mcw204 | +| f1c146517abe61f67b6e89c7ee2a7a31ea5958394f0bc5e0859e5e4ed51ea3c2 | snfieldi | +| 07eeb10cfe28a37677f5001d35ce18012e3594f1028265532a3496c67c5e9bd5 | jh288 | +| 278820da9a686e2102aff305c81dac330d41060d74020cef3d2c8437d3dd4a7c | rnb203 | +| 37d181f770cd61f090d377f536a67c99286ca66ba9157afb1b31940db71a46ff | kebrown | +| ae2ec9344c05939f3a28aafc27466e3197d78fa7936c0c1cc43a01e7974e46ba | arichard | + +openstack user list --domain ldap | wc -l +1851 + +openstack user show ptfrost --domain ldap ++---------------------+------------------------------------------------------------------+ +| Field | Value | ++---------------------+------------------------------------------------------------------+ +| description | Staff | +| domain_id | c0543515d22f45f88a69008b5b884ebf | +| email | P.T.Frost@university.ac.uk | +| enabled | True | +| id | 06bb55f37d07e62a1309cfa5bf86feec8b0af5e1c28fb64e789629fb901b485b | +| name | ptfrost | +| options | {} | +| password_expires_at | None | ++---------------------+------------------------------------------------------------------+ + +openstack group list --domain ldap ++------------------------------------------------------------------+----------------------+ +| ID | Name | ++------------------------------------------------------------------+----------------------+ +| 90c99abaec2f579937a6a3be1d66e35de635c28391bfaf6656e92d305e4a2660 | ISCA-Openstack-Users | +| 52120554370b8678c1893dcf2b3033c7eae27f345acb3da5aff7f7a0f5e01861 | ISCA-Admins | +| 4dc6714669c04de77b2507488930ade9acf82ceb6f69098dc8b6c36e917b8a9d | ISCA-Users | +| ec75b5f9bdcebf6681f0bc83b52f2b02f26be22a46f432f50ea7f903b047168c | ISCA-module-stata | ++------------------------------------------------------------------+----------------------+ + +openstack group show ISCA-Openstack-Users --domain ldap ++-------------+------------------------------------------------------------------+ +| Field | Value | ++-------------+------------------------------------------------------------------+ +| description | ISCA-Openstack-Users | +| 
domain_id | c0543515d22f45f88a69008b5b884ebf | +| id | 90c99abaec2f579937a6a3be1d66e35de635c28391bfaf6656e92d305e4a2660 | +| name | ISCA-Openstack-Users | ++-------------+------------------------------------------------------------------+ +``` + +# Dashboard login + +```sh +# get Openstack 'admin' user password +grep OS_PASSWORD ~/overcloudrc | awk -F "=" '{print $2}' +Password0 +``` + +Browse to `https://stack.university.ac.uk/dashboard`. + +- user: admin +- password: Password0 +- domain: default (for AD login the domain is 'ldap') \ No newline at end of file diff --git a/6) Multi-tenancy.md b/6) Multi-tenancy.md new file mode 100755 index 0000000..364a533 --- /dev/null +++ b/6) Multi-tenancy.md @@ -0,0 +1,1207 @@ +# Foreword + +All operations via the CLI can generally be achieved through the web admin console, the subset of commands listed in this document are here to provide context and assist with understanding the usage model, the web interface can be confusing. +When creating objects via the CLI checking back with the web console for the new item, to clarify how to create, use and navigate items. +The CLI commands may then be used in a scripted manner to quickly create projects, networks, instances, security groups and user access patterns to get the end-user up and running with a new environment without much admin overhead. + +# Load environment variables to use the Overcloud CLI + +Just like the environment required for the undercloud (~/stackrc), the overcloud requires its own variables (~/overcloudrc). + +```sh +[stack@undercloud ~]$ source ~/stackrc +(undercloud) [stack@undercloud ~]$ source ~/overcloudrc +(overcloud) [stack@undercloud ~]$ +``` + +# Domains, Projects, Roles, Users and Groups + +> https://docs.openstack.org/security-guide/identity/domains.html + +Components of the access model. + +- (keystone) Domains are high level containers for projects, users and groups. The keystone authentication provider can manage multiple domains for top level logical segregation of a cluster and allowing for different authentication backends (LDAP) per domain. A fresh cluster deployment has one domain 'Default' unless it has been joined to a directory service. +- Projects in OpenStack (also known as tenants or accounts), are organizational units in the cluster to which you can assign users (zero or more users). A user can be a member of one or more projects. +- Roles are used to define which actions a user can perform on one or more projects, they are the glue between users and projects, limiting the scope of permissions (access) to compute/network/storage resources, the permissions model is described as role-based access control (RBAC), simply, roles define which actions users can perform. + +There are three main predefined roles in OpenStack. + +- admin : This is an administrative role that enables non-admin users to administer the environment. +- member: Default role assigned to new users. This gets attached to a tenant. +- reader: Mostly used for read-only APIs and operations. + +> Many of the following steps can be performed in the web admin console, however it is easy to script your unique access model with the CLI commands. Often when adding users you will want to add the user to multiple projects and set quotas in oneshot. 
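+
+As a sketch of that scripted, 'one shot' approach (the project name and quota values below are illustrative, not part of the University configuration):
+
+```sh
+# create a project, bind an existing AD group to it with the member role and set quotas in one pass
+PROJECT=example-project
+openstack project create --domain 'ldap' --description "Example project" $PROJECT
+openstack role add --group-domain 'ldap' --group 'ISCA-Openstack-Users' --project-domain 'ldap' --project $PROJECT member
+openstack quota set --instances 10 --cores 20 --ram 40960 --gigabytes 500 $PROJECT
+```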
+ +## List domains + +```sh +openstack domain list ++----------------------------------+------------+---------+--------------------+ +| ID | Name | Enabled | Description | ++----------------------------------+------------+---------+--------------------+ +| 08e3b578ac4042838f05149543813d94 | heat_stack | True | | +| bdea557c7baf43ad92239a420255d7ec | ldap | True | | +| default | Default | True | The default domain | ++----------------------------------+------------+---------+--------------------+ + +# check users available in default domain +openstack user list --domain 'default' | head -n 10 ++----------------------------------+-----------+ +| ID | Name | ++----------------------------------+-----------+ +| e2ea49d4ae1d4670b8546aab65deba2b | admin | +| 23a89a9d1e394a2ebd46a472ffda5246 | cinder | +| b35722d148bd41a68dcdc02b5819096d | cinderv2 | +| d8889253c81441fb9c4b6ed092aaf387 | cinderv3 | +| af34adf270d1489d8a778e3b590e4ffc | glance | +| 3386b28253814d0cb885810464bd7c81 | heat | +| 15a87bed8b1646888911e19cb7bc2d0c | heat-cfn | + +# check users available in the 'ldap' domain +openstack user list --domain 'ldap' | head -n 10 ++------------------------------------------------------------------+----------+ +| ID | Name | ++------------------------------------------------------------------+----------+ +| 9bf2aa8c4fc59c5c58cb3269444676e213f490a03953bfa32bc071b188db7069 | ptfrost | +| cb6e3861d3f0958d1f921d4c24cd55710bc7e62583b3b8c0ce70e76a1e016c55 | mcw204 | +| 2d6ae76eecf2ab0352d00c8ebfd02a19df42858201d677d939e8225dd9bd7eac | snfieldi | +| 30c37dfef0e95e5aeab3a2c20aaa34cfe9211b5dfb705ed093dca9b2b7a83dcb | jh288 | +| b81e267c54ec6ae8c4d3bd678cdc95d74ddb249649e4d736dd2a1771c5060f28 | rnb203 | +| 591c1fed34ebc33fb7d2fe7a27be732904bda493c059af0c2cf26e4384b0660a | kebrown | +| 190e55d7af652fdac505463ed3beedc8f40e97235c21c7321fa87705ffe20bdb | arichard | +``` + +## Create project + +Create a a project in the 'ldap' domain to enable AD users access. +The default 'service' and 'admin' projects were created in the deployment. + +```sh +openstack project create --domain 'ldap' --description "University Guest Project" guest +openstack project list ++----------------------------------+---------+ +| ID | Name | ++----------------------------------+---------+ +| 45e6f96ee6cc4ba3a348c38a212fd8b8 | guest | +| 98df2c2796ba41c09f314be1a83c9aa9 | service | +| 9c7f7d54441841a6b990e928c8e08b8a | admin | ++----------------------------------+---------+ +openstack project list --domain 'ldap' ++----------------------------------+-------+ +| ID | Name | ++----------------------------------+-------+ +| 45e6f96ee6cc4ba3a348c38a212fd8b8 | guest | ++----------------------------------+-------+ +``` + +### Useful project commands + +```sh +## rename +openstack project set --name newprojectname + +## disable +openstack project set --disable +openstack project set --enable + +## delete and accociated instances TEST TEST +openstack project delete +``` + +## Testing - Create local user/group + +To assist with testing access control create a native keystone user that is not in AD/LDAP that has access to the guest project, for AD users this is not required. +A local keystone user can interact with resources owned by AD users/groups, think of keystone as an AD sysnc/caching layer + +> It is important to add valid email addresses for functionality and to chase owners of virtual machines. 
+> the --project parameter does not add access to the project it only sets it as the users default project, it is likely that you would set all users to a default/guest project in a self service environment. + +```sh +# inline password +#openstack user create \ +# --project guest \ +# --password 'Password0' \ +# --email toby.n.seed@gmail.com \ +#toby.n.seed + +# interactive password with no project (can add later) +openstack user create \ + --password-prompt \ + --email tseed@ocf.co.uk \ + tseed + +# change password for a local user (will not work for a domain user) +openstack user set --domain 'default' --password-prompt tseed + +# get users ID +openstack user list --domain 'default' | tail -n 2 +| 0c4c66edb7ca4f899620a500af1546c9 | tseed | ++----------------------------------+-----------+ + +# set the default project in the web console for the user tseed +openstack user set --project guest tseed +openstack project show $(openstack user show tseed --domain 'default' -f json | jq -r .default_project_id) -f json | jq -r .description + +University Guest Project +``` + +### Assign role to the local user + +Give user 'tseed' member access to: + +- The default 'admin' project +- The new 'guest' project + +```sh +openstack role add --project 'guest' --user 'tseed' 'member' +openstack role add --project 'admin' --user 'tseed' 'member' +``` + +The projects are in different domains but the user is able to switch between projects using the toggle at the top right of the web console. +Browse to `https://stack.university.ac.uk/dashboard`. + +- user: tseed +- password: Password0 +- domain: default + +The local user can also be set with a role for the entire domain encompassing all projects in the domain, typically this would only be performed with the 'admin' role. + +```sh +# Where objects have the same name, the unique ID can be used, generally with admin permissions use the object ID for safety + +# get ID of the 'ldap' openstack domain +#openstack domain show 'ldap' -f json | jq -r .id +#c0543515d22f45f88a69008b5b884ebf + +# get ID of the 'tseed' user +#openstack user list --domain 'default' -f json | jq -r '.[] | select(."Name" == "tseed") | .ID' +#fa1fc5885a074a64b2d41958d3fc9dcf + +# get the ID of the 'admin' role +#openstack role list -f json | jq -r '.[] | select(."Name" == "admin") | .ID' +#5730ea7153a84c77adb9350293ea1ed9 + +# bind the 'tseed' local user to the 'admin' role for the entire 'ldap' domain +#openstack role add --domain c0543515d22f45f88a69008b5b884ebf --user fa1fc5885a074a64b2d41958d3fc9dcf d5bb1123771a45229adc57787709d3eb +``` + +Remove the role ready to use group based role assignment instead. + +```sh +openstack role remove --project 'guest' --user 'tseed' 'member' +openstack role remove --project 'admin' --user 'tseed' 'member' +``` + +### Assign role to local group (assigning roles to groups rather than users) + +Create local group in the default domain. 
+ +```sh +# (underscores in object names are generally more compatible with AD in a unix type environments) +openstack group create --domain 'Default' --description 'local group access to guest project' guest_member +openstack group list --long ++----------------------------------+--------------+-----------+-------------------------------------+ +| ID | Name | Domain ID | Description | ++----------------------------------+--------------+-----------+-------------------------------------+ +| 1e0cc781c0684920a020d1f57d5f2f60 | guest_member | default | local group access to guest project | ++----------------------------------+--------------+-----------+-------------------------------------+ + +# add local user to group +# note the group-domain and user-domain parameters, this can facilitate users from different domains access resources in a parallel keystone domains +openstack group add user --group-domain 'Default' --user-domain 'Default' guest_member tseed + +# find groups that a user belongs to +openstack group list --user tseed ++----------------------------------+--------------+ +| ID | Name | ++----------------------------------+--------------+ +| 1e0cc781c0684920a020d1f57d5f2f60 | guest_member | ++----------------------------------+--------------+ + +# simple group membership check that can be easily incorporated into scripts +openstack group contains user guest_member tseed + +tseed in group guest_member + +# add role for group members to access the guest and admin projects +openstack role add --group-domain 'Default' --group guest_member --project guest --project-domain 'ldap' member +openstack role add --group-domain 'Default' --group guest_member --project admin --project-domain 'Default' member +``` + +Selecting an AD user from the domain 'ldap' should show automatic AD group membership, however the limitation is that the user object ID must be used in the query. + +```sh +openstack group list --user $(openstack user show kmgoodin --domain 'ldap' -f json | jq -r .id) --domain 'ldap' ++------------------------------------------------------------------+----------------------+ +| ID | Name | ++------------------------------------------------------------------+----------------------+ +| 7052afb8e616072c4f30e989b381e1a9e9cb012d19851774e6fa96ccd618a12f | ISCA-Openstack-Users | ++------------------------------------------------------------------+----------------------+ +``` + +## Assign roles - AD/LDAP + +Users can be individually assigned roles (admin/member/reader) for domains or projects as illustrated above. +Alternatively a group can be created and assigned the role with users being members of the group. +Use of groups is more convenient where the keystone service uses an LDAP/AD back end. + +> With LDAP/AD typically you would create an AD group, add AD members to the AD group, then create a project (for ease name the same as the AD group), create an internal network+router for the project and finally add a 'member' role to bind the AD group to the project. You may select an AD user (typically but not necessarily in the AD group) to also have the 'admin' role for the associated project to act as caretaker. 
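+
+Pulling that workflow together as a script (a sketch only; the AD group `ISCA-Project-X`, the subnet range and the external network name `provider` are placeholders, and the external/provider network is assumed to already exist):
+
+```sh
+# project named after the AD group, internal network + router, then bind the AD group with the member role
+GROUP=ISCA-Project-X
+openstack project create --domain 'ldap' --description "$GROUP project" $GROUP
+openstack network create --project $GROUP ${GROUP}-net
+openstack subnet create --project $GROUP --network ${GROUP}-net --subnet-range 192.168.50.0/24 ${GROUP}-subnet
+openstack router create --project $GROUP ${GROUP}-router
+openstack router set --external-gateway provider ${GROUP}-router
+openstack router add subnet ${GROUP}-router ${GROUP}-subnet
+openstack role add --group-domain 'ldap' --group $GROUP --project-domain 'ldap' --project $GROUP member
+```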
+ +Individual user role assignments, add an AD user to a new project + +```sh +# find user in the 'ldap' domain +openstack user list --domain 'ldap' | head -n 10 + +# add users as members of the guest project +openstack role add --user-domain 'ldap' --user kmgoodin --project-domain 'ldap' --project guest member + +# check role assignment +openstack role assignment list --user kmgoodin --user-domain 'ldap' --names ++--------+---------------+-------+------------+--------+--------+-----------+ +| Role | User | Group | Project | Domain | System | Inherited | ++--------+---------------+-------+------------+--------+--------+-----------+ +| member | kmgoodin@ldap | | guest@ldap | | | False | ++--------+---------------+-------+------------+--------+--------+-----------+ + +# remove, we will likely want to use AD group based role assignment +openstack role remove --user-domain 'ldap' --user kmgoodin --project-domain 'ldap' --project guest member +``` + +Group based role assignments. + +- Note that the groups are searched from the AD tree at a specific level, this is set by parameter 'group\_tree\_dn' in the environment file 'keystone\_domain\_specific\_ldap\_backend.yaml' +- group\_tree\_dn: OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk +- Create your Openstack groups at this location in the AD tree. +- With large ADs you need to set the 'group\_tree\_dn' for performance, with a directory the size of University when not setting the parameter the lookup queries will actually timeout and never enumerate groups. + +```sh +# check groups in 'ldap' domain +openstack group list --domain 'ldap' ++------------------------------------------------------------------+----------------------+ +| ID | Name | ++------------------------------------------------------------------+----------------------+ +| 7052afb8e616072c4f30e989b381e1a9e9cb012d19851774e6fa96ccd618a12f | ISCA-Openstack-Users | +| 6f7f42957f5919ba70b407328ee7695d2c9100e2debb95fb3ea82f4d1ad73693 | ISCA-Admins | +| 824be73c53019246778e3f22f4a77895a4755abe6b6620df3ee80715a5a42471 | ISCA-Users | +| 91979c969386a984b70e63c81c2779e01c05cafa44c33f989c498a6655f94c06 | ISCA-module-stata | ++------------------------------------------------------------------+----------------------+ + +# add group with 'member' role assignment to the 'guest' project +# the 'ISCA-Openstack-Users' AD group contains all the AD users with potential access to the Openstack cluster +# we want to add every user as a member to the guest project, they will each be able to create a small VM instance +openstack role add --group-domain 'ldap' --group 'ISCA-Openstack-Users' --project-domain 'ldap' --project guest member + +# check role assignment +openstack role assignment list --group 'ISCA-Openstack-Users' --group-domain 'ldap' --names ++--------+------+---------------------------+------------+--------+--------+-----------+ +| Role | User | Group | Project | Domain | System | Inherited | ++--------+------+---------------------------+------------+--------+--------+-----------+ +| member | | ISCA-Openstack-Users@ldap | guest@ldap | | | False | ++--------+------+---------------------------+------------+--------+--------+-----------+ + +# add the default Openstack 'admin' user with an admin role to the guest project (to assist with housekeeping) +openstack role add --user-domain 'Default' --user admin --project-domain 'ldap' --project guest admin +openstack role assignment list --user admin --names 
++-------+---------------+-------+---------------+--------+--------+-----------+
+| Role | User | Group | Project | Domain | System | Inherited |
++-------+---------------+-------+---------------+--------+--------+-----------+
+| admin | admin@Default | | admin@Default | | | False |
+| admin | admin@Default | | guest@ldap | | | False |
+| admin | admin@Default | | | | all | False |
++-------+---------------+-------+---------------+--------+--------+-----------+
+```
+
+A small 'flavour' (the spec of a VM Instance) will be available to members of the 'guest' project; their instances will reside on an internal/private Openstack network created for the project.
+
+# Provider network
+
+The provider network serves as the external traffic route for VM Instances (by various methods). It is a routable range within the wider customer network, most likely a private network but possibly a public/DMZ network.
+
+Typically a single provider network is required, but multiple provider networks are often used.
+
+In a vanilla Openstack deployment multiple provider networks are served via VLANs on the same physical interface(s), owing to the underlying OVS bridge named 'br-ex' having a special 'datacenter' tag to denote placement of provider networks. When creating a network with parameter `--external` the bridge interface for the network is placed on the OVS bridge with the 'datacenter' tag. Changing this behaviour to bind a *new/separate* physical interface to an OVS bridge for external access not only requires considerable changes to the network interface templates but also falls outside the RHOSP support model.
+
+Provider networks assign routable IPs to the following objects:
+
+- Routers (virtual) - for the best segregation you will have a virtual router with an interface on the provider network (routable to the wider customer network) and another interface on an Openstack internal network; multiple internal networks can be linked (routed) to a single router with an interface on the provider network. VM instances in the internal network will use the virtual router as a gateway for egress traffic.
+- Floating IPs - typically you assign a floating IP to a VM instance (1:1 NAT) to gain direct access to the VM instance from the customer estate networks.
+- VM Instances - these can have a provider network IP assigned directly to their network interfaces.
+
+Provider networks can be assigned to serve the following functions where additional parameters are used:
+
+- virtual router only
+ (use parameter `--service-type=network:router_gateway` when creating the subnet for the provider network)
+- virtual router + floating IP (1:1 NAT is performed on a virtual router in the provider network, so floating IPs cannot be used without a router)
+ (use parameters `--service-type=network:router_gateway --service-type=network:floatingip` when creating the subnet for the provider network)
+- VM Instances only
+ (use parameter `--service-type=compute:nova`)
+
+Provider networks can be 'shared', allowing virtual routers (and floating IPs or directly attached VM Instances) from any project to bind an interface into the network.
+Likewise provider networks can be allocated to a domain or a project.
+
+- Domain = the top-level organizational unit, bound to the keystone authentication zone; the default domain is used without LDAP, and when using LDAP a new domain is automatically created.
+- Project = formerly referred to as a tenant; an organizational unit under a domain holding virtual networks and VM instances.
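+
+Tying the `--service-type` parameters above to a concrete command, a hypothetical provider subnet reserved for router gateways and floating IPs might look like the following (the subnet name and 192.0.2.0/24 range are purely illustrative):
+
+```sh
+# illustrative only - IPs from this subnet would only be consumed by router gateways and floating IPs, not by VM instance ports
+openstack subnet create provider-routed-subnet \
+--network provider \
+--service-type network:router_gateway \
+--service-type network:floatingip \
+--allocation-pool start=192.0.2.10,end=192.0.2.200 \
+--gateway 192.0.2.1 \
+--subnet-range 192.0.2.0/24
+```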
+
+Provider network IPs are valuable and likely limited in the customer environment, especially within a DMZ.
+Provider networks can be scoped, e.g. 10.121.4.130-254; however they cannot be carved up into smaller CIDR ranges where the provider network gateway would lie outside of the address range of the CIDR.
+The scarcity of external/provider IPs and strategies to manage them are highlighted in the following spec post:
+
+> https://specs.openstack.org/openstack/neutron-specs/specs/newton/subnet-service-types.html
+
+A provider network with a /24 range can become crowded quickly; it is best to have a large provider network (/16) or ideally multiple correctly sized provider networks dedicated to classes of usage/departments on a project basis. The reasoning behind this is that many customers will want to build at least 1 VM per user that is routable by the wider network, as well as the typical (internal) virtual network per project.
+
+Q: How do you access VM instances on an internal Openstack network? (one that does not have an IP routable from the customer estate via a floating IP or an IP assigned directly from a provider network)
+A: You likely have a jump host with a floating/native IP on the provider network that is dual homed onto the Openstack internal network(s) hosting the VM instances; note that in this model multiple internal networks cannot have overlapping IP ranges.
+
+## Create the University provider network
+
+Create the provider network on VLAN 1214, the external network allocated/routed by the network team.
+
+- --share allows the network to be used by any project.
+- --external denotes the network can route to outbound networks.
+
+```sh
+# VLAN network using the external bridge
+openstack network create provider --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 1214 --share
+```
+
+Create the provider subnet.
+
+- --dns-nameserver allows use of an estate-wide DNS service, which will be key for permanent service identification and/or when issuing CA/SSL certificates.
+- Typically the DNS server handed out by the subnet's inbuilt DHCP service defaults to the gateway IP; by specifying a DNS server, DHCP will present the --dns-nameserver first and then the gateway IP.
+- The range 10.121.4.30-254 is used; IPs 1-30 are used for access to proxmox/undercloud/ceph/switches, and the remaining IPs in the range are free for Openstack to use.
+ +```sh +# create the subnet for the virtual router(s) external interface using an external DNS service +openstack subnet create provider-subnet --network provider --dhcp --allocation-pool start=10.121.4.30,end=10.121.4.254 --gateway 10.121.4.1 --subnet-range 10.121.4.0/24 --dns-nameserver=144.173.6.71 --dns-nameserver=1.1.1.1 + +openstack network list ++--------------------------------------+----------+--------------------------------------+ +| ID | Name | Subnets | ++--------------------------------------+----------+--------------------------------------+ +| 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | provider | ce3acd5d-606e-4b59-9a16-8966b4ab9d3c | ++--------------------------------------+----------+--------------------------------------+ + +openstack subnet list ++--------------------------------------+-----------------+--------------------------------------+---------------+ +| ID | Name | Network | Subnet | ++--------------------------------------+-----------------+--------------------------------------+---------------+ +| ce3acd5d-606e-4b59-9a16-8966b4ab9d3c | provider-subnet | 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | 10.121.4.0/24 | ++--------------------------------------+-----------------+--------------------------------------+---------------+ +``` + +## Create the University default guest network + +Create a virtual router to link the provider and internal networks. + +```sh +# create a virtual router for the provider network +openstack router create guest-router --project guest +openstack router list ++--------------------------------------+--------------+--------+-------+----------------------------------+ +| ID | Name | Status | State | Project | ++--------------------------------------+--------------+--------+-------+----------------------------------+ +| 66643aa6-ae44-4f7e-a3ca-afacda8c3acc | guest-router | ACTIVE | UP | 45e6f96ee6cc4ba3a348c38a212fd8b8 | ++--------------------------------------+--------------+--------+-------+----------------------------------+ + +# add gateway interface to the provider network +openstack router set guest-router --external-gateway provider + +# check the IP of the router +openstack router show guest-router -f json | jq .external_gateway_info + +{ + "network_id": "4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc", + "external_fixed_ips": [ + { + "subnet_id": "ce3acd5d-606e-4b59-9a16-8966b4ab9d3c", + "ip_address": "10.121.4.88" + } + ], + "enable_snat": true +} +``` + +Create isolated internal guest network. + +```sh +# create an isolated virtual network and subnet named 'guest' and 'guest-subnet' for the virtual machines that will use this router +openstack network create guest --internal --no-share --project guest +openstack subnet create guest-subnet --project guest --network guest --gateway 172.16.0.1 --subnet-range 172.16.0.0/16 --dhcp +``` + +Attach the guest subnet to the virtual router. 
+ +```sh +# add router interface on 'guest-router' to subnet 'guest-subnet' +openstack router add subnet guest-router guest-subnet + +# Get interface IPs of the router for the provider network subnet and guest network subnet +openstack router show guest-router -f json | jq -r .external_gateway_info.external_fixed_ips[].ip_address + +10.121.4.88 + +openstack router show guest-router -f json | jq -r .interfaces_info[].ip_address + +172.16.0.1 + +# list all network objects +openstack network list ++--------------------------------------+----------+--------------------------------------+ +| ID | Name | Subnets | ++--------------------------------------+----------+--------------------------------------+ +| 2c1b7587-94f2-43f9-97ab-ae3b80ab59be | guest | 3917c7de-2855-41fd-acbb-63cc87d65fc7 | +| 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | provider | ce3acd5d-606e-4b59-9a16-8966b4ab9d3c | ++--------------------------------------+----------+--------------------------------------+ + +openstack subnet list ++--------------------------------------+-----------------+--------------------------------------+---------------+ +| ID | Name | Network | Subnet | ++--------------------------------------+-----------------+--------------------------------------+---------------+ +| 3917c7de-2855-41fd-acbb-63cc87d65fc7 | guest-subnet | 2c1b7587-94f2-43f9-97ab-ae3b80ab59be | 172.16.0.0/16 | +| ce3acd5d-606e-4b59-9a16-8966b4ab9d3c | provider-subnet | 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | 10.121.4.0/24 | ++--------------------------------------+-----------------+--------------------------------------+---------------+ + +openstack router list ++--------------------------------------+--------------+--------+-------+----------------------------------+ +| ID | Name | Status | State | Project | ++--------------------------------------+--------------+--------+-------+----------------------------------+ +| 66643aa6-ae44-4f7e-a3ca-afacda8c3acc | guest-router | ACTIVE | UP | 45e6f96ee6cc4ba3a348c38a212fd8b8 | ++--------------------------------------+--------------+--------+-------+----------------------------------+ +``` + +Remove router, subnet and network. + +```sh +#openstack router remove subnet guest-router guest-subnet +#openstack subnet delete guest-subnet +#openstack network delete guest +#openstack router delete guest-router +``` + +# Quotas + +Quotas can be set project wide to ensure resource usage has a hard limit. +User specific quotas per project for Nova(compute) can also be set. +Block storage quotas can be set per project but not for user by project unfortunately. + +## Example guest project + +- The guest project is a proof of concept area for each Openstack user to create a single small VM Instance, the project has many limits to enforce this. +- The guest project is available to members of the AD group 'ISCA-Openstack-Users', this group has ~1800 user accounts, some of these are service accounts and some are disabled, limits will be set for 2000 small VM Instances (with no backups or snapshots allowed). +- The VM Instance will be provided by a VM flavour, the spec of the flavour will be 1 core, 2GB ram, 5GB disk. +- For production this example is likely far too much resource to set aside on the cluster as this allocates 4TB of 6TB, an AD 'guest' group with a subset of the users from AD group 'ISCA-Openstack-Users' would likely be used. 
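+
+As a quick arithmetic check of where the project-wide numbers below come from (assuming 2000 users, each allowed a single instance of the 2 vCPU / 2048 MB RAM / 5 GB disk 'guest.tiny' flavour created later):
+
+```sh
+# illustrative sanity check of the quota sizing below
+users=2000
+vcpus_per_instance=2
+ram_mb_per_instance=2048
+disk_gb_per_instance=5
+echo "cores: $((users * vcpus_per_instance))" # 4000
+echo "ram (MB): $((users * ram_mb_per_instance))" # 4096000 (~4TB)
+echo "disk (GB): $((users * disk_gb_per_instance))" # 10000
+```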
+ +Project based quotas: + +```sh +# show default quotas for 'guest' project +openstack quota show --fit-width --default guest ++-----------------------+----------------------------------------------------------------------------+ +| Field | Value | ++-----------------------+----------------------------------------------------------------------------+ +| backup-gigabytes | 1000 | +| backups | 10 | +| cores | 20 | +| floating-ips | 50 | +| gigabytes | 1000 | + +# set project wide quotas, RAM = Megabytes +#openstack quota set --QUOTA_NAME QUOTA_VALUE PROJECT_NAME +openstack quota set --instances 2000 guest ;\ +openstack quota set --cores 4000 guest ;\ +openstack quota set --ram 4096000 guest ;\ +openstack quota set --gigabytes 10000 guest ;\ +openstack quota set --volumes 2000 guest ;\ +openstack quota set --backups 0 guest ;\ +openstack quota set --snapshots 0 guest ;\ +openstack quota set --key-pairs 6000 guest ;\ +openstack quota set --floating-ips 2000 guest ;\ +openstack quota set --networks 1 guest ;\ +openstack quota set --routers 1 guest ;\ +openstack quota set --subnets 1 guest ;\ +openstack quota set --secgroups 250 guest ;\ +openstack quota set --secgroup-rules 2000 guest + +# show applied quotas for 'guest' project +## NOTE: no --default parameter +openstack quota show --fit-width guest +``` + +User per project based quotas: + +- These quotas are set by nova scheduler, the user and tenant(project) objects must be specified by unique ID rather than name. +- There does not seem to be an inbuilt/dynamic way to set a predefined user quota template for a project (quota classes are not yet fully supported). + +```sh +# the nova cli has a slightly different syntax for help +nova help quota-update + +# show user specific quota per project, notice the project wide quotas are shown +nova quota-show --user $(openstack user show tseed -f json | jq -r .id) --tenant $(openstack project show guest -f json | jq -r .id) ++----------------------+---------+ +| Quota | Limit | ++----------------------+---------+ +| instances | 2000 | +| cores | 4000 | +| ram | 4096000 | +| metadata_items | 128 | +| key_pairs | 6000 | +| server_groups | 10 | +| server_group_members | 10 | ++----------------------+---------+ + +# set user specific quotas per project +#nova quota-update --user $projectUser --instance 12 $project +nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --instance 1 $(openstack project show guest -f json | jq -r .id) +nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --cores 2 $(openstack project show guest -f json | jq -r .id) +nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --ram 2048 $(openstack project show guest -f json | jq -r .id) # Megabytes +nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --key-pairs 1 $(openstack project show guest -f json | jq -r .id) + +#check quotas +nova quota-show --user $(openstack user show tseed -f json | jq -r .id) --tenant $(openstack project show guest -f json | jq -r .id) ++----------------------+-------+ +| Quota | Limit | ++----------------------+-------+ +| instances | 1 | +| cores | 2 | +| ram | 2048 | +| metadata_items | 128 | +| key_pairs | 1 | +| server_groups | 10 | +| server_group_members | 10 | ++----------------------+-------+ + +# user tseed is also a member of the 'admin' project, no user quotas have changed for this project +nova quota-show --user $(openstack user show tseed -f json | jq -r .id) --tenant $(openstack project show admin -f json 
| jq -r .id) ++----------------------+-------+ +| Quota | Limit | ++----------------------+-------+ +| instances | 10 | +| cores | 20 | +| ram | 51200 | +| metadata_items | 128 | +| key_pairs | 100 | +| server_groups | 10 | +| server_group_members | 10 | ++----------------------+-------+ +``` + +## Apply user-per-project quotas for each user in the AD group + +There does not seem to be a dynamic way of applying per user quotas for a project as users are added to a group/project. +Unfortunately per user quotas cannot exceed the per project quota, for example if the project quota is 500 instances and there are 2000 users with a plan to quota 1 instance per user - after 500 users have had a quota applied to them the API will return error. +Following is a rough script to update the per user quota for each member of an LDAP group, you could periodically run the script as new users are added to the group. + +```sh +sudo dnf install openldap-clients -y +touch project_quota_per_user.sh +chmod +x project_quota_per_user.sh +nano -cw project_quota_per_user.sh + +#!/bin/bash + +# ldapsearch required: sudo dnf install openldap-clients +#set -x +source /home/stack/overcloudrc + +LDAP_SEARCH_BIND_PASS="3gB=dR=gAfu6CXxx" +LDAP_SEARCH_BASE="OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk" +LDAP_SEARCH_BIND_DN="svc_iscalookup@university.ac.uk" +LDAP_SEARCH_HOST="ldaps://secureprodad.university.ac.uk" +LDAP_SEARCH_FILTER="(&(objectClass=group)(cn=ISCA-Openstack-Users))" +LDAP_SEARCH_FIELDS="member" +OPENSTACK_DOMAIN="ldap" +OPENSTACK_PROJECT="guest" +USERS=() + +function search () { +for i in $(echo -e $1 | awk -F "member:" '{for (i = 1; i <= NF; i++) print $i}' \ + | grep -v ^dn\: \ + | awk -F "," '{gsub(/CN=/,"", $1); print $1}') + do + USERS+=($i) + done +} + +search "$( ldapsearch -LLL -o ldif-wrap=no -x \ + -w "$LDAP_SEARCH_BIND_PASS" \ + -b "$LDAP_SEARCH_BASE" \ + -D "$LDAP_SEARCH_BIND_DN" \ + -H "$LDAP_SEARCH_HOST" \ + $LDAP_SEARCH_FILTER \ + $LDAP_SEARCH_FIELDS)" + +function quota () { +PROJECT_ID=$(openstack project show $OPENSTACK_PROJECT -f json | jq -r .id) +for i in "${USERS[@]}" +do + USER_ID=$(openstack user show --domain $OPENSTACK_DOMAIN $i -f json | jq -r .id) + if [ ! -z "$USER_ID" ] + then + nova quota-update --user $USER_ID --instance 1 $PROJECT_ID + nova quota-update --user $USER_ID --cores 2 $PROJECT_ID + nova quota-update --user $USER_ID --ram 2048 $PROJECT_ID + nova quota-update --user $USER_ID --key-pairs 1 $PROJECT_ID + nova quota-show --user $USER_ID --tenant $PROJECT_ID + fi +done +} + +quota +``` + +# Import disk images to Glance image service + +- The images have cloud-init enabled to ensure the ssh key can be pushed to the image and any metadata can be accessed and used to perform custom bootstrap actions. +- Images are uploaded with public status meaning any user can use the image, it could be private or public(shared), have metadata or be pushed to only be used by a single project. 
+ +```sh +# download the ubuntu image and make available to all projects +wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img +openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./bionic-server-cloudimg-amd64.img ubuntu_18.04 + +# download the alma and make available to all projects +wget https://repo.almalinux.org/almalinux/8/cloud/x86_64/images/AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2 +openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2 alma_8.6 + +# download the rocky and make available to all projects +wget https://download.rockylinux.org/pub/rocky/8.6/images/Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2 +openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2 rocky_8.6 + +# download the cirros test image (only useful to ping/traceroute/curl) to the admin project +wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img +openstack image create --disk-format qcow2 --container-format bare --private --project admin --property os_type=linux --file ./cirros-0.5.1-x86_64-disk.img cirros-0.5.1 + +# check the format of the images to determine if they are qcow format +file cirros-0.5.1-x86_64-disk.img + cirros-0.5.1-x86_64-disk.img: QEMU QCOW Image (v3), 117440512 bytes +file bionic-server-cloudimg-amd64.img + bionic-server-cloudimg-amd64.img: QEMU QCOW Image (v2), 2361393152 bytes +file AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2 + AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2: QEMU QCOW Image (v3), 10737418240 bytes +file Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2 + Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2: QEMU QCOW Image (v3), 3492806656 bytes + +# list image attributes +openstack image list --long --fit-width ++------------------------------+--------------+-------------+------------------+-----------+------------------------------+--------+------------+-----------+--------------------------------+------+ +| ID | Name | Disk Format | Container Format | Size | Checksum | Status | Visibility | Protected | Project | Tags | ++------------------------------+--------------+-------------+------------------+-----------+------------------------------+--------+------------+-----------+--------------------------------+------+ +| 633641ac-6686-4a2e-bfec-0459 | alma_8.6 | qcow2 | bare | 555876352 | c7c15ec93e48399187783be828cc | active | public | False | 9c7f7d54441841a6b990e928c8e08b | | +| b41c1e65 | | | | | 1be2 | | | | 8a | | +| 26c0b4ac-0de2-448d-b695-1f43 | cirros-0.5.1 | qcow2 | bare | 16338944 | 1d3062cd89af34e419f7100277f3 | active | private | False | 9c7f7d54441841a6b990e928c8e08b | | +| c2612efb | | | | | 8b2b | | | | 8a | | +| 6535678f-37b3-49a0-ae10-3a5f | rocky_8.6 | qcow2 | bare | 857604096 | 062b60cb6f7cdfe4c5e4d4624b0b | active | public | False | 9c7f7d54441841a6b990e928c8e08b | | +| 15742607 | | | | | a8c3 | | | | 8a | | +| db826067-0bf8-4494-8837-b707 | ubuntu_18.04 | qcow2 | bare | 389808128 | 3cdb7bbbabdcd466002ff23cdd94 | active | public | False | 9c7f7d54441841a6b990e928c8e08b | | +| 0bb8f1c1 | | | | | 8e2b | | | | 8a | | ++------------------------------+--------------+-------------+------------------+-----------+------------------------------+--------+------------+-----------+--------------------------------+------+ +``` + +# Create 
instance sizes (flavours) + +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.1/html/director\_installation\_and\_usage/assembly_performing-overcloud-post-installation-tasks#sect-Creating-basic-overcloud-flavors](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/assembly_performing-overcloud-post-installation-tasks#sect-Creating-basic-overcloud-flavors) + +Create a single flavour for only the guest project. + +- The flavour is set private and bound to a single project + +```sh +openstack flavor create guest.tiny --ram 2048 --disk 5 --vcpus 2 --private --project guest +openstack flavor list --all ++--------------------------------------+------------+------+------+-----------+-------+-----------+ +| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | ++--------------------------------------+------------+------+------+-----------+-------+-----------+ +| afbb704c-41dd-4165-9c92-c7af79f44d8b | guest.tiny | 2048 | 5 | 0 | 2 | False | ++--------------------------------------+------------+------+------+-----------+-------+-----------+ +``` + +To create instance flavours for all projects the following seems like a good sizing scheme. + +```sh +#openstack flavor create m1.tiny --ram 512 --disk 5 --vcpus 1 +#openstack flavor create m1.smaller --ram 1024 --disk 5 --vcpus 1 +#openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1 +#openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2 +#openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4 +#openstack flavor create m1.xlarge --ram 8192 --disk 10 --vcpus 8 + +openstack flavor list ++--------------------------------------+------------+------+------+-----------+-------+-----------+ +| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | ++--------------------------------------+------------+------+------+-----------+-------+-----------+ +| 0b4b8b07-7ff3-4d75-974d-899e19fa5a8b | m1.small | 2048 | 10 | 0 | 1 | True | +| 2ba59dbb-c2f8-40f3-90ee-a29a634280e3 | m1.medium | 3072 | 10 | 0 | 2 | True | +| 78b79341-6c28-440a-9f04-c6b0f81e8ac6 | m1.tiny | 512 | 5 | 0 | 1 | True | +| 7c9600d8-7b95-4749-a74b-75033cc94bbd | m1.xlarge | 8192 | 10 | 0 | 8 | True | +| a91ab2d7-5412-4adc-b3cf-824902814098 | m1.smaller | 1024 | 5 | 0 | 1 | True | +| c90fe5c6-a5a1-4ba5-8fe3-7343d2942858 | m1.large | 8192 | 10 | 0 | 4 | True | ++--------------------------------------+------------+------+------+-----------+-------+-----------+ +``` + +# Delete disk volumes + +Storage capacity maintenance can be a hands on task in Openstack owing to the following: + +- note the available volumes below, these are previous volumes that should be deleted, preferably in a self service environment users will select to 'delete disk on termination' when creating the VM instance +- finding disk content after termination becomes difficult, especially where there is no description or tag +- adding descriptions is good end-user practise, encourage users to add a description to their disk, maybe add their email to the description and a preference for any disk intended to outlive any VM Instance +- admins may have a policy that disk is deleted if not 'in-use' (thus 'available') and no description is set, as this is generally the state of a disk automatically created with now-decomissioned VM instance +- project based quotas can mitigate wider capacity issues for a cluster with many orphaned disks + +```sh +openstack volume list --project bioinformatics --long 
++--------------------------------------+------+-----------+------+---------+----------+---------------------------------------------------------------+------------+
+| ID | Name | Status | Size | Type | Bootable | Attached to | Properties |
++--------------------------------------+------+-----------+------+---------+----------+---------------------------------------------------------------+------------+
+| 4f26d90b-4aa1-4150-96b6-aa019761bedd | | in-use | 100 | tripleo | true | Attached to b3692490-39af-45dc-8c4c-d9679ae51fca on /dev/vda | |
+| 5dd9ca41-90de-4658-81a4-adfffea99deb | | in-use | 100 | tripleo | true | Attached to 451ccb19-979a-40f9-94da-804bb94d4e04 on /dev/vda | |
+| bad3afd7-db09-4a14-bb87-903d4361fa55 | | available | 100 | tripleo | true | | |
+| 8ad76e95-b46f-4a49-8517-081f78f14997 | | available | 100 | tripleo | true | | |
+| e5061dbb-2f12-4e66-81ca-7900baa24570 | | available | 100 | tripleo | true | | |
++--------------------------------------+------+-----------+------+---------+----------+---------------------------------------------------------------+------------+
+```
+
+The basis of a script to run periodically to remove orphaned disks.
+
+```sh
+touch delete_orphaned_disk.sh
+chmod +x delete_orphaned_disk.sh
+nano -cw delete_orphaned_disk.sh
+
+#!/bin/bash
+
+#set -x
+source /home/stack/overcloudrc
+older_than_days=17
+
+# key 'Status' with value 'available' indicates a disk is not attached to a VM instance
+# key 'Name' with an empty value indicates the disk was created when provisioning a VM instance, users should add a meaningful name to their disk if they value the data
+# when a disk is provisioned independently of a VM instance the ID will be selected by the user rather than being an autogenerated UUID
+# you could use any of these fields and behaviours to qualify whether a disk should be selected for deletion
+for i in $(openstack volume list --project bioinformatics -f json | jq -r '.[] | select((.Status == "available") and .Name == "").ID')
+do
+ doc="$doc $(openstack volume show $i -f json | jq '. | {"volume_id": .id, "last_used": .updated_at, "status": .status, "user_id": .user_id}')"
+done
+doc=$(echo $doc | jq -s .)
+
+# iterate over each candidate volume in the JSON document built above
+list_items=$(echo $doc | jq '. | length')
+for ((i=0;i<list_items;i++))
+do
+ #echo $doc | jq .[$i]
+ user=$(echo $doc | jq -r .[$i].user_id)
+ get_last_used=$(echo $doc | jq -r .[$i].last_used)
+ # get last used date in iso8601 format, convert to unixtime
+ unixtime_last_used=$(date -d $get_last_used +"%s")
+ unixtime_older_than=$(date -d "now -$older_than_days days" +"%s")
+ user_info=$(openstack user show $user -f json | jq '. | {"name": .name, "email": .email}')
+ get_name=$(echo $user_info | jq -r .name)
+ get_email=$(echo $user_info | jq -r .email)
+ if [ $unixtime_last_used -lt $unixtime_older_than ]
+ then
+ schedule_removal=true
+ else
+ schedule_removal=false
+ fi
+ doc1="$doc1 $(echo $doc | jq .[$i] | jq --argjson input1 '{ "name":"'$get_name'", "email":"'$get_email'", "unixtime_last_used":"'$unixtime_last_used'", "schedule_removal":"'$schedule_removal'" }' '. = $input1 + .')"
+done
+doc=$(echo $doc1 | jq -s .)
+ +# add your logic here to print or email report/output, add logic to accept input file to delete orphaned disk based on 'schedule_removal' +# document content +#[ +# { +# "name": "tseed", +# "email": "tseed@ocf.co.uk", +# "unixtime_last_used": "1657093948", +# "schedule_removal": "true", +# "volume_id": "339a923e-4861-4654-9f25-a729b03c7f86", +# "last_used": "2022-07-06T08:52:28.000000", +# "status": "available", +# "user_id": "fa1fc5885a074a64b2d41958d3fc9dcf" +# } +#] + +volumes=$(echo $doc | jq -r '.[] | select(.schedule_removal == "true").volume_id') +for i in $volumes +do + echo $i + openstack volume delete --purge $i +done +``` + +# More CLI commands + +## ssh key commands + +SSH keys can be created in the web console, on creation a .pem private keyfile will be downloaded automatically in the web browser. + +Create a new key, note the private key is presented, you will not be able to retrieve this so ensure you copy this to a safe location. + +```sh +# when issuing this command the private key will be displayed, make note of this as it cannot be retrieved +openstack keypair create test +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEArNhyzyD2/ZA5WgnkNp9dSDEl0XjoAx/yfF77dt6NO6iXB3Os +vqAUVJsnPz8faDm1i8qYM7P61ZrUD4FvnK9SfyIIU/jZuByaNi2/M3DL1Cyj5NCH +ORcRYDyz66X2uIJkPTFr6XVXOEQYWTv7dpuxVAoPle3sQ0UbG9pjD7w2sIEjxtbN +DVcLJfXb7L6ZYHKP/AYQTJNSOJTYSemUkAc7lQvR8q9RZnNcXjLC6j5Boy6VXZuX ++MsDdiWdkeebsc0of64xUK8UjY3eVP0VfOufZejrsCiOjqZhSvgP/AJMt0SnCYZW +1Dl5LlQ+t3RTCcYkTrWjcmuf/dNzewIguCb3RQIDAQABAoIBAQCpY/qoGTNVbll2 +bwkzqty9WkUo06f1IAMBdghVB2g8Fk3k5K1fp/wkqmU9K3x5JU1RMXwV94WUfwbi +J0SdtohPxaeJu/CK6aUMAatHG3z2c8UvAlnzTjMeMH9XKq/vRQI9okiSZAfVQY7n +LMyVAaI4rR93HNOVXY1ir5SzoA2szVV/vP6Ki0WlUZ6AIULX2uAD8PPrDCdMr9Qc +HUjdXHHX6hBjN7UFcE4uYdDPXc3TvFSG4q5PEn3fXGY9D0NyNcvXxmC4w16zb0d3 +8hxVFxcwFcVTvlsTORnTKJ91DBN6jiSY5ABpLriZgZij9T3i0qsouJZ7k2hsb3q/ +zGR4GzMFAoGBAOBBOoEtlWdvhOAaLbXr6iTmiAMLudIXjIKofokjBdgPXucK/r0S +rp/uS68g4s9id6RPOUj9mWq0k4JOtRLb5nXCkQ8eF84PJ6XCnJZL2xcnFfYErqED +CL1BFGhJ7i/ExiHWtD8Ew4oYiWlvfvWxVbrgJsElJ1VZHNDmM8uVIWuDAoGBAMVQ +MrCZQFA+Cxb9vOBn0rYYOhDCyNKAYsZHesTZe8IGieysc5UyEA/7Z2Owvv0L3Y03 +qi5KSDJfMtR82M+L/oykwFc5l/2wUoLjJexpVdZX/KqDq7VERKTtK1qysr+RY141 +a8pof1JN5ojHOTl9BvEnJf/K5clqFfPHuIhmpq+XAoGAFcshBWbJqziyQBkrMg/Q +PG/O7gTYtSsms5fuXCN0MPAld+ygnv1OzSoaXtWiVScrm2M7nPVQUIdmAnblsASA +3BbhhAeXpqXgY4KLNyv+Cbz5rGP+GJWz5riJZC0zIZ9M5gL4l1s+KZCC4iU8wGHQ +hA2+lmym6utzGnYUuIcwrUMCgYBwJi9JlTGq6jjfboVmf1ySx55pXG1MyFBcJtCv +BnaDR7gpX7Oqf3QFwX14ekN0DMR2ucbu3KXAi7+WawfIn+elBReV/FRZi1i6sGUj +xJNXa1dfi8uTEiR6IZvcx2k13Ws/ZtnHiDGmFEUORT5PYLMLapb8ltSY8MVddI18 +aewgLQKBgQCCTYwfPuo67Ujla5CxXr9POZCPQXNnPNJsAOAK3IYoizHUHpUmge/G +sQs+IQY774LKv4ZxT5o1qrNQ491oLk6vamyXTBa59cECTTcvIiZW5stWI5j2zWgm +2XE7Am3MnghnLJdyZ7HA/MT9GGrVHyinojmtM9FWEsKwQ1PJWMQwMQ== +-----END RSA PRIVATE KEY----- +``` + +Import an existing public key, ensure the key is followed by an identifier such as your email, do not use user@server unless it is a single use key for a specifically named VM Instance. 
+ +```sh +pubkey="ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAm+l9n70tSvow56eOLhDZT8VLCmU9MCjUa7d2v0fH2ix/mdWy+RUo9c24U9WJmBlxpAmMDpSxlFcOpBwk1y+tWC/24YJ+m0/6YGWTzbl84GCjdBfrWcTuV5MFYvkYfq8lx3VESyZrYVmoC9Shwtj825YjfVpWqWvFw2kJznyOHWSGv60j6AJyzoT8rWCt4tSusEVzwup7UWF8TDIB6GXO3hqBZcCo3mfyuWkAswkEbX8SKIXqlNUZWMsxdS5ZpodigG6pj9fIsob8P+PxXF7YQiPo4W1uDHGoh0033oLb2fQULs4VjwqNVUE4dKkruFdNupBNCY3BJWHMT/mDOnUiww== tseed@ocf.co.uk" +echo $pubkey > /tmp/pubkey.txt +openstack keypair create --public-key /tmp/pubkey.txt tseed + ++-------------+-------------------------------------------------+ +| Field | Value | ++-------------+-------------------------------------------------+ +| fingerprint | 8b:ae:ed:4c:63:12:cb:5b:a4:7a:5a:bc:08:83:fc:6c | +| name | tseed | +| type | ssh | +| user_id | e2ea49d4ae1d4670b8546aab65deba2b | ++-------------+-------------------------------------------------+ + +rm -Rf /tmp/pubkey.txt +``` + +Keypair operations. + +```sh +openstack keypair -h + +Command "keypair" matches: + keypair create + keypair delete + keypair list + keypair show + +openstack keypair list + ++-------+-------------------------------------------------+ +| Name | Fingerprint | ++-------+-------------------------------------------------+ +| test | 79:e7:10:53:13:fd:ec:47:0e:3e:61:19:3b:84:2b:0a | +| tseed | 8b:ae:ed:4c:63:12:cb:5b:a4:7a:5a:bc:08:83:fc:6c | ++-------+-------------------------------------------------+ + +openstack keypair show test / openstack keypair show test -f json + ++-------------+-------------------------------------------------+ +| Field | Value | ++-------------+-------------------------------------------------+ +| created_at | 2022-10-12T09:08:59.000000 | +| deleted | False | +| deleted_at | None | +| fingerprint | 79:e7:10:53:13:fd:ec:47:0e:3e:61:19:3b:84:2b:0a | +| id | 2 | +| name | test | +| type | ssh | +| updated_at | None | +| user_id | e2ea49d4ae1d4670b8546aab65deba2b | ++-------------+-------------------------------------------------+ + +openstack keypair show --public-key test + +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCs2HLPIPb9kDlaCeQ2n11IMSXReOgDH/J8Xvt23o07qJcHc6y+oBRUmyc/Px9oObWLypgzs/rVmtQPgW+cr1J/IghT+Nm4HJo2Lb8zcMvULKPk0Ic5FxFgPLPrpfa4gmQ9MWvpdVc4RBhZO/t2m7FUCg+V7exDRRsb2mMPvDawgSPG1s0NVwsl9dvsvplgco/8BhBMk1I4lNhJ6ZSQBzuVC9Hyr1Fmc1xeMsLqPkGjLpVdm5f4ywN2JZ2R55uxzSh/rjFQrxSNjd5U/RV8659l6OuwKI6OpmFK+A/8Aky3RKcJhlbUOXkuVD63dFMJxiROtaNya5/903N7AiC4JvdF Generated-by-Nova +``` + +## security group commands (firewall rules) + +The SG mechanism is very flexible and intuative to use. 
+ +```sh +# list security groups +# on a fresh system you will see 2 Default SGs per project, there is the service project (a builtin SG for functional resources like routers) and a default project named admin, until we start to add our own projects we will use the admin project + +openstack security group list ++--------------------------------------+---------+------------------------+----------------------------------+------+ +| ID | Name | Description | Project | Tags | ++--------------------------------------+---------+------------------------+----------------------------------+------+ +| cc3e3172-66ff-48ae-8b92-96bd43fbbc65 | default | Default security group | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] | +| ff5c38eb-96fd-40f9-b90a-4c1b31745438 | default | Default security group | 9c7f7d54441841a6b990e928c8e08b8a | [] | ++--------------------------------------+---------+------------------------+----------------------------------+------+ + +openstack project list # note we also have the guest project we created in the example above) ++----------------------------------+---------+ +| ID | Name | ++----------------------------------+---------+ +| 45e6f96ee6cc4ba3a348c38a212fd8b8 | guest | +| 98df2c2796ba41c09f314be1a83c9aa9 | service | +| 9c7f7d54441841a6b990e928c8e08b8a | admin | ++----------------------------------+---------+ + +openstack security group list --project admin ++--------------------------------------+---------+------------------------+----------------------------------+------+ +| ID | Name | Description | Project | Tags | ++--------------------------------------+---------+------------------------+----------------------------------+------+ +| ff5c38eb-96fd-40f9-b90a-4c1b31745438 | default | Default security group | 9c7f7d54441841a6b990e928c8e08b8a | [] | ++--------------------------------------+---------+------------------------+----------------------------------+------+ + +# check a security group, json output is easier to read +openstack security group show ff5c38eb-96fd-40f9-b90a-4c1b31745438 -f json + +# find the rules associated to the security group +openstack security group show ff5c38eb-96fd-40f9-b90a-4c1b31745438 -f json | jq -r .rules[].id +0dbe030c-d556-4553-bf2f-86b2d8f003a3 +2de6e7cb-67b8-4df8-9cbd-35de055490b7 +72b479d8-e52e-4e3c-ab52-c9645bedb267 +f59c6050-ba70-4567-a094-8d026f0be586 + +# list all rules associated with a security group +# notice we can bind rules with ingress/egress and other security groups, much like AWS we can attach VM Instances to SGs and inherit rules this way +openstack security group rule list ff5c38eb-96fd-40f9-b90a-4c1b31745438 ++--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+ +| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | ++--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+ +| 0dbe030c-d556-4553-bf2f-86b2d8f003a3 | None | IPv6 | ::/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 | +| 2de6e7cb-67b8-4df8-9cbd-35de055490b7 | None | IPv4 | 0.0.0.0/0 | | None | +| 72b479d8-e52e-4e3c-ab52-c9645bedb267 | None | IPv6 | ::/0 | | None | +| f59c6050-ba70-4567-a094-8d026f0be586 | None | IPv4 | 0.0.0.0/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 | ++--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+ + +# add a simple ssh access rule to the SG 'default' in the 'admin' project +openstack 
security group rule create \ +--ingress \ +--protocol tcp \ +--ethertype IPv4 \ +--remote-ip '0.0.0.0/0' \ +--dst-port 22 \ +ff5c38eb-96fd-40f9-b90a-4c1b31745438 + +# the output of our last command showed a rule created with ID 8e78f3ea-7e07-4db7-ab22-6e59935f76a9 +openstack security group rule list ff5c38eb-96fd-40f9-b90a-4c1b31745438 + ++--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+ +| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | ++--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+ +| 0dbe030c-d556-4553-bf2f-86b2d8f003a3 | None | IPv6 | ::/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 | +| 2de6e7cb-67b8-4df8-9cbd-35de055490b7 | None | IPv4 | 0.0.0.0/0 | | None | +| 72b479d8-e52e-4e3c-ab52-c9645bedb267 | None | IPv6 | ::/0 | | None | +| 8e78f3ea-7e07-4db7-ab22-6e59935f76a9 | tcp | IPv4 | 0.0.0.0/0 | 22:22 | None | +| f59c6050-ba70-4567-a094-8d026f0be586 | None | IPv4 | 0.0.0.0/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 | ++--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+ +``` + +Create your own security group for the 'guest' project, a new VM instance will require any non default secuirty group binding to the instance, typically when a user creates a VM Instance they will select the security group from a dropdown menu in the web console. + +```sh +# create a new SG for your custom application in the guest project +openstack security group create --project guest MYAPP + ++-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | ++-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ +| created_at | 2022-10-12T10:00:04Z | +| description | MYAPP | +| id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | +| location | cloud='', project.domain_id=, project.domain_name=, project.id='45e6f96ee6cc4ba3a348c38a212fd8b8', project.name=, region_name='regionOne', zone= | +| name | MYAPP | +| project_id | 45e6f96ee6cc4ba3a348c38a212fd8b8 | +| revision_number | 1 | +| rules | created_at='2022-10-12T10:00:04Z', direction='egress', ethertype='IPv6', id='9c64924f-644a-4234-8026-8239fac14c16', updated_at='2022-10-12T10:00:04Z' | +| | created_at='2022-10-12T10:00:04Z', direction='egress', ethertype='IPv4', id='db49e625-da33-4c1e-aab8-9ce4a10cf4f9', updated_at='2022-10-12T10:00:04Z' | +| tags | [] | +| updated_at | 2022-10-12T10:00:04Z | ++-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+ + +# add some rules to the 'MYAPP' SG + +# inbound access from anywhere to port 2000 +openstack security group rule create \ +--ingress \ +--protocol tcp \ +--ethertype IPv4 \ +--remote-ip '0.0.0.0/0' \ +--dst-port 2000 \ +03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 + ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | 
++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| created_at | 2022-10-12T10:00:35Z | +| description | | +| direction | ingress | +| ether_type | IPv4 | +| id | bfaa42da-8573-4490-8098-45e7befa57f4 | +| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='9c7f7d54441841a6b990e928c8e08b8a', project.name='admin', region_name='regionOne', zone= | +| name | None | +| port_range_max | 2000 | +| port_range_min | 2000 | +| project_id | 9c7f7d54441841a6b990e928c8e08b8a | +| protocol | tcp | +| remote_group_id | None | +| remote_ip_prefix | 0.0.0.0/0 | +| revision_number | 0 | +| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | +| tags | [] | +| updated_at | 2022-10-12T10:00:35Z | ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +# inbound access from VM instances only on a local network to port range 3000-4000, the ip range is the guest network subnet +openstack security group rule create \ +--ingress \ +--protocol tcp \ +--ethertype IPv4 \ +--remote-ip '172.16.0.0/16' \ +--dst-port 3000:4000 \ +03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 + ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| created_at | 2022-10-12T10:01:05Z | +| description | | +| direction | ingress | +| ether_type | IPv4 | +| id | 7685e615-3e1b-4ea1-82e9-1131daf11f69 | +| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='9c7f7d54441841a6b990e928c8e08b8a', project.name='admin', region_name='regionOne', zone= | +| name | None | +| port_range_max | 4000 | +| port_range_min | 3000 | +| project_id | 9c7f7d54441841a6b990e928c8e08b8a | +| protocol | tcp | +| remote_group_id | None | +| remote_ip_prefix | 172.16.0.0/16 | +| revision_number | 0 | +| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | +| tags | [] | +| updated_at | 2022-10-12T10:01:05Z | ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +# list new security group +openstack security group list --project guest ++--------------------------------------+---------+------------------------+----------------------------------+------+ +| ID | Name | Description | Project | Tags | ++--------------------------------------+---------+------------------------+----------------------------------+------+ +| 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | MYAPP | MYAPP | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] | +| cc3e3172-66ff-48ae-8b92-96bd43fbbc65 | default | Default security group | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] | ++--------------------------------------+---------+------------------------+----------------------------------+------+ + +# list rules in security group +openstack security group rule list 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 ++--------------------------------------+-------------+-----------+---------------+------------+-----------------------+ +| ID | IP 
Protocol | Ethertype | IP Range | Port Range | Remote Security Group | ++--------------------------------------+-------------+-----------+---------------+------------+-----------------------+ +| 7685e615-3e1b-4ea1-82e9-1131daf11f69 | tcp | IPv4 | 172.16.0.0/16 | 3000:4000 | None | +| 9c64924f-644a-4234-8026-8239fac14c16 | None | IPv6 | ::/0 | | None | +| bfaa42da-8573-4490-8098-45e7befa57f4 | tcp | IPv4 | 0.0.0.0/0 | 2000:2000 | None | +| db49e625-da33-4c1e-aab8-9ce4a10cf4f9 | None | IPv4 | 0.0.0.0/0 | | None | ++--------------------------------------+-------------+-----------+---------------+------------+-----------------------+ + +# show rule +openstack security group rule show 7685e615-3e1b-4ea1-82e9-1131daf11f69 ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| created_at | 2022-10-12T10:01:05Z | +| description | | +| direction | ingress | +| ether_type | IPv4 | +| id | 7685e615-3e1b-4ea1-82e9-1131daf11f69 | +| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='9c7f7d54441841a6b990e928c8e08b8a', project.name='admin', region_name='regionOne', zone= | +| name | None | +| port_range_max | 4000 | +| port_range_min | 3000 | +| project_id | 9c7f7d54441841a6b990e928c8e08b8a | +| protocol | tcp | +| remote_group_id | None | +| remote_ip_prefix | 172.16.0.0/16 | +| revision_number | 0 | +| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | +| tags | [] | +| updated_at | 2022-10-12T10:01:05Z | ++-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +# notice when a SG is created some default outbound egress rules are created to allow access anywhere, these rules are present in the 'default' security group so typically do not need to be included +# they are present incase this is the only SG applied to the VM Instance +# often where multiple SGs are bound to a host this default outbound rule will be duplicated, this is not an issue +# however if you want to control egress traffic it maybe easier to have only one SG containing egress rules +openstack security group rule show db49e625-da33-4c1e-aab8-9ce4a10cf4f9 ++-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | ++-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ +| created_at | 2022-10-12T10:00:04Z | +| description | None | +| direction | egress | +| ether_type | IPv4 | +| id | db49e625-da33-4c1e-aab8-9ce4a10cf4f9 | +| location | cloud='', project.domain_id=, project.domain_name=, project.id='45e6f96ee6cc4ba3a348c38a212fd8b8', project.name=, region_name='regionOne', zone= | +| name | None | +| port_range_max | None | +| port_range_min | None | +| project_id | 45e6f96ee6cc4ba3a348c38a212fd8b8 | +| protocol | None | +| remote_group_id | None | +| remote_ip_prefix | 0.0.0.0/0 | +| revision_number | 0 | +| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | +| tags | [] | +| updated_at 
| 2022-10-12T10:00:04Z |
++-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
+
+# delete the rules and security group
+
+# if there are multiple SGs named MYAPP it may be hard to determine the correct SG, using UUID values is safer
+#openstack security group show MYAPP -f json | jq -r .id
+#03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
+
+openstack security group list --project guest
++--------------------------------------+---------+------------------------+----------------------------------+------+
+| ID | Name | Description | Project | Tags |
++--------------------------------------+---------+------------------------+----------------------------------+------+
+| 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | MYAPP | MYAPP | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
+| cc3e3172-66ff-48ae-8b92-96bd43fbbc65 | default | Default security group | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
++--------------------------------------+---------+------------------------+----------------------------------+------+
+
+#openstack security group show 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 -f json | jq -r .rules[].id
+#7685e615-3e1b-4ea1-82e9-1131daf11f69
+#9c64924f-644a-4234-8026-8239fac14c16
+#bfaa42da-8573-4490-8098-45e7befa57f4
+#db49e625-da33-4c1e-aab8-9ce4a10cf4f9
+
+openstack security group rule list 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
++--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
+| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
++--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
+| 7685e615-3e1b-4ea1-82e9-1131daf11f69 | tcp | IPv4 | 172.16.0.0/16 | 3000:4000 | None |
+| 9c64924f-644a-4234-8026-8239fac14c16 | None | IPv6 | ::/0 | | None |
+| bfaa42da-8573-4490-8098-45e7befa57f4 | tcp | IPv4 | 0.0.0.0/0 | 2000:2000 | None |
+| db49e625-da33-4c1e-aab8-9ce4a10cf4f9 | None | IPv4 | 0.0.0.0/0 | | None |
++--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
+
+# remove rules
+openstack security group rule delete 7685e615-3e1b-4ea1-82e9-1131daf11f69
+openstack security group rule delete 9c64924f-644a-4234-8026-8239fac14c16
+openstack security group rule delete bfaa42da-8573-4490-8098-45e7befa57f4
+openstack security group rule delete db49e625-da33-4c1e-aab8-9ce4a10cf4f9
+
+# remove SG
+openstack security group delete 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
+```
\ No newline at end of file
diff --git a/7) Example Project.md b/7) Example Project.md
new file mode 100755
index 0000000..45c7efb
--- /dev/null
+++ b/7) Example Project.md
@@ -0,0 +1,360 @@
+# Example of a new project
+
+The following example exclusively uses CLI administration; this helps clarify the components in play and their interdependencies. All steps can also be performed in the web console.
+ +## Load environment variables to use the Overcloud CLI + +```sh +[stack@undercloud ~]$ source ~/stackrc +(undercloud) [stack@undercloud ~]$ source ~/overcloudrc +(overcloud) [stack@undercloud ~]$ +``` + +## Create project + +```sh +# create project +openstack project create --domain 'ldap' --description "Bioinformatics Project" bioinformatics +``` + +## Create an internal Openstack network/subnet for the project + +```sh +openstack network create bioinformatics-network --internal --no-share --project bioinformatics +openstack subnet create bioinformatics-subnet --project bioinformatics --network bioinformatics-network --gateway 172.16.1.1 --subnet-range 172.16.1.0/16 --dhcp +``` + +## Create a router for the project + +```sh +openstack router create bioinformatics-router --project bioinformatics +openstack router set bioinformatics-router --external-gateway provider +``` + +## Add an interface to the provider network to the project network + +```sh +openstack router add subnet bioinformatics-router bioinformatics-subnet +``` + +## Create a security group named 'linux-default' to allow inbound ssh for VM instances + +- a new security group injects rules on creation to allow outbound traffic by default, where multiple security groups are attached these default rules may be removed + +```sh +openstack security group create --project bioinformatics linux-default +openstack security group rule create \ +--ingress \ +--protocol tcp \ +--ethertype IPv4 \ +--remote-ip '0.0.0.0/0' \ +--dst-port 22 \ +$(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(.Name == "linux-default").ID') + +# list security group rules +openstack security group rule list $(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(."Name" == "default") | .ID') +openstack security group rule list $(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(."Name" == "linux-default") | .ID') --long ++--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+ +| ID | IP Protocol | Ethertype | IP Range | Port Range | Direction | Remote Security Group | ++--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+ +| 99210e25-4b7f-4125-93bb-7abea3eddf07 | None | IPv4 | 0.0.0.0/0 | | egress | None | +| adc21371-52bc-4c63-8e23-8e55a119407c | None | IPv6 | ::/0 | | egress | None | +| d327baac-bdaa-437c-b506-b90659e92833 | tcp | IPv4 | 0.0.0.0/0 | 22:22 | ingress | None | ++--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+ +``` + +## Set quotas for the scope of the entire project + +```sh +openstack quota set --instances 50 bioinformatics ;\ +openstack quota set --cores 300 bioinformatics ;\ +openstack quota set --ram 204800 bioinformatics ;\ +openstack quota set --gigabytes 5000 bioinformatics ;\ +openstack quota set --volumes 500 bioinformatics ;\ +openstack quota set --key-pairs 50 bioinformatics ;\ +openstack quota set --floating-ips 50 bioinformatics ;\ +openstack quota set --networks 10 bioinformatics ;\ +openstack quota set --routers 5 bioinformatics ;\ +openstack quota set --subnets 10 bioinformatics ;\ +openstack quota set --secgroups 100 bioinformatics ;\ +openstack quota set --secgroup-rules 1000 bioinformatics +``` + +## Create flavours for the project + +- flavours are pre-scoped specs of the instances + +```sh 
+openstack flavor create small --ram 2048 --disk 10 --vcpus 2 --private --project bioinformatics ;\ +openstack flavor create medium --ram 3072 --disk 10 --vcpus 4 --private --project bioinformatics ;\ +openstack flavor create large --ram 8192 --disk 10 --vcpus 8 --private --project bioinformatics ;\ +openstack flavor create xlarge --ram 16384 --disk 10 --vcpus 16 --private --project bioinformatics ;\ +openstack flavor create xxlarge --ram 65536 --disk 10 --vcpus 48 --private --project bioinformatics +``` + +## End-user access using Active Directory groups + +- In the Univerity Prod environment you would typically create an AD group with nested AD users +- To illustrate the method, chose the pre-existing group 'ISCA-Admins' + +```sh +openstack user list --group 'ISCA-Admins' --domain ldap ++------------------------------------------------------------------+--------+ +| ID | Name | ++------------------------------------------------------------------+--------+ +| c633f80625e587bc3bbe492af57cb99cec59201b16cc06f614e36a6b767d6b29 | mtw212 | +| 0c4e3bdacda6c9b8abcd61de94deb47ff236cec3581fbbacf2d9daa1c584a44d | mmb204 | +| 2d4338bc2ba649ff15111519e535d0fc6c65cbb7e5275772b4e0c675af09002b | rr274 | +| b9461f113d208b54a37862ca363ddf37da68cf00ec06d67ecc62bb1e5caf06d4 | dma204 | +| 0fb8469b2d7e297151102b0119a4b08f6b26113ad8401b6cb79936adf946ba19 | ac278 | ++------------------------------------------------------------------+--------+ + +# bind member role to users in the access group for the project +openstack role add --group-domain 'ldap' --group 'ISCA-Admins' --project-domain 'ldap' --project bioinformatics member + +# bind admin role to a specific user for the project +openstack role add --user-domain 'ldap' --user mtw212 --project-domain 'ldap' --project bioinformatics admin +openstack role assignment list --user $(openstack user show --domain 'ldap' mtw212 -f json | jq -r .id) --names ++-------+-------------+-------+---------------------+--------+--------+-----------+ +| Role | User | Group | Project | Domain | System | Inherited | ++-------+-------------+-------+---------------------+--------+--------+-----------+ +| admin | mtw212@ldap | | bioinformatics@ldap | | | False | ++-------+-------------+-------+---------------------+--------+--------+-----------+ + +# bind member role for local user 'tseed' for the project +openstack role add --user-domain 'Default' --user tseed --project-domain 'ldap' --project bioinformatics member + +# bind admin role for the (default) local user 'admin' for the project - we want the admin user to have full access to the project +openstack role add --user-domain 'Default' --user admin --project-domain 'ldap' --project bioinformatics admin +``` + +## Import a disk image to be used specifically for the project + +- This can be custom image pre-baked with specific software or any vendor OS install image +- Images should support cloud-init to support initial user login, generic distro images with cloud-init enabled should work + +```sh +wget https://repo.almalinux.org/almalinux/8/cloud/x86_64/images/AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2 +openstack image create --disk-format qcow2 --container-format bare --private --project bioinformatics --property os_type=linux --file ./AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2 alma_8.6 +``` + +## SSH keypairs + +Generate an ssh key pair, this will be used for initial login to a VM instance. 
+
+- the keypair in this example is owned by the admin user, other users will not see the ssh keypair in the web console and will need a copy of the ssh private key (unless a password is set in cloud-init userdata)
+- each user will have their own keypair that will be selected when provisioning a VM instance in the web console
+- once instantiated, additional users can import ssh keys into the authorized_keys file as on a typical linux host
+- when generating ssh public keys, Openstack requires a comment at the end of the key; when importing a keypair (even via the web console) the public key needs a comment
+
+Generic distro (cloud-init) images generally have their own default user, typically image specific such as 'almalinux' or 'ubuntu'; you log in as this user using the ssh private key counterpart to the public ssh key specified with the '--key-name' parameter.
+Some cloud-init images use the user in the comment of the ssh key as the default user (or as an additional user).
+The convention is to provision instances with cloud-init userdata, with the expectation that you will provide your own user and credentials.
+
+```sh
+ssh-keygen -t rsa -b 4096 -C "bioinformatics@university.ac.uk" -f ~/bioinformatics_cloud
+openstack keypair create --public-key ~/bioinformatics_cloud.pub bioinformatics
+```
+
+## Cloud-init userdata
+
+This OPTIONAL step is very useful. Cloud providers typically utilise userdata to set up the initial login, but userdata is much more powerful and is often used to register the instance with a configuration management tool that installs a suite of software (chef/puppet/ansible in pull mode), or even to embed a shell script for direct software provision (pull and start containers, see the sketch below). Beware that userdata is limited to 64KB.
+
+NOTE: OCF have built cloud-init userdata for Linux (and Windows in Azure) to configure SSSD to join cloud instances to Microsoft Active Directory and enable multi-user access; this is highly environment/customer specific.
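+A minimal sketch of the 'embed a shell script' style of userdata mentioned above; the podman package, container image and published port here are arbitrary illustrations and not part of this build:
+
+```sh
+# write a runcmd-style userdata file that installs podman and starts a container on first boot
+cat > userdata-runcmd.txt <<'EOF'
+#cloud-config
+package_update: true
+packages:
+  - podman
+runcmd:
+  - [ sh, -c, "podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine" ]
+EOF
+```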
+ +- Openstack is kind, you dont have to base64 encode the userdata like some public cloud providers, it is automatic +- generally each cloud-init image will have its own default user, typically these image specific such as 'almalinux' or 'ubuntu' +- the following config will replace this default user with your own bioinformatics user, password and ssh key (It also adds the universityops user to ensure an admin can get into the system) +- NOTE the ssh key entry below has had the trailing comment removed +- passwords can be in cleartext but Instance users will be able to see the password in the userdata, create a hash with the command `openssl passwd -6 -salt xyz Password0` +- userdata can be added to the instance when provisioning in the web console @ Customisation Script, it is always a good idea to provide a userdata template to the end user where they self provision + +```sh +nano -cw userdata.txt # yaml format + +#cloud-config +ssh_pwauth: true +groups: + - admingroup: [root,sys] + - bioinformatics + - universityops +users: + - name: bioinformatics + primary_group: bioinformatics + lock_passwd: false + passwd: $6$xyz$4tTWyuHIT6gXRuzotBZn/9xZBikUp0O2X6rOZ7MDJo26aax.Ok5P4rWYyzdgFkjArIIyB8z8LKVW1wARbcBzn/ + sudo: ALL=(ALL) NOPASSWD:ALL + shell: /bin/bash + ssh_authorized_keys: + - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQD4Yh0OuBTvyXObUcJKLDNjIhmSkf/RiSPhPYzNECwC7hlIms/fYcbODPQmboo8pgtnlDK0aElWr16n1z+Yb/3btzmO/G8pZEUR607VmWINuYzSJyAieL6zNPn0XC2eP9mqWJJP44SjroVKLjnhajy761FaGxXJyXr3RXmIb4xc+qW8ETJQh98ucZZZQ3X8MernjIOO+VGVObDDDTZXsaL1wih0+v/R9gMJP8AgSCpi539o0A6RgFzMqFfroUKe6uYa1ohBrjii+teKETEb7isNOZFPx459zhqRPVjFlzVXNpDBPVjz32uuUyBRW4jMlwQ/GIrhT7+fNjpxG0CrVe0c3F+BoBnqfdrsLFCJ3dg+z19lBLnC2ulp511kqEVctjG96l9DeEPtab28p22aV3fuzdnx24y3BJi8Wea79U8+RTy0fYCM0Sm8rwREUHD2bAgjtIUU8gTKnQLyeUAc5+qJCFqa3H9/DJZ44MQzk/rC0shBUU7z+IwWhftU1P9GWURko11Bmg6pq+/fdGVm/eqilDabirbZxjqnxXCBGcOM6QsPoooJ9cgCU34k9KhUxPJ34frYfwHaWkDYxe+7VBrrzPWpOnOGt04eegwdNBDMnl703wfXqobnyy8nMmzH04j2PThJ7ZrRnA6bo/dYtVZXHocfq76yPxSsmYClebJBSQ== + - name: universityops + primary_group: bioinformatics + lock_passwd: false + passwd: $6$xyz$4tTWyuHIT6gXRuzotBZn/9xZBikUp0O2X6rOZ7MDJo26aax.Ok5P4rWYyzdgFkjArIIyB8z8LKVW1wARbcBzn/ + sudo: ALL=(ALL) NOPASSWD:ALL + shell: /bin/bash +``` + +## Create a floating ip + +With the network design up to this point you can have a routable IP capable of accepting ingress traffic from the wider University estate by two methods: + +1. floating IP, a '1:1 NAT' of a provider network IP mapped to the VM interface IP in the private Openstack 'bioinformatics' network +2. interface IP directly in the provider network + +Floating IPs are more versatile as they can be moved between instances for all manner of blue-green scenarios, typically a VM instance does not have to be multihomed between networks either. +Floating IPs in Openstack private networks are possible can be just useful in a multi-tiered application stack - think DR strategy, scripting the Openstack API to move the floating IP between instances. +However end users may want a VM instance with only a provider network IP, this would only be able to communicate with other Openstack VM instances with a provider IP. 
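+To illustrate the point above about scripting floating IP moves for blue-green/DR scenarios, a minimal sketch (assuming the floating IP and the two example instances created later in this section already exist):
+
+```sh
+# re-point the floating IP from one instance to another without changing DNS or the provider network
+openstack server remove floating ip bioinformatics01 10.121.4.246
+openstack server add floating ip bioinformatics02 10.121.4.246
+```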
+ +```sh +# create a floating IP in the 'provider' network on the 'provider-subnet' subnet range +openstack floating ip create --project bioinformatics --description 'bioinformatics01' --subnet provider-subnet provider +openstack floating ip list --project bioinformatics --long -c 'ID' -c 'Floating IP Address' -c 'Description' ++--------------------------------------+---------------------+------------------+ +| ID | Floating IP Address | Description | ++--------------------------------------+---------------------+------------------+ +| 0eb3f78d-d59d-4ec6-b725-d2c1f45c9a77 | 10.121.4.246 | bioinformatics01 | ++--------------------------------------+---------------------+------------------+ +``` + +Check allocated 'ports', think of this as IP endpoints for objects known by openstack. + +- VM Instance = compute:nova +- Floating IP = network:floatingip +- DHCP service = network:dhcp (most networks will have one) +- Primary router interface = network:router_gateway (usually in the provider network, for egress/SNAT access to external networks) +- Secondary router interface = network:router_interface (router interface on a private Openstack network) + +```sh +openstack port list --long -c 'ID' -c 'Fixed IP Addresses' -c 'Device Owner' ++--------------------------------------+-----------------------------------------------------------------------------+--------------------------+ +| ID | Fixed IP Addresses | Device Owner | ++--------------------------------------+-----------------------------------------------------------------------------+--------------------------+ +| 108171d9-cd76-49ab-944e-751f8257c8d1 | ip_address='10.121.4.150', subnet_id='92361cfd-f348-48a2-b264-7845a3a3d592' | compute:nova | +| 3d86a21a-f187-47e0-8204-464adf334fb0 | ip_address='172.16.0.2', subnet_id='a92d2ac0-8b60-4329-986d-ade078e75f45' | network:dhcp | +| 3db3fe34-85a8-4028-b670-7f9aa5c86c1a | ip_address='10.121.4.148', subnet_id='92361cfd-f348-48a2-b264-7845a3a3d592' | network:floatingip | +| 400cb067-2302-4f8e-bc1a-e187929afbbc | ip_address='10.121.4.205', subnet_id='92361cfd-f348-48a2-b264-7845a3a3d592' | network:router_gateway | +| 5c93d336-05b5-49f0-8ad4-9de9c2ccf216 | ip_address='172.16.2.239', subnet_id='ab658788-0c5f-4d22-8786-aa7256db66b6' | compute:nova | +| 62afa3de-5316-4eb6-88ca-4830c141c898 | ip_address='172.16.1.1', subnet_id='ab658788-0c5f-4d22-8786-aa7256db66b6' | network:router_interface | +| 7c8b58c0-3ff7-44f6-9eb3-a601a139aab9 | ip_address='172.16.0.1', subnet_id='a92d2ac0-8b60-4329-986d-ade078e75f45' | network:router_interface | +| 9f41db95-8333-4f6d-88e0-c0e3f7d4b7f0 | ip_address='172.16.1.2', subnet_id='ab658788-0c5f-4d22-8786-aa7256db66b6' | network:dhcp | +| c9591f1b-8d43-4322-acd6-75cd4cce04e3 | ip_address='10.121.4.239', subnet_id='92361cfd-f348-48a2-b264-7845a3a3d592' | network:router_gateway | +| e3f35c0a-6543-4508-8d17-96de69f85a1c | ip_address='10.121.4.130', subnet_id='92361cfd-f348-48a2-b264-7845a3a3d592' | network:dhcp | ++--------------------------------------+-----------------------------------------------------------------------------+--------------------------+ +``` + +## Create disk volumes + +Create volumes that will be attached on VM instantiation (bioinformatics02). 
+ +```sh +# find the image to use on the boot disk +openstack image list -c 'ID' -c 'Name' -c 'Project' --long -f json | jq -r '.[] | select(.Name == "alma_8.6").ID' +0a0d99c1-4bce-4e74-9df8-f9cf5666aa98 + +# create a bootable disk +openstack volume create --bootable --size 50 --image $(openstack image list -c 'ID' -c 'Name' -c 'Project' --long -f json | jq -r '.[] | select(.Name == "alma_8.6").ID') --description "bioinformatics02 boot" --os-project-domain-name='ldap' --os-project-name 'bioinformatics' bioinformatics02boot + +# create a data disk +openstack volume create --non-bootable --size 100 --description "bioinformatics02 data" --os-project-domain-name='ldap' --os-project-name 'bioinformatics' bioinformatics02data +``` + +## Create VM instances + +Creating instances via the CLI can save a lot of time VS the web console if the environment is not to be initially self provisioned by the end user, allowing you to template a bunch of machines quickly. + +VM instances are not technically 'owned' by a user, they reside in a domain/project, they are provisioned by a user (initially with a user specific SSH key) and can be administered by users in same the project via the CLI/web-console. SSH access to the VM will be user specific unless the provisioning user adds access for other users (via password or SSH private key distribution at the operating system level). Userdata is the key to true multitenancy. + +### Instance from flavour with larger disk and floating IP + +The following command illustrates: + +- create VM Instance in the Openstack 'bioinformatics' network with an additional floating IP +- override the instance flavour 10GB disk with a 100GB disk, the disk is not removed when the instance is deleted +- add multiple security groups, these apply to all interfaces by default, allowing specific ingress for only the floating IP would be achieved with a rule matching the destination of floating IP + +```sh +# create VM instance +openstack server create \ +--image alma_8.6 \ +--flavor large \ +--boot-from-volume 100 \ +--network bioinformatics-network \ +--security-group $(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(.Name == "default").ID') \ +--security-group $(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(.Name == "linux-default").ID') \ +--key-name bioinformatics \ +--user-data userdata.txt \ +--os-project-domain-name='ldap' \ +--os-project-name 'bioinformatics' \ +bioinformatics01 +``` + +Attach the floating IP: + +- this command relies on the unique uuid ID of both the server and floating IP objects as the command doesn't support the --project parameter +- we named both our floating IP and VM instance 'bioinformatics01', really this is where tags start to become useful + +```sh +# attach floating IP +openstack server add floating ip $(openstack server list --project bioinformatics -f json | jq -r '.[] | select(.Name == "bioinformatics01").ID') $(openstack floating ip list --project bioinformatics --long -c 'ID' -c 'Floating IP Address' -c 'Description' -f json | jq -r '.[] | select(.Description == "bioinformatics01") | ."Floating IP Address"') + +# check the IP addresses allocated to the VM instance, we see the floating IP 10.121.4.246 directly on the routable provider network +openstack server list --project bioinformatics ++--------------------------------------+------------------+--------+--------------------------------------------------+-------+--------+ +| ID | Name | Status | Networks | Image | 
Flavor | ++--------------------------------------+------------------+--------+--------------------------------------------------+-------+--------+ +| ca402aed-84dd-47ad-b5ba-5fc74978f66b | bioinformatics01 | ACTIVE | bioinformatics-network=172.16.3.74, 10.121.4.246 | | large | ++--------------------------------------+------------------+--------+--------------------------------------------------+-------+--------+ +``` + +### 'multi-homed' Instance from flavour with manually specified disk + +Create the VM instance with the disk volumes attached and network interfaces in both the project's Openstack private network and the provider network. + +```sh +# create a VM instance +## -v is a debug parameter, -vv for more +openstack server create \ +--volume $(openstack volume list --name bioinformatics02boot --project bioinformatics -f json | jq -r .[].ID) \ +--block-device-mapping vdb=$(openstack volume list --name bioinformatics02data --project bioinformatics -f json | jq -r .[].ID):volume::true \ +--flavor large \ +--nic net-id=provider \ +--nic net-id=bioinformatics-network \ +--security-group $(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(.Name == "default").ID') \ +--security-group $(openstack security group list --project bioinformatics -f json | jq -r '.[] | select(.Name == "linux-default").ID') \ +--key-name bioinformatics \ +--user-data userdata.txt \ +--os-project-domain-name='ldap' \ +--os-project-name 'bioinformatics' \ +bioinformatics02 -v + +# remove the server +## note that the data volume has been deleted, it was attached with the 'delete-on-terminate' flag set true in the '--block-device-mapping' parameter +## the main volume has not been removed, we see that 'delete-on-terminate' is set false in 'openstack server show' +## the web console will allow the boot volume to be delete-on-terminate, the CLI lacks this capability yet REST API clearly supports the functionality +openstack server delete $(openstack server show bioinformatics02 --os-project-domain-name='ldap' --os-project-name 'bioinformatics' -f json | jq -r .id) +openstack volume list --project bioinformatics ++--------------------------------------+----------------------+-----------+------+---------------------------------------------------------------+ +| ID | Name | Status | Size | Attached to | ++--------------------------------------+----------------------+-----------+------+---------------------------------------------------------------+ +| db137b16-67ed-4ade-8d89-fd57d463f573 | | in-use | 100 | Attached to ca402aed-84dd-47ad-b5ba-5fc74978f66b on /dev/vda | +| 1ff863bb-6cb3-4d40-8d25-06b61e974e38 | bioinformatics02boot | available | 50 | | ++--------------------------------------+----------------------+-----------+------+---------------------------------------------------------------+ +``` + +## Test access to VM instances + +```sh +# check the IP addresses allocated to the VM instance +openstack server list --project bioinformatics -c 'Name' -c 'Networks' --long --fit-width ++------------------+-----------------------------------------------------------+ +| Name | Networks | ++------------------+-----------------------------------------------------------+ +| bioinformatics02 | bioinformatics-network=172.16.3.254; provider=10.121.4.92 | +| bioinformatics01 | bioinformatics-network=172.16.3.74, 10.121.4.246 | ++------------------+-----------------------------------------------------------+ + +# gain access to the instances via native provider network ip and the floating ip 
respectively +ssh -i ~/bioinformatics_cloud bioinformatics@10.121.4.92 +ssh -i ~/bioinformatics_cloud bioinformatics@10.121.4.246 +``` \ No newline at end of file diff --git a/8) Testing.md b/8) Testing.md new file mode 100755 index 0000000..c8faf04 --- /dev/null +++ b/8) Testing.md @@ -0,0 +1,91 @@ +## Testing node evacuation + +```sh +# create guest VM +cd;source ~/overcloudrc +openstack server create --image cirros-0.5.1 --flavor m1.small --network internal test-failover +openstack server list -c Name -c Status + ++---------------+--------+ +| Name | Status | ++---------------+--------+ +| test-failover | ACTIVE | ++---------------+--------+ + +# find the compute node that the guest VM is running upon +openstack server show test-failover -f json | jq -r '."OS-EXT-SRV-ATTR:host"' +overcloud-novacomputeiha-3.localdomain + +# login to the compute node hosting the guest VM, crash the host +cd;source ~/stackrc +ssh heat-admin@overcloud-novacomputeiha-3.ctlplane.localdomain +sudo su - +echo c > /proc/sysrq-trigger +# this terminal will fail after a few minutes, the dashboard console view of the guest VM will hang +# node hard poweroff will achieve the same effect + +# check nova services +cd;source ~/overcloudrc +nova service-list + +| 0ad301e3-3420-4d5d-a2fb-2f00ba80a00f | nova-compute | overcloud-novacomputeiha-3.localdomain | nova | disabled | down | 2022-05-19T11:49:40.000000 | - | True | + +# check guest VM is still running, after a few minutes it should be running on another compute node +openstack server list -c Name -c Status +openstack server show test-failover -f json | jq -r .status +# VM Instance has not yet registered as on a down compute node +ACTIVE +# Openstack has detected the a down compute node and is moving the instance, rebuilding refers to the QEMU domain there is no VM rebuilding and active OS state is preserved +REBUILDING +# if you see an error state either IPMI interfaces cannot be contacted by the controllers or there is a storage migration issue, check with 'openstack server show test-failover' +ERROR +# you probably wont see this unless you recover from an ERROR state with 'openstack server stop test-failover' +SHUTOFF + +# check VM instance is on a new node +openstack server show test-failover -f json | jq -r '."OS-EXT-SRV-ATTR:host"' +overcloud-novacomputeiha-1.localdomain + +# Unless the compute node does not come back up you should see it automatically rejoined to the cluster +# If it does not rejoin the cluster try a reboot and wait a good 10 minutes +# If a node still does not come back you will have to remove it and redeploy from the undercloud - hassle +nova service-list +| 1be7bc8f-2769-4986-ac5e-686859779bca | nova-compute | overcloud-novacomputeiha-0.localdomain | nova | enabled | up | 2022-05-19T12:03:27.000000 | - | False | +| 0ad301e3-3420-4d5d-a2fb-2f00ba80a00f | nova-compute | overcloud-novacomputeiha-3.localdomain | nova | enabled | up | 2022-05-19T12:03:28.000000 | - | False | +| c8d3cfd8-d639-49a2-9520-5178bc5a426b | nova-compute | overcloud-novacomputeiha-2.localdomain | nova | enabled | up | 2022-05-19T12:03:26.000000 | - | False | +| 3c918b5b-36a6-4e63-b4de-1b584171a0c0 | nova-compute | overcloud-novacomputeiha-1.localdomain | nova | enabled | up | 2022-05-19T12:03:27.000000 | - | False | +``` + +Other commands to assist in debug of failover behaviour. 
+ +> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/command_line_interface_reference/server#server_migrate # great CLI reference +> https://docs.openstack.org/nova/rocky/admin/evacuate.html # older reference, prefer openstack CLI commands that act as a wrapper to nova CLI + +```sh +# test that the controller nodes can run ipmitool against the compute nodes +ipmitool -I lanplus -H 10.0.9.45 -p 2000 -U USERID -P PASSW0RD chassis status + +# list physical nodes +openstack host list +nova hypervisor-list + +# list VMs, get compute node for an instance +openstack server list +openstack server list -c Name -c Status +nova list +openstack server show -f json | jq -r '."OS-EXT-SRV-ATTR:host"' + +# if you get a VM instance stuck in a power on/off state and you cant evacuate it from a failed node, issue 'openstack server stop ' +nova reset-state --active # set to active state even if it was in error state +nova reset-state --all-tenants # seems to set node back to error state if it was in active state but failed and powered off +nova stop [--all-tenants] +openstack server stop # new command line reference method, puts node in poweroff state, use for ERROR in migration + +# evacuate single VM server instance to a different compute node +# not prefered, older command syntax for direct nova service control +nova evacuate overcloud-novacomputeiha-3.localdomain # moves VM - pauses but doesn't shut down +nova evacuate --on-shared-storage test-1 overcloud-novacomputeiha-0.localdomain # live migration +# prefered openstack CLI native commands +openstack server migrate --live-migration # moves VM - pauses but doesn't shut down, state is preserved (presumably this only works owing to ceph/shared storage) +openstack server migrate --shared-migration # requires manual confirmation in web console, stops/starts VM, state not preserved +``` \ No newline at end of file diff --git a/9) Updating the external HTTPS endpoint(s) TLS cer.md b/9) Updating the external HTTPS endpoint(s) TLS cer.md new file mode 100755 index 0000000..aead686 --- /dev/null +++ b/9) Updating the external HTTPS endpoint(s) TLS cer.md @@ -0,0 +1,330 @@ +# check certificate for the Openstack Horizon dashboard + +```sh +openssl s_client -showcerts -connect stack.university.ac.uk:443 + +Certificate chain + 0 s:C = GB, ST = England, L = University, CN = stack.university.ac.uk + i:C = GB, ST = England, L = University, O = UOE, OU = Cloud, CN = University Openstack CA +``` + +We see the certificate is signed by the CA "University Openstack CA" created in the build guide, this is not quite a self signed certificate but has broadly the same level of security unless the CA cert is not installed on the client machines. + +# Check the certificate bundle recieved from an external signing authority + +## Unpack and inspect + +```sh +sudo dnf install unzip -y +unzip stack.university.ac.uk.zip +tree . 
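+# note: the descriptions to the right of each filename below are annotations, not part of the tree output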
+├── stack.university.ac.uk.cer full certificate chain, order: service certificate, intermediate CA, intermediate CA, top level CA +├── stack.university.ac.uk.cert.cer service certificate for stack.university.ac.uk +├── stack.university.ac.uk.csr certificate signing request (sent to public CA) +├── stack.university.ac.uk.interm.cer chain of intermediate and top level CA certificates, order: intermedia CA (Extended CA), intermediate CA, top level CA 321 +└── stack.university.ac.uk.key certificate private key +``` + +## Check each certificate to determine what has been included in the bundle + +Some signing authorities will not include all CA certificates in the bundle, it is up to you to inspect the service certificate and trace back through the certificate chain to obtain the various CA certificates. + +### certificate information + +Inspect service certificate. + +```sh +#openssl x509 -in stack.university.ac.uk.cert.cer -text -noout +cfssl-certinfo -cert stack.university.ac.uk.cert.cer +``` + +Service certificate attributes. + +``` +"common_name": "stack.university.ac.uk" + + "sans": [ + "stack.university.ac.uk", + "www.stack.university.ac.uk" + ], + "not_before": "2022-03-16T00:00:00Z", + "not_after": "2023-03-16T23:59:59Z", +``` + +### full certificate chain content + +Copy out each certificate from the full chain file `stack.university.ac.uk.cer` to its own temp file, run the openssl text query command `openssl x509 -in -text -noout` to inspect each certificate. + +The full chain certificate file is listed in following order. From the service certificate `stack.university.ac.uk` each certificate is signed by the preceding CA. + +| Certificate context name | purpose | capability | +| --- | --- | --- | +| CN = AAA Certificate Services | top level CA | CA capability | +| CN = USERTrust RSA Certification Authority | intermediate CA | CA capability | +| CN = GEANT OV RSA CA 4 | intermediate CA | CA capability
extended validation capability | +| CN = stack.university.ac.uk | the service certificate | stack.university.ac.uk certificate | + +## Check that the certificate chain is present by default in the trust store on the clients + +Open certmgr in windows, check in "Trusted Root Authorities/Certificates" for each CA/Intermediate-CA certificate, all certificated will likely be present. + +- look for the context name (CN) +- check the "X509v3 Subject Key Identifier" matches the "subject key identifier" from the `openssl x509 -in stack.university.ac.uk.cert.cer -text -noout` output + +Windows includes certificates for "AAA Certificate Services" and "USERTrust RSA Certification Authority", the extended validation Intermediate CA "GEANT OV RSA CA 4" maybe missing, this is not an issue as the client has the top level CAs so can validate and follow the signing chain. + +For modern Linux distros we find only one intermediate CA, this should be sufficient as any handshake using certificates signed from this will be able to validate. If the undercloud can find a CA in its trust store the deployed cluster nodes will most likely have it. + +```sh +trust list | grep -i label | grep -i "USERTrust RSA Certification Authority" + +# generally all certificates imported into the trust store get rendered into this global file +/etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt + +# search the trust store for "USERTrust RSA Certification Authority", copy the content of the certificate field into a temp file for the following stanza +nano -cw /usr/share/pki/ca-trust-source/ca-bundle.trust.p11-kit + +[p11-kit-object-v1] +label: "USERTrust RSA Certification Authority" +trusted: true +nss-mozilla-ca-policy: true +modifiable: false + +# check the "X509v3 Subject Key Identifier" matches the CA in the certificate chain you recieved from the signing authority. +openssl x509 -in -text -noout | grep "53:79:BF:5A:AA:2B:4A:CF:54:80:E1:D8:9B:C0:9D:F2:B2:03:66:CB" +``` + +Browsers such as Edge and Chrome will use the OS trust store, Firefox distributes its own trust store. + +- 3 bar burger -> settings -> security -> view certificates -> authorities -> The UserTrust Network -> USERTrust RSA Certification Authority + +We find the fingerprint from the openssl command "X509v3 Subject Key Identifier" matches the certificate field "subject key identifier" in Firefox. 
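+As mentioned in the 'full certificate chain content' section above, each certificate can be copied out of the full chain file into its own temp file for inspection. A minimal sketch of one way to do this, assuming standard awk and openssl are available (the cert-N.pem filenames are arbitrary):
+
+```sh
+# split the full chain into cert-1.pem, cert-2.pem, ... with one certificate per file
+awk '/BEGIN CERTIFICATE/{n++}{print > ("cert-" n ".pem")}' stack.university.ac.uk.cer
+
+# print the subject and issuer of each certificate to trace the signing chain
+for f in cert-*.pem; do openssl x509 -in "$f" -noout -subject -issuer; done
+```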
+ +## Configure the undercloud to use the CAs + +```sh +trust list | grep label | wc -l +148 + +sudo cp /home/stack/CERT/stack.university.ac.uk/stack.university.ac.uk.interm.cer /etc/pki/ca-trust/source/anchors/public_ca_chain.pem +sudo update-ca-trust extract + +# although the certificate chain includes 3 certificates only 1 is imported, this is the imtermediate CA "CN = GEANT OV RSA CA 4" that is not part of a default trust store +trust list | grep label | wc -l +149 + +# check CA/trusted certificates available to the OS +trust list | grep label | grep -i "AAA Certificate Services" + label: AAA Certificate Services + +trust list | grep label | grep -i "USERTrust RSA Certification Authority" + label: USERTrust RSA Certification Authority + label: USERTrust RSA Certification Authority + +trust list | grep label | grep -i "GEANT OV RSA CA 4" + label: GEANT OV RSA CA 4 + ``` + + +## Configure the controller nodes to use the publicly signed certificate + +NOTE: "PublicTLSCAFile" is used both by the overcloud HAProxy configuration and the undercloud installer to contact https://stack.university.ac.uk:13000 +- The documentation presents the "PublicTLSCAFile" configuration item as the root CA certificate. +- When the undercloud runs various custom Openstack ansible modules, the python libraries run have a completely empty trust store that do not reference the undercloud OS trust store and do not ingest shell variables to set trust store sources. +- For the python to validate the overcloud public API endpoint, the full trust chain must be present. Python is not fussy about the order of certificates in this file, the vendor CA trust chain file in this case was ordered starting with the root CA. + +Backup /home/stack/templates/enable-tls.yaml `mv /home/stack/templates/enable-tls.yaml /home/stack/templates/enable-tls.yaml.internal_ca` +Create new `/home/stack/templates/enable-tls.yaml`, the content for each field is source as follows: + +``` +PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/public_ca_chain.pem' +SSLCertificate: content from stack.university.ac.uk.cer +SSLIntermediateCertificate: use both intermediate certificates, in the order intermediate-2, intermediate-1 (RFC5426) +SSLKey: content from stack.university.ac.uk.key +``` + +The fully populated /home/stack/templates/enable-tls.yaml: + +NOTE: the intermediate certificates configuration item contains both intermediate certificates +Luckliy Openstack does not validate this field and pushes it directly into the HAProxy pem file, the order of the pem is as NGINX preferes (RFC5426), service certificate, intermediate CA2, intermediate CA1, root CA. +During the SSL handshake the client will check the intermediate certificates in the response, if they are not present in the local trust store signing will be checked up to the root CA which will be in the client trust store. + +```yaml +parameter_defaults: + # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon + # Type: boolean + HorizonSecureCookies: True + + # Specifies the default CA cert to use if TLS is used for services in the public network. + # Type: string + # PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/public_ca.pem' + PublicTLSCAFile: '/home/stack/templates/stack.university.ac.uk.interm.cer' + + # The content of the SSL certificate (without Key) in PEM format. + # Type: string + SSLCertificate: | + -----BEGIN CERTIFICATE----- + MIIHYDCCBUigAwIBAgIRAK55qnAAkkQKzs6cusLn+0IwDQYJKoZIhvcNAQEMBQAw + ..... 
+ +vXuwEyJ5ULoW0TO6CuQvAvJsVM= + -----END CERTIFICATE----- + + # The content of an SSL intermediate CA certificate in PEM format. + # Type: string + SSLIntermediateCertificate: | + -----BEGIN CERTIFICATE----- + MIIG5TCCBM2gAwIBAgIRANpDvROb0li7TdYcrMTz2+AwDQYJKoZIhvcNAQEMBQAw + ..... + Ipwgu2L/WJclvd6g+ZA/iWkLSMcpnFb+uX6QBqvD6+RNxul1FaB5iHY= + -----END CERTIFICATE----- + + -----BEGIN CERTIFICATE----- + MIIFgTCCBGmgAwIBAgIQOXJEOvkit1HX02wQ3TE1lTANBgkqhkiG9w0BAQwFADB7 + ..... + vGp4z7h/jnZymQyd/teRCBaho1+V + -----END CERTIFICATE----- + + # The content of the SSL Key in PEM format. + # Type: string + SSLKey: | + -----BEGIN RSA PRIVATE KEY----- + MIIEpAIBAAKCAQEAqXvJwxSDfxjapmRMqFlchTPPpGUi6n0lFbJ7G2YQ+HUBwaEZ + ..... + PcVhU+Ybi7ABCOyRUzZWXDlf6DxF4Kgoe/Ak99nM7v0MIndlbgZBYA== + -----END RSA PRIVATE KEY----- + + # ****************************************************** + # Static parameters - these are values that must be + # included in the environment but should not be changed. + # ****************************************************** + # The filepath of the certificate as it will be stored in the controller. + # Type: string + DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem +``` + +## Update the overcloud nodes to have all of the CA + Intermediate CA certificates imported into their trust stores + +Whilst the overcloud nodes shouldn't use the public certificate for inter-service API communication (this is not a TLS everywhere installation), include this CA chain as a caution. +Backup /home/stack/templates/inject-trust-anchor-hiera.yaml `mv /home/stack/templates/inject-trust-anchor-hiera.yaml /home/stack/templates/inject-trust-anchor-hiera.yaml.internal_ca` +Create new `/home/stack/templates/inject-trust-anchor-hiera.yaml`, the content for each field is source as follows: + +```yaml + CAMap: + root-ca: + content: | + "CN = AAA Certificate Services" certificate content here + intermediate-ca-1: + content: | + "CN = USERTrust RSA Certification Authority" certificate content here + intermediate-ca-2: + content: | + "CN = GEANT OV RSA CA 4" certificate content here +``` + +The fully populated /home/stack/templates/inject-trust-anchor-hiera.yaml. + +```sh +parameter_defaults: + # Map containing the CA certs and information needed for deploying them. + # Type: json + CAMap: + root-ca: + content: | + -----BEGIN CERTIFICATE----- + MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb + ..... + smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg== + -----END CERTIFICATE----- + intermediate-ca-1: + content: | + -----BEGIN CERTIFICATE----- + MIIFgTCCBGmgAwIBAgIQOXJEOvkit1HX02wQ3TE1lTANBgkqhkiG9w0BAQwFADB7 + ..... + vGp4z7h/jnZymQyd/teRCBaho1+V + -----END CERTIFICATE----- + intermediate-ca-2: + content: | + -----BEGIN CERTIFICATE----- + MIIG5TCCBM2gAwIBAgIRANpDvROb0li7TdYcrMTz2+AwDQYJKoZIhvcNAQEMBQAw + ..... + Ipwgu2L/WJclvd6g+ZA/iWkLSMcpnFb+uX6QBqvD6+RNxul1FaB5iHY= + -----END CERTIFICATE----- +``` + +## Deploy the overcloud + +The FQDN of the floating IP served by the HAProxy containers on the controller nodes must have an upstream DNS A record, this should be present as the `CloudName:` parameter. +The DNS hosts should return the A record, for University - the internal DNS server and a publically published record resolve stack.university.ac.uk. 
+ +```sh +grep CloudName: /home/stack/templates/custom-domain.yaml + CloudName: stack.university.ac.uk + +grep DnsServers: /home/stack/templates/custom-domain.yaml + DnsServers: ["144.173.6.71", "1.1.1.1"] + +[stack@undercloud templates]$ grep 10.121.4.14 vips.yaml + PublicVirtualFixedIPs: [{'ip_address':'10.121.4.14'}] + +dig stack.university.ac.uk @144.173.6.71 +dig stack.university.ac.uk @1.1.1.1 + +;; ANSWER SECTION: +stack.university.ac.uk. 86400 IN A 10.121.4.14 +``` + +Use the exact same arguments as the previous deployment to mitigate any unwanted changes to the cluster, for this build the script `overcloud-deploy.sh` should be up to date with this record. + +```sh +./overcloud-deploy.sh +``` + +The update will complete for any overcloud nodes, however the undercloud may time out contacting the external API endpoint with the new SSL certificate changed. +The HAProxy containers on the controller nodes need to be restarted to pick up the new certificates. +If you were to run the deployment again (with no changes and restarted HAProxy containers) it should complete without issue and set the deployment with status 'UPDATE COMPLETE' when checking `openstack stack list`. + +## Restart HAProxy containers on the controller nodes + +Follow the instructions to restart the HAProxy containers on the overcloud controller nodes once the deployment has finished updating the SSL certificate. + +> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.2/html/advanced\_overcloud\_customization/assembly\_enabling-ssl-tls-on-overcloud-public-endpoints#proc\_manually-updating-ssl-tls-certificates\_enabling-ssl-tls-on-overcloud-public-endpoints](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_enabling-ssl-tls-on-overcloud-public-endpoints#proc_manually-updating-ssl-tls-certificates_enabling-ssl-tls-on-overcloud-public-endpoints) + +```sh +grep control /etc/hosts | grep ctlplane + +10.122.0.30 overcloud-controller-0.ctlplane.university.ac.uk overcloud-controller-0.ctlplane +10.122.0.31 overcloud-controller-1.ctlplane.university.ac.uk overcloud-controller-1.ctlplane +10.122.0.32 overcloud-controller-2.ctlplane.university.ac.uk overcloud-controller-2.ctlplane + +# for each controller node +ssh heat-admin@overcloud-controller-0.ctlplane.university.ac.uk +sudo su - +podman restart $(podman ps --format="{{.Names}}" | grep -w -E 'haproxy(-bundle-.*-[0-9]+)?') +``` + +# SSL notes + +Verify a full chain of certificates easily. +```sh +openssl verify -verbose -CAfile <(cat CERT/stack.university.ac.uk/intermediate_ca_2.pem CERT/stack.university.ac.uk/intermediate_ca_1.pem CERT/stack.university.ac.uk/root_ca.pem) CERT/stack.university.ac.uk/service_cert.pe +``` + +Check a certificate key is valid for a certificate. +```sh +openssl x509 -noout -modulus -in CERT/stack.university.ac.uk/stack.university.ac.uk.cert.cer | openssl md5 +(stdin)= 60a5df743ac212edb2b28bf315bce828 +openssl rsa -noout -modulus -in CERT/stack.university.ac.uk/stack.university.ac.uk.key | openssl md5 +(stdin)= 60a5df743ac212edb2b28bf315bce828 +``` + +Format of nginx type cert, haproxy (built on nginx) uses this format. When populating Openstack configuration files with multiple intermediate certs in a single field order multiple intermediate certs as so. +``` +# create chain bundle, order as per RFC5426 (IETF's RFC 5246 Section 7.4.2) search google for nginx cert chain order. 
+cat ../out/service.pem ../out/ca.pem > ../out/reg-chain.pem + +# the order with multiple intermediate certs would resemble +cert +int 2 +int 1 +root +``` \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000..9574445 --- /dev/null +++ b/README.md @@ -0,0 +1,14 @@ +# What is this? + +Openstack RHOSP 16.2 (tripleo) baremetal deployment with: + +virtual undercloud +multiple server types +custom roles +ldap integration +public SSL validation on dashboard/api +standalone opensource Ceph Cluster with erasure-coding +Nvidia cumulus 100G switch(s) configuration with MLAG/CLAG +training documentation - domains, projects, groups, users, flavours, quotas, provider networks, private networks + +more rough guides for manilla with ceph, quay registry integration - not present here diff --git a/university_Network.drawio.png b/university_Network.drawio.png new file mode 100755 index 0000000000000000000000000000000000000000..87069c8c888b29d01a5053ee6243f5653de37046 GIT binary patch literal 70018 zcmeFYcU08Nwl#`j8!!uI6h%c5mCiv$>25mb97M5m&P^v1ZpAE!3Q91bf+#AAIbudY zK@qdt1fpU<3Fh!s?{LpK`|kJdcw>Cu`{%teT5NuyYSpS$wdS0)SjJ(Idjt;-4hjnD zL8Zj=f`U4x1O>Gp*rgM=(xOTT+-T?Ik?}!$tHNFe1@#}`B1&9luSRcD2aSdk{`fW; z2GLoauF>%L(J&a%q*l5dRtxwFuA8kYy+*HA{c#Tlfx#v~&=X)N0TenKj)S4W4>)=v z0)dkK@w{BCw*1);av}sYFjlO$sH`3*_{tK1A5aLm49A0S;06lu$5$^35e2T`Y&Mfx ztX9zVuF)_O5;P1)fy-m*6akYl8b$!$O?smm{6|(Rjn=>`Du>)NQEvs0L$MPf6XDVcPj_t{^42VQphZ0bL*s?=`15*Pv-4JQI$tMqcMLv99-=>K%5 zNy)*h$u1w2Jiw!*RgQ8Ok zuue5w?Wf3GSP|duG&#{s7{*9*kS%(pk!v&Ld~=WCRP!mZC*0k;owR(eZva zmdf$MRW!H4Dc3+1Bm>0lwrl)snu#k%qtq}G#$v+LDG;N@Yw=4!cXT?2Xfe6eYNk(& z84bhR&2XtifduB@5pI$}gH_m-ULW`kOdyzJ5)~rQFiZh!-Fl6}CiiMoNEFmAl&MW< z@B~^;VUbNB)bKhF)|(VhKucgk4;{FeB*viqJc(GOW>b=MVD?BPA4WwoNklmCT|=iN zftL*6pBKg!yX`1@;0c9|u1A6|YN^96qx*dxr&DjXYupB*8|Lt`&?dXcCb1wgTbRIm`HfLHwi+kgMyL0prdfj+^#7z4}^=tj?=f}avKhz7huflCtj1T1#I zdDMXUFajJ(!E#9wj7h*$+uV*oLukBLX!1%~1TWj2q|=EY7KhU2@HnL4rd&r)lHz%4 z6ppG7^a_`f^is18$uJ;%JOpqNFbuDC+WFuf(<(tp9ay(m0>RS+N;ArVAbB_>n_nT6 z5MT}p&CgYsuwYI|0ej-b96Vd8qx-c?6AucZ8EsBC6Doz9pc*v-sZ#2+R+R|O0=sR^+_01G76I9(-6E#YN&&agow-{hzUaSsU&iNoa2O=_$Z9Rp;-G0DEJYH>?#J4xS{aoFI`Y!!z~OkG*RQ3}#So_( z4tEhTT%1}c*OKEs4uaL9p>Y)qvlHU9Lzzq@Oo8Uo41qNy;~8OCgj&XNLmYCQ0*eh8 zE(vT{eva3!0ss3v;EDih@{1^FY?2v`0kaf{DmMd;!gBlynMh?~!6hP$BT3+h=kbsN z6P&QG*fjn0a<$+Tc!!^@dcs##)nqr_y149U>?#A>R@!Gaj6NgfW4f|HAdOgf*z zRf`oMYDs~-hli@MUNBWQq?v;xFg<>YRZK7wwGb0kthC_0Myn1TZ_~^8LJHZJMAB>J zY#I@+_A+I3tDKJXNpv#3NR80gm|6-#gC?;!R-a48AVJwAYm$P;mPi!@D34>qxpaQ7 zSR~TIC1|WzOGRiL6p>dM4`Jd3a%speqyXc&{A7RSf4wK9oQBK8`= zQo>txdJT)HhN74-gPAX85GWcm$?1nVm0BT;Yw+5@FbEF4+ibwQKqQlhPNBr$mOB+_ zw1TH|P^eH111r(V5lXj3#lr|-UMUE3k^$ouG6+~5GTtQ>duVi(D-cFts=Nlf$LWAk z0s#*Z8I%?VLa0+4I98<7j+8TO8kbrMF*tD+AD@fT=-6lrl18-rGkhJY#Vf&UhUHfp<=%q0b`++2B`u(N3vL9G&GKlQ;A(} z4bLfnAOTDv!CmTjq8`E2GvJ^tHXG{K63jk=2F1Z4VKk%_u7c5pB$5*ib6M10T@sfs zl7O{KlA%Rv14AltYDieO#0;m!yBun~kRU)- zEjR1ge4dX@qe`rB6x&9%!-z@_1MBdUi7G7-yus&F{1^(%Y|&7iMzL2TCny5#xljbY z9Y#UB`Bo;9fyZj8ewS3^wVMnGY#>GzUObbOaJ^6|Q-UVq z;cBSQtf67ieyNMkF&Y$38x#ZbB2&(T;RG-ahJrQV5gI>3jmN@#LaG36(Cabr@W5ol z^<=y_9t)+5RZ4dt$9QoPxXbC$F(jZHJjir-yvQZiun=Yr8U~R{g$z0t!y^Rpq!wnN zsI+*ASC8PcFgzKHCZjH5`vdXJVq{ER0mF;uw`q zxD2nMdQ4(B&q0Ozu`==n)#0hS?Q%i>J}i`nRgN)&KE zpC+fO5h!asPUHl`uu1~3h9AhJ4xY&8r6^QZalk|j$%3J{tYUAH!C=C%*wT0(xQwQd zm?F2I2jB&s4GW+LIyo?@Y>f+pwAe60A5TeAGvHE-(MrK;9k2GLxdMy@dBr)}BqD7#g z%V7vVAEAeMTmi5Yut1W;6LHu)mW2S7@s%nWR_>=HIf*wd-g$q0k>M1vnAKquWg& zd(!1ll8^`UgKILr-=Y+7@x`p7ogd8l^Tooy7+90 zRUt8>*knJHZ8I|CT@o3bWpTpE0v#d=#YZ>{E+>`EB5;T@jTaZsblW^IgdJtzdiWR@ 
z4q>6gY(|AW5Y!$K+pd)2bvUn^jN|j*SUMMhqw5qVfdL1>$#6oC)1$x(h-@m}$gyek zIEGXV*1H=pC_EE~AXr3fCc%SXh`cz4najW^@e~4tjU%DVUJnH)MA^JrkpBX+!o(>7 zio&anY9`L1lTvU5Ff0MW6R)Mqga|GJ&i2J?ad@pp3lh1@g|&=Da3o2ZeSE1 z!8ZumOyD$wgosO0I|ayi9nSu@`%;{OK#Vuy#AL2ht`{&N2JnChWl{J%Ob%Tsry+4j zG6-g~nrw#42xvW%2Z0zUE~MLviU%tL@AgCJHip0*0O$mt3xO2Muy$^M9g^rw8eFQ? z1L#Hak)Rf_2LfV5>PA}dE`bFM8!zNw(Fm*FYLjT~2D;wl_2~34Hd(A?ve6cd2dsuQ96RawAy{wR7lr1CmZ-a|~1+=t_dN(;+gXoMxwU_#&ABra-tQ za;Qh(g9so>rIBeMLEuO>L*ekii4rK$sN)c^a2QgKQ(zn}g2UwHTKEA(?~O+Q&yhfz zB$^QGrs{n#5t<^_`VCkWQjEm&RB{=VXut>qnb?Cx8SD%RGT>}0+Qs5pML0WD>|l^2 zaCuUITQEd4lwZY`$W2(Nl?|l>4oM>8=o~;5@j?mR?1B5u7All&=F)|rktBgotX2_l zHkUYFivp01;QDn z?1NbYvCQ!!WKI~I$umQ>tUywUSFjjta6>L6@(eCBnhx``s3da|#HbhP{bYsHOQtw@ z3b#ZIr8?YrCX=OB`I3BAm(-~-p#oqC3&X&5Mghqn(&313tKBcup;>eZD~aJ^L0oDH z!s-`D{W86a;8i(6Pw^Ht&rJp|8n8+?$Hg^5i8es9R9>&ffz**@WSz9#kh5KO zsEYyg7l5oxU{^B|1x5{7fnxdCc(_a=fDmW_a0fV)OoZY)>>gT@&#ds%AP$WQ4&V?{ zDrEcJ6cNU5meEi`6emf5O2Xh-ka#2kC_}&)LOcxVB-;owxQ}WknS^$;Qt6|qlmZ_L zMH8#cBB1I(u?V3D6_2Dr*k)H?v*Pd=94(#d7n79~nuP0gxuv#viIO9diKRpvMCP%& zrE-Lh#peLlk){oUK&oX($~9gWwAEW;Ppw5LtaZHi={L5`0M-jL^Y>i0LX14^Tal z56?B);CeYnsAEE)KD*K>a~m;SGq^?v*L*s($K|pM6>bAYX|Yj!N-$=&#EU_~DQFKB zkMYTQXd@TQD_(;nus903m&r0ou|k@k2oX579uI^7!iD6wqTOTw;Q@TXvy5zw2V;Q; zb_0b$tfaa{5Q@pc(Z|c6HW^L8CYjxSEeq`t`cy=m&l67)*?1TSiNMEDgf^MVjJ29Y zcBD*f0BsXRa+3*Z6Y`lZzlw@tP;mkqN$7;Ts5S=AgU4cAu6VfHsX@?vdI!m1QCfXY ziJs)s+Kon$jN)_HXatBV;0Y659vBAk_cqN&3j@!_LirFUQ!Wu39ke8f5@1{gT8SkQ zd2$LL$N+kX5~IN~>th69_1t3x?y% zEEcQRDWICvXsSbNpcCT>a)}m&rwC07jgw+TN|P8?8$p6s@##_s*F&(#t#*=HAv0Q& z#4?vqVu70UUYApjm6?+iEEt+&RcnY?i&8_PXfPBYLV}_Er9=KMN`l}2)PUhc{hefN zP|)xoYCKNho%#AykAwYh`aaGb3K`e;my|A@GV`U)c^{%a#Pu6c`{{i}p8nRkS>teV zHJtXy_wDAK{qk#AzeQ`3pNxJHeX_Q`Yj~$=$p>F8`$c@{9)1N4vVDKM<`fd7S>7%Y zw`!5+%~@>KvFf>R@4~;{`}(7L?#~^SH*K0GW8&A3n`7Z$=Qm~PF-eWJQBO$iN5%cq zha>JdSQmUV`Us5pLMPoj1`HMt|@A3g=W)Bf?r zc6|#8?GY)B9G5%mA72b~I`AKQ1TRLlU(zc)bHB2zqTy=J^fu$;GYgtO)uqvY{`emK z!Sm|Yo)S;f!?C2L=8tcGgZW!^Sy-oLABtJ33r^vzH~-@pJC5v~gC{KMnSpgSs0>F2 zcVZ&%ZI4>_a^EcZKMp&hHwB(@g|ZOKeZ@$G*Njp+2{$E4O<~ zV%Yh0%dT$TIN3u=Vtmyg`!e$)!h~1S|H(x`o8q#gRFfdqhf}1b^GKe zOln11__V3J!jD)#WUWd|I~y)ytp3M7H^$9w3SBP#>}sstd2`oB$NaQMP0A}nl-;v# z;(t4ywBkzc+^-+XfBmJK>eg;)pI~LsXywB>EmyAxZ`+=yS~>i{XmPiTO_;4wi{`~c zr~RXA@xAv+!LStFH#IagWD4=r8<#fV-)~b#i+1a7HWd^^`dY7(w|#lW$h*TCcckLQ zk<#zxgAU!-4{s(EE!740d%91t<{u;dMMo2OXCHJfG?IO?Z(-~E{WTeQ`y+NPrcZ27 zjWIaO(GW{|uQU{#`g9$ZjekL2N1s>vlcW0CGd;w*!gV~Z{>G|^p;fkJmyiEr`)>j< z@$@?ggFY>V>wU$XZ}=Cxn?g&FDaS!PHb30bXCx`UDRQ~E$Mw*0eVTvsJj$T1jV!;k zYRI$my{4@*?mb4zj7gjb{5RniZvLm|5tDB(Kk4~muD|Yf4@h62Bp&a#BL=yRygzl5 z`#}CM37*$$eC&oEU027c+eeG{uKU_lI?X=-Up(_v{lnQ!tKJ?vDqNEMx#nuGdG9(v zZ})B4yUbX8>hPpx`))1`YJ0pOuKgjxmQ$YKtYO(1lQbuDikDq1a<7S*o0{0gJZ#X4 zM@ZGofxkVxU1nnDeR@{YU_W%HBP=aFD{Rf;bm$7axyKIH#!3Bl?H~W9`>-jJdChA# zUv8TB@kFK}*5A51@#mM@*DG^^_hza#7`CuBinMyccHlN*+x<<|1GyWO9 z!t&+F8<^$`9@|`XVo&pzdaSmx!;LYQYwueOn-ek|mJCkr`TcVvtlza?kInoD08nYg2MYJ}<$R&1ckerD-vhwf9N)UUDPdC;FMas@R67ml&sLx%9eww!=3$Bc2Ol zBj+Hd)})SpgcVOJefDg%yXs2O4B3NGMQiHryqqJPxcANbhSdj{HYfWc?2}`DD;_+d z>C{hLQSyxHH{+ulzO#=#t1iB-U&`Ixh0`mVF+RTi#Sz!6@pi`^M?rWFzVuGQGr?5L zk0(D*@ikMvSeqYq4nJJ}vN-1lMf9WkXNSn!e##>MteuS5!p{BMX$b8{aYnW8G5`Ji zBROw(hO?r1;^57%rk(y2o%Qaq2HX7htNBawMfu!krJB0|xPRWvOTA&G4$+s3)pe(tKMkVb?=vPxr<@+ougmCNkaO6VnMwa~|MqFm z2l33P+Pe9{cCkhX{9b@waA(9tLUZpR;6Dk9?cDkc_!yTGo+rW>wImsQOFn1BZS45Bhe#nB8>h z_Bhk+y)!PJyibft(u`*Z$iVN!*<%=KG76FtQDd!p@n)t}O0I8Q#5}gg^G} z4??flQ71f2nd-{Slo3z2^*f-@m3@rw`Au-)WWsSfUVeI#X!*9t)^~aK3(?E^OIT(f8T5mdpoD_L;$ z>pT1V=AbVPM3I;!yQ@|DVCl6zS=9c_5#0_+M@28l9uzK}+*7v6pB#MtAn2x|?-nre 
zv#u>VbN!qOAG@!07HZUE{49VDnjbi;%X6_#S;{hT@Wq1Fcl8xLhoBf&3?mL?{fzaz zP0(8`sXt1WL9Gu@zIdd`nc{OC{8DDhFIvFBhBRF#hp%efFabnv*+7hKOCalR@-<{- zf>hG8U=E+pUxDlQ^AsX{0LngNZ)nXHUC6?M)jdXMmQ#`=<4LnNt>E{nP9C(WBPA{) zh&e)-%Q~yWW1jGf8L_qLu;E<=tCFS5l&AxP#~xEdQxt?Klz$C$Nzb8!vxp7WB5`r* zkNFQa-9;-etrRW3o;q}L!~y-)Y$VK3Tc~>k(t8B|vZ)~*r4(HJs(?rB9gxG%j?)M0 zYj%gODBz5(NxIpS*=78(oqtBx78r zPJE=8DwKiTwfQ7N7pz}XIe0}{ntJr5=KdXZcu8EaWRbt)qN@dS{>hPx z2GcU)Y6-uLT(~;!X4iq|pNUJq2y@mmaxRo);|o~%p5W2S?CJ+)n-9kzQxYD5;JUe} zG6SEDh`zWruv%suB80T3TmJG}AtSkcQRi=#q^ z5Z>P|qa}VxczfVOegOw0>aG2Vb(dDHost;7Eo$9IU0flV-vyVFUAX|be*2%U4*>bG zWkDc5^8ea>&cw9a8`@m?vF-&T1)sjR$gRWADrf$&mf`?`^I49~a=e(mQ7pzV^4NQK&=SINf zuI&HGWb1T(VN;j%)66`X(7I&p%%ttCb~~4ZOl_;EB@B(~uqLRhchnhn&Fn=vPy}=7 z$}>|2>=?GG9efI4OQB#1<<&1Ook{{Dd4F6N7|E5tM#9MgYdm-4qR5V^{UbyEW?(pZ z$3IN%e_Ilx9R$sye{e2dV_VcAZ|K> z5OS_P?@UDP$l)dwvHgNBuOFV8PD&d#0j{lD2Pl2P;p0iKM0ZbktS8jH!l#D4xqs4k zz;w5SxVY!gdwt;+Nsv!~NN4T4`|tgu#WBH6OM9@yC+nA zr^GRE*|5k98=%Vi=E|7$01{Ok)Zh5^U?r&{WXrHAFv^w>T~yE%Q^loAsUwg8Eo8y9 ztReids4c8q%!nI1*lCP{L&n<`duN=`1>X%*bnY3pI=c8IZ|3YbhYCM^e!e-Tq2t`T zRp@(yvY3{NkMR$`?Y|s8PgGx%^SEX1%l!5AEt`Ly44dMGRJ3hfF>%fFO|w~@x(^K6 zJbOYUC3c?x@IL3)W=8jOJCnV!uGc{j{f*?b}=Ix(KY7!q{}Y=v0Qb>`rs&c z>bG}e3$baIn8aiOYH;`c`JNPgY-kB`zC8XG!azjTSTd({2%NPX+d!N_qi z{T25^-}CaV#1-3j?u=^8x%;$V+_UbRG2c%5S}qJE06MIP&IXg(|II}0xY6RyS+ImC zmHm$E?E`6Q__T++Q~=#X@SpzLZQ{PAoaBgV)QXcoQ!n@UxFR=cnCTq2xqKdVimzE! z6jnW@1suuOF9q5|SJX!tVxug>Ld31D`^Je+6Bz)%hne$lnk8>gx7WX^0?<09ciOpGO%4u0&I%o5orrFa> zvt5X#5A0>@RWECP-r%JjmB_-G$2Q3GOEPmShuxMwy=UvW-g$f&e0*Afg&s%L-<y89(4cmW+@XEOC05_BV(wb2o_p~4 z+^^O3i--Imr4F5aM7U(y`(^O0E$faID9xMu_i1@lHXX|!J1@6#9js{K%5vqf4s>|R znxc7E*22q=M~d{{f1NV*@Qir(nHLy;E6w+`>stQ(J@bDKD>2?q*pRTX?_DRSe!!CW zi7zMCH1Sz^b3{MZPV>dh`&Rd8o%w0D@k>RwmctU92pn8FExNhmy{Re$sH%C z8z&uVAM<2z)r%)fHm@H9-?ZlA!>`^9&V%1df4q{HIG%kP{C4jC;~#EKFWbJ~e){$O zkE;};XKuSYamJV3uYa2@$sB(ypEO&t+rmw2d~xxWFX`xdW83?(BbDOP<6x|luWLST z{`on){095cg0-e~%Y56-wxZm7v*(xs!eHZ$@dcc3FE;odT$utBeQ9#UWDmpGW*r>2 z6VS2iN^NplT9y!R&FrF|7rUpGzO{5Nh$meIEswY?!Q3)_!uoHXj1S*m_|lI&?EU%1 zE8^6Cle2^s`6;^T6L*O5-saV$KD$}ZpMfUbMsB5-=2M)&3ap(JJ(ne%MzJ3`0BYeAB!!YUk2l`)?VB zv+qEWVtIXji8a$&<+`HrLb1Cfh+s7ALLo<=I!hwma*M$NE2t zT2~edf(K}8w;P?!?$#$#RI`Ey_I+_7o-REh1)q1(-`l$ho&rBgeNT1;vE1=G^Dzo_`Mtf6{y zT1LW94Iy?{%%tMyH*XKmTJZVHon6ABdLUCRJ-tTKEipX$2yg7X&exXri&iz)m#1G> zPTec`V%hS^s9c=xTV(qFrA?ifz~oio7p`*mw0$+rr2p@Tn-vgRy`vpQkR*!t_a=J7@e`;nfH6gZVE8;`&dQyY@Uf81()Wm_a3Sx!mwYTN2|B=e1e%8_IQCPAHcnVfay?f3nv2_S6}cu4)KO8 z+t!siB5cA_24<=a@nlrDzxe2FCopH~zSe7(7ZmmC5V6xjt3=8tvfsQv$=E@L(&ZW4*gCH-T08zP z8EIzq{?^A}HzkZ5l8ql6YF1n*;5_ORF*nC#h|7GkeE_|GBkuR9I!}%HqX{6^?f7Mk zRFcgBajXlLzdZXMWMvWxC^=*AU3g3b%HsI7=|D#Ki+n%t!sd_dj9?1`=>Jvk%)0aK zHaUc>5@7V=ZNpN7Vg=7XS9GY&1vBgTwZmS(&ewr3_zxB1am)&d{~E|w3+sR{StS7} z{G>@2{6u%IrC(})S=P1Cnux6aOA0uHN4%}Z0sXG0ZdehgSDMYaxPhr+*@o@99@-pq z6FRDF$;h~ijk&6*bAm3!*wEqQmOTDU8&pWZ1eWv`p*}dJ@8ZAdZ;`+6l6VlRD=;Cc zy~3E7@^gPGm5>~&1v6XUv2(&I&^ri&|CqqDiT(Gt0>9^XE(3aCt_qOz z_TIa|AJ-Q>j}Dt~5C%FY!Y1^bM%_$W`qkNcOV{)c*=6Hu&yReNo97G33dvmpf^te+ z5?G*5x^VVS8Y5i}>4)d``ekXw2=8gIi}g+gvocI41S@yZ6F}VmpX{B)uyEt;y&3kp zONWwkCwuk+3An85>sx!KtMBZSgG1K-gMi$}-)#^VfSoCz!TkHq1QsT}S2q5SHORd{ z?cX2wo2hWgMD{8jVn1?hb3LHk6+-{s5R0U>A1DT%-KN?TaBB0eftg?+-`lW>rlX?WglA-C5Q zuJ`o0>)R=HN^k<*@l)~rziI*gI*nF!JalBVxG!_l6fA7T71Z1>E-X;mTT5^6nLegN z=bk3$dIkUT?nwq9L|5#V9w<;u#*8~$wJ16TANBVwATY|mVmK{=S+ei)=kmq=Pfr^A zW)!UM7W>s%yv2e&qB?o{xf_2=J!@zFYo(IYBc+o1Zz@9UH(Sy& z8vk<2Cd+RTt9lLL7uH^Z{(N}Ex_P{0U!3M@cE7VL!r}BwlU*-3s^E)73r+WGg!mZH zMQmoq$kEKahYN}S7qY~zj&;oKfrN}m5kPizdh!Xd`Ow{zwc~fB_TLU`I}@@Tge`M9 
[binary image data omitted]