seed repo

main
tseed 2022-11-20 09:48:18 +00:00
parent a943bbe97d
commit 88a5340fea
21 changed files with 1375 additions and 88 deletions

@ -1,92 +1,9 @@
# kolla_openstack_terraform_ansible
# What is this?
Tech test from a recent interview, dipping my toes back into Terraform using the Openstack provider.
## Getting started - Build minimal Openstack testbench on VirtualBox
- Build provider network, project, user, image, flavor and seed some Terraform vars
- Terraform provision some small networks and instances, build ansible inventory and run playbook to test connectivity between instances

@ -0,0 +1,53 @@
# VirtualBox setup
## Openstack VMs
- 2 Openstack VMs with identical configuration.
- 8 cores, 8GB RAM, 40GB disk, named node1 + node2 during installation.
- 1 Deployment VM named deploy.
- Ubuntu server 22.04 installed with defaults and only SSH server enabled.
- Ensure all network adapters are set to allow promiscuous mode; this allows multiple MAC addresses to originate from a single interface (a CLI sketch follows below).
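The same setting can be applied from the host CLI instead of the GUI; an untested sketch, assuming the VM names above and that the VMs are powered off:
```sh
# allow promiscuous mode on adapters 1-3 of each VM
# (unused adapter indices on the deploy VM are harmless)
for vm in node1 node2 deploy; do
  for nic in 1 2 3; do
    VBoxManage modifyvm "$vm" --nicpromisc${nic} allow-all
  done
done
```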
## Openstack VM networks
> Note: The Deployment VM only needs to be on the api and bridged networks.

Create the api network; this allows outbound traffic for the Openstack nodes (package updates etc).
As the cluster configuration is minimal, port-forward rules are in place to access the dashboard on the api network.
![4a769a446c739c1b47fabcbf2cd8ebe5.png](4a769a446c739c1b47fabcbf2cd8ebe5.png)
Adapter1 on SNAT network api.
![d2d44788efc359b8ed1aac1ac8774a32.png](d2d44788efc359b8ed1aac1ac8774a32.png)
Adapter2 on an internal network with no NAT; this carries the VLANs. The network does not need creating in advance: type the name into the dropdown and ensure the same name is used on all nodes.
![3253e97a0ebc8cc91abb310d620aa72c.png](3253e97a0ebc8cc91abb310d620aa72c.png)
Adapter3 bridged to the host network interface; this is used for the Openstack provider network.
![1e801b75b336a73154c543004dd512d7.png](1e801b75b336a73154c543004dd512d7.png)
## Disable Hyper-V extensions that block VirtualBox
```sh
bcdedit /enum | findstr /i hypervisorlaunchtype
bcdedit /set hypervisorlaunchtype off
#bcdedit /set hypervisorlaunchtype auto # to re-enable
shutdown /r /t 0
```
## Enable nested virtualization for each Openstack VM
```ps
cd "C:\Program Files\Oracle\VirtualBox"
.\VBoxManage.exe modifyvm "node1" --nested-hw-virt on
.\VBoxManage.exe modifyvm "node2" --nested-hw-virt on
```
Check the nodes can run nested virtualization (run on each Openstack node).
```sh
sudo apt-get install cpu-checker
sudo kvm-ok
```

@ -0,0 +1,57 @@
# Set OS network configuration
- This configuration uses named interfaces for ease.
- The network topology represents the viable minimum logical networks required to build a multinode (two-node) cluster (no baremetal provisioning).
- The API and PROVIDER networks are on their own physical interfaces; the TUNNEL and STORAGE networks are VLANs on another physical interface.
```sh
nano -cw /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    api:
      match:
        name: enp0s3
      set-name: api
      addresses: [192.168.30.60/24]
      gateway4: 192.168.30.1
      nameservers:
        addresses: [192.168.140.2, 192.168.140.1]
    servicenet:
      match:
        name: enp0s8
      set-name: servicenet
    provider:
      match:
        name: enp0s9
      set-name: provider
  vlans:
    tunnel:
      id: 31
      link: servicenet
      addresses: [192.168.31.60/24]
    storage:
      id: 32
      link: servicenet
      addresses: [192.168.32.60/24]
```
The `netplan apply` command will not work with this config owing to the range of changes; reboot the node instead.
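After the reboot, a quick sanity check that the renamed interfaces and VLAN addresses came up as intended:
```sh
# brief per-interface address and link-state summary
ip -br addr
ip -br link
```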
# Disable annoyances
```sh
systemctl stop ufw apparmor
systemctl disable ufw apparmor
```
# Setup service account / sudoers
```sh
groupadd -r -g 1001 openstack && useradd -r -u 1001 -g 1001 -m -s /bin/bash openstack
echo "%openstack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/openstack
chmod 0440 /etc/sudoers.d/openstack
passwd openstack # password is Password0
exit
```
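To confirm the sudoers drop-in took effect, a quick check (run as root or any sudoer):
```sh
# -n makes sudo fail rather than prompt, proving the NOPASSWD rule works
su - openstack -c 'sudo -n true && echo "passwordless sudo OK"'
```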

@ -0,0 +1,82 @@
# Setup deployment host
> https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html
```sh
# install dependencies
sudo apt-get update -y
sudo apt-get install python3-dev python3-venv libffi-dev gcc libssl-dev sshpass ca-certificates jq -y
python3 -m venv /home/openstack/kolla_zed
# enter venv, install kolla
source /home/openstack/kolla_zed/bin/activate
pip install -U pip
pip install 'ansible>=4,<6'
pip install git+https://opendev.org/openstack/kolla-ansible@master
kolla-ansible install-deps
# create kolla config files
sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
cp -r /home/openstack/kolla_zed/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
cp /home/openstack/kolla_zed/share/kolla-ansible/ansible/inventory/multinode .
# create ansible config
sudo mkdir /etc/ansible
sudo touch /etc/ansible/ansible.cfg
sudo nano -cw /etc/ansible/ansible.cfg
[defaults]
host_key_checking=False
pipelining=True
forks=100
# populate hosts
sudo nano -cw /etc/hosts
### openstack
192.168.30.60 node1 control1
192.168.30.61 node2 compute1
192.168.30.62 node3 ceph1
# setup passwordless sudo for the openstack user on the deployment node, a kolla requirement
sudo su -
echo "%openstack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/openstack
chmod 0440 /etc/sudoers.d/openstack
```
# Create local ssh config
Generate keypair.
```sh
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -C "" <<< y
```
Push keys to Openstack nodes.
```sh
sshpass -p "Password0" ssh-copy-id -o StrictHostKeyChecking=no openstack@192.168.30.60
```
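A convenience loop covering all three nodes from the /etc/hosts entries above (a sketch; trim the list if you only built two nodes):
```sh
# push the deployment key to every Openstack node
for ip in 192.168.30.60 192.168.30.61 192.168.30.62; do
  sshpass -p "Password0" ssh-copy-id -o StrictHostKeyChecking=no "openstack@${ip}"
done
```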
Create the local ssh config; ansible targets `node1` as `control1` using this configuration.
```sh
nano -cw ~/.ssh/config
###### openstack
Host control1
Hostname 192.168.30.60
User openstack
IdentityFile ~/.ssh/id_rsa
Host compute1
Hostname 192.168.30.61
User openstack
IdentityFile ~/.ssh/id_rsa
Host ceph1
Hostname 192.168.30.62
User openstack
IdentityFile ~/.ssh/id_rsa
```
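Before handing the hosts to ansible, confirm key-based login works for each alias:
```sh
# BatchMode forces a failure instead of a password prompt if the key was not accepted
for host in control1 compute1 ceph1; do
  ssh -o BatchMode=yes "$host" hostname
done
```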

docs/4) Deploy Openstack.md Normal file

@ -0,0 +1,145 @@
# Configure kolla
Edit the ansible inventory entries to set credentials, networks and roles per host class (as placed in the /etc/hosts file).
The inventory follows ansible convention to allow per-host configuration with less reliance on the central ansible.cfg, aiding portability and simplifying the configuration of nodes with heterogeneous hardware.
Replace the following sections at the top of the file. This is where the node classes (control/network(er)/compute/storage/deployment hosts) are defined; the rest of the inventory entries control placement of services on these main classes.
The use of \[class:children\] allows us to stack role services and make nodes dual-purpose in this small environment.
```sh
# enter the venv
# source /home/openstack/kolla_zed/bin/activate
nano -cw multinode
# These initial groups are the only groups required to be modified.
# The additional groups are for more control of the environment.
[control]
control1 network_interface=provider neutron_external_interface=provider neutron_bridge_name="br-ex" tunnel_interface=tunnel storage_interface=storage ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_user=openstack ansible_password=Password0 ansible_become=true
[network:children]
control
[compute]
compute1 network_interface=provider neutron_external_interface=provider neutron_bridge_name="br-ex" tunnel_interface=tunnel storage_interface=storage ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_user=openstack ansible_password=Password0 ansible_become=true
[monitoring:children]
control
[storage:children]
compute
[deployment]
localhost ansible_connection=local
# additional groups
```
Test ansible node connectivity.
```sh
ansible -i multinode all -m ping
```
Populate /etc/kolla/passwords.yml; this autogenerates passwords/keys/tokens for the various services and endpoints.
```sh
kolla-genpwd
```
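To spot-check the result (key name assumed from a stock passwords.yml):
```sh
grep keystone_admin_password /etc/kolla/passwords.yml
```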
Edit the global config file `nano -cw /etc/kolla/globals.yml`.
> Set the external horizon/control-API endpoint IP with the `kolla_internal_vip_address` variable.
> Use prebuilt containers (no compile+build) with `kolla_install_type`.
> `enable_neutron_provider_networks` is used when we want nova instances to have an interface directly in the provider network.
```sh
(kolla_zed) ocfadm@NieX0:~$ cat /etc/kolla/globals.yml | grep -v "#" | sed '/^$/d'
---
config_strategy: "COPY_ALWAYS"
kolla_base_distro: 'ubuntu'
kolla_install_type: "binary" # now deprecated and absent from the example config; nothing appeared to compile containers, so binary seems to be the default
kolla_internal_vip_address: "192.168.30.63"
openstack_logging_debug: "True"
enable_neutron_provider_networks: "yes"
```
# Deploy kolla
```sh
# enter the venv
source /home/openstack/kolla_zed/bin/activate
# install repos, package dependencies (docker, systemd scripts), pulls containers, groups, sudoers, firewall and more
kolla-ansible -i ./multinode bootstrap-servers
# pre flight checks
kolla-ansible -i ./multinode prechecks
# deploy
kolla-ansible -i ./multinode deploy
```
# Post deployment
```sh
# enter the venv
source /home/openstack/kolla_zed/bin/activate
# install openstack cli tool
pip install python-openstackclient
# generate admin-openrc.sh and octavia-openrc.sh
kolla-ansible post-deploy
# source environment and credentials to use the openstack cli
. /etc/kolla/admin-openrc.sh
# check cluster
openstack host list
openstack hypervisor list
openstack user list
# find horizon admin password
cat /etc/kolla/admin-openrc.sh
OS_USERNAME=admin
OS_PASSWORD=dPgJtPrWTx0whQhV8p6G2QaK4dI4fEFmIRkiPjuB
OS_AUTH_URL=http://192.168.30.63:5000
# the controller VIP exposes the external API (used by the openstack cli) and horizon web portal
http://192.168.30.63
```
# Login to the Horizon dashboard
The dashboard sits on the API network; in this build that network resides in a VirtualBox SNAT network.
Create a port forward to allow the workstation to access the dashboard at http://127.0.0.1.
![d4d81f00e1bee890f85a75d2ad78e602.png](d4d81f00e1bee890f85a75d2ad78e602.png)
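The same forward can be created from the host CLI; a sketch assuming the SNAT network is a VirtualBox NAT network named `api` and the VIP fixed above:
```sh
# rule format: <name>:<proto>:[<host-ip>]:<host-port>:[<guest-ip>]:<guest-port>
VBoxManage natnetwork modify --netname api --port-forward-4 "horizon:tcp:[]:80:[192.168.30.63]:80"
```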
# Start/Stop cluster
```sh
# enter venv, load openstack credentials
source /home/openstack/kolla_zed/bin/activate
source /etc/kolla/admin-openrc.sh
# stop
kolla-ansible -i multinode stop --yes-i-really-really-mean-it
shutdown -h now # all nodes
# start
kolla-ansible -i multinode deploy
```
# Change / Destroy cluster
Sometimes containers and their persistent storage on local disk can get borked; you may be able to `docker rm` + `docker rmi` select containers on the broken node and then run a reconfigure to re-pull them. With large deployments, always run a mirror-synced local registry and pin container tags.
```sh
# reconfigure, not sure if this is effective with physical network changes owing to database entries?
kolla-ansible -i ./multinode reconfigure
# destroy
kolla-ansible -i ./multinode destroy --yes-i-really-really-mean-it
```
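A hedged sketch of the selective recovery described above, for a single broken service on one node (`neutron_server` is just an example container name):
```sh
# on the affected node: remove the broken container and its image
docker ps -a --filter name=neutron_server   # find the broken container
docker rm -f neutron_server                 # remove it
docker rmi <image-id-from-docker-images>    # drop the image so it is re-pulled
# back on the deployment host, scoped to the affected service
kolla-ansible -i ./multinode reconfigure --tags neutron
```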

@ -0,0 +1,566 @@
# Create Terraform vars template
Create project directory.
```sh
mkdir /home/openstack/stack
```
Create vars template.
```sh
nano -cw /home/openstack/stack/vars.tf.envsubst
## vars
variable "provider_config" {
  type = map(string)
  default = {
    auth_url  = "${AUTH_URL}"
    auth_user = "${ACCOUNT}"
    auth_pass = "${ACCOUNT_PASSWORD}"
    project   = "${PROJECT}"
  }
}
variable "extnetid" {
  type    = string
  default = "${PROVIDER_NET_ID}"
}
variable "image" {
  type    = string
  default = "${IMAGE}"
}
variable "flavor" {
  type    = string
  default = "${FLAVOR}"
}
locals {
  project = "${var.provider_config["project"]}"
  pubkey  = "${PUB_KEY}"
}
```
# Initial cluster configuration
This script:
- Creates the provider network
- Creates a project
- Creates project-based quotas
- Creates a user with RBAC
- Uploads instance disk images
- Creates flavours
- Renders the Terraform vars file
```sh
touch /home/openstack/stack/configure_cluster.sh
chmod +x /home/openstack/stack/configure_cluster.sh
nano -cw /home/openstack/stack/configure_cluster.sh
#!/usr/bin/env bash
# load venv and credentials
source /home/openstack/kolla_zed/bin/activate
source /etc/kolla/admin-openrc.sh
# vars
OPENSTACK_CLI=openstack
EXT_NET_CIDR='192.168.140.0/24'
EXT_NET_RANGE='start=192.168.140.200,end=192.168.140.254'
EXT_NET_GATEWAY='192.168.140.1'
PROJECT='test'
ACCOUNT='tseed'
ACCOUNT_PASSWORD='Password0'
ACCOUNT_EMAIL='toby.n.seed@gmail.com'
# check cluster
$OPENSTACK_CLI host list
$OPENSTACK_CLI hypervisor list
$OPENSTACK_CLI user list
# provider shared network
$OPENSTACK_CLI network create --external --share --provider-physical-network physnet1 --provider-network-type flat provider_network
$OPENSTACK_CLI subnet create --dhcp --network provider_network --subnet-range ${EXT_NET_CIDR} --gateway ${EXT_NET_GATEWAY} --allocation-pool ${EXT_NET_RANGE} provider_subnet
# create project
$OPENSTACK_CLI project create --domain default --description "guest project" $PROJECT
# set quota on project
$OPENSTACK_CLI quota set --instances 10 $PROJECT
$OPENSTACK_CLI quota set --cores 4 $PROJECT
$OPENSTACK_CLI quota set --ram 6144 $PROJECT
$OPENSTACK_CLI quota set --gigabytes 30 $PROJECT
$OPENSTACK_CLI quota set --volumes 10 $PROJECT
$OPENSTACK_CLI quota set --backups 0 $PROJECT
$OPENSTACK_CLI quota set --snapshots 0 $PROJECT
$OPENSTACK_CLI quota set --key-pairs 20 $PROJECT
$OPENSTACK_CLI quota set --floating-ips 20 $PROJECT
$OPENSTACK_CLI quota set --networks 10 $PROJECT
$OPENSTACK_CLI quota set --routers 10 $PROJECT
$OPENSTACK_CLI quota set --subnets 10 $PROJECT
$OPENSTACK_CLI quota set --secgroups 20 $PROJECT
$OPENSTACK_CLI quota set --secgroup-rules 100 $PROJECT
# create user
$OPENSTACK_CLI user create --password ${ACCOUNT_PASSWORD} --email ${ACCOUNT_EMAIL} $ACCOUNT
# set the default project in the web console for user
$OPENSTACK_CLI user set --project $PROJECT $ACCOUNT
$OPENSTACK_CLI project show $(openstack user show $ACCOUNT --domain default -f json | jq -r .default_project_id) -f json | jq -r .description
# set RBAC for guest project
$OPENSTACK_CLI role add --project $PROJECT --user $ACCOUNT admin
# download the cirros test image for admin project
wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
$OPENSTACK_CLI image create --disk-format qcow2 --container-format bare --private --project admin --property os_type=linux --file ./cirros-0.5.1-x86_64-disk.img cirros-0.5.1
# download the ubuntu image for all projects
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
$OPENSTACK_CLI image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./bionic-server-cloudimg-amd64.img ubuntu_18.04
# create a flavour for the admin project
$OPENSTACK_CLI flavor create admin.tiny --ram 1048 --disk 1 --vcpus 2 --private --project admin
# create flavours for the guest project
$OPENSTACK_CLI flavor create m1.tiny --ram 512 --disk 5 --vcpus 1 --private --project $PROJECT
$OPENSTACK_CLI flavor create m1.smaller --ram 1024 --disk 10 --vcpus 1 --private --project $PROJECT
# collect vars
export PROJECT=$PROJECT
export ACCOUNT=$ACCOUNT
export ACCOUNT_PASSWORD=$ACCOUNT_PASSWORD
export AUTH_URL=$(openstack endpoint list -f json | jq -r '.[] | select(."Service Name" == "keystone" and ."Interface" == "public") | .URL')
export PROVIDER_NET_ID=$(openstack network list -f json | jq -r '.[] | select(."Name" == "provider_network") | .ID')
export IMAGE=$(openstack image list -f json | jq -r '.[] | select(."Name" == "ubuntu_18.04") | .ID')
export FLAVOR=$(openstack flavor list --all -f json | jq -r '.[] | select(."Name" == "m1.tiny") | .ID')
export PUB_KEY=$(cat /home/openstack/.ssh/id_rsa.pub)
# render terraform vars.tf
envsubst < /home/openstack/stack/vars.tf.envsubst > /home/openstack/stack/vars.tf
```
# Install Terraform
```sh
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
```
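Confirm the binary meets the `required_version` constraint (>= 0.14.0) used later in stack.tf:
```sh
terraform -version
```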
# Create remaining project config and templates
## Create user data template
The salted hash password was generated with `openssl passwd -6 -salt xyz Password0`.
This could be automated in Terraform on a per-instance basis, resulting in a different hash for the same password to deter anyone who may be able to intercept or inspect cloud-init on instantiation (it may also be visible at the Openstack metadata endpoint).
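A sketch of that per-instance idea, assuming nothing beyond `openssl`: a random salt yields a different hash each run for the same password.
```sh
# different salt, different hash, same password
openssl passwd -6 -salt "$(openssl rand -hex 8)" Password0
```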
```sh
nano -cw /home/openstack/stack/user_data.sh
#cloud-config
ssh_pwauth: true
groups:
  - admingroup: [root,sys]
  - openstack
users:
  - name: openstack
    primary_group: openstack
    lock_passwd: false
    passwd: $6$xyz$4tTWyuHIT6gXRuzotBZn/9xZBikUp0O2X6rOZ7MDJo26aax.Ok5P4rWYyzdgFkjArIIyB8z8LKVW1wARbcBzn/
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa ${pubkey}
```
## Create Ansible inventory template
This template gets rendered by Terraform. Ansible will also work with the cloud-init-seeded SSH public key.
```sh
nano -cw inventory.tmpl
[nodes]
%{ for index, name in subnet1_instance_name ~}
${name} ansible_host=${subnet1_instance_address[index]} ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' ansible_user=${user} ansible_password=${password} ansible_become=true
%{ endfor ~}
%{ for index, name in subnet2_instance_name ~}
${name} ansible_host=${subnet2_instance_address[index]} ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' ansible_user=${user} ansible_password=${password} ansible_become=true
%{ endfor ~}
[subnet1_instances]
%{ for index, name in subnet1_instance_name ~}
${name}
%{ endfor ~}
[subnet2_instances]
%{ for index, name in subnet2_instance_name ~}
${name}
%{ endfor ~}
# when rendered this should look a little like the following; note the provider-network IPs supplied by the floating IPs
[nodes]
subnet1_test0 ansible_host=192.168.140.230 ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' ansible_user=openstack ansible_password=Password0 ansible_become=true
subnet1_test1 ansible_host=192.168.140.223 ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' ansible_user=openstack ansible_password=Password0 ansible_become=true
subnet2_test0 ansible_host=192.168.140.217 ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' ansible_user=openstack ansible_password=Password0 ansible_become=true
[subnet1_instances]
subnet1_test0
subnet1_test1
[subnet2_instances]
subnet2_test0
```
## Create Ansible ping test playbook
```yaml
# nano -cw ping_test.yml
---
- name: build ping_map
  hosts: localhost
  become: no
  gather_facts: false
  tasks:
    - name: build ping_map
      ansible.builtin.set_fact:
        _ping_map: "{{ _ping_map | default({}) | combine({entry: []}, recursive=True) }}"
      loop: "{{ inventory_hosts }}"
      loop_control:
        loop_var: entry
      vars:
        inventory_hosts: "{{ hostvars[inventory_hostname]['groups']['all'] }}"
    # - ansible.builtin.debug:
    #     msg:
    #       - "{{ _ping_map }}"
    - name: populate ping_map
      ansible.builtin.set_fact:
        _ping_map: "{{ _ping_map | default({}) | combine({source: destination_list_append}, recursive=True) }}"
      loop: "{{ target_hosts | product(target_hosts) }}"
      loop_control:
        loop_var: entry
      vars:
        target_hosts: "{{ hostvars[inventory_hostname]['groups']['all'] }}"
        source: "{{ entry[0] }}"
        destination: "{{ entry[1] }}"
        destination_list: "{{ _ping_map[source] }}"
        destination_list_append: "{{ destination_list + [destination] }}"
      when: not entry[0] == entry[1]
    # - ansible.builtin.debug:
    #     msg:
    #       - "{{ _ping_map }}"
    - name: write global ping_map
      ansible.builtin.set_fact:
        _global_ping_map: "{{ _ping_map }}"
      delegate_to: localhost
      delegate_facts: true
- name: ping test
  hosts: all
  become: yes
  gather_facts: true
  tasks:
    - name: load global ping_map
      ansible.builtin.set_fact:
        _ping_map: "{{ hostvars['localhost']['_global_ping_map'] }}"
      when:
        - hostvars['localhost']['_global_ping_map'] is defined
    # - ansible.builtin.debug:
    #     msg:
    #       - "{{ _ping_map }}"
    - name: ping neighbours
      ansible.builtin.shell: |
        echo SOURCE {{ inventory_hostname }}
        echo DESTINATION {{ destination_target }}
        echo
        ping -Rn -c 1 {{ destination_ip }}
      loop: "{{ destination_targets }}"
      loop_control:
        loop_var: entry
      vars:
        destination_targets: "{{ _ping_map[inventory_hostname] }}"
        destination_target: "{{ entry }}"
        destination_ip: "{{ hostvars[destination_target]['ansible_default_ipv4']['address'] }}"
        source: "{{ inventory_hostname }}"
      register: _ping_results
    - name: print results
      ansible.builtin.debug:
        msg:
          - "{{ output }}"
      loop: "{{ _ping_results['results'] }}"
      loop_control:
        loop_var: idx
        label: "{{ destination }}"
      vars:
        destination: "{{ idx['entry'] }}"
        output: "{{ idx['stdout_lines'] }}"
```
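The playbook can also be run by hand against a rendered inventory, exactly as Terraform invokes it later:
```sh
ansible-playbook -i ansible_inventory ping_test.yml
```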
## Create Terraform configuration
```sh
nano -cw /home/openstack/stack/stack.tf
## load provider
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.48.0"
    }
  }
}
## configure provider
provider "openstack" {
  auth_url    = var.provider_config["auth_url"]
  user_name   = var.provider_config["auth_user"]
  password    = var.provider_config["auth_pass"]
  tenant_name = var.provider_config["project"]
  region      = "RegionOne"
}
## vars
variable "dns" {
  type    = list(string)
  default = ["1.1.1.1", "8.8.8.8"]
}
variable "subnet1" {
  type = map(string)
  default = {
    subnet_name    = "subnet1"
    cidr           = "172.16.10.0/24"
    instance_count = "2"
  }
}
variable "subnet2" {
  type = map(string)
  default = {
    subnet_name    = "subnet2"
    cidr           = "172.16.11.0/24"
    instance_count = "1"
  }
}
## data sources
data "openstack_networking_network_v2" "exnetname" {
  network_id = var.extnetid
}
#output "exnet_name" {
#  value = data.openstack_networking_network_v2.exnetname.name
#}
## resources
# router
resource "openstack_networking_router_v2" "router" {
  name                = "router_${local.project}"
  admin_state_up      = true
  external_network_id = var.extnetid
}
# network1
resource "openstack_networking_network_v2" "network1" {
  name = "network1_${local.project}"
}
# network2
resource "openstack_networking_network_v2" "network2" {
  name = "network2_${local.project}"
}
# subnet1
resource "openstack_networking_subnet_v2" "subnet1" {
  name            = "${var.subnet1["subnet_name"]}_${local.project}"
  network_id      = openstack_networking_network_v2.network1.id
  cidr            = var.subnet1["cidr"]
  dns_nameservers = var.dns
}
# subnet2
resource "openstack_networking_subnet_v2" "subnet2" {
  name            = "${var.subnet2["subnet_name"]}_${local.project}"
  network_id      = openstack_networking_network_v2.network2.id
  cidr            = var.subnet2["cidr"]
  dns_nameservers = var.dns
}
# router interface subnet1
resource "openstack_networking_router_interface_v2" "interface1" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet1.id
}
# router interface subnet2
resource "openstack_networking_router_interface_v2" "interface2" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet2.id
}
# security group
resource "openstack_compute_secgroup_v2" "ingress" {
  name        = local.project
  description = "ingress rules"
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "192.168.140.0/24"
  }
  rule {
    from_port   = -1
    to_port     = -1
    ip_protocol = "icmp"
    cidr        = "192.168.140.0/24"
  }
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    self        = true
  }
  rule {
    from_port   = -1
    to_port     = -1
    ip_protocol = "icmp"
    self        = true
  }
}
# floating ip instance_subnet1
resource "openstack_compute_floatingip_v2" "instance_subnet1_fip" {
  count = var.subnet1["instance_count"]
  pool  = data.openstack_networking_network_v2.exnetname.name
  #depends_on = ["openstack_networking_router_interface_v2.router"]
}
# floating ip instance_subnet2
resource "openstack_compute_floatingip_v2" "instance_subnet2_fip" {
  count = var.subnet2["instance_count"]
  pool  = data.openstack_networking_network_v2.exnetname.name
  #depends_on = ["openstack_networking_router_interface_v2.router"]
}
# subnet1 instances
resource "openstack_compute_instance_v2" "instance_subnet1" {
  count     = var.subnet1["instance_count"]
  name      = "${var.subnet1["subnet_name"]}_${local.project}${count.index}"
  image_id  = var.image
  flavor_id = var.flavor
  user_data = templatefile("user_data.sh", {
    pubkey = local.pubkey
  })
  #network {
  #  uuid = var.extnetid
  #}
  network {
    uuid = openstack_networking_network_v2.network1.id
  }
  security_groups = [openstack_compute_secgroup_v2.ingress.name]
  depends_on = [
    openstack_networking_subnet_v2.subnet1
  ]
}
# subnet2 instances
resource "openstack_compute_instance_v2" "instance_subnet2" {
  count     = var.subnet2["instance_count"]
  name      = "${var.subnet2["subnet_name"]}_${local.project}${count.index}"
  image_id  = var.image
  flavor_id = var.flavor
  user_data = templatefile("user_data.sh", {
    pubkey = local.pubkey
  })
  network {
    uuid = openstack_networking_network_v2.network2.id
  }
  security_groups = [openstack_compute_secgroup_v2.ingress.name]
  depends_on = [
    openstack_networking_subnet_v2.subnet2
  ]
}
# subnet1 floating ips
resource "openstack_compute_floatingip_associate_v2" "fip_subnet1" {
  count       = var.subnet1["instance_count"]
  floating_ip = openstack_compute_floatingip_v2.instance_subnet1_fip[count.index].address
  instance_id = openstack_compute_instance_v2.instance_subnet1[count.index].id
}
# subnet2 floating ips
resource "openstack_compute_floatingip_associate_v2" "fip_subnet2" {
  count       = var.subnet2["instance_count"]
  floating_ip = openstack_compute_floatingip_v2.instance_subnet2_fip[count.index].address
  instance_id = openstack_compute_instance_v2.instance_subnet2[count.index].id
}
# ansible inventory
resource "local_file" "ansible_inventory" {
  content = templatefile("inventory.tmpl",
    {
      user                     = "openstack"
      password                 = "Password0"
      subnet1_instance_name    = openstack_compute_instance_v2.instance_subnet1[*].name
      subnet1_instance_address = openstack_compute_floatingip_v2.instance_subnet1_fip[*].address
      subnet2_instance_name    = openstack_compute_instance_v2.instance_subnet2[*].name
      subnet2_instance_address = openstack_compute_floatingip_v2.instance_subnet2_fip[*].address
    }
  )
  filename = "ansible_inventory"
}
# cheat, no until connection - wait for nodes to boot and start ssh
resource "time_sleep" "loitering" {
  create_duration = "120s"
}
# check ansible instance connectivity
resource "null_resource" "ansible_floating_ip_ping" {
  provisioner "local-exec" {
    command = "ansible -i ansible_inventory all -m ping"
  }
  depends_on = [
    time_sleep.loitering
  ]
}
# check ansible inter-instance connectivity
resource "null_resource" "ansible_private_net_ping" {
  provisioner "local-exec" {
    command = "ansible-playbook -i ansible_inventory ping_test.yml"
  }
  depends_on = [
    null_resource.ansible_floating_ip_ping
  ]
}
```
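An optional sanity pass before planning; `validate` catches syntax and reference errors early (run after `terraform init` so the provider schema is available):
```sh
cd /home/openstack/stack
terraform fmt
terraform validate
```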
# Run
```sh
cd /home/openstack/stack
terraform init
terraform plan
terraform apply -auto-approve
terraform destroy -auto-approve
```

docs/6) Result.md Normal file

@ -0,0 +1,17 @@
# Result
Network topology.
![8a8f46f83fb73b771de4c88c3090f13d.png](8a8f46f83fb73b771de4c88c3090f13d.png)
During the Terraform run the deployment host will run an Ansible ping command to test connectivity to the new Instances.
![313f2cb361c51b9c3058124604b6ae52.png](313f2cb361c51b9c3058124604b6ae52.png)
Once the Instances are live, Terraform invokes an Ansible playbook to ping test between each node; the traffic route is displayed and shows traversal across the virtual router between networks.
![3bb1d14aaf1365eaf282d0df9b982075.png](3bb1d14aaf1365eaf282d0df9b982075.png)
It is even more interesting to run this command from the deployment host to an instance via a floating IP; the NAT traversal is displayed. The SG rules permit any ICMP type.
![d1b15c31186ba1d7edc41fd6e059b4f4.png](d1b15c31186ba1d7edc41fd6e059b4f4.png)

stack/configure_cluster.sh Executable file

@ -0,0 +1,81 @@
#!/usr/bin/env bash
# load venv and credentials
source /home/openstack/kolla_zed/bin/activate
source /etc/kolla/admin-openrc.sh
# vars
OPENSTACK_CLI=openstack
EXT_NET_CIDR='192.168.140.0/24'
EXT_NET_RANGE='start=192.168.140.200,end=192.168.140.254'
EXT_NET_GATEWAY='192.168.140.1'
PROJECT='test'
ACCOUNT='tseed'
ACCOUNT_PASSWORD='Password0'
ACCOUNT_EMAIL='toby.n.seed@gmail.com'
# check cluster
$OPENSTACK_CLI host list
$OPENSTACK_CLI hypervisor list
$OPENSTACK_CLI user list
# provider network
$OPENSTACK_CLI network create --external --share --provider-physical-network physnet1 --provider-network-type flat provider_network
$OPENSTACK_CLI subnet create --dhcp --network provider_network --subnet-range ${EXT_NET_CIDR} --gateway ${EXT_NET_GATEWAY} --allocation-pool ${EXT_NET_RANGE} provider_subnet
# create project
$OPENSTACK_CLI project create --domain default --description "guest project" $PROJECT
# set quota on project
$OPENSTACK_CLI quota set --instances 10 $PROJECT
$OPENSTACK_CLI quota set --cores 4 $PROJECT
$OPENSTACK_CLI quota set --ram 6144 $PROJECT
$OPENSTACK_CLI quota set --gigabytes 30 $PROJECT
$OPENSTACK_CLI quota set --volumes 10 $PROJECT
$OPENSTACK_CLI quota set --backups 0 $PROJECT
$OPENSTACK_CLI quota set --snapshots 0 $PROJECT
$OPENSTACK_CLI quota set --key-pairs 20 $PROJECT
$OPENSTACK_CLI quota set --floating-ips 20 $PROJECT
$OPENSTACK_CLI quota set --networks 10 $PROJECT
$OPENSTACK_CLI quota set --routers 10 $PROJECT
$OPENSTACK_CLI quota set --subnets 10 $PROJECT
$OPENSTACK_CLI quota set --secgroups 20 $PROJECT
$OPENSTACK_CLI quota set --secgroup-rules 100 $PROJECT
# create user
$OPENSTACK_CLI user create --password ${ACCOUNT_PASSWORD} --email ${ACCOUNT_EMAIL} $ACCOUNT
# set the default project in the web console for user
$OPENSTACK_CLI user set --project $PROJECT $ACCOUNT
$OPENSTACK_CLI project show $(openstack user show $ACCOUNT --domain default -f json | jq -r .default_project_id) -f json | jq -r .description
# set RBAC for guest project
$OPENSTACK_CLI role add --project $PROJECT --user $ACCOUNT admin
# download the cirros test image for admin project
wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
$OPENSTACK_CLI image create --disk-format qcow2 --container-format bare --private --project admin --property os_type=linux --file ./cirros-0.5.1-x86_64-disk.img cirros-0.5.1
# download the ubuntu image for all projects
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
$OPENSTACK_CLI image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./bionic-server-cloudimg-amd64.img ubuntu_18.04
# create a flavour for the admin project
$OPENSTACK_CLI flavor create admin.tiny --ram 1048 --disk 1 --vcpus 2 --private --project admin
# create flavours for the guest project
$OPENSTACK_CLI flavor create m1.tiny --ram 512 --disk 5 --vcpus 1 --private --project $PROJECT
$OPENSTACK_CLI flavor create m1.smaller --ram 1024 --disk 10 --vcpus 1 --private --project $PROJECT
# collect vars
export PROJECT=$PROJECT
export ACCOUNT=$ACCOUNT
export ACCOUNT_PASSWORD=$ACCOUNT_PASSWORD
export AUTH_URL=$(openstack endpoint list -f json | jq -r '.[] | select(."Service Name" == "keystone" and ."Interface" == "public") | .URL')
export PROVIDER_NET_ID=$(openstack network list -f json | jq -r '.[] | select(."Name" == "provider_network") | .ID')
export IMAGE=$(openstack image list -f json | jq -r '.[] | select(."Name" == "ubuntu_18.04") | .ID')
export FLAVOR=$(openstack flavor list --all -f json | jq -r '.[] | select(."Name" == "m1.tiny") | .ID')
export PUB_KEY=$(cat /home/openstack/.ssh/id_rsa.pub)
# render terraform vars.tf
envsubst < /home/openstack/stack/vars.tf.envsubst > /home/openstack/stack/vars.tf

stack/ping_test.yml Normal file

@ -0,0 +1,90 @@
---
- name: build ping_map
  hosts: localhost
  become: no
  gather_facts: false
  tasks:
    - name: build ping_map
      ansible.builtin.set_fact:
        _ping_map: "{{ _ping_map | default({}) | combine({entry: []}, recursive=True) }}"
      loop: "{{ inventory_hosts }}"
      loop_control:
        loop_var: entry
      vars:
        inventory_hosts: "{{ hostvars[inventory_hostname]['groups']['all'] }}"
    # - ansible.builtin.debug:
    #     msg:
    #       - "{{ _ping_map }}"
    - name: populate ping_map
      ansible.builtin.set_fact:
        _ping_map: "{{ _ping_map | default({}) | combine({source: destination_list_append}, recursive=True) }}"
      loop: "{{ target_hosts | product(target_hosts) }}"
      loop_control:
        loop_var: entry
      vars:
        target_hosts: "{{ hostvars[inventory_hostname]['groups']['all'] }}"
        source: "{{ entry[0] }}"
        destination: "{{ entry[1] }}"
        destination_list: "{{ _ping_map[source] }}"
        destination_list_append: "{{ destination_list + [destination] }}"
      when: not entry[0] == entry[1]
    # - ansible.builtin.debug:
    #     msg:
    #       - "{{ _ping_map }}"
    - name: write global ping_map
      ansible.builtin.set_fact:
        _global_ping_map: "{{ _ping_map }}"
      delegate_to: localhost
      delegate_facts: true
- name: ping test
  hosts: all
  become: yes
  gather_facts: true
  tasks:
    - name: load global ping_map
      ansible.builtin.set_fact:
        _ping_map: "{{ hostvars['localhost']['_global_ping_map'] }}"
      when:
        - hostvars['localhost']['_global_ping_map'] is defined
    # - ansible.builtin.debug:
    #     msg:
    #       - "{{ _ping_map }}"
    - name: ping neighbours
      ansible.builtin.shell: |
        echo SOURCE {{ inventory_hostname }}
        echo DESTINATION {{ destination_target }}
        echo
        ping -Rn -c 1 {{ destination_ip }}
      loop: "{{ destination_targets }}"
      loop_control:
        loop_var: entry
      vars:
        destination_targets: "{{ _ping_map[inventory_hostname] }}"
        destination_target: "{{ entry }}"
        destination_ip: "{{ hostvars[destination_target]['ansible_default_ipv4']['address'] }}"
        source: "{{ inventory_hostname }}"
      register: _ping_results
    - name: print results
      ansible.builtin.debug:
        msg:
          - "{{ output }}"
      loop: "{{ _ping_results['results'] }}"
      loop_control:
        loop_var: idx
        label: "{{ destination }}"
      vars:
        destination: "{{ idx['entry'] }}"
        output: "{{ idx['stdout_lines'] }}"

stack/stack.tf Normal file

@ -0,0 +1,235 @@
## load provider
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.48.0"
    }
  }
}
## configure provider
provider "openstack" {
  auth_url    = var.provider_config["auth_url"]
  user_name   = var.provider_config["auth_user"]
  password    = var.provider_config["auth_pass"]
  tenant_name = var.provider_config["project"]
  region      = "RegionOne"
}
## vars
variable "dns" {
  type    = list(string)
  default = ["1.1.1.1", "8.8.8.8"]
}
variable "subnet1" {
  type = map(string)
  default = {
    subnet_name    = "subnet1"
    cidr           = "172.16.10.0/24"
    instance_count = "2"
  }
}
variable "subnet2" {
  type = map(string)
  default = {
    subnet_name    = "subnet2"
    cidr           = "172.16.11.0/24"
    instance_count = "1"
  }
}
## data sources
data "openstack_networking_network_v2" "exnetname" {
  network_id = var.extnetid
}
#output "exnet_name" {
#  value = data.openstack_networking_network_v2.exnetname.name
#}
## resources
# router
resource "openstack_networking_router_v2" "router" {
  name                = "router_${local.project}"
  admin_state_up      = true
  external_network_id = var.extnetid
}
# network1
resource "openstack_networking_network_v2" "network1" {
  name = "network1_${local.project}"
}
# network2
resource "openstack_networking_network_v2" "network2" {
  name = "network2_${local.project}"
}
# subnet1
resource "openstack_networking_subnet_v2" "subnet1" {
  name            = "${var.subnet1["subnet_name"]}_${local.project}"
  network_id      = openstack_networking_network_v2.network1.id
  cidr            = var.subnet1["cidr"]
  dns_nameservers = var.dns
}
# subnet2
resource "openstack_networking_subnet_v2" "subnet2" {
  name            = "${var.subnet2["subnet_name"]}_${local.project}"
  network_id      = openstack_networking_network_v2.network2.id
  cidr            = var.subnet2["cidr"]
  dns_nameservers = var.dns
}
# router interface subnet1
resource "openstack_networking_router_interface_v2" "interface1" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet1.id
}
# router interface subnet2
resource "openstack_networking_router_interface_v2" "interface2" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet2.id
}
# security group
resource "openstack_compute_secgroup_v2" "ingress" {
  name        = local.project
  description = "ingress rules"
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "192.168.140.0/24"
  }
  rule {
    from_port   = -1
    to_port     = -1
    ip_protocol = "icmp"
    cidr        = "192.168.140.0/24"
  }
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    self        = true
  }
  rule {
    from_port   = -1
    to_port     = -1
    ip_protocol = "icmp"
    self        = true
  }
}
# floating ip instance_subnet1
resource "openstack_compute_floatingip_v2" "instance_subnet1_fip" {
  count = var.subnet1["instance_count"]
  pool  = data.openstack_networking_network_v2.exnetname.name
  #depends_on = ["openstack_networking_router_interface_v2.router"]
}
# floating ip instance_subnet2
resource "openstack_compute_floatingip_v2" "instance_subnet2_fip" {
  count = var.subnet2["instance_count"]
  pool  = data.openstack_networking_network_v2.exnetname.name
  #depends_on = ["openstack_networking_router_interface_v2.router"]
}
# subnet1 instances
resource "openstack_compute_instance_v2" "instance_subnet1" {
  count     = var.subnet1["instance_count"]
  name      = "${var.subnet1["subnet_name"]}_${local.project}${count.index}"
  image_id  = var.image
  flavor_id = var.flavor
  user_data = templatefile("user_data.sh", {
    pubkey = local.pubkey
  })
  #network {
  #  uuid = var.extnetid
  #}
  network {
    uuid = openstack_networking_network_v2.network1.id
  }
  security_groups = [openstack_compute_secgroup_v2.ingress.name]
  depends_on = [
    openstack_networking_subnet_v2.subnet1
  ]
}
# subnet2 instances
resource "openstack_compute_instance_v2" "instance_subnet2" {
  count     = var.subnet2["instance_count"]
  name      = "${var.subnet2["subnet_name"]}_${local.project}${count.index}"
  image_id  = var.image
  flavor_id = var.flavor
  user_data = templatefile("user_data.sh", {
    pubkey = local.pubkey
  })
  network {
    uuid = openstack_networking_network_v2.network2.id
  }
  security_groups = [openstack_compute_secgroup_v2.ingress.name]
  depends_on = [
    openstack_networking_subnet_v2.subnet2
  ]
}
# subnet1 floating ips
resource "openstack_compute_floatingip_associate_v2" "fip_subnet1" {
  count       = var.subnet1["instance_count"]
  floating_ip = openstack_compute_floatingip_v2.instance_subnet1_fip[count.index].address
  instance_id = openstack_compute_instance_v2.instance_subnet1[count.index].id
}
# subnet2 floating ips
resource "openstack_compute_floatingip_associate_v2" "fip_subnet2" {
  count       = var.subnet2["instance_count"]
  floating_ip = openstack_compute_floatingip_v2.instance_subnet2_fip[count.index].address
  instance_id = openstack_compute_instance_v2.instance_subnet2[count.index].id
}
# ansible inventory
resource "local_file" "ansible_inventory" {
  content = templatefile("inventory.tmpl",
    {
      user                     = "openstack"
      password                 = "Password0"
      subnet1_instance_name    = openstack_compute_instance_v2.instance_subnet1[*].name
      subnet1_instance_address = openstack_compute_floatingip_v2.instance_subnet1_fip[*].address
      subnet2_instance_name    = openstack_compute_instance_v2.instance_subnet2[*].name
      subnet2_instance_address = openstack_compute_floatingip_v2.instance_subnet2_fip[*].address
    }
  )
  filename = "ansible_inventory"
}
# cheat, no until connection - wait for nodes to boot and start ssh
resource "time_sleep" "loitering" {
  create_duration = "120s"
}
# check ansible instance connectivity
resource "null_resource" "ansible_floating_ip_ping" {
  provisioner "local-exec" {
    command = "ansible -i ansible_inventory all -m ping"
  }
  depends_on = [
    time_sleep.loitering
  ]
}
# check ansible inter-instance connectivity
resource "null_resource" "ansible_private_net_ping" {
  provisioner "local-exec" {
    command = "ansible-playbook -i ansible_inventory ping_test.yml"
  }
  depends_on = [
    null_resource.ansible_floating_ip_ping
  ]
}

stack/user_data.sh Normal file

@ -0,0 +1,14 @@
#cloud-config
ssh_pwauth: true
groups:
  - admingroup: [root,sys]
  - openstack
users:
  - name: openstack
    primary_group: openstack
    lock_passwd: false
    passwd: $6$xyz$4tTWyuHIT6gXRuzotBZn/9xZBikUp0O2X6rOZ7MDJo26aax.Ok5P4rWYyzdgFkjArIIyB8z8LKVW1wARbcBzn/
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa ${pubkey}

stack/vars.tf.envsubst Normal file

@ -0,0 +1,30 @@
## vars
variable "provider_config" {
  type = map(string)
  default = {
    auth_url  = "${AUTH_URL}"
    auth_user = "${ACCOUNT}"
    auth_pass = "${ACCOUNT_PASSWORD}"
    project   = "${PROJECT}"
  }
}
variable "extnetid" {
  type    = string
  default = "${PROVIDER_NET_ID}"
}
variable "image" {
  type    = string
  default = "${IMAGE}"
}
variable "flavor" {
  type    = string
  default = "${FLAVOR}"
}
locals {
  project = "${var.provider_config["project"]}"
  pubkey  = "${PUB_KEY}"
}