initial commit
commit
63d09ca364
|
|
@ -0,0 +1,2 @@
|
|||
venv/
|
||||
backup_ansible/
|
||||
|
|
@ -0,0 +1,49 @@
|
|||
# What is this?
|
||||
|
||||
Side project to build a CI testbench to develop various Ansible roles to fit the employer's default HPC deployment.
|
||||
|
||||
# Drivers
|
||||
|
||||
* There is no true representation of the stack suitable for CI
|
||||
* Existing testbench is slow to re-provision and very manual
|
||||
* There isn't enough hardware to test multiple stacks
|
||||
* Corp hypervisors are unsuitable (possibly a Python IPMI listener -> Proxmox/VMware API would be OK, but many ancillary virtual networks would be required, which may change on a stack-to-stack basis)
|
||||
* Updating Ansible in vi on customer systems is tedious
|
||||
|
||||
# Goal
|
||||
|
||||
The aim is to simulate bare-metal node provisioning using XCAT -> iPXE/IPMI -> virtualBMC -> QEMU VMs, and to continue developing the Ansible roles that configure the various classes of node.
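Once the chain is complete, provisioning a node should reduce to a couple of XCAT commands; a rough sketch (node and osimage names are placeholders, not values from this repo):

```sh
# stage the netboot image for the node
nodeset vmnode01 osimage=almalinux8-x86_64-netboot-compute
# power-cycle over IPMI; virtualBMC translates this into libvirt calls
# so the backing QEMU VM reboots and iPXE boots from XCAT
rpower vmnode01 boot
```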
|
||||
|
||||
# Components
|
||||
|
||||
Use commodity hardware to act as hypervisors and model the storage and network components
|
||||
* tested on 2 and 3 nodes, single nvme, single NIC
|
||||
|
||||
Generate static or dynamic Ansible inventory natively via the XCAT API (sketch below).
|
||||
* working model
|
||||
* requires networks to be pulled from XCAT
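A minimal sketch of pulling node and network definitions from the xCAT REST API (hostname and credentials are placeholders; assumes the xcatws REST service is enabled):

```sh
# node definitions to seed the inventory
curl -sk 'https://xcat01/xcatws/nodes?userName=root&userPW=secret' | python3 -m json.tool
# network definitions, needed for group_vars/networks.yml
curl -sk 'https://xcat01/xcatws/networks?userName=root&userPW=secret' | python3 -m json.tool
```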
|
||||
|
||||
Use a dynamic role model triggered by XCAT group membership (mapping example below).
|
||||
* existing working model
|
||||
* all Ansible variables imported under top level object ready for keypairDB integration
|
||||
* various helper roles to deep merge dictionaries and lists for individual site/deployment customisations
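The mapping itself is plain YAML keyed on group name, abridged here from this repo's roles group_vars:

```yaml
roles:
  all:
    - network
    - ntp
    - os_packages
  hypervisor:
    - hypervisor_prep
    - vxlan
    - libvirt
    - podman
```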
|
||||
|
||||
|
||||
Use VXLAN point-to-point links between each hypervisor to simulate the various cluster networks (sketch below).
|
||||
* working model that will scale to many hypervisors
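Per link this is standard iproute2 unicast VXLAN; a hedged sketch of what the vxlan role sets up (VNI, device and peer address are illustrative):

```sh
# on qemu01: tunnel to qemu02 carrying the simulated 'cluster' network
ip link add vxlan100 type vxlan id 100 remote 192.168.140.42 dstport 4789 dev ens1
ip addr add 172.22.0.1/16 dev vxlan100
ip link set vxlan100 up
```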
|
||||
|
||||
Use hyperconverged Ceph to provide RBD for VM disk images, and CephFS + Ganesha for NFS mounts hosting the scheduler/HPC software (sketch of the outstanding NFS step below).
|
||||
* latest Ceph is now nearly all YAML-spec driven, allowing automation; most existing Ansible is behind
|
||||
* cluster build automation complete
|
||||
* OSD + Pools complete
|
||||
* RBD complete
|
||||
* NFS outstanding
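The outstanding NFS piece should reduce to a few orchestrator calls once the spec work lands; a hedged sketch against the Pacific-era CLI (argument order differs between Ceph releases, and the volume/cluster/export names here are illustrative):

```sh
ceph fs volume create cluster_volume
ceph nfs cluster create ganesha "label:nfs"
ceph nfs export create cephfs cluster_volume ganesha /software
```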
|
||||
|
||||
Deploy the XCAT container, seed it with an inventory of the to-be-provisioned VMs
|
||||
* to complete
|
||||
|
||||
Deploy virtualBMC (usage example below)
|
||||
* working model
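One listener per VM; exercising an endpoint the way XCAT eventually will (port and credentials are illustrative):

```sh
vbmc add vmnode01 --port 6230 --username admin --password password
vbmc start vmnode01
ipmitool -I lanplus -H 192.168.140.41 -p 6230 -U admin -P password power status
```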
|
||||
|
||||
Deploy QEMU with RBD disk (example below)
|
||||
* to complete
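The RBD side can already be exercised with qemu-img against the vms pool defined in group_vars/hypervisor.yml (image name is a placeholder):

```sh
qemu-img create -f raw rbd:vms/vmnode01 20G
qemu-img info rbd:vms/vmnode01
```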
|
||||
|
|
@ -0,0 +1,89 @@
|
|||
|
||||
# setup hypervisor hosts
|
||||
|
||||
- AlmaLinux 8, minimal install
|
||||
- LVM, root uses 30G, no home volume, all remaining disk provisioned by Ceph
|
||||
- 3 nodes - 192.168.140.41-43/24
|
||||
- user: ansible, has password-less sudo and ssh keys setup
|
||||
|
||||
## network
|
||||
|
||||
```sh
|
||||
nmcli con add type ethernet ifname ens1 con-name ctlplane connection.autoconnect yes ip4 192.168.140.41/24 gw4 192.168.140.1 ipv4.dns 1.1.1.1,8.8.8.8 ipv4.dns-search local
|
||||
nmcli con del ens1 && reboot
|
||||
```
|
||||
|
||||
## ansible user
|
||||
|
||||
```sh
|
||||
groupadd -r -g 1001 ansible && useradd -r -u 1001 -g 1001 -m -s /bin/bash ansible ;\
|
||||
echo "%ansible ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ansible ;\
|
||||
chmod 0440 /etc/sudoers.d/ansible ;\
|
||||
passwd ansible ;\
|
||||
hostnamectl set-hostname qemu01.local ;\
|
||||
hostnamectl set-hostname --transient qemu01.local ;\
|
||||
hostnamectl set-hostname --pretty qemu01 ;\
|
||||
hostnamectl
|
||||
|
||||
ssh-copy-id -i ~/.ssh/id_rsa.pub ansible@192.168.140.41
|
||||
```
|
||||
|
||||
# setup python venv
|
||||
|
||||
Set up a venv the easy way.
|
||||
|
||||
```sh
|
||||
sudo apt-get update
|
||||
sudo apt-get install python3-dev libffi-dev gcc libssl-dev
|
||||
sudo apt install python3-venv
|
||||
mkdir -p /home/tseed/ansible/venv
|
||||
python3 -m venv /home/tseed/ansible/venv
|
||||
source /home/tseed/ansible/venv/bin/activate
|
||||
```
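With the venv active, install Ansible plus the helper libraries into it (the same packages pinned in the pip notes elsewhere in this repo):

```sh
pip install --upgrade pip
pip install ansible netaddr jmespath ansible-merge-vars
```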
|
||||
|
||||
# setup ansible environment
|
||||
|
||||
## install additional ansible galaxy collection
|
||||
```sh
ansible-galaxy collection install community.general
```
|
||||
|
||||
## record collections file for replicating this environment
|
||||
```sh
nano -cw requirements.yml
```
|
||||
|
||||
```yaml
|
||||
collections:
|
||||
- name: community.general
|
||||
```
|
||||
|
||||
## install requirements from file on new environment
|
||||
|
||||
```sh
|
||||
ansible-galaxy collection install -r requirements.yml
|
||||
ansible-galaxy collection install community.general --upgrade
|
||||
|
||||
sudo dnf install sshpass    # or: sudo apt-get install sshpass
|
||||
pip install jmespath
|
||||
```
|
||||
|
||||
# run playbook
|
||||
|
||||
## start venv
|
||||
|
||||
```sh
|
||||
source /home/tseed/ansible/venv/bin/activate
|
||||
```
|
||||
|
||||
## run hypervisor build playbook
|
||||
|
||||
This only builds the hypervisors up to Ceph RBD; VM provisioning is not yet complete.
|
||||
|
||||
```sh
|
||||
ansible-playbook bootstrap_hypervisors.yml
|
||||
```
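Before running, a quick connectivity check against the static inventory is worthwhile (assumes the venv is active and this repo's hosts file):

```sh
ansible hypervisor -m ping
```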
|
||||
|
||||
## run dynamic roles from XCAT inventory for the various provisioned VMs
|
||||
|
||||
Used on the production stack to provision the various node classes. There are no real roles in this repo, just the framework plus the ntp and os_packages roles.
|
||||
|
||||
```sh
|
||||
ansible-playbook -l all site.yml
|
||||
```
|
||||
Binary file not shown.
|
|
@ -0,0 +1 @@
|
|||
from ansible_merge_vars import ActionModule
|
||||
|
|
@ -0,0 +1,11 @@
|
|||
[defaults]
|
||||
inventory = ./hosts
|
||||
remote_user = ansible
|
||||
ask_pass = false
|
||||
host_key_checking = False
|
||||
|
||||
[privilege_escalation]
|
||||
become = true
|
||||
become_method = sudo
|
||||
become_user = root
|
||||
become_ask_pass = false
|
||||
|
|
@ -0,0 +1,252 @@
|
|||
---
|
||||
- name: populate inventory
|
||||
hosts: localhost
|
||||
user: ansible
|
||||
# become: yes
|
||||
gather_facts: false
|
||||
|
||||
tasks:
|
||||
|
||||
######## wipe inventory to ensure this playbook only uses its own dynamically generated variables
|
||||
|
||||
- name: refresh inventory
|
||||
meta: refresh_inventory
|
||||
|
||||
######## load core group_vars
|
||||
#
|
||||
# load the following core environment files under vars['testbench']
|
||||
# - inventory/group_vars/cluster.yml
# - inventory/group_vars/hypervisor.yml
|
||||
# - inventory/group_vars/networks.yml
|
||||
|
||||
- name: load core environment configuration
|
||||
block:
|
||||
|
||||
- name: set runtime facts
|
||||
ansible.builtin.set_fact:
|
||||
_env_files:
|
||||
- 'cluster.yml'
|
||||
- 'hypervisor.yml'
|
||||
- 'networks.yml'
|
||||
_env_dir: "{{ ansible_inventory_sources[0] | dirname }}/group_vars"
|
||||
config_namespace: "testbench"
|
||||
|
||||
- name: include vars from core config files
|
||||
ansible.builtin.include_vars:
|
||||
file: "{{ env_path }}"
|
||||
name: "env_import_{{ env_namespace }}"
|
||||
loop: "{{ _env_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
env_path: "{{ _env_dir }}/{{ entry }}"
|
||||
env_namespace: "{{ entry.split('.yml')[0] }}"
|
||||
|
||||
- name: append env vars to temp dict
|
||||
ansible.builtin.set_fact:
|
||||
_env_dict: "{{ _env_dict | default({}) | combine(env_import, recursive=True) }}"
|
||||
loop: "{{ lookup('ansible.builtin.varnames', 'env_import_').split(',') }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
env_import: "{{ vars[entry] }}"
|
||||
|
||||
- name: copy dict of env vars under top level namespace, access @ vars[config_namespace]
|
||||
ansible.builtin.set_fact:
|
||||
{ "{{ config_namespace }}": "{{ _env_dict }}" }
|
||||
|
||||
# think i only need to include hypervisor.yml here - it looks nicer to only include a small set of vars then ref directly at top level not config_namespace
|
||||
|
||||
######## populate arp cache, find dhcp ip of hypervisor and add to inventory
|
||||
|
||||
# uncomment if arp cache stale, this is slow so comment during dev
|
||||
# - name: populate arp cache
|
||||
# command: nmap -sn {{ range }}
|
||||
# vars:
|
||||
# dhcp_network: "{{ vars[config_namespace]['hypervisor']['nmcli_con_names']['primary'] }}"
|
||||
# network: "{{ vars[config_namespace]['hypervisor']['cluster_networks'][dhcp_network]['network'] }}"
|
||||
# netmask: "{{ vars[config_namespace]['hypervisor']['cluster_networks'][dhcp_network]['netmask'] }}"
|
||||
# range: "{{ network }}/{{ (network + '/' + netmask) | ansible.utils.ipaddr('prefix') }}"
|
||||
|
||||
# WSL2 specific method to get host arp cache
|
||||
- name: get arp table
|
||||
ansible.builtin.command: '/mnt/c/Windows/system32/arp.exe -a'
|
||||
register: _arp_cache
|
||||
|
||||
# windows arp.exe parse, write new mac_map with dhcp_ip
|
||||
- name: find dhcp ip
|
||||
ansible.builtin.set_fact:
|
||||
_update_mac_map: "{{ _update_mac_map | default([]) + [new_record] }}"
|
||||
loop: "{{ _arp_cache['stdout_lines'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
check_record: "{{ entry | trim | regex_search('^[0-9]+') is not none }}"
|
||||
format_record: "{{ entry | trim | regex_replace('\\s+', ',') | split(',') }}"
|
||||
dhcp_ip: "{{ format_record[0] }}"
|
||||
arp_mac: "{{ format_record[1] | regex_replace('-', ':') }}"
|
||||
mac_map: "{{ vars[config_namespace]['hypervisor']['mac_map'] }}"
|
||||
match_host: "{{ mac_map | selectattr('mac', '==', arp_mac) | map(attribute='host') }}"
|
||||
match_ip: "{{ mac_map | selectattr('mac', '==', arp_mac) | map(attribute='ip') }}"
|
||||
ipv6_link_local: "{{ 'fe80::0000:0000:0000:0000' | ansible.utils.slaac(arp_mac) }}"
|
||||
nmcli_con: "{{ mac_map | selectattr('mac', '==', arp_mac) | map(attribute='nmcli_con') }}"
|
||||
new_record: "{{ { 'host': match_host[0], 'mac': arp_mac, 'dhcp_ip': dhcp_ip, 'ip': match_ip[0], 'ipv6': ipv6_link_local, 'nmcli_con': nmcli_con[0] } }}"
|
||||
when:
|
||||
- check_record
|
||||
- match_host | length >0
|
||||
|
||||
- name: fail with insufficient hosts matched, check mac_map
|
||||
fail:
|
||||
when:
|
||||
- _update_mac_map is not defined or
|
||||
_update_mac_map | length < 2
|
||||
|
||||
# sort to ensure first host in mac_map gets the first vxlan ip, initially the arp cache dictates the order in which hosts are discovered
|
||||
- name: sort mac_map
|
||||
set_fact:
|
||||
_sort_mac_map: "{{ _sort_mac_map | default([]) + mac_map_entry }}"
|
||||
loop: "{{ vars[config_namespace]['hypervisor']['mac_map'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
host: "{{ entry['host'] }}"
|
||||
mac_map_entry: "{{ _update_mac_map | selectattr('host', '==', host) }}"
|
||||
|
||||
- name: write global mac map
|
||||
set_fact:
|
||||
# mac_map: "{{ _update_mac_map }}"
|
||||
mac_map: "{{ _sort_mac_map }}"
|
||||
delegate_to: localhost
|
||||
delegate_facts: true
|
||||
|
||||
######## update the in-memory inventory with the hypervisors
|
||||
|
||||
- name: add hosts to in-memory inventory
|
||||
ansible.builtin.add_host: >
|
||||
name={{ host }}
|
||||
groups={{ host_groups }}
|
||||
ansible_ssh_host={{ ansible_ssh_host }}
|
||||
ansible_ssh_common_args='-o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
|
||||
ansible_user={{ ansible_user }}
|
||||
ansible_password={{ ansible_password }}
|
||||
loop: "{{ hostvars['localhost']['mac_map'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
host: "{{ entry['host'] }}"
|
||||
# set host group membership, auto-create groups
|
||||
host_groups:
|
||||
- all
|
||||
- hypervisor
|
||||
- ceph
|
||||
ansible_ssh_host: "{{ entry['dhcp_ip'] }}"
|
||||
ansible_user: "{{ vars[config_namespace]['hypervisor']['ssh_user'] }}"
|
||||
ansible_password: "{{ vars[config_namespace]['hypervisor']['ssh_password'] }}"
|
||||
|
||||
######## bootstrap hypervisors
|
||||
|
||||
- name: run roles on hypervisors
|
||||
hosts: hypervisor
|
||||
gather_facts: yes
|
||||
tasks:
|
||||
|
||||
######## load core group_vars
|
||||
#
|
||||
# load the following core environment files under vars['testbench']
|
||||
# - inventory/group_vars/cluster.yml
# - inventory/group_vars/hypervisor.yml
|
||||
# - inventory/group_vars/networks.yml
|
||||
|
||||
- name: load core environment configuration
|
||||
block:
|
||||
|
||||
# roles:
|
||||
# hypervisor_network - setup interfaces
|
||||
# hypervisor_vxlan - setup overlay networks - we also want to add ceph_public and ceph_cluster - we should do an overlay here
|
||||
# hypervisor_ceph - great reference https://github.com/jcmdln/cephadm-playbook
|
||||
# hypervisor_qemu - not written
|
||||
# hypervisor_qemu_gui - not written, great qt5 web container for virt-manager that accepts qemu api endpoints over ssh as ENV vars
|
||||
#
|
||||
# need a role to replace nested dict items - needs to accept a dict as path maybe
|
||||
|
||||
- name: set runtime facts
|
||||
ansible.builtin.set_fact:
|
||||
_run_roles:
|
||||
# - hypervisor_network
|
||||
# - ntp
|
||||
# - os_packages
|
||||
# - hypervisor_prep
|
||||
# - hypervisor_vxlan
|
||||
# - cephadm_prep
|
||||
# - cephadm_bootstrap
|
||||
- cephadm_services
|
||||
_env_dir: "{{ ansible_inventory_sources[0] | dirname }}/group_vars"
|
||||
_env_files:
|
||||
- 'cluster.yml'
|
||||
- 'hypervisor.yml'
|
||||
- 'networks.yml'
|
||||
config_namespace: "testbench"
|
||||
|
||||
- name: include vars from core config files
|
||||
ansible.builtin.include_vars:
|
||||
file: "{{ env_path }}"
|
||||
name: "env_import_{{ env_namespace }}"
|
||||
loop: "{{ _env_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
env_path: "{{ _env_dir }}/{{ entry }}"
|
||||
env_namespace: "{{ entry.split('.yml')[0] }}"
|
||||
|
||||
- name: append env vars to temp dict
|
||||
ansible.builtin.set_fact:
|
||||
_env_dict: "{{ _env_dict | default({}) | combine(env_import, recursive=True) }}"
|
||||
loop: "{{ lookup('ansible.builtin.varnames', 'env_import_').split(',') }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
env_import: "{{ vars[entry] }}"
|
||||
|
||||
- name: copy dict of env vars under top level namespace, access @ vars[config_namespace]
|
||||
ansible.builtin.set_fact:
|
||||
{ "{{ config_namespace }}": "{{ _env_dict }}" }
|
||||
|
||||
######## set some global variables used by roles for (vm) cluster node provisioning, if these roles are to be reused in the bootstrap of the hypervisors some static values will be required
|
||||
|
||||
# this needs to loop over hypervisor.cluster_networks but exclude primary/external for vxlan creation
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ groups }}"
|
||||
# - "{{ ['all'] + hostvars[inventory_hostname]['group_names'] }}"
|
||||
|
||||
# - fail:
|
||||
# msg:
|
||||
|
||||
- name: populate the active_role_groups variable, add ceph_cluster network for vxlan creation
|
||||
ansible.builtin.set_fact:
|
||||
# active_role_groups: ['all', 'hypervisor', 'ceph'] # this should be a copy of hostvars['groups'] with additional all group
|
||||
active_role_groups: "{{ ['all'] + hostvars[inventory_hostname]['group_names'] }}"
|
||||
_cluster_networks: "{{ vars[config_namespace] | combine( {'cluster_networks' :{'cephclus': { 'comment': comment, 'gateway': 'null', 'mtu': 'null', 'nameserver': 'null', 'netmask': netmask, 'network': network } } }, recursive=True) }}"
|
||||
vars:
|
||||
network: "{{ vars['hypervisor']['cluster_networks']['cephclus']['network'] }}"
|
||||
netmask: "{{ vars['hypervisor']['cluster_networks']['cephclus']['netmask'] }}"
|
||||
comment: "{{ vars['hypervisor']['cluster_networks']['cephclus']['comment'] }}"
|
||||
|
||||
- ansible.builtin.set_fact:
|
||||
{ "{{ config_namespace }}": "{{ _cluster_networks }}" }
|
||||
|
||||
######## run roles against hypervisor hosts
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ hostvars[inventory_hostname] }}"
|
||||
|
||||
# - fail:
|
||||
# msg:
|
||||
|
||||
|
||||
- ansible.builtin.include_role:
|
||||
name: "{{ entry }}"
|
||||
loop: "{{ _run_roles }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
label: run {{ entry }} role on {{ inventory_hostname }}
|
||||
|
|
@ -0,0 +1,207 @@
|
|||
all:
|
||||
hosts:
|
||||
compute001:
|
||||
ansible_ssh_host: 172.22.10.1
|
||||
xcat_nics:
|
||||
- device: ib0
|
||||
ip: 172.23.10.1
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
- device: ens18
|
||||
ip: 172.22.10.1
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic:
|
||||
- device: ipmi
|
||||
ip: 172.21.10.1
|
||||
network: ipmi
|
||||
type: bmc
|
||||
compute002:
|
||||
ansible_ssh_host: 172.22.10.2
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.10.2
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
- device: ib0
|
||||
ip: 172.23.10.2
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
ipmi_nic:
|
||||
- device: ipmi
|
||||
ip: 172.21.10.2
|
||||
network: ipmi
|
||||
type: bmc
|
||||
gateway01:
|
||||
ansible_ssh_host: 172.22.1.254
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.254
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic: []
|
||||
hmem001:
|
||||
ansible_ssh_host: 172.22.2.1
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.2.1
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
- device: ib0
|
||||
ip: 172.23.2.1
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
ipmi_nic:
|
||||
- device: ipmi
|
||||
ip: 172.21.2.1
|
||||
network: ipmi
|
||||
type: bmc
|
||||
mail01:
|
||||
ansible_ssh_host: 172.22.1.230
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.230
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic:
|
||||
- device: ens19
|
||||
ip: 172.21.1.230
|
||||
network: ipmi
|
||||
type: Ethernet
|
||||
monitoring01:
|
||||
ansible_ssh_host: 172.22.1.224
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.224
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic: []
|
||||
nfs01:
|
||||
ansible_ssh_host: 172.22.1.225
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.225
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic: []
|
||||
repos01:
|
||||
ansible_ssh_host: 172.22.1.223
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.223
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic: []
|
||||
sl1:
|
||||
ansible_ssh_host: 172.22.1.1
|
||||
xcat_nics:
|
||||
- device: ib0
|
||||
ip: 172.23.1.1
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
ipmi_nic:
|
||||
- device: ipmi
|
||||
ip: 172.21.1.1
|
||||
network: ipmi
|
||||
type: bmc
|
||||
wlm01:
|
||||
ansible_ssh_host: 172.22.1.221
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.221
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic: []
|
||||
xcat01:
|
||||
ansible_ssh_host: 172.22.1.220
|
||||
xcat_nics:
|
||||
- device: ens18
|
||||
ip: 172.22.1.220
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
ipmi_nic:
|
||||
- device: ens19
|
||||
ip: 172.21.1.220
|
||||
network: ipmi
|
||||
type: Ethernet
|
||||
compute:
|
||||
hosts:
|
||||
compute001:
|
||||
compute002:
|
||||
slurm:
|
||||
hosts:
|
||||
compute001:
|
||||
compute002:
|
||||
hmem001:
|
||||
ansible:
|
||||
hosts:
|
||||
compute001:
|
||||
compute002:
|
||||
hmem001:
|
||||
mail01:
|
||||
monitoring01:
|
||||
nfs01:
|
||||
wlm01:
|
||||
gateway:
|
||||
hosts:
|
||||
gateway01:
|
||||
vm:
|
||||
hosts:
|
||||
gateway01:
|
||||
mail01:
|
||||
monitoring01:
|
||||
nfs01:
|
||||
repos01:
|
||||
sl1:
|
||||
wlm01:
|
||||
xcat01:
|
||||
external:
|
||||
hosts:
|
||||
gateway01:
|
||||
repos01:
|
||||
hmem:
|
||||
hosts:
|
||||
hmem001:
|
||||
smtp:
|
||||
hosts:
|
||||
mail01:
|
||||
monitoring:
|
||||
hosts:
|
||||
monitoring01:
|
||||
prometheus:
|
||||
hosts:
|
||||
monitoring01:
|
||||
steel:
|
||||
hosts:
|
||||
monitoring01:
|
||||
nfs01:
|
||||
inet:
|
||||
hosts:
|
||||
monitoring01:
|
||||
nfs:
|
||||
hosts:
|
||||
nfs01:
|
||||
nfsserver:
|
||||
hosts:
|
||||
nfs01:
|
||||
repos:
|
||||
hosts:
|
||||
repos01:
|
||||
httpd:
|
||||
hosts:
|
||||
repos01:
|
||||
login:
|
||||
hosts:
|
||||
sl1:
|
||||
wlm:
|
||||
hosts:
|
||||
wlm01:
|
||||
mgmt:
|
||||
hosts:
|
||||
xcat01:
|
||||
xcat:
|
||||
hosts:
|
||||
xcat01:
|
||||
ntp:
|
||||
hosts:
|
||||
xcat01:
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
env:
|
||||
cluster_domain: cluster.local
|
||||
|
|
@ -0,0 +1,46 @@
|
|||
firewalld:
|
||||
enable: false
|
||||
firewalld_services:
|
||||
- name: ssh
|
||||
short: "SSHooph again"
|
||||
description: "SSH service"
|
||||
port:
|
||||
- port: 22
|
||||
protocol: tcp
|
||||
zone: public
|
||||
xcat_groups:
|
||||
- compute
|
||||
- all
|
||||
- slurm
|
||||
- ansible
|
||||
- test
|
||||
- test
|
||||
xcat_networks:
|
||||
- cluster
|
||||
- infiniband
|
||||
- test
|
||||
- test1
|
||||
- name: named
|
||||
short: "named"
|
||||
description: "DNS Service"
|
||||
port:
|
||||
- port: 53
|
||||
protocol: tcp
|
||||
- port: 953
|
||||
protocol: tcp
|
||||
firewalld_ipsets:
|
||||
fail2ban-ssh-ipv6:
|
||||
short: fail2ban-ssh-ipv6
|
||||
description: fail2ban-ssh-ipv6 ipset
|
||||
type: 'hash:ip'
|
||||
options:
|
||||
family:
|
||||
- inet6
|
||||
maxelem:
|
||||
- 65536
|
||||
timeout:
|
||||
- 300
|
||||
hashsize:
|
||||
- 1024
|
||||
targets:
|
||||
- 2a01::1
|
||||
|
|
@ -0,0 +1,167 @@
|
|||
hypervisor:
|
||||
ssh_user: 'root'
|
||||
ssh_password: 'Password0'
|
||||
# connection: 'external'
|
||||
# map of mac addresses to match to the primary/control-plane interface for bootstrap, this should be ordered with master host first
|
||||
mac_map:
|
||||
- host: 'qemu01'
|
||||
mac: 'b8:97:5a:cf:d7:d3'
|
||||
ip: '192.168.140.41'
|
||||
nmcli_con: 'primary'
|
||||
- host: 'qemu02'
|
||||
mac: 'b8:97:5a:cf:da:c6'
|
||||
ip: '192.168.140.42'
|
||||
nmcli_con: 'primary'
|
||||
- host: 'qemu03'
|
||||
mac: 'b8:97:5a:cf:d8:bf'
|
||||
ip: '192.168.140.43'
|
||||
nmcli_con: 'primary'
|
||||
# ceph disk
|
||||
ceph_disk: /dev/nvme0n1
|
||||
# ceph dashboard admin user password
|
||||
ceph_dash_admin_password: "Password0"
|
||||
# nmcli connection names to match; either plain device names (e.g. eth0) or custom nmcli connection names
|
||||
nmcli_con_names:
|
||||
primary: 'external'
|
||||
ceph_public: 'storage'
|
||||
ceph_cluster: 'cephclus'
|
||||
ceph_rgw: 'storage'
|
||||
# hypervisor-specific networks to add to the cluster_networks dict imported from group_vars/networks.yml
|
||||
cluster_networks:
|
||||
external:
|
||||
network: 192.168.140.0
|
||||
netmask: 255.255.255.0
|
||||
gateway: 192.168.140.1
|
||||
mtu:
|
||||
nameserver: 1.1.1.1
|
||||
comment: ext
|
||||
# cephpub:
|
||||
# network: 172.26.0.0
|
||||
# netmask: 255.255.255.0
|
||||
# gateway: 172.26.0.1
|
||||
# mtu:
|
||||
# nameserver: 1.1.1.1
|
||||
# comment: ext
|
||||
cephclus:
|
||||
network: 172.25.0.0
|
||||
netmask: 255.255.255.0
|
||||
gateway:
|
||||
mtu:
|
||||
nameserver:
|
||||
comment: int
|
||||
ceph_service_placement:
|
||||
- host: 'qemu01'
|
||||
labels:
|
||||
- _admin
|
||||
- mon
|
||||
- osd
|
||||
- mgr
|
||||
- mds
|
||||
- nfs
|
||||
- rgw
|
||||
- host: 'qemu02'
|
||||
labels:
|
||||
- _admin
|
||||
- mon
|
||||
- osd
|
||||
- mgr
|
||||
- mds
|
||||
- host: 'qemu03'
|
||||
labels:
|
||||
- _admin
|
||||
- mon
|
||||
- osd
|
||||
- mgr
|
||||
- mds
|
||||
# an nfs service uses a cephfs namespace or an rgw bucket; do not include an nfs service spec in this list
|
||||
ceph_service_spec:
|
||||
- service_type: alertmanager
|
||||
service_name: alertmanager
|
||||
placement:
|
||||
count: 1
|
||||
- service_type: crash
|
||||
service_name: crash
|
||||
placement:
|
||||
host_pattern: '*'
|
||||
- service_type: grafana
|
||||
service_name: grafana
|
||||
placement:
|
||||
count: 1
|
||||
- service_type: node-exporter
|
||||
service_name: node-exporter
|
||||
placement:
|
||||
host_pattern: '*'
|
||||
- service_type: prometheus
|
||||
service_name: prometheus
|
||||
placement:
|
||||
count: 1
|
||||
- service_type: mon
|
||||
service_name: mon
|
||||
placement:
|
||||
label: "mon"
|
||||
- service_type: mgr
|
||||
service_name: mgr
|
||||
placement:
|
||||
label: "mgr"
|
||||
# multiple osd spec files on a per-host basis can be included with adjusted placement configuration
|
||||
- service_type: osd
|
||||
service_id: osd_using_device_file
|
||||
placement:
|
||||
label: "osd"
|
||||
spec:
|
||||
data_devices:
|
||||
paths:
|
||||
- /dev/ceph/ceph_data
|
||||
# db_devices:
|
||||
# paths:
|
||||
# - /dev/sdc
|
||||
# wal_devices:
|
||||
# paths:
|
||||
# - /dev/sdd
|
||||
- service_type: mds
|
||||
service_id: cephfs
|
||||
placement:
|
||||
label: "mds"
|
||||
# this rgw configuration provisions an rgw instance with no realm, a zonegroup and zone named default, and a data pool named .rgw.root
|
||||
# there are 4 auto provisioned pools .rgw.root (pg32) / default.rgw.log (pg32) / default.rgw.control (pg32) / default.rgw.meta (pg8)
|
||||
# a multisite configuration (specify realm/zonegroup/zone and a specific data pool) requires additional commands and multiple spec files
|
||||
- service_type: rgw
|
||||
service_id: object
|
||||
placement:
|
||||
label: "rgw"
|
||||
count: 1
|
||||
spec:
|
||||
ssl: false
|
||||
rgw_frontend_port: 8080
|
||||
rgw_frontend_type: beast
|
||||
- service_type: nfs
|
||||
service_id: ganesha
|
||||
placement:
|
||||
label: "nfs"
|
||||
spec:
|
||||
port: 2049
|
||||
# add 'pg: <number>' entry if you don't want default allocation, pg autoscaling is enabled
|
||||
ceph_pools:
|
||||
- type: rbd
|
||||
name: vms
|
||||
# pg: 64
|
||||
- type: cephfs
|
||||
name: cephfs.cluster_volume.data
|
||||
cephfs_type: data
|
||||
volume: cephfs_cluster_volume
|
||||
- type: cephfs
|
||||
name: cephfs.cluster_volume.meta
|
||||
cephfs_type: meta
|
||||
volume: cephfs_cluster_volume
|
||||
- type: cephfs
|
||||
name: cephfs.cluster_volume1.data
|
||||
pg: 16
|
||||
cephfs_type: data
|
||||
volume: cephfs_cluster_volume1
|
||||
- type: cephfs
|
||||
name: cephfs.cluster_volume1.meta
|
||||
pg: 16
|
||||
cephfs_type: meta
|
||||
volume: cephfs_cluster_volume1
|
||||
|
||||
|
||||
|
|
@ -0,0 +1,37 @@
|
|||
# Generated using xcat2ansible_vars - DO NOT EDIT
|
||||
cluster_networks:
|
||||
campus:
|
||||
network: 192.168.13.0
|
||||
netmask: 255.255.255.0
|
||||
gateway: 192.168.13.254
|
||||
mtu:
|
||||
nameserver:
|
||||
comment: ext
|
||||
cluster:
|
||||
network: 172.22.0.0
|
||||
netmask: 255.255.0.0
|
||||
gateway: 172.22.1.254
|
||||
mtu: 1500
|
||||
nameserver: 172.22.1.220
|
||||
comment: int
|
||||
infiniband:
|
||||
network: 172.23.0.0
|
||||
netmask: 255.255.0.0
|
||||
gateway:
|
||||
mtu:
|
||||
nameserver:
|
||||
comment: int
|
||||
ipmi:
|
||||
network: 172.21.0.0
|
||||
netmask: 255.255.0.0
|
||||
gateway:
|
||||
mtu:
|
||||
nameserver:
|
||||
comment: int
|
||||
storage:
|
||||
network: 172.24.0.0
|
||||
netmask: 255.255.0.0
|
||||
gateway:
|
||||
mtu:
|
||||
nameserver:
|
||||
comment: int
|
||||
|
|
@ -0,0 +1,14 @@
|
|||
ntp:
|
||||
external_hosts:
|
||||
- 0.uk.pool.ntp.org
|
||||
- time.cloudflare.com
|
||||
- gbg1.ntp.se
|
||||
- ntp1.hetzner.de
|
||||
timezone: Europe/London
|
||||
a:
|
||||
b: "stuff"
|
||||
c: "stuff"
|
||||
d:
|
||||
- "stuff"
|
||||
- "stuff"
|
||||
- stuff: "stuff1"
|
||||
|
|
@ -0,0 +1,21 @@
|
|||
roles:
|
||||
all:
|
||||
- network
|
||||
- repos
|
||||
- yum
|
||||
- os_packages
|
||||
- ssh
|
||||
- ntp
|
||||
- users
|
||||
- systemd
|
||||
- rsyslog
|
||||
- audittrail
|
||||
- sysctl
|
||||
- postfix
|
||||
hypervisor:
|
||||
- hypervisor_prep
|
||||
- vxlan
|
||||
- libvirt
|
||||
- podman
|
||||
ntpd:
|
||||
- ntp
|
||||
|
|
@ -0,0 +1,15 @@
|
|||
xcat_ip: 192.168.140.40
|
||||
xcat_groups:
|
||||
- hypervisor
|
||||
- all
|
||||
xcat_nics:
|
||||
- device: ib0
|
||||
ip: 172.23.10.1
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
carrier: ib0
|
||||
- device: ens18
|
||||
ip: 172.22.10.1
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
carrier: ens18
|
||||
|
|
@ -0,0 +1,15 @@
|
|||
xcat_ip: 192.168.140.41
|
||||
xcat_groups:
|
||||
- hypervisor
|
||||
- all
|
||||
xcat_nics:
|
||||
- device: ib0
|
||||
ip: 172.23.10.1
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
carrier: ib0
|
||||
- device: ens18
|
||||
ip: 172.22.10.1
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
carrier: ens18
|
||||
|
|
@ -0,0 +1,15 @@
|
|||
xcat_ip: 192.168.140.42
|
||||
xcat_groups:
|
||||
- hypervisor
|
||||
- all
|
||||
xcat_nics:
|
||||
- device: ib0
|
||||
ip: 172.23.10.1
|
||||
network: infiniband
|
||||
type: Infiniband
|
||||
carrier: ib0
|
||||
- device: ens18
|
||||
ip: 172.22.10.1
|
||||
network: cluster
|
||||
type: Ethernet
|
||||
carrier: ens18
|
||||
|
|
@ -0,0 +1,17 @@
|
|||
[all]
|
||||
qemu01 ansible_ssh_host=192.168.140.41
|
||||
qemu02 ansible_ssh_host=192.168.140.42
|
||||
qemu03 ansible_ssh_host=192.168.140.43
|
||||
|
||||
[hypervisor]
|
||||
qemu01
|
||||
qemu02
|
||||
qemu03
|
||||
|
||||
[test2]
|
||||
qemu01
|
||||
qemu02
|
||||
qemu03
|
||||
|
||||
[ntpd]
|
||||
qemu01
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
collections:
|
||||
- name: community.general
|
||||
|
|
@ -0,0 +1,70 @@
|
|||
# install dependencies, get source, compile, install
|
||||
|
||||
```sh
|
||||
sudo dnf install gcc libffi-devel openssl-libs openssl-devel libuuid-devel
|
||||
mkdir ~/python
|
||||
cd ~/python
|
||||
wget https://www.python.org/ftp/python/3.10.6/Python-3.10.6.tgz
|
||||
tar -xvzf Python-3.10.6.tgz
|
||||
mkdir ~/python/3.10.6 # running directory
|
||||
cd ~/python/Python-3.10.6 # compile directory
|
||||
./configure --prefix /opt/ocf_tseed/python/3.10.6
|
||||
make -j$(nproc)
|
||||
#make clean # if you install more dependencies
|
||||
make -n install
|
||||
make install
|
||||
ll /opt/ocf_tseed/python/3.10.6
|
||||
```
|
||||
|
||||
# create virtual environment with local python and activate
|
||||
|
||||
```sh
|
||||
# create venv
|
||||
/opt/ocf_tseed/python/3.10.6/bin/python3 -m venv --prompt 3.10.6 ~/.venv
|
||||
source ~/.venv/bin/activate
|
||||
python --version # check not system version
|
||||
pip install --upgrade pip
|
||||
```
|
||||
|
||||
# update bashrc
|
||||
|
||||
```sh
|
||||
|
||||
vi ~/.bashrc
|
||||
|
||||
# User specific aliases and functions
|
||||
source $HOME/.venv/bin/activate
|
||||
```
|
||||
|
||||
# manually start/stop venv
|
||||
|
||||
## start
|
||||
```sh
|
||||
source $HOME/.venv/bin/activate
|
||||
```
|
||||
|
||||
## exit venv
|
||||
```sh
|
||||
deactivate
|
||||
```
|
||||
|
||||
# install pip packages
|
||||
|
||||
The following pip packages are required for the playbook: netaddr is essential for the IP filters and the nmcli module; ansible-merge-vars is required for complex/deep variable overlay.
|
||||
|
||||
```sh
|
||||
pip install netaddr ansible-merge-vars jmespath pip-autoremove
|
||||
```
|
||||
|
||||
```sh
|
||||
pip freeze > pip_requirements.txt
|
||||
|
||||
vi pip_requirements.txt
|
||||
|
||||
ansible==6.2.0
|
||||
ansible-core==2.13.3
|
||||
netaddr==0.8.0
|
||||
ansible-merge-vars==5.0.0
|
||||
|
||||
python -m pip install -r pip_requirements.txt
|
||||
```
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
language: python
|
||||
python: "2.7"
|
||||
|
||||
# Use the new container infrastructure
|
||||
sudo: false
|
||||
|
||||
# Install ansible
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- python-pip
|
||||
|
||||
install:
|
||||
# Install ansible
|
||||
- pip install ansible
|
||||
|
||||
# Check ansible version
|
||||
- ansible --version
|
||||
|
||||
# Create ansible.cfg with correct roles_path
|
||||
- printf '[defaults]\nroles_path=../' >ansible.cfg
|
||||
|
||||
script:
|
||||
# Basic role syntax check
|
||||
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
|
||||
|
||||
notifications:
|
||||
webhooks: https://galaxy.ansible.com/api/v1/notifications/
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
Role Name
|
||||
=========
|
||||
|
||||
A brief description of the role goes here.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
|
||||
|
||||
Role Variables
|
||||
--------------
|
||||
|
||||
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
|
||||
|
||||
Example Playbook
|
||||
----------------
|
||||
|
||||
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
|
||||
|
||||
- hosts: servers
|
||||
roles:
|
||||
- { role: username.rolename, x: 42 }
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
BSD
|
||||
|
||||
Author Information
|
||||
------------------
|
||||
|
||||
An optional section for the role authors to include contact information, or a website (HTML is not allowed).
|
||||
|
|
@ -0,0 +1,72 @@
|
|||
autofs:
|
||||
##
|
||||
enabled: true
|
||||
# is this really required? mapfile name is now using config_namespace leaving only topdir (used in template) which seems like it could be static?
|
||||
map_config:
|
||||
topdir: "/-"
|
||||
mapfile: steel
|
||||
timeout: 300
|
||||
##
|
||||
nfs:
|
||||
enabled: true
|
||||
default_version: 3
|
||||
lustre:
|
||||
enabled: false
|
||||
gpfs:
|
||||
enabled: false
|
||||
beegfs:
|
||||
enabled: false
|
||||
exports:
|
||||
- type: nfs
|
||||
export: "/nfs/home"
|
||||
# ansible inventory group or node
|
||||
exporter:
|
||||
- nfs01
|
||||
network: cluster
|
||||
# ansible inventory group or node
|
||||
consumer:
|
||||
- login
|
||||
- stateless
|
||||
- compute
|
||||
mount: /home
|
||||
opts: "rw,async,no_root_squash"
|
||||
- type: nfs
|
||||
export: "/nfs/software"
|
||||
exporter:
|
||||
- nfs01
|
||||
network: cluster
|
||||
consumer:
|
||||
- stateless
|
||||
- login
|
||||
- wlm
|
||||
- compute
|
||||
mount: /opt/software
|
||||
opts: "rw,async,no_root_squash"
|
||||
- type: nfs
|
||||
export: /nfs/slurm
|
||||
exporter:
|
||||
- wlm01
|
||||
network: cluster
|
||||
consumer:
|
||||
- slurm
|
||||
- compute
|
||||
mount: /opt/software
|
||||
opts: "rw,async,no_root_squash"
|
||||
# if you only intend to be a consumer you can omit the exporter field, but the export would then require a full protocol:// prefix
|
||||
# - type: nfs
|
||||
# export: /nfs/slurm111
|
||||
# network: cluster
|
||||
# consumer:
|
||||
# - compute002
|
||||
# mount: /opt/software
|
||||
# opts: "rw,async,no_root_squash"
|
||||
# example lustre - type and consumer fields will be mandatory, other fields will be filesystem type or role specific
|
||||
- type: lustre
|
||||
export: "/lustre/data"
|
||||
network: cluster
|
||||
consumer:
|
||||
- group1
|
||||
- group2
|
||||
- node1
|
||||
- node2
|
||||
mount: /home
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
- name: Restart autofs
|
||||
ansible.builtin.systemd:
|
||||
name: autofs
|
||||
state: restarted
|
||||
enabled: true
|
||||
listen: "Restart autofs"
|
||||
|
|
@ -0,0 +1,52 @@
|
|||
galaxy_info:
|
||||
author: your name
|
||||
description: your role description
|
||||
company: your company (optional)
|
||||
|
||||
# If the issue tracker for your role is not on github, uncomment the
|
||||
# next line and provide a value
|
||||
# issue_tracker_url: http://example.com/issue/tracker
|
||||
|
||||
# Choose a valid license ID from https://spdx.org - some suggested licenses:
|
||||
# - BSD-3-Clause (default)
|
||||
# - MIT
|
||||
# - GPL-2.0-or-later
|
||||
# - GPL-3.0-only
|
||||
# - Apache-2.0
|
||||
# - CC-BY-4.0
|
||||
license: license (GPL-2.0-or-later, MIT, etc)
|
||||
|
||||
min_ansible_version: 2.1
|
||||
|
||||
# If this a Container Enabled role, provide the minimum Ansible Container version.
|
||||
# min_ansible_container_version:
|
||||
|
||||
#
|
||||
# Provide a list of supported platforms, and for each platform a list of versions.
|
||||
# If you don't wish to enumerate all versions for a particular platform, use 'all'.
|
||||
# To view available platforms and versions (or releases), visit:
|
||||
# https://galaxy.ansible.com/api/v1/platforms/
|
||||
#
|
||||
# platforms:
|
||||
# - name: Fedora
|
||||
# versions:
|
||||
# - all
|
||||
# - 25
|
||||
# - name: SomePlatform
|
||||
# versions:
|
||||
# - all
|
||||
# - 1.0
|
||||
# - 7
|
||||
# - 99.99
|
||||
|
||||
galaxy_tags: []
|
||||
# List tags for your role here, one per line. A tag is a keyword that describes
|
||||
# and categorizes the role. Users find roles by searching for tags. Be sure to
|
||||
# remove the '[]' above, if you add tags to this list.
|
||||
#
|
||||
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
|
||||
# Maximum 20 tags per role.
|
||||
|
||||
dependencies: []
|
||||
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
|
||||
# if you add dependencies to this list.
|
||||
|
|
@ -0,0 +1,179 @@
|
|||
# Copyright 2022 OCF Ltd. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# -*- coding: utf-8 -*-
|
||||
# vim: ft=yaml
|
||||
---
|
||||
|
||||
######## parse exports list, filter on target host and tag as exporter/consumer, run role with filtered export list by type
|
||||
#
|
||||
# NOTE
|
||||
# - mandatory fields in each list item in the exports dict are 'consumer' and 'type', all other fields are role specific
|
||||
# - consumers list items can be ansible inventory groups or individual hosts
|
||||
# - this logic will set a host that is both the exporter and the consumer to have the exporter tag
|
||||
# it is not expected that a host will mount its own export
|
||||
# however, if this is required - add another dict to the exports list with only a consumer list and explicitly no exporter list
|
||||
|
||||
# multiple/group exporter entries logic will rely heavily on dynamically named client mount points - this will mean other roles that use mount points will have to do lookups
|
||||
# make this task only take exporter ansible_hostname[0] and no groups, to simplify rendering the autofs config
|
||||
#
|
||||
# - name: build exports list for target host, add tag for exporter or consumer action
|
||||
# set_fact:
|
||||
# _target_exports: "{{ _target_exports | default([]) + ([export_definition]) }}"
|
||||
# loop: "{{ autofs['exports'] }}"
|
||||
# loop_control:
|
||||
# loop_var: entry
|
||||
# vars:
|
||||
# consumer_group_match: "{{ entry['consumer'] | intersect(active_role_groups) }}"
|
||||
# consumer_host_match: "{{ entry['consumer'] | intersect(ansible_hostname) }}"
|
||||
# exporter_group_match: "{{ entry['exporter'] | default ([]) | intersect(active_role_groups) }}"
|
||||
# exporter_host_match: "{{ entry['exporter'] | default ([]) | intersect(ansible_hostname) }}"
|
||||
# toggle_exporter_group: "{{ exporter_group_match | length >0 }}"
|
||||
# toggle_exporter_host: "{{ exporter_host_match | length >0 }}"
|
||||
# toggle_exporter: "{{ ((toggle_exporter_group + toggle_exporter_host) | int >0) | ternary('exporter', 'consumer') }}"
|
||||
# export_definition: "{{ entry | default({}) | combine({ 'action': toggle_exporter }, recursive=True) }}"
|
||||
# when:
|
||||
# - consumer_group_match | length>0 or
|
||||
# consumer_host_match | length>0 or
|
||||
# exporter_group_match | length>0 or
|
||||
# exporter_host_match | length >0
|
||||
|
||||
- name: build exports list for target host, add tag for exporter or consumer action
|
||||
set_fact:
|
||||
_target_exports: "{{ _target_exports | default([]) + ([export_definition]) }}"
|
||||
loop: "{{ autofs['exports'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
consumer_group_match: "{{ entry['consumer'] | intersect(active_role_groups) }}"
|
||||
consumer_host_match: "{{ entry['consumer'] | intersect(ansible_hostname) }}"
|
||||
exporter_host_match: "{{ entry['exporter'] | default ([]) | intersect(ansible_hostname) }}"
|
||||
toggle_exporter_host: "{{ exporter_host_match | length >0 }}"
|
||||
toggle_exporter: "{{ (toggle_exporter_host | int >0) | ternary('exporter', 'consumer') }}"
|
||||
export_definition: "{{ entry | default({}) | combine({ 'action': toggle_exporter }, recursive=True) }}"
|
||||
when:
|
||||
- consumer_group_match | length>0 or
|
||||
consumer_host_match | length>0 or
|
||||
exporter_host_match | length >0
|
||||
|
||||
# - debug:
|
||||
# msg: "{{ _target_exports }}"
|
||||
|
||||
######## run role with filtered export list by type
|
||||
|
||||
- name: Run NFS role
|
||||
include_role:
|
||||
name: nfs
|
||||
vars:
|
||||
exports: "{{ _target_exports | selectattr('type', '==', 'nfs' ) }}"
|
||||
toggle_run: "{{ exports | length >0 }}"
|
||||
when:
|
||||
- autofs['nfs']['enabled'] | bool
|
||||
- toggle_run
|
||||
|
||||
- name: Run Lustre role
|
||||
include_role:
|
||||
name: lustre
|
||||
vars:
|
||||
exports: "{{ _target_exports | selectattr('type', '==', 'lustre' ) }}"
|
||||
toggle_run: "{{ exports | length >0 }}"
|
||||
when:
|
||||
- autofs['lustre']['enabled'] | bool
|
||||
- toggle_run
|
||||
|
||||
- name: Run Spectrum Scale role
|
||||
include_role:
|
||||
name: gpfs
|
||||
vars:
|
||||
exports: "{{ _target_exports | selectattr('type', '==', 'gpfs' ) }}"
|
||||
toggle_run: "{{ exports | length >0 }}"
|
||||
when:
|
||||
- autofs['gpfs']['enabled'] | bool
|
||||
- toggle_run
|
||||
|
||||
- name: Run BeeGFS role
|
||||
include_role:
|
||||
name: beegfs
|
||||
vars:
|
||||
exports: "{{ _target_exports | selectattr('type', '==', 'beegfs' ) }}"
|
||||
toggle_run: "{{ exports | length >0 }}"
|
||||
when:
|
||||
- autofs['beegfs']['enabled'] | bool
|
||||
- toggle_run
|
||||
|
||||
######## configure autofs
|
||||
|
||||
- name: Install and map autofs paths
|
||||
block:
|
||||
- name: Install autofs package
|
||||
package:
|
||||
name: autofs
|
||||
state: latest
|
||||
|
||||
- name: Configure autofs master
|
||||
template:
|
||||
dest: /etc/auto.master
|
||||
src: templates/auto.master.j2
|
||||
mode: 0644
|
||||
trim_blocks: False
|
||||
notify: Restart autofs
|
||||
|
||||
- name: Configure autofs process
|
||||
template:
|
||||
dest: /etc/autofs.conf
|
||||
src: templates/autofs.conf.j2
|
||||
mode: 0644
|
||||
trim_blocks: False
|
||||
notify: Restart autofs
|
||||
|
||||
# bring logic inboard from the jinja template using the exporter/consumer logic
|
||||
# this is tailored for nfs (see vers param), likely there will be multiple replica tasks to account for different mount types, these should all add to the _map_list
|
||||
- name: Build autofs mapping
|
||||
ansible.builtin.set_fact:
|
||||
_map_list: "{{ _map_list | default([]) + [autofs_entry] }}"
|
||||
loop: "{{ _target_exports }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
mount: "{{ entry['mount'] }}"
|
||||
fstype: "{{ entry['type'] }}"
|
||||
vers: "{{ autofs[fstype]['default_version'] }}"
|
||||
exporter: "{{ entry['exporter'] | first }}"
|
||||
export: "{{ entry['export'] }}"
|
||||
autofs_entry: "{{ mount }} -fstype={{ fstype }},vers={{ vers }} {{ exporter }}:{{ export }}"
|
||||
when:
|
||||
- entry['action'] == 'consumer'
|
||||
|
||||
- name: Configure autofs mapping
|
||||
template:
|
||||
dest: /etc/autofs-{{ config_namespace }}.map
|
||||
src: autofs.map.j2
|
||||
mode: 0644
|
||||
trim_blocks: False
|
||||
notify: Restart autofs
|
||||
|
||||
- name: AutoFS configured
|
||||
ansible.builtin.set_fact:
|
||||
autofs_configured: true
|
||||
|
||||
when:
|
||||
- autofs['enabled'] | bool
|
||||
|
||||
# Refresh facts and services facts after these will have been configured by
|
||||
# the autofs role
|
||||
|
||||
- name: Refresh service facts
|
||||
ansible.builtin.service_facts:
|
||||
|
||||
- name: Refresh facts
|
||||
ansible.builtin.setup:
|
||||
|
|
@ -0,0 +1,9 @@
|
|||
#
|
||||
# {{ ansible_managed }}
|
||||
#
|
||||
# Include /etc/auto.master.d/*.autofs
|
||||
# The included files must conform to the format of this file.
|
||||
#
|
||||
+dir:/etc/auto.master.d
|
||||
+auto.master
|
||||
{{ autofs.map_config.topdir }} /etc/autofs-{{ config_namespace }}.map --timeout={{ autofs.timeout }}
|
||||
|
|
@ -0,0 +1,9 @@
|
|||
#
|
||||
# {{ ansible_managed }}
|
||||
#
|
||||
[ autofs ]
|
||||
timeout = {{ autofs.timeout }}
|
||||
browse_mode = no
|
||||
mount_nfs_default_protocol = {{ autofs.nfs.default_version }}
|
||||
[ amd ]
|
||||
dismount_interval = 300
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#
|
||||
# {{ ansible_managed }}
|
||||
#
|
||||
{%- for entry in _map_list %}
|
||||
{{ entry }}
|
||||
{%- endfor %}
|
||||
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
localhost
|
||||
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
- hosts: localhost
|
||||
remote_user: root
|
||||
roles:
|
||||
- prometheus
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# vars file for template_role
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
language: python
|
||||
python: "2.7"
|
||||
|
||||
# Use the new container infrastructure
|
||||
sudo: false
|
||||
|
||||
# Install ansible
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- python-pip
|
||||
|
||||
install:
|
||||
# Install ansible
|
||||
- pip install ansible
|
||||
|
||||
# Check ansible version
|
||||
- ansible --version
|
||||
|
||||
# Create ansible.cfg with correct roles_path
|
||||
- printf '[defaults]\nroles_path=../' >ansible.cfg
|
||||
|
||||
script:
|
||||
# Basic role syntax check
|
||||
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
|
||||
|
||||
notifications:
|
||||
webhooks: https://galaxy.ansible.com/api/v1/notifications/
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
Role Name
|
||||
=========
|
||||
|
||||
A brief description of the role goes here.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
|
||||
|
||||
Role Variables
|
||||
--------------
|
||||
|
||||
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
|
||||
|
||||
Example Playbook
|
||||
----------------
|
||||
|
||||
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
|
||||
|
||||
- hosts: servers
|
||||
roles:
|
||||
- { role: username.rolename, x: 42 }
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
BSD
|
||||
|
||||
Author Information
|
||||
------------------
|
||||
|
||||
An optional section for the role authors to include contact information, or a website (HTML is not allowed).
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# defaults file for roles/role-template
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# handlers file for roles/role-template
|
||||
|
|
@ -0,0 +1,53 @@
|
|||
galaxy_info:
|
||||
author: your name
|
||||
description: your role description
|
||||
company: your company (optional)
|
||||
|
||||
# If the issue tracker for your role is not on github, uncomment the
|
||||
# next line and provide a value
|
||||
# issue_tracker_url: http://example.com/issue/tracker
|
||||
|
||||
# Choose a valid license ID from https://spdx.org - some suggested licenses:
|
||||
# - BSD-3-Clause (default)
|
||||
# - MIT
|
||||
# - GPL-2.0-or-later
|
||||
# - GPL-3.0-only
|
||||
# - Apache-2.0
|
||||
# - CC-BY-4.0
|
||||
license: license (GPL-2.0-or-later, MIT, etc)
|
||||
|
||||
min_ansible_version: 2.9
|
||||
|
||||
# If this a Container Enabled role, provide the minimum Ansible Container version.
|
||||
# min_ansible_container_version:
|
||||
|
||||
#
|
||||
# Provide a list of supported platforms, and for each platform a list of versions.
|
||||
# If you don't wish to enumerate all versions for a particular platform, use 'all'.
|
||||
# To view available platforms and versions (or releases), visit:
|
||||
# https://galaxy.ansible.com/api/v1/platforms/
|
||||
#
|
||||
# platforms:
|
||||
# - name: Fedora
|
||||
# versions:
|
||||
# - all
|
||||
# - 25
|
||||
# - name: SomePlatform
|
||||
# versions:
|
||||
# - all
|
||||
# - 1.0
|
||||
# - 7
|
||||
# - 99.99
|
||||
|
||||
galaxy_tags: []
|
||||
# List tags for your role here, one per line. A tag is a keyword that describes
|
||||
# and categorizes the role. Users find roles by searching for tags. Be sure to
|
||||
# remove the '[]' above, if you add tags to this list.
|
||||
#
|
||||
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
|
||||
# Maximum 20 tags per role.
|
||||
|
||||
dependencies: []
|
||||
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
|
||||
# if you add dependencies to this list.
|
||||
|
||||
|
|
@ -0,0 +1,172 @@
|
|||
---
|
||||
- name: bootstrap first ceph node
|
||||
block:
|
||||
|
||||
- name: create /etc/ceph directory
|
||||
file:
|
||||
path: /etc/ceph
|
||||
state: directory
|
||||
|
||||
- name: check if /etc/ceph/ceph.conf exists
|
||||
stat:
|
||||
path: /etc/ceph/ceph.conf
|
||||
register: cephadm_check_ceph_conf
|
||||
|
||||
- name: bootstrap ceph
|
||||
ansible.builtin.command:
|
||||
cmd: "cephadm bootstrap --mon-ip {{ mon_ip }} --cluster-network {{ cluster_network_range }}"
|
||||
vars:
|
||||
ipv: "ipv4"
|
||||
mon_interface: "{{ vars['hypervisor']['nmcli_con_names']['ceph_public'] }}"
|
||||
mon_ip: "{{ hostvars[inventory_hostname]['ansible_' + mon_interface][ipv].address }}"
|
||||
cluster_network_name: "{{ vars['hypervisor']['nmcli_con_names']['ceph_cluster'] }}"
|
||||
# network: "{{ vars['hypervisor']['cluster_networks'][cluster_network_name]['network'] }}"
|
||||
# netmask: "{{ vars['hypervisor']['cluster_networks'][cluster_network_name]['netmask'] }}"
|
||||
network: "{{ vars[config_namespace]['cluster_networks'][cluster_network_name]['network'] }}"
|
||||
netmask: "{{ vars[config_namespace]['cluster_networks'][cluster_network_name]['netmask'] }}"
|
||||
cluster_network_range: "{{ network }}/{{ (network + '/' + netmask) | ansible.utils.ipaddr('prefix') }}"
|
||||
register: cephadm_bootstrap
|
||||
when: not cephadm_check_ceph_conf.stat.exists
|
||||
|
||||
- name: store SSH pubkey as a variable
|
||||
ansible.builtin.command:
|
||||
cmd: cat /etc/ceph/ceph.pub
|
||||
changed_when:
|
||||
- ceph_rsa_pub.rc is defined
|
||||
- ceph_rsa_pub.rc > 0
|
||||
register: ceph_rsa_pub
|
||||
|
||||
- name: authorize the SSH keypair on all hosts
|
||||
authorized_key:
|
||||
key: "{{ ceph_rsa_pub.stdout_lines[0] }}"
|
||||
user: root
|
||||
state: present
|
||||
loop: "{{ groups['ceph'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
delegate_to: "{{ entry }}"
|
||||
|
||||
- debug:
|
||||
msg: "ceph orch host set-addr {{ inventory_hostname }} {{ mon_ip }}"
|
||||
vars:
|
||||
ipv: "ipv4"
|
||||
mon_interface: "{{ vars['hypervisor']['nmcli_con_names']['ceph_public'] }}"
|
||||
mon_ip: "{{ hostvars[inventory_hostname]['ansible_' + mon_interface][ipv].address }}"
|
||||
|
||||
- name: set host addr in orchestrator
|
||||
ansible.builtin.command:
|
||||
cmd: "ceph orch host set-addr {{ inventory_hostname }} {{ mon_ip }}"
|
||||
changed_when: false
|
||||
vars:
|
||||
ipv: "ipv4"
|
||||
mon_interface: "{{ vars['hypervisor']['nmcli_con_names']['ceph_public'] }}"
|
||||
mon_ip: "{{ hostvars[inventory_hostname]['ansible_' + mon_interface][ipv].address }}"
|
||||
|
||||
- name: add other ceph hosts
|
||||
ansible.builtin.command:
|
||||
cmd: "ceph orch host add {{ host }} {{ mon_ip }}"
|
||||
changed_when: false
|
||||
loop: "{{ groups['ceph'] | difference(inventory_hostname) }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
host: "{{ entry }}"
|
||||
ipv: "ipv4"
|
||||
mon_interface: "{{ vars['hypervisor']['nmcli_con_names']['ceph_public'] }}"
|
||||
mon_ip: "{{ hostvars[host]['ansible_' + mon_interface][ipv].address }}"
|
||||
|
||||
vars:
|
||||
target_host: "{{ groups['ceph'] | first }}"
|
||||
when:
|
||||
- target_host == inventory_hostname
|
||||
- groups['ceph'] | length >0
|
||||
|
||||
# https://github.com/jcmdln/cephadm-playbook
|
||||
|
||||
# "stderr_lines": [
|
||||
# "Verifying podman|docker is present...",
|
||||
# "Verifying lvm2 is present...",
|
||||
# "Verifying time synchronization is in place...",
|
||||
# "Unit chronyd.service is enabled and running",
|
||||
# "Repeating the final host check...",
|
||||
# "podman (/usr/bin/podman) version 4.1.1 is present",
|
||||
# "systemctl is present",
|
||||
# "lvcreate is present",
|
||||
# "Unit chronyd.service is enabled and running",
|
||||
# "Host looks OK",
|
||||
# "Cluster fsid: 00699884-38f2-11ed-9df2-b8975acfd7d3",
|
||||
# "Verifying IP 172.24.0.11 port 3300 ...",
|
||||
# "Verifying IP 172.24.0.11 port 6789 ...",
|
||||
# "Mon IP `172.24.0.11` is in CIDR network `172.24.0.0/16`",
|
||||
# "Pulling container image quay.io/ceph/ceph:v16...",
|
||||
# "Ceph version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
|
||||
# "Extracting ceph user uid/gid from container image...",
|
||||
# "Creating initial keys...",
|
||||
# "Creating initial monmap...",
|
||||
# "Creating mon...",
|
||||
# "Waiting for mon to start...",
|
||||
# "Waiting for mon...",
|
||||
# "mon is available",
|
||||
# "Assimilating anything we can from ceph.conf...",
|
||||
# "Generating new minimal ceph.conf...",
|
||||
# "Restarting the monitor...",
|
||||
# "Setting mon public_network to 172.24.0.0/16",
|
||||
# "Setting cluster_network to 172.25.0.0/24",
|
||||
# "Wrote config to /etc/ceph/ceph.conf",
|
||||
# "Wrote keyring to /etc/ceph/ceph.client.admin.keyring",
|
||||
# "Creating mgr...",
|
||||
# "Verifying port 9283 ...",
|
||||
# "Waiting for mgr to start...",
|
||||
# "Waiting for mgr...",
|
||||
# "mgr not available, waiting (1/15)...",
|
||||
# "mgr not available, waiting (2/15)...",
|
||||
# "mgr not available, waiting (3/15)...",
|
||||
# "mgr not available, waiting (4/15)...",
|
||||
# "mgr not available, waiting (5/15)...",
|
||||
# "mgr not available, waiting (6/15)...",
|
||||
# "mgr not available, waiting (7/15)...",
|
||||
# "mgr is available",
|
||||
# "Enabling cephadm module...",
|
||||
# "Waiting for the mgr to restart...",
|
||||
# "Waiting for mgr epoch 5...",
|
||||
# "mgr epoch 5 is available",
|
||||
# "Setting orchestrator backend to cephadm...",
|
||||
# "Generating ssh key...",
|
||||
# "Wrote public SSH key to /etc/ceph/ceph.pub",
|
||||
# "Adding key to root@localhost authorized_keys...",
|
||||
# "Adding host qemu01...",
|
||||
# "Deploying mon service with default placement...",
|
||||
# "Deploying mgr service with default placement...",
|
||||
# "Deploying crash service with default placement...",
|
||||
# "Deploying prometheus service with default placement...",
|
||||
# "Deploying grafana service with default placement...",
|
||||
# "Deploying node-exporter service with default placement...",
|
||||
# "Deploying alertmanager service with default placement...",
|
||||
# "Enabling the dashboard module...",
|
||||
# "Waiting for the mgr to restart...",
|
||||
# "Waiting for mgr epoch 9...",
|
||||
# "mgr epoch 9 is available",
|
||||
# "Generating a dashboard self-signed certificate...",
|
||||
# "Creating initial admin user...",
|
||||
# "Fetching dashboard port number...",
|
||||
# "Ceph Dashboard is now available at:",
|
||||
# "",
|
||||
# "\t URL: https://qemu01.cluster.local:8443/",
|
||||
# "\t User: admin",
|
||||
# "\tPassword: shm7es74de",
|
||||
# "",
|
||||
# "Enabling client.admin keyring and conf on hosts with \"admin\" label",
|
||||
# "You can access the Ceph CLI with:",
|
||||
# "",
|
||||
# "\tsudo /usr/sbin/cephadm shell --fsid 00699884-38f2-11ed-9df2-b8975acfd7d3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring",
|
||||
# "",
|
||||
# "Please consider enabling telemetry to help improve Ceph:",
|
||||
# "",
|
||||
# "\tceph telemetry on",
|
||||
# "",
|
||||
# "For more information see:",
|
||||
# "",
|
||||
# "\thttps://docs.ceph.com/docs/pacific/mgr/telemetry/",
|
||||
# "",
|
||||
# "Bootstrap complete."
|
||||
# ],
|
||||
|
|
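The captured log above is the tail of a `cephadm bootstrap` run. A minimal sketch of the kind of command that produces output like this, assuming the mon IP, image tag and cluster network shown in the log (the exact flags used here are not recorded in this commit):

```sh
# bootstrap the first mon/mgr on the initial hypervisor
# values taken from the bootstrap log above
cephadm --image quay.io/ceph/ceph:v16 bootstrap \
  --mon-ip 172.24.0.11 \
  --cluster-network 172.25.0.0/24
```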
@ -0,0 +1,2 @@
localhost
@ -0,0 +1,5 @@
---
- hosts: localhost
  remote_user: root
  roles:
    - roles/role-template
@ -0,0 +1,2 @@
---
# vars file for roles/role-template
@ -0,0 +1,29 @@
---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
      - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' >ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
@ -0,0 +1,38 @@
Role Name
=========

A brief description of the role goes here.

Requirements
------------

Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.

Role Variables
--------------

A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.

Dependencies
------------

A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

    - hosts: servers
      roles:
         - { role: username.rolename, x: 42 }

License
-------

BSD

Author Information
------------------

An optional section for the role authors to include contact information, or a website (HTML is not allowed).
@ -0,0 +1,10 @@
---
cephadm_packages:
  cephadm_host_packages:
    - podman
    - epel-release
    - ceph-common
    - cephadm

    # - python3-pip
    # - python3-virtualenv
@ -0,0 +1,2 @@
---
# handlers file for roles/role-template
@ -0,0 +1,53 @@
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)

  # If the issue tracker for your role is not on github, uncomment the
  # next line and provide a value
  # issue_tracker_url: http://example.com/issue/tracker

  # Choose a valid license ID from https://spdx.org - some suggested licenses:
  # - BSD-3-Clause (default)
  # - MIT
  # - GPL-2.0-or-later
  # - GPL-3.0-only
  # - Apache-2.0
  # - CC-BY-4.0
  license: license (GPL-2.0-or-later, MIT, etc)

  min_ansible_version: 2.9

  # If this is a Container Enabled role, provide the minimum Ansible Container version.
  # min_ansible_container_version:

  #
  # Provide a list of supported platforms, and for each platform a list of versions.
  # If you don't wish to enumerate all versions for a particular platform, use 'all'.
  # To view available platforms and versions (or releases), visit:
  # https://galaxy.ansible.com/api/v1/platforms/
  #
  # platforms:
  # - name: Fedora
  #   versions:
  #   - all
  #   - 25
  # - name: SomePlatform
  #   versions:
  #   - all
  #   - 1.0
  #   - 7
  #   - 99.99

galaxy_tags: []
  # List tags for your role here, one per line. A tag is a keyword that describes
  # and categorizes the role. Users find roles by searching for tags. Be sure to
  # remove the '[]' above, if you add tags to this list.
  #
  # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
  # Maximum 20 tags per role.

dependencies: []
  # List your role dependencies here, one per line. Be sure to remove the '[]' above,
  # if you add dependencies to this list.
@ -0,0 +1,108 @@
---
- name: download cephadm
  ansible.builtin.get_url:
    url: https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    dest: ~/
    mode: '0750'

- name: install ceph repo
  ansible.builtin.command:
    cmd: "~/cephadm add-repo --release quincy"

# use repo method instead
# - name: install cephadm
#   ansible.builtin.command:
#     cmd: "~/cephadm install"

# ceph from rhel repos, not latest release
# - name: add ceph repository
#   package:
#     name: "centos-release-ceph-pacific"
#     state: present

- name: install ceph packages
  package:
    name: "{{ cephadm_packages['cephadm_host_packages'] }}"
    state: present

- name: create ssh keypair
  openssh_keypair:
    path: /tmp/cephadm_rsa
    size: 4096
    owner: "{{ lookup('env', 'USER') }}"
  delegate_to: localhost
  run_once: true

- name: store SSH pubkey as a variable
  command: >-
    cat /tmp/cephadm_rsa.pub
  changed_when:
    - cephadm_rsa_pub.rc is defined
    - cephadm_rsa_pub.rc > 0
  delegate_to: localhost
  register: cephadm_rsa_pub
  run_once: true

- name: create ~/.ssh
  ansible.builtin.file:
    path: ~/.ssh
    state: directory
    mode: '0700'

- name: create ~/.ssh/authorized_keys
  ansible.builtin.file:
    path: ~/.ssh/authorized_keys
    state: touch
    mode: '0644'

- name: copy SSH keypair to all hosts
  copy:
    src: /tmp/{{ entry }}
    dest: "~/.ssh/{{ entry }}"
    force: true
    owner: root
    group: root
    mode: '0600'
  loop:
    - cephadm_rsa
    - cephadm_rsa.pub
  loop_control:
    loop_var: entry
  vars:
    file: "{{ entry | regex_replace('cephadm_', 'id_') }}"
  # become: yes
  # become_user: root
  # become_method: sudo

- name: Authorize the SSH keypair on all hosts
  authorized_key:
    key: "{{ cephadm_rsa_pub.stdout_lines[0] }}"
    user: root
    state: present

# - name: Authorize local SSH pub key on all hosts
#   authorized_key:
#     key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
#     comment: ""
#     user: root
#     state: present

- name: add ~/.ssh/config referencing all other ceph hosts
  blockinfile:
    block: |
      {% for host in groups['ceph'] | difference([inventory_hostname]) %}
      Host {{ hostvars[host]['inventory_hostname'] }}
        HostName {{ hostvars[host]['ansible_' + interface][ipv]['address'] }}
        IdentityFile ~/.ssh/cephadm_rsa
        PreferredAuthentications publickey
        User root
        StrictHostKeyChecking accept-new
      {% if not loop.last %}

      {% endif %}
      {% endfor %}
    create: true
    dest: ~/.ssh/config
  vars:
    # use the nmcli connection name as the interface name to find the IP; this relies on the hypervisor_vxlan role creating bridge interfaces rather than physical interfaces such as eth0
    interface: "{{ vars['hypervisor']['nmcli_con_names']['ceph_public'] }}"
    ipv: "ipv4"
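For reference, the blockinfile task above renders one Host stanza per other ceph host. On a three-node setup it would produce something like the following (hostnames and addresses are illustrative, following the qemu01 / 172.24.0.0/16 convention used elsewhere in this document):

```
# BEGIN ANSIBLE MANAGED BLOCK
Host qemu02
  HostName 172.24.0.12
  IdentityFile ~/.ssh/cephadm_rsa
  PreferredAuthentications publickey
  User root
  StrictHostKeyChecking accept-new

Host qemu03
  HostName 172.24.0.13
  IdentityFile ~/.ssh/cephadm_rsa
  PreferredAuthentications publickey
  User root
  StrictHostKeyChecking accept-new
# END ANSIBLE MANAGED BLOCK
```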
@ -0,0 +1,2 @@
localhost
@ -0,0 +1,5 @@
---
- hosts: localhost
  remote_user: root
  roles:
    - roles/role-template
@ -0,0 +1,2 @@
---
# vars file for roles/role-template
@ -0,0 +1,29 @@
---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
      - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' >ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
@ -0,0 +1,38 @@
Role Name
=========

A brief description of the role goes here.

Requirements
------------

Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.

Role Variables
--------------

A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.

Dependencies
------------

A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

    - hosts: servers
      roles:
         - { role: username.rolename, x: 42 }

License
-------

BSD

Author Information
------------------

An optional section for the role authors to include contact information, or a website (HTML is not allowed).
@ -0,0 +1,2 @@
---
# defaults file for roles/role-template
@ -0,0 +1,2 @@
---
# handlers file for roles/role-template
@ -0,0 +1,53 @@
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)

  # If the issue tracker for your role is not on github, uncomment the
  # next line and provide a value
  # issue_tracker_url: http://example.com/issue/tracker

  # Choose a valid license ID from https://spdx.org - some suggested licenses:
  # - BSD-3-Clause (default)
  # - MIT
  # - GPL-2.0-or-later
  # - GPL-3.0-only
  # - Apache-2.0
  # - CC-BY-4.0
  license: license (GPL-2.0-or-later, MIT, etc)

  min_ansible_version: 2.9

  # If this is a Container Enabled role, provide the minimum Ansible Container version.
  # min_ansible_container_version:

  #
  # Provide a list of supported platforms, and for each platform a list of versions.
  # If you don't wish to enumerate all versions for a particular platform, use 'all'.
  # To view available platforms and versions (or releases), visit:
  # https://galaxy.ansible.com/api/v1/platforms/
  #
  # platforms:
  # - name: Fedora
  #   versions:
  #   - all
  #   - 25
  # - name: SomePlatform
  #   versions:
  #   - all
  #   - 1.0
  #   - 7
  #   - 99.99

galaxy_tags: []
  # List tags for your role here, one per line. A tag is a keyword that describes
  # and categorizes the role. Users find roles by searching for tags. Be sure to
  # remove the '[]' above, if you add tags to this list.
  #
  # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
  # Maximum 20 tags per role.

dependencies: []
  # List your role dependencies here, one per line. Be sure to remove the '[]' above,
  # if you add dependencies to this list.
@ -0,0 +1,267 @@
---
######## runtime_facts
# - name: runtime facts
#   ansible.builtin.set_fact:
#     _tmp_service_location: "/root/ceph_service_definition"

# set the networks
# ceph config set global public_network 192.168.101.0/24
# ceph config set global cluster_network 192.168.101.0/24

# default service spec
# https://docs.ceph.com/en/latest/cephadm/services/#updating-service-specifications
# osd explained
# https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/
# https://docs.ceph.com/en/latest/cephadm/services/osd/#drivegroups # great ref for spinning, ssd and nvme as data, db and wal respectively
# ceph orch ls --export
# ceph orch apply -i myservice.yaml [--dry-run] # import excellent
# ceph orch redeploy grafana - this seems very important after changes
#

######## configure ceph and provision ceph services

- name: configure ceph and provision ceph services via the first ceph host
  block:

    - name: set networks, PG autoscaling on, memory autotune on
      ansible.builtin.command:
        cmd: "{{ entry }}"
      loop:
        - "ceph config set global public_network {{ public_network_range }}"
        - "ceph config set global cluster_network {{ cluster_network_range }}"
        - "ceph config set global osd_pool_default_pg_autoscale_mode on"
        - "ceph config set osd osd_memory_target_autotune true"
      loop_control:
        loop_var: entry
      vars:
        public_network_name: "{{ vars['hypervisor']['nmcli_con_names']['ceph_public'] }}"
        public_network: "{{ vars[config_namespace]['cluster_networks'][public_network_name]['network'] }}"
        public_netmask: "{{ vars[config_namespace]['cluster_networks'][public_network_name]['netmask'] }}"
        public_network_range: "{{ public_network }}/{{ (public_network + '/' + public_netmask) | ansible.utils.ipaddr('prefix') }}"
        cluster_network_name: "{{ vars['hypervisor']['nmcli_con_names']['ceph_cluster'] }}"
        cluster_network: "{{ vars[config_namespace]['cluster_networks'][cluster_network_name]['network'] }}"
        cluster_netmask: "{{ vars[config_namespace]['cluster_networks'][cluster_network_name]['netmask'] }}"
        cluster_network_range: "{{ cluster_network }}/{{ (cluster_network + '/' + cluster_netmask) | ansible.utils.ipaddr('prefix') }}"

    - name: apply ceph service labels to hosts
      ansible.builtin.command:
        cmd: "ceph orch host label add {{ host }} {{ label }}"
      with_subelements:
        - "{{ hypervisor['ceph_service_placement'] }}"
        - labels
      loop_control:
        loop_var: entry
      vars:
        host: "{{ entry[0]['host'] }}"
        label: "{{ entry[1] }}"

    - name: create yaml ceph service definition fact
      set_fact:
        _ceph_service_definition: "{{ _ceph_service_definition | default() + '---\n' + content }}"
      loop: "{{ hypervisor['ceph_service_spec'] }}"
      loop_control:
        loop_var: entry
      vars:
        content: "{{ entry | to_nice_yaml(indent=2,sort_keys=False) }}"
      when:
        - not entry['service_type'] == 'nfs'

    - name: create cephadm service spec file
      copy:
        content: "{{ _ceph_service_definition }}"
        dest: "/root/ceph_service_spec.yml"
        force: yes
        validate: "ceph orch apply -i %s --dry-run"
      register: _ceph_service_definition_check
      ignore_errors: yes

    - name: stop ceph deployment where services file does not validate
      meta: end_play
      when:
        - _ceph_service_definition_check['exit_status'] is defined and not _ceph_service_definition_check['exit_status'] == 0

    - name: apply ceph service spec
      ansible.builtin.command:
        cmd: "ceph orch apply -i /root/ceph_service_spec.yml"
      register: _apply_ceph_service_spec

    # - debug:
    #     msg:
    #       - "{{ _apply_ceph_service_spec }}"

    - name: wait for OSD provision
      ansible.builtin.command:
        cmd: "ceph orch ls -f json"
      register: _ceph_service
      until: osd_running == osd_count
      retries: 12
      delay: 10
      vars:
        osd_service: "{{ _ceph_service['stdout'] | from_json | selectattr('service_type', '==', 'osd') | first }}"
        osd_count: "{{ osd_service['status']['size'] | int }}"
        osd_running: "{{ osd_service['status']['running'] | int }}"

    - name: query ceph osds
      ansible.builtin.command:
        cmd: "ceph osd df -f json"
      register: _ceph_osd_info

    - name: determine if too many placement groups are being requested
      debug:
        msg:
          - "too many placement groups are being requested, consider adding more OSDs, provisioning fewer pools or setting placement groups manually"
          - "available placement groups: {{ available_pg }}"
          - "requested placement groups: {{ requested_pg }}"
      vars:
        pg_per_osd: 250
        pool_default_pg: 32
        device_health_metrics_pg: 1
        osd_count: "{{ (_ceph_osd_info['stdout'] | from_json)['nodes'] | length }}"
        rgw_pg: "{{ hypervisor['ceph_service_spec'] | selectattr('service_type', '==', 'rgw') | list | length * 104 }}"
        nfs_pg: "{{ hypervisor['ceph_service_spec'] | selectattr('service_type', '==', 'nfs') | list | length * 1 }}"
        static_pg_allocation: "{{ hypervisor['ceph_pools'] | selectattr('pg', 'defined') | map(attribute='pg') | sum }}"
        default_pg_allocation: "{{ hypervisor['ceph_pools'] | selectattr('pg', 'undefined') | list | length * 32 }}"
        available_pg: "{{ (osd_count | int) * pg_per_osd - device_health_metrics_pg }}"
        requested_pg: "{{ ((static_pg_allocation | int) + (default_pg_allocation | int) + (rgw_pg | int) + (nfs_pg | int) + device_health_metrics_pg) * (osd_count | int) }}"
        too_many_pg: "{{ requested_pg | int > available_pg | int }}"
      register: _too_many_pg
      when:
        - too_many_pg

    - name: stop ceph deployment where too many placement groups are being requested
      meta: end_play
      when:
        - not (_too_many_pg['skipped'] | default(false))

    # this will skip pools that already exist
    - name: create pools
      ansible.builtin.command:
        cmd: "ceph osd pool create {{ name }} {{ pg }}"
      loop: "{{ hypervisor['ceph_pools'] }}"
      loop_control:
        loop_var: entry
      vars:
        name: "{{ entry['name'] }}"
        pg: "{{ entry['pg'] | default() | int }}"

    # this will skip volumes that already exist
    - name: create cephfs volumes
      ansible.builtin.command:
        cmd: "ceph fs new {{ cephfs_volume_name }} {{ cephfs_meta_pool }} {{ cephfs_data_pool }}"
      loop: "{{ cephfs_volumes }}"
      loop_control:
        loop_var: entry
      vars:
        cephfs_volumes: "{{ hypervisor['ceph_pools'] | selectattr('type', '==', 'cephfs') | map(attribute='volume') | unique | list }}"
        cephfs_volume_name: "{{ entry }}"
        cephfs_data_pool: "{{ hypervisor['ceph_pools'] | selectattr('type', '==', 'cephfs') | selectattr('volume', '==', entry) | selectattr('cephfs_type', '==', 'data') | map(attribute='name') | first | default() }}"
        cephfs_meta_pool: "{{ hypervisor['ceph_pools'] | selectattr('type', '==', 'cephfs') | selectattr('volume', '==', entry) | selectattr('cephfs_type', '==', 'meta') | map(attribute='name') | first | default() }}"
        cephfs_mds_service_present: "{{ hypervisor['ceph_service_spec'] | selectattr('service_type', '==', 'mds') | list | length > 0 }}"
      when:
        - cephfs_mds_service_present
        - cephfs_data_pool | length > 0 and cephfs_meta_pool | length > 0

    # rgw multisite config is required here

    # if nfs service(s) exist in hypervisor['ceph_service_spec'], provision the service first to ensure the ceph_service_spec can validate the nfs service_type entry
    # the nfs config file requires an rgw pool or cephfs namespace (volume)
    - name: provision ceph nfs
      block:

        - name: deploy nfs service
          ansible.builtin.command:
            cmd: "ceph nfs cluster create {{ nfs_service }}"
          loop: "{{ hypervisor['ceph_service_spec'] | selectattr('service_type', '==', 'nfs') | list }}"
          loop_control:
            loop_var: entry
          vars:
            nfs_service: "{{ entry['service_id'] }}"

        - name: create yaml ceph nfs service definition fact
          set_fact:
            _ceph_nfs_service_definition: "{{ _ceph_nfs_service_definition | default() + '---\n' + content }}"
          loop: "{{ hypervisor['ceph_service_spec'] }}"
          loop_control:
            loop_var: entry
          vars:
            content: "{{ entry | to_nice_yaml(indent=2,sort_keys=False) }}"
          when:
            - entry['service_type'] == 'nfs'

        - name: create cephadm nfs service spec file
          copy:
            content: "{{ _ceph_nfs_service_definition }}"
            dest: "/root/ceph_nfs_service_spec.yml"
            force: yes
            validate: "ceph orch apply -i %s --dry-run"
          register: _ceph_nfs_service_definition_check
          ignore_errors: yes

        - name: stop ceph deployment where nfs services file does not validate
          meta: end_play
          when:
            - _ceph_nfs_service_definition_check['exit_status'] is defined and not _ceph_nfs_service_definition_check['exit_status'] == 0

        - name: apply ceph nfs service spec
          ansible.builtin.command:
            cmd: "ceph orch apply -i /root/ceph_nfs_service_spec.yml"
          register: _apply_ceph_service_spec

        # some kind of nfs config required here
        # https://docs.ceph.com/en/quincy/mgr/nfs/
        # https://docs.ceph.com/en/latest/mgr/nfs/#mgr-nfs
        # ceph nfs cluster config set <cluster_id> -i <config_file>
        # https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf

      vars:
        nfs_service_present: "{{ hypervisor['ceph_service_spec'] | selectattr('service_type', '==', 'nfs') | list | length > 0 }}"
      when:
        - nfs_service_present

    # ceph fs volume rm cephfs_cluster_volume1 --yes-i-really-mean-it
    # ceph fs volume rm cephfs_cluster_volume --yes-i-really-mean-it
    # ceph orch rm nfs.ganesha
    # ceph orch rm rgw.object
    # ceph orch rm mds.cephfs
    # ceph orch ls
    # ceph osd lspools
    # ceph osd pool rm .nfs .nfs --yes-i-really-really-mean-it
    # ceph osd pool rm .rgw.root .rgw.root --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
    # ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
    # ceph osd pool rm vms --yes-i-really-really-mean-it

    # set dashboard password here
    # echo Password0 > password.txt
    # ceph dashboard ac-user-set-password admin -i password.txt
    # rm -f password.txt

    - name: create dashboard password file
      copy:
        content: "{{ hypervisor['ceph_dash_admin_password'] }}"
        dest: "/root/dashboard_admin_password.txt"
        force: yes

    - name: set dashboard admin password
      ansible.builtin.command:
        cmd: "ceph dashboard ac-user-set-password admin -i /root/dashboard_admin_password.txt"

    - name: remove dashboard password file
      ansible.builtin.file:
        path: "/root/dashboard_admin_password.txt"
        state: absent

  vars:
    target_host: "{{ groups['ceph'] | first }}"
  when:
    - groups['ceph'] | length > 0
    - target_host == inventory_hostname

# you need to split rgw and nfs into specialized services as the logic is different
# - cephfs configuration - need to determine if there is service type mds in ceph_service_spec
# - nfs - check if mds is there - not going to use rgw
# - rgw - add multisite later

# do pools - vms / cephfs data + meta cephfs.cluster_volume.data cephfs.cluster_volume.meta /
# do cephfs + nfs
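For context, each entry in hypervisor['ceph_service_spec'] is serialised with to_nice_yaml into the spec file that `ceph orch apply -i` consumes. A minimal sketch of one rendered document (shape per the cephadm service-spec docs linked in the comments above; the service id and label here are illustrative, not taken from this repo):

```yml
---
service_type: mds
service_id: cephfs_cluster_volume
placement:
  label: mds
```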
@ -0,0 +1,2 @@
localhost
@ -0,0 +1,5 @@
---
- hosts: localhost
  remote_user: root
  roles:
    - roles/role-template
@ -0,0 +1,2 @@
---
# vars file for roles/role-template
@ -0,0 +1,29 @@
---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
      - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' >ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
@ -0,0 +1,110 @@
Role Name
=========

This role configures firewalld.

Requirements
------------

The role handles overlay configuration merged from inventory/group_vars/firewalld.yml.
To merge configurations there is a dependency on the merge_vars role, which facilitates deep merging of dictionaries with nested lists; the merge_vars role in turn depends on the third-party plugin ansible_merge_vars.

Role Variables
--------------

This role accepts custom configuration from inventory/group_vars/firewalld.yml.
As the role creates dynamic firewall rulesets, read the comments in the following files to understand the behaviour:
- firewalld/defaults/main.yml
- inventory/group_vars/firewalld.yml (as listed below)

An example of custom rulesets injected at inventory/group_vars/firewalld.yml follows:

```yml
# This is an example to demonstrate
# - behaviour of the role
# - how to add overlay/merged custom configuration items to group_vars inventory/group_vars/firewalld.yml

firewalld:
  enable: true

  # create new ruleset
  # - each xcat_network with a corresponding entry in inventory/networks.yml will have an ipset automatically generated
  # - each service with an xcat_networks entry will assign the service to a zone of that name, the zone accepts ingress from the corresponding ipset
  # - xcat_groups will assign the ruleset to hosts in groups
  #
  # this ruleset applies inbound ftp to cluster and infiniband zones on hosts in groups all/compute/slurm/ansible
  firewalld_services:
    - name: ftp
      short: "FTP"
      description: "FTP service"
      port:
        - port: 21
          protocol: tcp
      xcat_groups:
        - compute
        - all
        - slurm
        - ansible
      xcat_networks:
        - cluster
        - infiniband

    # create new ruleset with a custom zone
    # - the xcat_networks entry zabbix is not present in inventory/networks.yml, a new zone zabbix will be created
    # - the zone requires an ipset named zabbix to add an ingress source
    - name: zabbix
      short: "Zabbix"
      description: "Zabbix Ports"
      port:
        - port: 10050
          protocol: tcp
        - port: 10051
          protocol: tcp
      xcat_groups:
        - all
      xcat_networks:
        - zabbix

  # create new ipset
  # - this ipset is for the corresponding auto-generated zabbix zone required by the zabbix service (ruleset)
  firewalld_ipsets:
    zabbix:
      short: zabbix
      description: zabbix ipset
      type: 'hash:ip'
      targets:
        - 172.22.1.220/32
        # - 172.22.1.0/24
        # - 10.0.10.0/16

  # create new zone
  # - this zone example has an embedded ruleset to allow ANY inbound from an IP range, no service or ipset is required
  firewalld_zones:
    - name: mgt
      short: "MGT"
      description: "management host"
      target: "ACCEPT"
      source:
        - address: 172.22.1.220/32

  # network <-> network allow all rule
  # - ipset cluster has a corresponding inventory/group_vars/network.yml entry and is thus auto generated and populated with the source address range
  # - ipsets can only be bound to a single zone; to use this form of rule, 'cluster' must be removed from the 'xcat_networks:' list of any service that includes it
  #
  # - name: cluster2cluster
  #   short: "cluster2cluster"
  #   description: "allow ingress from cluster network"
  #   target: "ACCEPT"
  #   source:
  #     - ipset: cluster
```

License
-------

BSD

Author Information
------------------

An optional section for the role authors to include contact information, or a website (HTML is not allowed).
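Once the role has run, the generated objects can be sanity-checked on a host with standard firewall-cmd queries (a quick sketch; the zabbix names assume the example configuration above was applied):

```sh
firewall-cmd --state              # confirm firewalld is running
firewall-cmd --get-zones          # generated zones should include the xcat network names
firewall-cmd --info-zone=zabbix   # show services/sources bound to the zabbix zone
firewall-cmd --info-ipset=zabbix  # show the addresses populated into the zabbix ipset
```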
@ -0,0 +1,213 @@
|
|||
# Copyright 2022 OCF Ltd. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# -*- coding: utf-8 -*-
|
||||
# vim: ft=yaml
|
||||
---
|
||||
|
||||
firewalld:
|
||||
## Toggle firewalld as installed and started
|
||||
enable: true
|
||||
|
||||
## INI entries to overide
|
||||
firewalld_conf_file: /etc/firewalld/firewalld.conf
|
||||
firewalld_conf:
|
||||
DefaultZone: "public"
|
||||
LogDenied: "off"
|
||||
|
||||
## Configure permanent firewalld services (xml config file)
|
||||
firewalld_services:
|
||||
- name: ssh
|
||||
short: "SSH"
|
||||
description: "SSH service"
|
||||
port:
|
||||
- port: 22
|
||||
protocol: tcp
|
||||
xcat_groups:
|
||||
- all
|
||||
xcat_networks:
|
||||
- campus
|
||||
- cluster
|
||||
- infiniband
|
||||
- ipmi
|
||||
- lustre
|
||||
- name: dhcpd
|
||||
short: "dhcp"
|
||||
description: "DHCP Service"
|
||||
port:
|
||||
- port: 7911
|
||||
protocol: tcp
|
||||
xcat_groups:
|
||||
- compute
|
||||
- all
|
||||
xcat_networks:
|
||||
- cluster
|
||||
# #
|
||||
# # Sample rulesets
|
||||
# #
|
||||
# - name: zabbix
|
||||
# short: "Zabbix"
|
||||
# description: "Zabbix Ports"
|
||||
# port:
|
||||
# - port: 10050
|
||||
# protocol: tcp
|
||||
# - port: 10051
|
||||
# protocol: tcp
|
||||
# xcat_groups:
|
||||
# - compute
|
||||
# - all
|
||||
# - slurm
|
||||
# - ansible
|
||||
# xcat_networks:
|
||||
# - cluster
|
||||
# - infiniband
|
||||
# - name: bacula
|
||||
# short: "Bacula"
|
||||
# description: "Bacula Client"
|
||||
# port:
|
||||
# - port: 9102
|
||||
# protocol: tcp
|
||||
# xcat_groups:
|
||||
# - compute
|
||||
# - all
|
||||
# - slurm
|
||||
# - ansible
|
||||
# xcat_networks:
|
||||
# - cluster
|
||||
# - infiniband
|
||||
# - name: ftp
|
||||
# short: "FTP"
|
||||
# description: "FTP Client/Server"
|
||||
# port:
|
||||
# - port: 21
|
||||
# protocol: tcp
|
||||
# xcat_groups:
|
||||
# - compute
|
||||
# - all
|
||||
# - slurm
|
||||
# - ansible
|
||||
# xcat_networks:
|
||||
# - cluster
|
||||
# - infiniband
|
||||
# - name: xCAT
|
||||
# short: "xcatd"
|
||||
# description: "xCAT Services"
|
||||
# port:
|
||||
# - port: 3001
|
||||
# protocol: tcp
|
||||
# - port: 3002
|
||||
# protocol: tcp
|
||||
# - port: 3003
|
||||
# protocol: tcp
|
||||
# - port: 623
|
||||
# protocol: udp
|
||||
# xcat_groups:
|
||||
# - compute
|
||||
# - all
|
||||
# - slurm
|
||||
# - ansible
|
||||
# xcat_networks:
|
||||
# - cluster
|
||||
# - infiniband
|
||||
# - name: rsyslogd
|
||||
# short: "rsyslogd"
|
||||
# description: "Rsyslog Service"
|
||||
# port:
|
||||
# - port: 514
|
||||
# protocol: tcp
|
||||
# xcat_groups:
|
||||
# - compute
|
||||
# - all
|
||||
# - slurm
|
||||
# - ansible
|
||||
# xcat_networks:
|
||||
# - cluster
|
||||
# - infiniband
|
||||
# - name: named
|
||||
# short: "named"
|
||||
# description: "DNS Service"
|
||||
# port:
|
||||
# - port: 53
|
||||
# protocol: tcp
|
||||
# - port: 953
|
||||
# protocol: tcp
|
||||
# xcat_groups:
|
||||
# - compute
|
||||
# - all
|
||||
# - slurm
|
||||
# - ansible
|
||||
# xcat_networks:
|
||||
# - cluster
|
||||
# - infiniband
|
||||
|
||||
## Configure permanent firewalld zones (xml config file)
|
||||
firewalld_zones:
|
||||
#
|
||||
# network <-> network allow all rules (ipset cluster is auto generated from xcat_networks)
|
||||
# ipsets can only be bound to a single zone, to use this format of rule, any service with a 'cluster' entry in 'xcat_networks:' list requires 'cluster' to be removed.
|
||||
#
|
||||
# - name: cluster2cluster
|
||||
# short: "cluster2cluster"
|
||||
# description: "allow ingress from cluster network"
|
||||
# target: "ACCEPT"
|
||||
# source:
|
||||
# - ipset: cluster
|
||||
#
|
||||
# inbuilt safety rule
|
||||
#
|
||||
- name: public
|
||||
short: "Public"
|
||||
description: "For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted."
|
||||
service:
|
||||
- name: "ssh"
|
||||
#
|
||||
# accept any traffic from management hosts
|
||||
#
|
||||
# - name: mgt
|
||||
# short: "MGT"
|
||||
# description: "Trust my management hosts"
|
||||
# target: "ACCEPT"
|
||||
# source:
|
||||
# - address: 172.22.1.220/32
|
||||
# - address: 172.22.1.221/32
|
||||
|
||||
## Configure permanent firewalld ipsets (xml config file)
|
||||
firewalld_ipsets:
|
||||
fail2ban-ssh:
|
||||
short: fail2ban-ssh
|
||||
description: fail2ban-ssh ipset
|
||||
type: 'hash:ip'
|
||||
options:
|
||||
maxelem:
|
||||
- 65536
|
||||
timeout:
|
||||
- 300
|
||||
hashsize:
|
||||
- 1024
|
||||
targets:
|
||||
- 10.0.0.1
|
||||
# fail2ban-ssh-ipv6:
|
||||
# short: fail2ban-ssh-ipv6
|
||||
# description: fail2ban-ssh-ipv6 ipset
|
||||
# type: 'hash:ip'
|
||||
# options:
|
||||
# family:
|
||||
# - inet6
|
||||
# maxelem:
|
||||
# - 65536
|
||||
# timeout:
|
||||
# - 300
|
||||
# hashsize:
|
||||
# - 1024
|
||||
# targets:
|
||||
# - 2a01::1
|
||||
|
|
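With the firewalld_conf defaults above, the ini_file task in this role's tasks/main.yml (with no_extra_spaces enabled and no section) would leave /etc/firewalld/firewalld.conf containing entries in the bare `key=value` form firewalld expects, for example:

```
DefaultZone=public
LogDenied=off
```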
@ -0,0 +1,66 @@
|
|||
# This is an example to demonstrate
|
||||
# - behaviour of the role
|
||||
# - how to add overlay/merged custom configuration items to group_vars inventory/group_vars/firewalld.xml
|
||||
|
||||
firewalld:
|
||||
enable: true
|
||||
|
||||
# create new ruleset
|
||||
# - each xcat_network with a corresponding entry in inventroy/networks.yml will have an ipset automatically generated
|
||||
# - each xcat_network entry will assign service to a zone of that name, the zone accepts ingress from the corresponding ipset
|
||||
# - xcat_groups will assign the ruleset to hosts in groups
|
||||
#
|
||||
# this ruleset applies inbound ftp to cluster and infiniband zones on hosts in groups all/compute/slurm/ansible
|
||||
firewalld_services:
|
||||
- name: ftp
|
||||
short: "FTP"
|
||||
description: "FTP service"
|
||||
port:
|
||||
- port: 21
|
||||
protocol: tcp
|
||||
xcat_groups:
|
||||
- compute
|
||||
- all
|
||||
- slurm
|
||||
- ansible
|
||||
xcat_networks:
|
||||
- cluster
|
||||
- infiniband
|
||||
|
||||
# create new ruleset with a custom zone
|
||||
# - the xcat_networks entry zabbix is not present in inventroy/networks.yml, a new zone zabbix will be created
|
||||
# - the zone requires an ipset named zabbix to add an ingress source
|
||||
- name: zabbix
|
||||
short: "Zabbix"
|
||||
description: "Zabbix Ports"
|
||||
port:
|
||||
- port: 10050
|
||||
protocol: tcp
|
||||
- port: 10051
|
||||
protocol: tcp
|
||||
xcat_groups:
|
||||
- all
|
||||
xcat_networks:
|
||||
- zabbix
|
||||
|
||||
# create new ipset
|
||||
# - this ipset is for the corresponding auto-generated zabbix zone required by the zabbix service(ruleset)
|
||||
firewalld_ipsets:
|
||||
zabbix:
|
||||
short: zabbix
|
||||
description: zabbix ipset
|
||||
type: 'hash:ip'
|
||||
targets:
|
||||
- 172.22.1.220/32
|
||||
# - 172.22.1.0/24
|
||||
# - 10.0.10.0/16
|
||||
|
||||
# create new zone
|
||||
# - this zone has an embedded ruleset to allow ANY inbound from IP range, no ipset is required
|
||||
firewalld_zones:
|
||||
- name: mgt
|
||||
short: "MGT"
|
||||
description: "management host"
|
||||
target: "ACCEPT"
|
||||
source:
|
||||
- address: 172.22.1.220/32
|
||||
|
|
@ -0,0 +1,23 @@
# Copyright 2022 OCF Ltd. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -*- coding: utf-8 -*-
# vim: ft=yaml
---

- name: reload/enable firewalld
  ansible.builtin.systemd:
    name: firewalld
    state: reloaded
    enabled: true
  listen: "reload_firewalld"
@ -0,0 +1,52 @@
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)

  # If the issue tracker for your role is not on github, uncomment the
  # next line and provide a value
  # issue_tracker_url: http://example.com/issue/tracker

  # Choose a valid license ID from https://spdx.org - some suggested licenses:
  # - BSD-3-Clause (default)
  # - MIT
  # - GPL-2.0-or-later
  # - GPL-3.0-only
  # - Apache-2.0
  # - CC-BY-4.0
  license: license (GPL-2.0-or-later, MIT, etc)

  min_ansible_version: 2.1

  # If this is a Container Enabled role, provide the minimum Ansible Container version.
  # min_ansible_container_version:

  #
  # Provide a list of supported platforms, and for each platform a list of versions.
  # If you don't wish to enumerate all versions for a particular platform, use 'all'.
  # To view available platforms and versions (or releases), visit:
  # https://galaxy.ansible.com/api/v1/platforms/
  #
  # platforms:
  # - name: Fedora
  #   versions:
  #   - all
  #   - 25
  # - name: SomePlatform
  #   versions:
  #   - all
  #   - 1.0
  #   - 7
  #   - 99.99

galaxy_tags: []
  # List tags for your role here, one per line. A tag is a keyword that describes
  # and categorizes the role. Users find roles by searching for tags. Be sure to
  # remove the '[]' above, if you add tags to this list.
  #
  # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
  # Maximum 20 tags per role.

dependencies: []
  # List your role dependencies here, one per line. Be sure to remove the '[]' above,
  # if you add dependencies to this list.
@ -0,0 +1,493 @@
# Copyright 2022 OCF Ltd. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -*- coding: utf-8 -*-
# vim: ft=yaml
---

######## inherit custom variables from inventory/host_vars/firewalld.yml

- name: merge custom vars
  block:

    - name: set role variable sources
      set_fact:
        role_info:
          role_defaults_file: "{{ role_path }}/defaults/main.yml"
          role_override_file: "{{ ansible_inventory_sources[0] | dirname }}/group_vars/{{ role_name }}.yml"
          vars_return: "placeholder"

    - set_fact:
        source_role: "{{ role_name }}"

    - name: run merge_vars role
      include_role:
        name: "merge_vars"
      vars:
        a_config_file: "{{ role_info['role_defaults_file'] }}"
        b_config_file: "{{ role_info['role_override_file'] }}"
        calling_role: "{{ source_role }}"

    - name: merge custom vars to vars[]
      set_fact:
        { "{{ entry }}": "{{ role_info['vars_return'][entry] }}" }
      loop: "{{ role_info['vars_return'] | list }}"
      loop_control:
        loop_var: entry
      when:
        - not role_info['vars_return'] == 'placeholder'

  delegate_to: localhost

######## setup packages

- name: update package facts
  ansible.builtin.package_facts:
    manager: auto
    strategy: all
  when: ansible_facts['packages'] is not defined

- name: install firewalld packages
  block:

    - name: install firewalld
      ansible.builtin.package:
        name:
          - firewalld
          - ipset
          - nftables
        state: latest

    - name: Install python-firewall
      package:
        name: python-firewall
        state: present
      when:
        - ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] == '7'

    - name: Install python3-firewall
      package:
        name: python3-firewall
        state: present
      when:
        - ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] == '8'

  when:
    - vars['firewalld']['enable'] | bool
    - ansible_facts['packages']['firewalld'] is not defined or
      ansible_facts['packages']['ipset'] is not defined or
      ansible_facts['packages']['nftables'] is not defined

- name: update service facts
  ansible.builtin.service_facts:

######## disable firewall

- name: disable firewalld
  ansible.builtin.systemd:
    name: firewalld
    enabled: no
    state: stopped
  when:
    - ansible_facts['services']['firewalld.service'] is defined
    - not vars['firewalld']['enable'] | bool

######## render firewalld config file

- name: update INI entries in firewalld config
  ini_file:
    path: "{{ firewalld['firewalld_conf_file'] }}"
    no_extra_spaces: true
    # write to root of document not under a section
    section: null
    option: "{{ entry.key }}"
    value: "{{ entry.value }}"
  loop: "{{ firewalld['firewalld_conf'] | dict2items }}"
  loop_control:
    loop_var: entry
  notify: reload_firewalld
  when:
    - firewalld['enable'] | bool

######## map services to zones and networks

# map host 'xcat_groups' (hostvars[ansible_hostname]) to services 'xcat_groups' (vars['firewalld']['firewalld_services'] list item ['xcat_groups'])
# determine if the service (firewall rule) is applicable to the host

- name: map services to zones
  block:

    - name: find firewalld services to be applied to each xcat_groups that this host is a member of
      set_fact:
        target_services: "{{ target_services | default([]) + [service] }}"
      when: xcat_group in hostvars[ansible_hostname]['xcat_groups']
      with_subelements:
        - "{{ firewalld['firewalld_services'] }}"
        - xcat_groups
        - skip_missing: True
      loop_control:
        loop_var: entry
      vars:
        xcat_group: "{{ entry.1 }}"
        service: "{{ entry.0 }}"

    # - debug:
    #     msg:
    #       - "{{ target_services }}"

    - name: remove duplicate service entries where host in multiple xcat_groups
      set_fact:
        target_services: "{{ target_services | unique }}"

  when:
    - firewalld['enable'] | bool

######## configure ipsets

- name: configure ipsets
  block:

    - name: list existing ipsets in /etc/firewalld/ipsets
      find:
        paths: "/etc/firewalld/ipsets/"
        patterns: "*.xml"
        recurse: no
        file_type: file
      register: ipsets_files_all

    - name: exclude ipsets managed by ansible
      set_fact:
        ipsets_files: "{{ ipsets_files | default([]) + [file_path] }}"
      loop: "{{ ipsets_files_all['files'] }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry['path'] }}"
        file_name: "{{ entry['path'].split('/')[-1].split('.')[0] }}"
      when:
        - ipsets_files_all['files'] | length > 0
        - file_name not in firewalld['firewalld_ipsets']
        - file_name not in vars['steel']['xcat_networks'] | list

    - name: disable ipsets not managed by ansible
      copy:
        remote_src: yes
        src: "{{ file_path }}"
        dest: "{{ new_file_path }}"
      loop: "{{ ipsets_files }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry }}"
        new_file_path: "{{ entry.split('.')[0] }}.ansible_disabled"
      register: ipsets_disabled
      notify: reload_firewalld
      when:
        - ipsets_files is defined
        - ipsets_files | length > 0

    - name: remove original ipset files that were disabled
      file:
        path: "{{ file_path }}"
        state: absent
      loop: "{{ ipsets_files }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry }}"
      when:
        - not (ipsets_disabled['skipped'] | default(false))

    - name: generate ipsets from steel['xcat_networks']
      set_fact:
        generated_ipsets: "{{ generated_ipsets | default({}) | combine({ 'firewalld_ipsets': { network_name: { 'short': network_name, 'description': description, 'type': 'hash:ip', 'targets': [network_cidr] } } }, recursive=True) }}"
        # generated_ipsets: "{{ generated_ipsets | default({}) | combine({ 'firewalld_ipsets': { network_name: { 'short': network_name, 'description': description, 'type': 'hash:ip', 'options': { 'maxelem': [65536], 'timeout': [300], 'hashsize': [1024] }, 'targets': [network_cidr] } } }, recursive=True) }}" # example with additional options
      loop: "{{ steel['xcat_networks'] | dict2items }}"
      loop_control:
        loop_var: entry
      vars:
        network_name: "{{ entry.key }}"
        network_range: "{{ entry.value['network'] }}"
        network_mask: "{{ entry.value['netmask'] }}"
        network_cidr: "{{ network_range }}/{{ (network_range + '/' + network_mask) | ansible.utils.ipaddr('prefix') }}"
        description: "{{ network_name }} ipset"

    # required where we have provided custom ipsets
    - name: merge generated ipsets
      set_fact:
        firewalld: "{{ firewalld | default({}) | combine( generated_ipsets, recursive=True) }}"
      when:
        - generated_ipsets is defined

    - name: render firewalld ipsets
      template:
        src: "{{ role_path }}/templates/ipset_template.xml.j2"
        dest: /etc/firewalld/ipsets/{{ entry }}.xml
      loop: "{{ firewalld['firewalld_ipsets'] | list }}"
      loop_control:
        loop_var: entry
      vars:
        short: "{{ firewalld['firewalld_ipsets'][entry]['short'] }}"
        description: "{{ firewalld['firewalld_ipsets'][entry]['description'] }}"
        type: "{{ firewalld['firewalld_ipsets'][entry]['type'] }}"
        options: "{{ firewalld['firewalld_ipsets'][entry]['options'] | default({}) }}"
        targets: "{{ firewalld['firewalld_ipsets'][entry]['targets'] }}"
      notify: reload_firewalld
      when:
        - firewalld['firewalld_ipsets'] is defined

  when:
    - firewalld['enable'] | bool

######## configure services

- name: configure services
  block:

    - name: list existing services in /etc/firewalld/services
      find:
        paths: "/etc/firewalld/services/"
        patterns: "*.xml"
        recurse: no
        file_type: file
      register: services_files_all

    - name: exclude services managed by ansible
      set_fact:
        services_files: "{{ services_files | default([]) + [file_path] }}"
      loop: "{{ services_files_all['files'] }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry['path'] }}"
        file_name: "{{ entry['path'].split('/')[-1].split('.')[0] }}"
      when:
        - services_files_all['files'] | length > 0
        - file_name not in firewalld['firewalld_services'] | map(attribute='name')

    # - debug:
    #     msg:
    #       - "{{ services_files }}"

    - name: disable services not managed by ansible
      copy:
        remote_src: yes
        src: "{{ file_path }}"
        dest: "{{ new_file_path }}"
      loop: "{{ services_files }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry }}"
        new_file_path: "{{ entry.split('.')[0] }}.ansible_disabled"
      register: services_disabled
      notify: reload_firewalld
      when:
        - services_files is defined
        - services_files | length > 0

    # - debug:
    #     msg:
    #       - "{{ services_disabled }}"

    - name: remove original service files that were disabled
      file:
        path: "{{ file_path }}"
        state: absent
      loop: "{{ services_files }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry }}"
      when:
        - not (services_disabled['skipped'] | default(false))

    - name: render firewalld services
      template:
        src: "{{ role_path }}/templates/service_template.xml.j2"
        dest: /etc/firewalld/services/{{ name }}.xml
      loop: "{{ target_services }}"
      loop_control:
        loop_var: entry
      vars:
        name: "{{ entry['name'] }}"
        short: "{{ entry['short'] }}"
        description: "{{ entry['description'] }}"
        port: "{{ entry['port'] }}"
      notify: reload_firewalld
      when:
        - firewalld['firewalld_services'] is defined
        - firewalld['firewalld_services'] | length > 0

  when:
    - firewalld['enable'] | bool

######## configure zones

- name: configure zones
  block:

    # there are no preset zone names, zones are dynamically generated from the top level source inventory/networks.yml
    # to create a custom zone
    # - a custom firewalld_services entry with an (arbitrary) xcat_networks list item will generate a new zone
    # - a custom firewalld_ipsets entry named the same as the custom services entry will be required to control ingress
    #
    # - name: generate all zone names from xcat_networks entry in 'firewalld_merged['firewalld_services']'
    - name: generate all zone names from xcat_networks entry in 'firewalld['firewalld_services']'
      set_fact:
        zone_list: "{{ zone_list | default([]) + zone }}"
      loop: "{{ target_services }}"
      loop_control:
        loop_var: entry
      vars:
        zone: "{{ entry['xcat_networks'] }}"

    - name: filter on unique zones from services
      set_fact:
        zone_list: "{{ zone_list | unique }}"

    # this is the pivotal task in the playbook: it ensures the zones dictionary is in the format accepted by the jinja2 loops in zone_template.xml.j2
    # loop over unique zones, match all services bound to the zone using xcat_networks, get a list of service names and format it into a list of dicts each with the same key 'name:', then render the zones template in a jinja-compatible format
    #
    - name: create zones dictionary
      set_fact:
        firewalld_zones: "{{ firewalld_zones | default([]) + ([{ 'name': zone_name, 'short': zone_name, 'description': zone_description, 'source': [{ 'ipset': zone_name }], 'service': service_trim }] ) }}"
        # firewalld_zones: "{{ firewalld_zones | default([]) + ([{ 'name': zone_name, 'short': zone_name, 'description': zone_description, 'source': [{ 'ipset': zone_name }], 'service': [{ 'name': 'ssh' }, { 'name': 'ftp' }] }] ) }}" # format required
      loop: "{{ zone_list }}"
      loop_control:
        loop_var: entry
      vars:
        zone_name: "{{ entry }}"
        zone_description: "{{ entry }} zone"
        # use mapping to return list of services
        service: "{{ target_services | selectattr('xcat_networks', 'search', entry) | map(attribute='name') }}"
        #
        # inline jinja to create a list of dicts for the services used in this zone
        service_format: >-
          {% set results = [] %}
          {% for svc in service|default([]) %}
          {% set sub_results = {} %}
          {% set _ = sub_results.update({"name": svc}) %}
          {% set _ = results.append(sub_results) %}
          {% endfor -%}
          {{results}}
        # trim whitespace to allow ansible to interpret it as a list item in the firewalld_zones dict
        service_trim: "{{ service_format | trim }}"

    # - name: add pre-defined zones from firewalld_merged['firewalld_zones']
    - name: add pre-defined zones from firewalld['firewalld_zones']
      set_fact:
        firewalld_zones: "{{ firewalld_zones | default([]) + [entry] }}"
      loop: "{{ firewalld['firewalld_zones'] }}"
      loop_control:
        loop_var: entry
      when:
        - firewalld['firewalld_zones'] is defined
        - firewalld['firewalld_zones'] | length > 0

    # - debug:
    #     msg:
    #       - "{{ firewalld_zones }}"

    - name: list existing zones in /etc/firewalld/zones
      find:
        paths: "/etc/firewalld/zones/"
        patterns: "*.xml"
        recurse: no
        file_type: file
      register: zones_files_all

    - name: exclude zones managed by ansible
      set_fact:
        zone_files: "{{ zone_files | default([]) + [file_path] }}"
      loop: "{{ zones_files_all['files'] }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry['path'] }}"
        file_name: "{{ entry['path'].split('/')[-1].split('.')[0] }}"
      when:
        - zones_files_all['files'] | length > 0
        - file_name not in firewalld_zones | map(attribute='name')

    # - debug:
    #     msg:
    #       - "{{ zone_files }}"

    - name: disable zones not managed by ansible
      copy:
        remote_src: yes
        src: "{{ file_path }}"
        dest: "{{ new_file_path }}"
      loop: "{{ zone_files }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry }}"
        new_file_path: "{{ entry.split('.')[0] }}.ansible_disabled"
      register: zones_disabled
      notify: reload_firewalld
      when:
        - zone_files is defined
        - zone_files | length > 0

    # - debug:
    #     msg:
    #       - "{{ zones_disabled }}"

    - name: remove original zone files that were disabled
      file:
        path: "{{ file_path }}"
        state: absent
      loop: "{{ zone_files }}"
      loop_control:
        loop_var: entry
      vars:
        file_path: "{{ entry }}"
      when:
        - not (zones_disabled['skipped'] | default(false))

    - name: render firewalld zones
      template:
        src: "{{ role_path }}/templates/zone_template.xml.j2"
        dest: /etc/firewalld/zones/{{ name }}.xml
      loop: "{{ firewalld_zones | list }}"
      loop_control:
        loop_var: entry
      vars:
        name: "{{ entry['name'] }}"
        short: "{{ entry['short'] }}"
        description: "{{ entry['description'] }}"
        service: "{{ entry['service'] }}"
        ipset: "{{ entry['name'] }}"
      notify: reload_firewalld
      when:
        - firewalld_zones is defined
        - firewalld_zones | length > 0

  when:
    - firewalld['enable'] | bool

######## start firewalld
#
# handler starts/reloads/enables firewalld service

# - name: Flush handlers
#   meta: flush_handlers

# - name: Start and enable firewalld
#   ansible.builtin.systemd:
#     name: firewalld.service
#     state: restarted
#     # daemon_reload: yes
#     enabled: yes
#   when:
#     - ansible_facts['services']['firewalld.service'] is defined
#     - firewalld['enable'] | bool
@ -0,0 +1,670 @@
|
|||
# Copyright 2022 OCF Ltd. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# -*- coding: utf-8 -*-
|
||||
# vim: ft=yaml
|
||||
---
|
||||
|
||||
######## inherit custom variables from hostvars/firewalld.yml
|
||||
|
||||
# this works - a much tidier solution - we don't have to load EVERYTHING under steel, just the overrides - will this work with deep dicts though? put a control task in here to notify the user of ANY clash
|
||||
# - name: merge steel['firewalld'] over role defaults
|
||||
# set_fact:
|
||||
# firewalld: "{{ firewalld | default({}) | combine( steel['firewalld'], recursive=True) }}"
|
||||
# when: steel['firewalld'] is defined
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
######## inherit custom variables from inventory/host_vars/firewalld.yml
|
||||
|
||||
# deep merging of dicts for variable overlay is handled 'once' by a 3rd party ansible plugin
|
||||
# the rest of the playbook deliberately does not use this plugin in case it is deprecated or functionally duplicated by native ansible in future
|
||||
# - ansible_merge_vars
|
||||
|
||||
- name: merge custom vars over inbuilt vars
|
||||
block:
|
||||
|
||||
- name: load role defaults under the 'merge' namespace
|
||||
include_vars:
|
||||
file: "{{ role_path }}/defaults/main.yml"
|
||||
name: merge
|
||||
|
||||
# merge precedence is controlled by the alphabetical order of the merge variable names; variable declaration order also seems to influence behaviour in some circumstances
|
||||
# use the order and format as below
|
||||
- name: copy sets of 'to merge' vars to ansible_merge_vars compatible named variables
|
||||
set_fact:
|
||||
a_inbuilt_config__to_merge: "{{ merge['firewalld'] }}"
|
||||
b_custom_config__to_merge: "{{ vars['steel']['firewalld'] }}"
|
||||
|
||||
- name: merge custom vars over inbuilt vars
|
||||
merge_vars:
|
||||
suffix_to_merge: _config__to_merge
|
||||
merged_var_name: firewalld_merged
|
||||
expected_type: 'dict'
|
||||
recursive_dict_merge: true
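
# illustrative sketch of the merge behaviour (assumed values, not the real defaults):
# with role defaults
#   a_inbuilt_config__to_merge: { enable: true, firewalld_conf: { LogDenied: 'off' } }
# and site overrides
#   b_custom_config__to_merge: { firewalld_conf: { LogDenied: 'all' } }
# the recursive merge yields
#   firewalld_merged: { enable: true, firewalld_conf: { LogDenied: 'all' } }
# b_* wins because precedence follows the alphabetical order of the variable names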
|
||||
|
||||
when:
|
||||
- steel['firewalld'] is defined
|
||||
|
||||
- name: fallback behaviour - where no custom vars exist in inventory/host_vars/firewalld.yml
|
||||
set_fact:
|
||||
firewalld_merged: "{{ firewalld }}"
|
||||
when:
|
||||
- steel['firewalld'] is not defined
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ firewalld_merged }}"
|
||||
|
||||
# - fail:
|
||||
# msg:
|
||||
|
||||
######## setup packages
|
||||
|
||||
- name: update package facts
|
||||
ansible.builtin.package_facts:
|
||||
manager: auto
|
||||
strategy: all
|
||||
when: ansible_facts['packages'] is not defined
|
||||
|
||||
- name: install firewalld packages
|
||||
block:
|
||||
|
||||
- name: install firewalld
|
||||
ansible.builtin.package:
|
||||
name:
|
||||
- firewalld
|
||||
- ipset
|
||||
- nftables
|
||||
state: latest
|
||||
|
||||
- name: Install python-firewall
|
||||
package:
|
||||
name: python-firewall
|
||||
state: present
|
||||
when:
|
||||
- ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] == '7'
|
||||
|
||||
- name: Install python3-firewall
|
||||
package:
|
||||
name: python3-firewall
|
||||
state: present
|
||||
when:
|
||||
- ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] == '8'
|
||||
|
||||
when:
|
||||
- vars['firewalld']['enable'] | bool
|
||||
- ansible_facts['packages']['firewalld'] is not defined or
|
||||
ansible_facts['packages']['ipset'] is not defined or
|
||||
ansible_facts['packages']['nftables'] is not defined
|
||||
|
||||
- name: update service facts
|
||||
ansible.builtin.service_facts:
|
||||
|
||||
######## disable firewall
|
||||
|
||||
- name: disable firewalld
|
||||
ansible.builtin.systemd:
|
||||
name: firewalld
|
||||
enabled: no
|
||||
state: stopped
|
||||
when:
|
||||
- ansible_facts['services']['firewalld.service'] is defined
|
||||
# - not vars['firewalld']['enable'] | bool
|
||||
- not vars['firewalld_merged']['enable'] | bool
|
||||
|
||||
######## render firewalld config file
|
||||
|
||||
- name: update INI entries in firewalld config
|
||||
ini_file:
|
||||
# path: "{{ firewalld.firewalld_conf_file }}"
|
||||
path: "{{ firewalld_merged['firewalld_conf_file'] }}"
|
||||
no_extra_spaces: true
|
||||
# write to root of document not under a section
|
||||
section: null
|
||||
option: "{{ entry.key }}"
|
||||
value: "{{ entry.value }}"
|
||||
# loop: "{{ firewalld['firewalld_conf'] | dict2items }}"
|
||||
loop: "{{ firewalld_merged['firewalld_conf'] | dict2items }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
# - vars['firewalld']['enable'] | bool
|
||||
- firewalld_merged['enable'] | bool
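
# illustrative sketch (assumed keys): a firewalld_conf dict such as
#   firewalld_conf: { DefaultZone: public, LogDenied: 'off' }
# is converted by dict2items and written by the loop above as top-level INI entries:
#   DefaultZone=public
#   LogDenied=off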
|
||||
|
||||
######## map services to zones and networks
|
||||
|
||||
# map host 'xcat_groups' (hostvars[ansible_hostname]) to services 'xcat_groups' (vars['firewalld']['firewalld_services'] list item ['xcat_groups'])
|
||||
# determine if the service (firewall rule) is applicable to the host
|
||||
|
||||
- name: map services to zones and networks
|
||||
block:
|
||||
|
||||
- name: find firewalld services to be applied to each xcat_groups that this host is a member of
|
||||
set_fact:
|
||||
target_services: "{{ target_services | default([]) + [service] }}"
|
||||
# target_services: "{{ target_services | default([]) | combine({ 'firewalld_services': [firewalld_service] }, recursive=True) }}"
|
||||
when: xcat_group in hostvars[ansible_hostname]['xcat_groups']
|
||||
with_subelements:
|
||||
# - "{{ vars['firewalld']['firewalld_services'] }}"
|
||||
- "{{ firewalld_merged['firewalld_services'] }}"
|
||||
- xcat_groups
|
||||
- skip_missing: True
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
xcat_group: "{{ entry.1 }}"
|
||||
service: "{{ entry.0 }}"
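
# illustrative sketch (assumed service entry): with_subelements flattens each service
# over its xcat_groups list, e.g.
#   { name: sshd, xcat_groups: [login, mgt], ... }
# yields one iteration per group, entry.0 = the service dict, entry.1 = the group:
#   (sshd, login), (sshd, mgt)
# so a service is appended once per matching group and deduplicated by the task below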
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ target_services }}"
|
||||
|
||||
- name: remove duplicate service entries where host in multiple xcat_groups
|
||||
set_fact:
|
||||
target_services: "{{ target_services | unique }}"
|
||||
|
||||
- name: find all networks where the host has an interface (source = hostvars[ansible_hostname]['xcat_nics'])
|
||||
set_fact:
|
||||
nic_list: "{{ nic_list | default([]) + [xcat_nic] }}"
|
||||
loop: "{{ hostvars[ansible_hostname]['xcat_nics'] | list }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
xcat_nic: "{{ entry['network'] }}"
|
||||
|
||||
# we need a double-check task here - why? - what is in xcat is not necessarily what is configured on the host, e.g. no ib0 adapter
|
||||
# LAB has no ib0 - write something here to remove infiniband from nic_list
|
||||
#
|
||||
# - name: find all networks where the host has an interface (source = nmcli ), remove where adapter is not present on host
|
||||
|
||||
|
||||
# UPDATE - probably don't need this! we can assign services to multiple zones
|
||||
#
|
||||
# account for the following condition:
|
||||
# - where a service has multiple 'xcat_networks' entries (e.g. cluster and infiniband)
|
||||
# - and the host has network adapters in multiple 'xcat_networks' (e.g. cluster and infiniband)
|
||||
# we must ensure that the service is duplicated for each network
|
||||
# - make unique services with the service name suffixed with _network (e.g. ssh_cluster and ssh_infiniband)
|
||||
# write role defensively even though in usual operation this may not occur
|
||||
#
|
||||
# - name: find firewalld services to be applied to each xcat_networks that this host is a member of
|
||||
# set_fact:
|
||||
# target_services_by_network: "{{ target_services_by_network | default([]) + [service] }}"
|
||||
# when: network in nic_list
|
||||
# with_subelements:
|
||||
# - "{{ target_services }}"
|
||||
# - xcat_networks
|
||||
# - skip_missing: True
|
||||
# loop_control:
|
||||
# loop_var: entry
|
||||
# vars:
|
||||
# tmp_service: "{{ entry.0 }}"
|
||||
# network: "{{ entry.1 }}"
|
||||
# name: "{{ tmp_service['name'] }}_{{ network }}"
|
||||
# description: "{{ tmp_service['description'] }} IPset {{ network }}"
|
||||
# service: "{{ tmp_service | default({}) | combine({ 'name': name, 'short': name, 'description': description, 'network': network }) }}"
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ target_services_by_network }}"
|
||||
|
||||
when:
|
||||
# - vars['firewalld']['enable'] | bool
|
||||
- firewalld_merged['enable'] | bool
|
||||
# - not vars['firewalld']['enable'] | bool
|
||||
|
||||
######## configure ipsets
|
||||
|
||||
- name: configure ipsets
|
||||
block:
|
||||
|
||||
- name: list existing ipsets in /etc/firewalld/ipsets
|
||||
find:
|
||||
paths: "/etc/firewalld/ipsets/"
|
||||
patterns: "*.xml"
|
||||
recurse: no
|
||||
file_type: file
|
||||
register: ipsets_files_all
|
||||
|
||||
- name: exclude ipsets managed by ansible
|
||||
set_fact:
|
||||
ipsets_files: "{{ ipsets_files | default([]) + [file_path] }}"
|
||||
loop: "{{ ipsets_files_all['files'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry['path'] }}"
|
||||
file_name: "{{ entry['path'].split('/')[-1].split('.')[0] }}"
|
||||
when:
|
||||
- ipsets_files_all['files'] | length >0
|
||||
# - file_name not in firewalld['firewalld_ipsets']
|
||||
- file_name not in firewalld_merged['firewalld_ipsets']
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ ipset_files }}"
|
||||
|
||||
- name: disable ipsets not managed by ansible
|
||||
copy:
|
||||
remote_src: yes
|
||||
src: "{{ file_path }}"
|
||||
dest: "{{ new_file_path }}"
|
||||
loop: "{{ ipsets_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry }}"
|
||||
new_file_path: "{{ entry.split('.')[0] }}.ansible_disabled"
|
||||
register: ipsets_disabled
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
- ipsets_files is defined
|
||||
- ipsets_files | length >0
|
||||
|
||||
- file:
|
||||
path: "{{ file_path }}"
|
||||
state: absent
|
||||
loop: "{{ ipsets_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry }}"
|
||||
when:
|
||||
- not ipsets_disabled['skipped'] | bool
|
||||
|
||||
- name: generate ipsets from steel['xcat_networks']
|
||||
set_fact:
|
||||
generated_ipsets: "{{ generated_ipsets | default({}) | combine({ 'firewalld_ipsets': { network_name: { 'short': network_name, 'description': description, 'type': 'hash:ip', 'targets': [network_cidr] } } }, recursive=True) }}"
|
||||
# example with additional options
|
||||
# generated_ipsets: "{{ generated_ipsets | default({}) | combine({ 'firewalld_ipsets': { network_name: { 'short': network_name, 'description': description, 'type': 'hash:ip', 'options': { 'maxelem': [65536], 'timeout': [300], 'hashsize': [1024] }, 'targets': [network_cidr] } } }, recursive=True) }}"
|
||||
loop: "{{ steel['xcat_networks'] | dict2items }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
network_name: "{{ entry.key }}"
|
||||
network_range: "{{ entry.value['network'] }}"
|
||||
network_mask: "{{ entry.value['netmask'] }}"
|
||||
network_cidr: "{{ network_range }}/{{ (network_range + '/' + network_mask) | ansible.utils.ipaddr('prefix') }}"
|
||||
description: "{{ network_name }} ipset"
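
# illustrative sketch (assumed network entry): an xcat_networks entry such as
#   cluster: { network: 10.141.0.0, netmask: 255.255.0.0 }
# gives network_cidr '10.141.0.0/16' via ansible.utils.ipaddr('prefix') and generates
#   firewalld_ipsets: { cluster: { short: cluster, description: 'cluster ipset',
#                                  type: 'hash:ip', targets: ['10.141.0.0/16'] } }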
|
||||
|
||||
# required where we have provided custom ipsets
|
||||
- name: merge generated ipsets into firewalld_merged
|
||||
set_fact:
|
||||
# firewalld: "{{ firewalld | default({}) | combine( generated_ipsets, recursive=True) }}"
|
||||
firewalld_merged: "{{ firewalld_merged | default({}) | combine( generated_ipsets, recursive=True) }}"
|
||||
when:
|
||||
- generated_ipsets is defined
|
||||
|
||||
- name: render firewalld ipsets
|
||||
template:
|
||||
src: "{{ role_path }}/templates/ipset_template.xml.j2"
|
||||
dest: /etc/firewalld/ipsets/{{ entry }}.xml
|
||||
# loop: "{{ firewalld['firewalld_ipsets'] | list }}"
|
||||
loop: "{{ firewalld_merged['firewalld_ipsets'] | list }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
# short: "{{ firewalld['firewalld_ipsets'][entry]['short'] }}"
|
||||
# description: "{{ firewalld['firewalld_ipsets'][entry]['description'] }}"
|
||||
# type: "{{ firewalld['firewalld_ipsets'][entry]['type'] }}"
|
||||
# options: "{{ firewalld['firewalld_ipsets'][entry]['options'] }}"
|
||||
# targets: "{{ firewalld['firewalld_ipsets'][entry]['targets'] }}"
|
||||
short: "{{ firewalld_merged['firewalld_ipsets'][entry]['short'] }}"
|
||||
description: "{{ firewalld_merged['firewalld_ipsets'][entry]['description'] }}"
|
||||
type: "{{ firewalld_merged['firewalld_ipsets'][entry]['type'] }}"
|
||||
options: "{{ firewalld_merged['firewalld_ipsets'][entry]['options'] }}"
|
||||
targets: "{{ firewalld_merged['firewalld_ipsets'][entry]['targets'] }}"
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
# - firewalld['firewalld_ipsets'] is defined
|
||||
- firewalld_merged['firewalld_ipsets'] is defined
|
||||
|
||||
when:
|
||||
# - vars['firewalld']['enable'] | bool
|
||||
- firewalld_merged['enable'] | bool
|
||||
# - not vars['firewalld']['enable'] | bool
|
||||
|
||||
######## configure services
|
||||
|
||||
- name: configure services
|
||||
block:
|
||||
|
||||
- name: list existing services in /etc/firewalld/services
|
||||
find:
|
||||
paths: "/etc/firewalld/services/"
|
||||
patterns: "*.xml"
|
||||
recurse: no
|
||||
file_type: file
|
||||
register: services_files_all
|
||||
|
||||
- name: exclude services managed by ansible
|
||||
set_fact:
|
||||
services_files: "{{ services_files | default([]) + [file_path] }}"
|
||||
loop: "{{ services_files_all['files'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry['path'] }}"
|
||||
file_name: "{{ entry['path'].split('/')[-1].split('.')[0] }}"
|
||||
when:
|
||||
- services_files_all['files'] | length >0
|
||||
# - file_name not in firewalld['firewalld_services'] | map(attribute='name')
|
||||
- file_name not in firewalld_merged['firewalld_services'] | map(attribute='name')
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ services_files }}"
|
||||
|
||||
- name: disable services not managed by ansible
|
||||
copy:
|
||||
remote_src: yes
|
||||
src: "{{ file_path }}"
|
||||
dest: "{{ new_file_path }}"
|
||||
loop: "{{ services_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry }}"
|
||||
new_file_path: "{{ entry.split('.')[0] }}.ansible_disabled"
|
||||
register: services_disabled
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
- services_files is defined
|
||||
- services_files | length >0
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ services_disabled }}"
|
||||
|
||||
- file:
|
||||
path: "{{ file_path }}"
|
||||
state: absent
|
||||
loop: "{{ services_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry }}"
|
||||
when:
|
||||
- not services_disabled['skipped'] | bool
|
||||
|
||||
|
||||
# UPDATE - don't use this dict; use a simpler one that hasn't had network added - pointless with the current behaviour - the logic is in zones
|
||||
# the source of truth for services is actually "{{ target_services_by_network }}" - your loops above need to use this!!
|
||||
# list of dicts, so the format is unchanged - ok
|
||||
# {
|
||||
# "description": "DHCP Service",
|
||||
# "name": "dhcpd_cluster",
|
||||
# "network": "cluster",
|
||||
# "port": [
|
||||
# {
|
||||
# "port": 7911,
|
||||
# "protocol": "tcp"
|
||||
# }
|
||||
# ],
|
||||
# "short": "dhcp",
|
||||
# "zone": "public"
|
||||
# }
|
||||
# NOW
|
||||
# {
|
||||
# "description": "TSEED service",
|
||||
# "name": "tseed",
|
||||
# "port": [
|
||||
# {
|
||||
# "port": 22,
|
||||
# "protocol": "tcp"
|
||||
# }
|
||||
# ],
|
||||
# "short": "TSEED",
|
||||
# "xcat_networks": [
|
||||
# "cluster",
|
||||
# "infiniband",
|
||||
# "tseed"
|
||||
# ],
|
||||
# "zone": "mgt"
|
||||
# },
|
||||
|
||||
|
||||
- name: render firewalld services
|
||||
template:
|
||||
src: "{{ role_path }}/templates/service_template.xml.j2"
|
||||
dest: /etc/firewalld/services/{{ name }}.xml
|
||||
# debug:
|
||||
# msg:
|
||||
# - "name {{name}}"
|
||||
# - "short {{short}}"
|
||||
# - "description {{description}}"
|
||||
# - "rules {{port}}"
|
||||
# # - "ipset {{network}}"
|
||||
# loop: "{{ target_services_by_network }}"
|
||||
loop: "{{ target_services }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
name: "{{ entry['name'] }}"
|
||||
short: "{{ entry['short'] }}"
|
||||
description: "{{ entry['description'] }}"
|
||||
port: "{{ entry['port'] }}"
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
# - firewalld['firewalld_services'] is defined
|
||||
# - firewalld['firewalld_services'] | length >0
|
||||
- firewalld_merged['firewalld_services'] is defined
|
||||
- firewalld_merged['firewalld_services'] | length >0
|
||||
|
||||
when:
|
||||
# - vars['firewalld']['enable'] | bool
|
||||
- firewalld_merged['enable'] | bool
|
||||
# - not vars['firewalld']['enable'] | bool
|
||||
|
||||
######## configure zones
|
||||
|
||||
- name: configure zones
|
||||
block:
|
||||
|
||||
# Update: rename firewalld_services xcat_networks to zones_or_xcat_networks - will have to put in a qualifying test to match an ipset though
|
||||
|
||||
# there are no preset zone names, zones are dynamically generated from top level source inventory/networks.yml
|
||||
# to create a custom zone
|
||||
# - a custom firewalld_services entry with an (arbitrary) xcat_networks list item will generate a new zone
|
||||
# - a custom firewalld_ipsets entry named the same as the custom services entry will be required to control ingress
|
||||
#
|
||||
- name: generate all zone names from xcat_networks entry in 'firewalld_merged['firewalld_services']'
|
||||
set_fact:
|
||||
# zone_list: "{{ zone_list | default([]) + [zone] }}"
|
||||
zone_list: "{{ zone_list | default([]) + zone }}"
|
||||
# loop: "{{ target_services_by_network }}"
|
||||
loop: "{{ target_services }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
# zone: "{{ entry['zone'] }}"
|
||||
zone: "{{ entry['xcat_networks'] }}"
|
||||
|
||||
- name: filter on unique zones from services
|
||||
set_fact:
|
||||
zone_list: "{{ zone_list | unique }}"
|
||||
|
||||
# # loop unique zones, match all services bound to the zone using xcat_networks, get a list of service names
|
||||
# - name: create zones dictionary
|
||||
# set_fact:
|
||||
# firewalld_zones: "{{ firewalld_zones | default([]) + ([{ 'name': zone_name, 'short': zone_name, 'description': zone_description, 'source': [{ 'ipset': zone_name }], 'service': service }] ) }}"
|
||||
# # firewalld_zones: "{{ firewalld_zones | default([]) + ([{ 'name': zone_name, 'short': zone_name, 'description': zone_description, 'source': [{ 'ipset': zone_name }], 'service': [{ 'name': 'ssh' }] }] ) }}" # format we are looking for
|
||||
# loop: "{{ zone_list }}"
|
||||
# loop_control:
|
||||
# loop_var: entry
|
||||
# vars:
|
||||
# zone_name: "{{ entry }}"
|
||||
# zone_description: "{{ entry }} zone"
|
||||
# service: "{{ target_services | selectattr('xcat_networks', 'search', entry) | map(attribute='name') }}"
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ target_services }}"
|
||||
|
||||
# this is the pivotal task in the playbook: it ensures the zones dictionary is in the format accepted by the jinja2 loops in zone_template.xml.j2
|
||||
# this is voodoo, sneeze and it's gone
|
||||
# loop unique zones, match all services bound to the zone using xcat_networks, get a list of service names
|
||||
#
|
||||
- name: create zones dictionary
|
||||
set_fact:
|
||||
# firewalld_zones: "{{service_format1}}"
|
||||
firewalld_zones: "{{ firewalld_zones | default([]) + ([{ 'name': zone_name, 'short': zone_name, 'description': zone_description, 'source': [{ 'ipset': zone_name }], 'service': service_trim }] ) }}"
|
||||
# firewalld_zones: "{{ firewalld_zones | default([]) + ([{ 'name': zone_name, 'short': zone_name, 'description': zone_description, 'source': [{ 'ipset': zone_name }], 'service': [{ 'name': 'ssh' }] }] ) }}" # format we are looking for
|
||||
# debug:
|
||||
# msg:
|
||||
# - "{{ service_trim }}"
|
||||
loop: "{{ zone_list }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
zone_name: "{{ entry }}"
|
||||
zone_description: "{{ entry }} zone"
|
||||
# use mapping to find list of services
|
||||
service: "{{ target_services | selectattr('xcat_networks', 'search', entry) | map(attribute='name') }}"
|
||||
#
|
||||
# inline jinja to create a list of dicts for the services used in this zone
|
||||
service_format: >-
|
||||
{% set results = [] %}
|
||||
{% for svc in service|default([]) %}
|
||||
{% set sub_results = {} %}
|
||||
{% set _ = sub_results.update({"name": svc}) %}
|
||||
{% set _ = results.append(sub_results) %}
|
||||
{% endfor -%}
|
||||
{{results}}
|
||||
#
|
||||
# create a list of items that 'look' like one-element dicts
|
||||
# service_format: >-
|
||||
# {% set results = [] %}
|
||||
# {% for svc in service|default([]) %}
|
||||
# {% set d = ({"name": svc}) %}
|
||||
# {% set _ = results.append(d) %}
|
||||
# {% endfor -%}
|
||||
# {{ results }}
|
||||
#
|
||||
# trim whitespace so ansible interprets the rendered string as a list item in the firewalld_zones dict
|
||||
service_trim: "{{ service_format | trim }}"
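
# illustrative sketch (assumed input): for service names ['ssh', 'nfs'] the inline
# jinja above renders the string "[{'name': 'ssh'}, {'name': 'nfs'}]"; after | trim,
# ansible re-interprets the value as a native list of dicts when it is embedded in
# the firewalld_zones fact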
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ firewalld_zones }}"
|
||||
|
||||
- name: list existing zones in /etc/firewalld/zones
|
||||
find:
|
||||
paths: "/etc/firewalld/zones/"
|
||||
patterns: "*.xml"
|
||||
recurse: no
|
||||
file_type: file
|
||||
register: zones_files_all
|
||||
|
||||
- name: exclude zones managed by ansible
|
||||
set_fact:
|
||||
zone_files: "{{ zone_files | default([]) + [file_path] }}"
|
||||
loop: "{{ zones_files_all['files'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry['path'] }}"
|
||||
file_name: "{{ entry['path'].split('/')[-1].split('.')[0] }}"
|
||||
when:
|
||||
- zones_files_all['files'] | length >0
|
||||
- file_name not in firewalld_zones | map(attribute='name')
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ zone_files }}"
|
||||
|
||||
- name: disable zones not managed by ansible
|
||||
copy:
|
||||
remote_src: yes
|
||||
src: "{{ file_path }}"
|
||||
dest: "{{ new_file_path }}"
|
||||
loop: "{{ zone_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry }}"
|
||||
new_file_path: "{{ entry.split('.')[0] }}.ansible_disabled"
|
||||
register: zones_disabled
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
- zone_files is defined
|
||||
- zone_files | length >0
|
||||
|
||||
# - debug:
|
||||
# msg:
|
||||
# - "{{ zones_disabled }}"
|
||||
|
||||
- file:
|
||||
path: "{{ file_path }}"
|
||||
state: absent
|
||||
loop: "{{ zone_files }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
file_path: "{{ entry }}"
|
||||
when:
|
||||
- not zones_disabled['skipped'] | bool
|
||||
|
||||
- name: render firewalld zones
|
||||
template:
|
||||
src: "{{ role_path }}/templates/zone_template.xml.j2"
|
||||
dest: /etc/firewalld/zones/{{ name }}.xml
|
||||
# debug:
|
||||
# msg:
|
||||
# # - "{{ name }}"
|
||||
# # - "{{ short }}"
|
||||
# # - "{{ description }}"
|
||||
# # - "{{ service }}"
|
||||
# # - "{{ ipset }}"
|
||||
# - "{{ entry }}"
|
||||
loop: "{{ firewalld_zones | list }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
name: "{{ entry['name'] }}"
|
||||
short: "{{ entry['short'] }}"
|
||||
description: "{{ entry['description'] }}"
|
||||
service: "{{ entry['service'] }}"
|
||||
ipset: "{{ entry['name'] }}"
|
||||
# notify: reload firewalld
|
||||
when:
|
||||
- firewalld_zones is defined
|
||||
- firewalld_zones | length >0
|
||||
|
||||
when:
|
||||
# - vars['firewalld']['enable'] | bool
|
||||
- firewalld_merged['enable'] | bool
|
||||
# - not vars['firewalld']['enable'] | bool
|
||||
|
||||
######## start firewalld
|
||||
|
||||
- name: Start and enable firewalld
|
||||
service:
|
||||
name: firewalld
|
||||
state: started
|
||||
enabled: true
|
||||
when:
|
||||
- ansible_facts['services']['firewalld.service'] is defined
|
||||
# - vars['firewalld']['enable'] | bool
|
||||
- firewalld_merged['enable'] | bool
|
||||
|
||||
# - fail:
|
||||
# msg:
|
||||
# - "stop"
|
||||
|
||||
# - name: Flush all handlers
|
||||
# meta: flush_handlers
|
||||
|
|
@ -0,0 +1,58 @@
|
|||
# firewalld config file
|
||||
|
||||
# default zone
|
||||
# The default zone used if an empty zone string is used.
|
||||
# Default: public
|
||||
DefaultZone=public
|
||||
|
||||
# Minimal mark
|
||||
# Marks up to this minimum are free for use for example in the direct
|
||||
# interface. If more free marks are needed, increase the minimum
|
||||
# Default: 100
|
||||
MinimalMark=100
|
||||
|
||||
# Clean up on exit
|
||||
# If set to no or false the firewall configuration will not get cleaned up
|
||||
# on exit or stop of firewalld
|
||||
# Default: true
|
||||
#CleanupOnExit=yes
|
||||
CleanupOnExit=no
|
||||
|
||||
# Lockdown
|
||||
# If set to enabled, firewall changes with the D-Bus interface will be limited
|
||||
# to applications that are listed in the lockdown whitelist.
|
||||
# The lockdown whitelist file is lockdown-whitelist.xml
|
||||
# Default: false
|
||||
Lockdown=no
|
||||
|
||||
# IPv6_rpfilter
|
||||
# Performs a reverse path filter test on a packet for IPv6. If a reply to the
|
||||
# packet would be sent via the same interface that the packet arrived on, the
|
||||
# packet will match and be accepted, otherwise dropped.
|
||||
# The rp_filter for IPv4 is controlled using sysctl.
|
||||
# Default: true
|
||||
IPv6_rpfilter=yes
|
||||
|
||||
# IndividualCalls
|
||||
# Do not use combined -restore calls, but individual calls. This increases the
|
||||
# time that is needed to apply changes and to start the daemon, but is good for
|
||||
# debugging.
|
||||
# Default: false
|
||||
IndividualCalls=no
|
||||
|
||||
# LogDenied
|
||||
# Add logging rules right before reject and drop rules in the INPUT, FORWARD
|
||||
# and OUTPUT chains for the default rules and also final reject and drop rules
|
||||
# in zones. Possible values are: all, unicast, broadcast, multicast and off.
|
||||
# Default: off
|
||||
LogDenied=off
|
||||
|
||||
# AutomaticHelpers
|
||||
# For the secure use of iptables and connection tracking helpers it is
|
||||
# recommended to turn AutomaticHelpers off. But this might have side effects on
|
||||
# other services using the netfilter helpers as the sysctl setting in
|
||||
# /proc/sys/net/netfilter/nf_conntrack_helper will be changed.
|
||||
# With the system setting, the default value set in the kernel or with sysctl
|
||||
# will be used. Possible values are: true, no and system.
|
||||
# Default: system
|
||||
AutomaticHelpers=system
|
||||
|
|
@ -0,0 +1,15 @@
|
|||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<ipset type="{{ type|default('hash:ip') }}">
|
||||
{% if short is defined %}
|
||||
<short>{{ short }}</short>
|
||||
{% endif %}
|
||||
{% if description is defined %}
|
||||
<description>{{ description }}</description>
|
||||
{% endif %}
|
||||
{% for name,value in (options|default({})).items() %}
|
||||
<option name="{{ name }}" value="{{ value }}"/>
|
||||
{% endfor %}
|
||||
{% for entry in targets|default([]) %}
|
||||
<entry>{{ entry }}</entry>
|
||||
{% endfor %}
|
||||
</ipset>
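{# illustrative render sketch (assumed vars: short='cluster', description='cluster ipset',
   targets=['10.141.0.0/16'], no options) - the template above produces roughly:
   <ipset type="hash:ip">
     <short>cluster</short>
     <description>cluster ipset</description>
     <entry>10.141.0.0/16</entry>
   </ipset> #}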
|
||||
|
|
@ -0,0 +1,20 @@
|
|||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<service>
|
||||
{% if short is defined %}
|
||||
<short>{{ short }}</short>
|
||||
{% endif %}
|
||||
{% if description is defined %}
|
||||
<description>{{ description }}</description>
|
||||
{% endif %}
|
||||
{% for tag in entry %}
|
||||
{# Tags which can be used several times #}
|
||||
{% if tag in ['port','protocol','source-port','module'] %}
|
||||
{% for subtag in entry[tag] %}
|
||||
<{{ tag }}{% for name,value in subtag.items() %} {{ name }}="{{ value }}"{% endfor %}/>
|
||||
{% endfor %}
|
||||
{# Tags which can be used once #}
|
||||
{% elif tag in ['destination'] %}
|
||||
<{{ tag }}{% for name,value in (entry[tag]|default({})).items() %} {{ name }}="{{ value }}"{% endfor %}/>
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
</service>
|
||||
|
|
@ -0,0 +1,35 @@
|
|||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<zone{% if entry.target is defined %} target="{{ entry.target }}"{% endif %}>
|
||||
<short>{{ short|default(name)|upper }}</short>
|
||||
{% if description is defined %}
|
||||
<description>{{ description }}</description>
|
||||
{% endif %}
|
||||
{% for tag in entry %}
|
||||
{# Settings which can be used several times #}
|
||||
{% if tag in ['interface','source','service','port','protocol','icmp-block','forward-port','source-port'] %}
|
||||
{% for subtag in entry[tag] %}
|
||||
<{{ tag }}{% for name,value in subtag.items() %} {{ name }}="{{ value }}"{% endfor %}/>
|
||||
{% endfor %}
|
||||
{# Settings which can be used once #}
|
||||
{% elif tag in ['icmp-block-inversion','masquerade'] and entry[tag] == true %}
|
||||
<{{ tag }}/>
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
{# Begin rich rule #}
|
||||
{% for rule in entry.rule|default([]) %}
|
||||
<rule{% if rule.family is defined %} family="{{ rule.family }}"{% endif %}>
|
||||
{% for tag in rule %}
|
||||
{% if tag in ['source','destination','service','port','icmp-block','icmp-type','masquerade','forward-port'] %}
|
||||
<{{ tag }}{% for name,value in (rule[tag]|default({})).items() %} {{ name }}="{{ value }}"{% endfor %}/>
|
||||
{% elif tag in ['log','audit','accept','drop','mark','reject'] %}
|
||||
<{{ tag }}{% for name,value in (rule[tag]|default({})).items() if name != 'limit' %} {{ name }}="{{ value }}"{% endfor %}>
|
||||
{% if rule[tag]['limit'] is defined %}
|
||||
<limit value="{{ rule[tag]['limit'] }}"/>
|
||||
{% endif %}
|
||||
</{{ tag }}>
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
</rule>
|
||||
{# End rich rule #}
|
||||
{% endfor %}
|
||||
</zone>
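{# illustrative render sketch (assumed vars: name='cluster', short='cluster', an entry
   with source=[{'ipset': 'cluster'}] and service=[{'name': 'ssh'}]) - the template
   above produces roughly:
   <zone>
     <short>CLUSTER</short>
     <description>cluster zone</description>
     <source ipset="cluster"/>
     <service name="ssh"/>
   </zone> #}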
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
localhost
|
||||
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
- hosts: localhost
|
||||
remote_user: root
|
||||
roles:
|
||||
- prometheus
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# vars file for template_role
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
language: python
|
||||
python: "2.7"
|
||||
|
||||
# Use the new container infrastructure
|
||||
sudo: false
|
||||
|
||||
# Install ansible
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- python-pip
|
||||
|
||||
install:
|
||||
# Install ansible
|
||||
- pip install ansible
|
||||
|
||||
# Check ansible version
|
||||
- ansible --version
|
||||
|
||||
# Create ansible.cfg with correct roles_path
|
||||
- printf '[defaults]\nroles_path=../' >ansible.cfg
|
||||
|
||||
script:
|
||||
# Basic role syntax check
|
||||
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
|
||||
|
||||
notifications:
|
||||
webhooks: https://galaxy.ansible.com/api/v1/notifications/
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
Role Name
|
||||
=========
|
||||
|
||||
A brief description of the role goes here.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
|
||||
|
||||
Role Variables
|
||||
--------------
|
||||
|
||||
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
|
||||
|
||||
Example Playbook
|
||||
----------------
|
||||
|
||||
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
|
||||
|
||||
- hosts: servers
|
||||
roles:
|
||||
- { role: username.rolename, x: 42 }
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
BSD
|
||||
|
||||
Author Information
|
||||
------------------
|
||||
|
||||
An optional section for the role authors to include contact information, or a website (HTML is not allowed).
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# defaults file for roles/role-template
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# handlers file for roles/role-template
|
||||
|
|
@ -0,0 +1,53 @@
|
|||
galaxy_info:
|
||||
author: your name
|
||||
description: your role description
|
||||
company: your company (optional)
|
||||
|
||||
# If the issue tracker for your role is not on github, uncomment the
|
||||
# next line and provide a value
|
||||
# issue_tracker_url: http://example.com/issue/tracker
|
||||
|
||||
# Choose a valid license ID from https://spdx.org - some suggested licenses:
|
||||
# - BSD-3-Clause (default)
|
||||
# - MIT
|
||||
# - GPL-2.0-or-later
|
||||
# - GPL-3.0-only
|
||||
# - Apache-2.0
|
||||
# - CC-BY-4.0
|
||||
license: license (GPL-2.0-or-later, MIT, etc)
|
||||
|
||||
min_ansible_version: 2.9
|
||||
|
||||
# If this a Container Enabled role, provide the minimum Ansible Container version.
|
||||
# min_ansible_container_version:
|
||||
|
||||
#
|
||||
# Provide a list of supported platforms, and for each platform a list of versions.
|
||||
# If you don't wish to enumerate all versions for a particular platform, use 'all'.
|
||||
# To view available platforms and versions (or releases), visit:
|
||||
# https://galaxy.ansible.com/api/v1/platforms/
|
||||
#
|
||||
# platforms:
|
||||
# - name: Fedora
|
||||
# versions:
|
||||
# - all
|
||||
# - 25
|
||||
# - name: SomePlatform
|
||||
# versions:
|
||||
# - all
|
||||
# - 1.0
|
||||
# - 7
|
||||
# - 99.99
|
||||
|
||||
galaxy_tags: []
|
||||
# List tags for your role here, one per line. A tag is a keyword that describes
|
||||
# and categorizes the role. Users find roles by searching for tags. Be sure to
|
||||
# remove the '[]' above, if you add tags to this list.
|
||||
#
|
||||
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
|
||||
# Maximum 20 tags per role.
|
||||
|
||||
dependencies: []
|
||||
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
|
||||
# if you add dependencies to this list.
|
||||
|
||||
|
|
@ -0,0 +1,110 @@
|
|||
---
|
||||
- name: find active interfaces
|
||||
ansible.builtin.command:
|
||||
cmd: ip -j a
|
||||
register: _interfaces
|
||||
|
||||
- name: match interface to mac
|
||||
set_fact:
|
||||
_interface: "{{ interface }}"
|
||||
vars:
|
||||
host: "{{ inventory_hostname }}"
|
||||
mac: "{{ (hostvars['localhost']['mac_map'] | selectattr('host', '==', host) | map(attribute='mac'))[0] }}"
|
||||
query: "[?address=='{{ mac }}'].ifname"
|
||||
interface: "{{ (_interfaces['stdout'] | from_json | json_query(query))[0] }}"
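
# illustrative sketch (assumed fact shape and values): hostvars['localhost']['mac_map']
# is a list like
#   - { host: qemu01, mac: '52:54:00:aa:bb:01', nmcli_con: primary,
#       ip: 192.168.140.41, dhcp_ip: 192.168.140.101 }
# so the jmespath query becomes "[?address=='52:54:00:aa:bb:01'].ifname" against the
# `ip -j a` json and returns the matching interface name, e.g. ens1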
|
||||
|
||||
- name: find nmcli connections
|
||||
ansible.builtin.command:
|
||||
cmd: nmcli --get-values device,con-uuid,connection device
|
||||
register: _connections
|
||||
|
||||
- name: match nmcli connection to interface
|
||||
set_fact:
|
||||
_connection_remove_uuid: "{{ nmcli_con_uuid }}"
|
||||
_connection_remove_name: "{{ nmcli_con_name }}"
|
||||
loop: "{{ _connections['stdout_lines'] }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
interface: "{{ entry.split(':')[0] }}"
|
||||
nmcli_con_uuid: "{{ entry.split(':')[1] }}"
|
||||
nmcli_con_name: "{{ entry.split(':')[2] }}"
|
||||
when:
|
||||
- _interface == interface
|
||||
|
||||
- name: update nmcli connection
|
||||
block:
|
||||
|
||||
- name: create primary nmcli connection
|
||||
ansible.builtin.command:
|
||||
cmd: "{{ entry }}"
|
||||
loop:
|
||||
- nmcli con add con-name "{{ conn_name }}" type ethernet ifname "{{ ifname }}" ipv4.method manual ipv4.address "{{ ip4 }}" ipv4.gateway "{{ gw4 }}" ipv4.dns "{{ dns4 }}" ipv6.method link-local ipv6.addr-gen-mode eui64 connection.autoconnect yes
|
||||
- nmcli con mod "{{ _connection_remove_uuid }}" connection.autoconnect no
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
nmcli_con: "primary"
|
||||
conn_name: "{{ vars[config_namespace]['hypervisor']['nmcli_con_names'][nmcli_con] }}"
|
||||
ifname: "{{ _interface }}"
|
||||
host: "{{ inventory_hostname }}"
|
||||
ip: "{{ (hostvars['localhost']['mac_map'] | selectattr('host', '==', host) | selectattr('nmcli_con', '==', nmcli_con) | map(attribute='ip'))[0] }}"
|
||||
network: "{{ vars[config_namespace]['hypervisor']['cluster_networks'][conn_name]['network'] }}"
|
||||
netmask: "{{ vars[config_namespace]['hypervisor']['cluster_networks'][conn_name]['netmask'] }}"
|
||||
ip4: "{{ ip }}/{{ (network + '/' + netmask) | ansible.utils.ipaddr('prefix') }}"
|
||||
gw4: "{{ vars[config_namespace]['hypervisor']['cluster_networks'][conn_name]['gateway'] }}"
|
||||
dns4: "{{ vars[config_namespace]['hypervisor']['cluster_networks'][conn_name]['nameserver'] }}"
|
||||
register: provision_connection
|
||||
|
||||
- name: set new connection live
|
||||
ansible.builtin.command:
|
||||
cmd: "nmcli con up {{ conn_uuid }}"
|
||||
vars:
|
||||
conn_uuid: "{{ provision_connection['results'][0]['stdout'].split('(')[1].split(')')[0] }}"
|
||||
async: 1
|
||||
poll: 0
|
||||
|
||||
- name: add "{{ inventory_hostname }}" to in-memory inventory with static ip
|
||||
# ansible.builtin.add_host: >
|
||||
# name={{ host }}
|
||||
# groups={{ ['all', 'hypervisor'] }}
|
||||
# ansible_ssh_host={{ ansible_ssh_host }}
|
||||
# ansible_ssh_common_args='-o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
|
||||
# ansible_user={{ ansible_user }}
|
||||
# ansible_password={{ ansible_password }}
|
||||
ansible.builtin.add_host: >
|
||||
name={{ host }}
|
||||
groups={{ active_role_groups }}
|
||||
ansible_ssh_host={{ ansible_ssh_host }}
|
||||
ansible_ssh_common_args='-o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
|
||||
ansible_user={{ ansible_user }}
|
||||
ansible_password={{ ansible_password }}
|
||||
vars:
|
||||
host: "{{ inventory_hostname }}"
|
||||
ansible_ssh_host: "{{ (hostvars['localhost']['mac_map'] | selectattr('host', '==', host) | map(attribute='ip'))[0] }}"
|
||||
ansible_user: "{{ vars[config_namespace]['hypervisor']['ssh_user'] }}"
|
||||
ansible_password: "{{ vars[config_namespace]['hypervisor']['ssh_password'] }}"
|
||||
|
||||
- name: remove old connection
|
||||
ansible.builtin.command:
|
||||
cmd: "nmcli con del {{ _connection_remove_uuid }}"
|
||||
|
||||
- name: update facts to include new interface
|
||||
setup:
|
||||
gather_subset:
|
||||
- all_ipv4_addresses
|
||||
- all_ipv6_addresses
|
||||
- default_ipv4
|
||||
- default_ipv6
|
||||
- interfaces
|
||||
|
||||
vars:
|
||||
host: "{{ inventory_hostname }}"
|
||||
ip: "{{ (hostvars['localhost']['mac_map'] | selectattr('host', '==', host) | selectattr('nmcli_con', '==', nmcli_con) | map(attribute='ip'))[0] }}"
|
||||
dhcp_ip: "{{ (hostvars['localhost']['mac_map'] | selectattr('host', '==', host) | selectattr('nmcli_con', '==', nmcli_con) | map(attribute='dhcp_ip'))[0] }}"
|
||||
connection_remove: "{{ _connection_remove_name }}"
|
||||
nmcli_con: "primary"
|
||||
conn_name: "{{ vars[config_namespace]['hypervisor']['nmcli_con_names'][nmcli_con] }}"
|
||||
when:
|
||||
- ip != dhcp_ip or
|
||||
connection_remove != conn_name
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
localhost
|
||||
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
- hosts: localhost
|
||||
remote_user: root
|
||||
roles:
|
||||
- roles/role-template
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# vars file for roles/role-template
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
language: python
|
||||
python: "2.7"
|
||||
|
||||
# Use the new container infrastructure
|
||||
sudo: false
|
||||
|
||||
# Install ansible
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- python-pip
|
||||
|
||||
install:
|
||||
# Install ansible
|
||||
- pip install ansible
|
||||
|
||||
# Check ansible version
|
||||
- ansible --version
|
||||
|
||||
# Create ansible.cfg with correct roles_path
|
||||
- printf '[defaults]\nroles_path=../' >ansible.cfg
|
||||
|
||||
script:
|
||||
# Basic role syntax check
|
||||
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
|
||||
|
||||
notifications:
|
||||
webhooks: https://galaxy.ansible.com/api/v1/notifications/
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
Role Name
|
||||
=========
|
||||
|
||||
A brief description of the role goes here.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
|
||||
|
||||
Role Variables
|
||||
--------------
|
||||
|
||||
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
|
||||
|
||||
Example Playbook
|
||||
----------------
|
||||
|
||||
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
|
||||
|
||||
- hosts: servers
|
||||
roles:
|
||||
- { role: username.rolename, x: 42 }
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
BSD
|
||||
|
||||
Author Information
|
||||
------------------
|
||||
|
||||
An optional section for the role authors to include contact information, or a website (HTML is not allowed).
|
||||
|
|
@ -0,0 +1,8 @@
|
|||
---
|
||||
hypervisor_prep:
|
||||
container_dir: "/opt/containers"
|
||||
compose_dir: "/compose"
|
||||
bin_dir: "/bin"
|
||||
data_dir: "/data"
|
||||
etc_dir: "/etc"
|
||||
docker_compose_url: "https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-linux-x86_64"
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# handlers file for roles/role-template
|
||||
|
|
@ -0,0 +1,53 @@
|
|||
galaxy_info:
|
||||
author: your name
|
||||
description: your role description
|
||||
company: your company (optional)
|
||||
|
||||
# If the issue tracker for your role is not on github, uncomment the
|
||||
# next line and provide a value
|
||||
# issue_tracker_url: http://example.com/issue/tracker
|
||||
|
||||
# Choose a valid license ID from https://spdx.org - some suggested licenses:
|
||||
# - BSD-3-Clause (default)
|
||||
# - MIT
|
||||
# - GPL-2.0-or-later
|
||||
# - GPL-3.0-only
|
||||
# - Apache-2.0
|
||||
# - CC-BY-4.0
|
||||
license: license (GPL-2.0-or-later, MIT, etc)
|
||||
|
||||
min_ansible_version: 2.9
|
||||
|
||||
# If this a Container Enabled role, provide the minimum Ansible Container version.
|
||||
# min_ansible_container_version:
|
||||
|
||||
#
|
||||
# Provide a list of supported platforms, and for each platform a list of versions.
|
||||
# If you don't wish to enumerate all versions for a particular platform, use 'all'.
|
||||
# To view available platforms and versions (or releases), visit:
|
||||
# https://galaxy.ansible.com/api/v1/platforms/
|
||||
#
|
||||
# platforms:
|
||||
# - name: Fedora
|
||||
# versions:
|
||||
# - all
|
||||
# - 25
|
||||
# - name: SomePlatform
|
||||
# versions:
|
||||
# - all
|
||||
# - 1.0
|
||||
# - 7
|
||||
# - 99.99
|
||||
|
||||
galaxy_tags: []
|
||||
# List tags for your role here, one per line. A tag is a keyword that describes
|
||||
# and categorizes the role. Users find roles by searching for tags. Be sure to
|
||||
# remove the '[]' above, if you add tags to this list.
|
||||
#
|
||||
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
|
||||
# Maximum 20 tags per role.
|
||||
|
||||
dependencies: []
|
||||
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
|
||||
# if you add dependencies to this list.
|
||||
|
||||
|
|
@ -0,0 +1,169 @@
|
|||
---
|
||||
######## runtime_facts
|
||||
- name: runtime facts
|
||||
ansible.builtin.set_fact:
|
||||
_docker_compose_url: "{{ hypervisor_prep['docker_compose_url'] }}"
|
||||
_container_dir: "{{ hypervisor_prep['container_dir'] }}"
|
||||
_compose_directory: "{{ hypervisor_prep['container_dir'] }}{{ hypervisor_prep['compose_dir'] }}"
|
||||
_bin_directory: "{{ hypervisor_prep['container_dir'] }}{{ hypervisor_prep['bin_dir'] }}"
|
||||
_etc_directory: "{{ hypervisor_prep['container_dir'] }}{{ hypervisor_prep['etc_dir'] }}"
|
||||
_data_directory: "{{ hypervisor_prep['container_dir'] }}{{ hypervisor_prep['data_dir'] }}"
|
||||
|
||||
######## copy ssh pub key
|
||||
|
||||
- name: Authorize local SSH pub key on all hosts
|
||||
authorized_key:
|
||||
key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
|
||||
comment: ""
|
||||
user: root
|
||||
state: present
|
||||
|
||||
######## set hostname
|
||||
|
||||
- name: change hostname
|
||||
hostname:
|
||||
name: "{{ inventory_hostname }}"
|
||||
|
||||
- name: add hostname to /etc/hosts ipv4
|
||||
lineinfile:
|
||||
dest: /etc/hosts
|
||||
regexp: '^127\.0\.0\.1[ \t]+localhost'
|
||||
line: "127.0.0.1 {{ inventory_hostname }}.{{ vars[config_namespace]['env']['cluster_domain'] }} {{ inventory_hostname }} localhost"
|
||||
state: present
|
||||
|
||||
- name: add hostname to /etc/hosts ipv6
|
||||
lineinfile:
|
||||
dest: /etc/hosts
|
||||
regexp: '^\:\:1[ \t]+localhost'
|
||||
line: "::1 {{ inventory_hostname }}.{{ vars[config_namespace]['env']['cluster_domain'] }} {{ inventory_hostname }} localhost localhost.localdomain localhost6 localhost6.localdomain6"
|
||||
state: present
|
||||
|
||||
- name: add cluster hosts to /etc/hosts
|
||||
lineinfile:
|
||||
path: /etc/hosts
|
||||
regexp: ".*[ \t]+{{ host }}"
|
||||
line: "{{ ip }} {{ host }}.{{ vars[config_namespace]['env']['cluster_domain'] }} {{ host }}"
|
||||
state: present
|
||||
loop: "{{ groups['hypervisor'] | list | difference([inventory_hostname]) }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
vars:
|
||||
host: "{{ entry }}"
|
||||
# ip: "{{ hostvars[host]['ansible_default_ipv4']['address'] }}"
|
||||
ip: "{{ hypervisor['mac_map'] | selectattr('host', '==', entry) | map(attribute='ip') | first }}"
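
# illustrative sketch (assumed values): for host qemu02 with ip 192.168.140.42 and
# cluster_domain 'local', the rendered /etc/hosts line is:
#   192.168.140.42 qemu02.local qemu02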
|
||||
|
||||
######## change security
|
||||
|
||||
- name: set SELinux to permissive mode, podman requires SELinux
|
||||
ansible.posix.selinux:
|
||||
policy: targeted
|
||||
state: permissive
|
||||
|
||||
- name: disable firewalld
|
||||
ansible.builtin.systemd:
|
||||
state: stopped
|
||||
enabled: no
|
||||
name: firewalld
|
||||
|
||||
######## install podman
|
||||
|
||||
- name: install podman on first hypervisor or all ceph nodes
|
||||
block:
|
||||
|
||||
- name: update package facts
|
||||
ansible.builtin.package_facts:
|
||||
manager: auto
|
||||
strategy: all
|
||||
|
||||
- name: install podman
|
||||
package:
|
||||
name: "podman"
|
||||
state: present
|
||||
|
||||
# start/stop podman in the shell via 'service podman.socket'; this will restart podman.service. podman.socket is used for docker-compose integration
|
||||
- name: enable podman services
|
||||
ansible.builtin.systemd:
|
||||
name: podman.socket
|
||||
enabled: yes
|
||||
state: started
|
||||
|
||||
- name: install docker-compose
|
||||
ansible.builtin.get_url:
|
||||
url: "{{ _docker_compose_url }}"
|
||||
dest: /usr/local/bin/docker-compose
|
||||
mode: 0750
|
||||
|
||||
- name: softlink podman socket to docker socket
|
||||
ansible.builtin.file:
|
||||
src: /run/podman/podman.sock
|
||||
dest: /var/run/docker.sock
|
||||
owner: root
|
||||
group: root
|
||||
state: link
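
# usage sketch (assumed compose file path): with the podman socket linked to
# /var/run/docker.sock, the stock docker-compose binary drives podman directly, e.g.
#   docker-compose -f /opt/containers/compose/xcat/docker-compose.yml up -d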
|
||||
|
||||
- name: create container directories
|
||||
ansible.builtin.file:
|
||||
path: "{{ entry }}"
|
||||
state: directory
|
||||
owner: root
|
||||
group: root
|
||||
mode: 0755
|
||||
loop:
|
||||
- "{{ _container_dir }}"
|
||||
- "{{ _compose_directory }}"
|
||||
- "{{ _bin_directory }}"
|
||||
- "{{ _etc_directory }}"
|
||||
- "{{ _data_directory }}"
|
||||
loop_control:
|
||||
loop_var: entry
|
||||
|
||||
vars:
|
||||
hypervisor_host: "{{ groups['hypervisor'] | first }}"
|
||||
host: "{{ inventory_hostname }}"
|
||||
when:
|
||||
- hypervisor_host == host or
|
||||
host in groups['ceph']
|
||||
|
||||
######## setup LVM for ceph
|
||||
|
||||
- name: setup LVM for ceph
|
||||
block:
|
||||
|
||||
- name: read device information
|
||||
community.general.parted:
|
||||
device: "{{ hypervisor['ceph_disk'] }}"
|
||||
unit: "MiB"
|
||||
register: device_info
|
||||
|
||||
- name: create new primary partition for LVM
|
||||
community.general.parted:
|
||||
device: "{{ hypervisor['ceph_disk'] }}"
|
||||
number: "{{ partition_number }}"
|
||||
unit: MiB
|
||||
part_start: "{{ part_start | int + 1 }}MiB"
|
||||
part_end: "100%"
|
||||
flags: [ lvm ]
|
||||
label: gpt
|
||||
part_type: primary
|
||||
state: present
|
||||
vars:
|
||||
last_partition: "{{ device_info['partitions'] | length - 1 }}"
|
||||
part_start: "{{ device_info['partitions'][last_partition | int ]['end'] }}"
|
||||
partition_number: "{{ device_info['partitions'] | length + 1 }}"
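
# illustrative sketch (assumed layout): with 3 existing partitions where the last one
# ends at 30000MiB, last_partition = 2 (0-indexed), part_start renders as '30001MiB',
# partition_number = 4, and the new lvm partition spans 30001MiB to 100% of the disk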
|
||||
|
||||
- name: create volume group
|
||||
community.general.lvg:
|
||||
vg: ceph
|
||||
# /dev/nvme0n1p4
|
||||
pvs: "{{ hypervisor['ceph_disk'] }}p{{ partition_number }}"
|
||||
vars:
|
||||
partition_number: "{{ device_info['partitions'] | length + 1 }}"
|
||||
|
||||
- name: create logical volume
|
||||
community.general.lvol:
|
||||
vg: ceph
|
||||
lv: ceph_data
|
||||
size: 100%FREE
|
||||
|
||||
when:
|
||||
- ansible_lvm['lvs']['ceph_data'] is not defined
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
localhost
|
||||
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
- hosts: localhost
|
||||
remote_user: root
|
||||
roles:
|
||||
- roles/role-template
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# vars file for roles/role-template
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
language: python
|
||||
python: "2.7"
|
||||
|
||||
# Use the new container infrastructure
|
||||
sudo: false
|
||||
|
||||
# Install ansible
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- python-pip
|
||||
|
||||
install:
|
||||
# Install ansible
|
||||
- pip install ansible
|
||||
|
||||
# Check ansible version
|
||||
- ansible --version
|
||||
|
||||
# Create ansible.cfg with correct roles_path
|
||||
- printf '[defaults]\nroles_path=../' >ansible.cfg
|
||||
|
||||
script:
|
||||
# Basic role syntax check
|
||||
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
|
||||
|
||||
notifications:
|
||||
webhooks: https://galaxy.ansible.com/api/v1/notifications/
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
Role Name
|
||||
=========
|
||||
|
||||
A brief description of the role goes here.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
|
||||
|
||||
Role Variables
|
||||
--------------
|
||||
|
||||
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
|
||||
|
||||
Example Playbook
|
||||
----------------
|
||||
|
||||
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
|
||||
|
||||
- hosts: servers
|
||||
roles:
|
||||
- { role: username.rolename, x: 42 }
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
BSD
|
||||
|
||||
Author Information
|
||||
------------------
|
||||
|
||||
An optional section for the role authors to include contact information, or a website (HTML is not allowed).
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# defaults file for roles/role-template
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
---
|
||||
# handlers file for roles/role-template
|
||||
|
|
@ -0,0 +1,53 @@
|
|||
galaxy_info:
|
||||
author: your name
|
||||
description: your role description
|
||||
company: your company (optional)
|
||||
|
||||
# If the issue tracker for your role is not on github, uncomment the
|
||||
# next line and provide a value
|
||||
# issue_tracker_url: http://example.com/issue/tracker
|
||||
|
||||
# Choose a valid license ID from https://spdx.org - some suggested licenses:
|
||||
# - BSD-3-Clause (default)
|
||||
# - MIT
|
||||
# - GPL-2.0-or-later
|
||||
# - GPL-3.0-only
|
||||
# - Apache-2.0
|
||||
# - CC-BY-4.0
|
||||
license: license (GPL-2.0-or-later, MIT, etc)
|
||||
|
||||
min_ansible_version: 2.9
|
||||
|
||||
# If this a Container Enabled role, provide the minimum Ansible Container version.
|
||||
# min_ansible_container_version:
|
||||
|
||||
#
|
||||
# Provide a list of supported platforms, and for each platform a list of versions.
|
||||
# If you don't wish to enumerate all versions for a particular platform, use 'all'.
|
||||
# To view available platforms and versions (or releases), visit:
|
||||
# https://galaxy.ansible.com/api/v1/platforms/
|
||||
#
|
||||
# platforms:
|
||||
# - name: Fedora
|
||||
# versions:
|
||||
# - all
|
||||
# - 25
|
||||
# - name: SomePlatform
|
||||
# versions:
|
||||
# - all
|
||||
# - 1.0
|
||||
# - 7
|
||||
# - 99.99
|
||||
|
||||
galaxy_tags: []
|
||||
# List tags for your role here, one per line. A tag is a keyword that describes
|
||||
# and categorizes the role. Users find roles by searching for tags. Be sure to
|
||||
# remove the '[]' above, if you add tags to this list.
|
||||
#
|
||||
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
|
||||
# Maximum 20 tags per role.
|
||||
|
||||
dependencies: []
|
||||
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
|
||||
# if you add dependencies to this list.
|
||||
|
||||