initial commit

main
tseed 2022-10-26 17:39:31 +01:00
commit d091576a4a
16 changed files with 878 additions and 0 deletions

README.md Executable file
## What is this demo?
An Ansible playbook that creates a Fedora CoreOS (FCOS) based virtual machine on Proxmox to host containers. The resulting containerised application stack is Bookstack, a web-based Evernote/wiki-like application.

FCOS is designed to run in a cloud or bare-metal farm environment where network services such as DHCP/DNS exist. This demo aims to illustrate that FCOS can be assigned a static IP and used as the bootstrap component of a system built from code.

By default every boot starts as if from a fresh install, aside from the changes made by the original boot configuration. For this demo that behavior has been changed to allow persistent disk writes in $HOME and /var, as the machine is hosting a wiki.
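In Ignition terms, persistence is achieved by labelling a partition and mounting it as /var; the FCC fragment below is the shape of what the ignition template later in this commit does:

```yaml
storage:
  disks:
    - device: /dev/sda
      wipe_table: false
      partitions:
        - size_mib: 0   # 0 expands to all available space
          start_mib: 0
          label: var
  filesystems:
    - path: /var
      device: /dev/disk/by-partlabel/var
      format: xfs
```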
## What is Fedora Core OS?
FCOS is the successor to CoreOS: a minimal operating system designed to run in idempotent compute systems, built solely to run containers via Docker and now Podman. Unlike traditional operating systems it is secure out of the box, does not (easily) offer package management, is self-updating, and is crucially designed to be blown away and spun up as a component in a PaaS. Typically it has been used for Docker Swarm and Kubernetes.

FCOS has stepped away from cloud-init and introduced its own boot-time configuration system, Ignition, along with a tool set for users to validate the Ignition configuration, a clear sign that Red Hat is looking for widespread adoption. Whilst Ignition offers less flexibility for boot-time configuration than cloud-init, it guides users towards using Systemd in creative ways to configure the runtime environment; this is much more powerful and reusable than you might expect.

Red Hat acquired CoreOS to offer a no/low-ops foundation for OpenShift. At the same time Red Hat is forging a path away from Docker for container management and has chosen to invest in Podman.

Podman's operation is very familiar to Docker users but it is early in its development; it offers many security enhancements and the ability to group containers into pods, mirroring the Kubernetes pod abstraction. It should be stated that Kubernetes does not yet manage containers through Podman, only Docker and CRI-O.
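As a taste of the Ignition workflow: a minimal FCC (Fedora CoreOS Config) is plain YAML that fcct transpiles, and validates, into Ignition JSON. The user and key below are placeholders, not values from this repo:

```yaml
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-rsa AAAA... user@example"
```

The playbook invokes the transpiler the same way, `./fcct-x86_64-unknown-linux-gnu FCOS-ignition.yaml`, writing the resulting JSON out as an .ign file for the VM.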
## Why not demo in the cloud?
IMHO Ansible is the wrong tool to provision in the cloud (though it can be the right tool for cloud instance configuration); the heavy lifting is performed by the Ignition configuration, and ARM, CloudFormation or Terraform could replace any need for Ansible in this demo.

Proxmox, by virtue of its API and QEMU hypervisor underpinnings with an exposed Linux OS, is extensible and easy to script; it serves to showcase using Ansible to bootstrap an app stack.
## What is in the playbook?
- Retrieving large binary files from the internet, using two Ansible URL modules for download and error checking.
- Uploading large binary files to a host with error checking, making use of in-line inventory (edit/override the group_vars and run, no inventory required).
- Provisioning a virtual machine running the new Fedora CoreOS (CoreOS successor) using the Proxmox Ansible module.
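The image-name handling in these plays is simple basename/suffix string work; below is a rough Python equivalent of the playbook's urlsplit/split filter chain (Python is used here only for illustration, the playbook does this with Jinja filters):

```python
from urllib.parse import urlsplit

# the testing-stream image URL from group_vars/all
image_url = ("https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/"
             "31.20200323.2.1/x86_64/fedora-coreos-31.20200323.2.1-qemu.x86_64.qcow2.xz")

# downloaded artifact: last component of the URL path
image_name = urlsplit(image_url).path.split("/")[-1]

# extracted qcow2: same name with the trailing .xz dropped
extracted_image_name = ".".join(image_name.split(".")[:-1])

print(image_name)            # fedora-coreos-31.20200323.2.1-qemu.x86_64.qcow2.xz
print(extracted_image_name)  # fedora-coreos-31.20200323.2.1-qemu.x86_64.qcow2
```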
The proxmox_node_provision play is the fun part of the playbook, demoing an API call to Proxmox and use of the proxmox_kvm module. The next step is to render an Ignition configuration that controls the flow of starting containers using Systemd and Podman.

Systemd is used to demo timers and Podman job dependency, then to acquire and start container images.

Podman is used to demo running heterogeneous containers in a single non-root pod (to illustrate the difference from Docker), with intra-container networking isolated from the host/public, exposing services only to the host's loopback.

Traefik is used to demo an HTTPS ingress controller (a reverse proxy with bells and whistles) running on the host's network adapter, exposing ports <1024 to the app stack pod listening on the host's loopback. Traefik is typically used where the Docker or Kubernetes API offers a mechanism for service discovery and dynamic configuration of URL routing, but it can still be statically configured and run as a container, retaining all the features of an HTTP proxy and a classic layer 4 load balancer for all of your faux cloud needs.
## How to run
Edit `group_vars/all` for Proxmox credentials and network attributes, then run:

```shell
ansible-playbook site.yml
```
## Why is my host... ?
### Rebooting 5+ minutes after first boot?
By default FCOS will reboot whenever a new image is available; it is designed to run in a redundant farm, and we are using an older version. This behavior can be changed via Systemd with an update to a config file.
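On this FCOS release the updater is Zincati; a drop-in like the following disables automatic update reboots (path and keys follow the standard Zincati convention, this file is not shipped in this repo):

```toml
# /etc/zincati/config.d/90-disable-auto-updates.toml
[updates]
enabled = false
```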
### Asking to trust an SSL certificate after every boot?
Traefik has no SSL certificate set, so it auto-generates an untrusted placeholder certificate on init. This can be changed by not deleting the container as a Systemd pre-execute task, or by supplying a certificate chain.
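Supplying a chain amounts to declaring it in the dynamic configuration consumed by the file provider; a sketch in Traefik v2 syntax with illustrative paths:

```yaml
tls:
  certificates:
    - certFile: /etc/traefik/certs/wiki.example.crt
      keyFile: /etc/traefik/certs/wiki.example.key
```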

ansible.cfg Executable file
[defaults]
inventory = inventory/nodes.ini
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[persistent_connection]
[accelerate]
[selinux]
[colors]
[diff]

bookstack/tasks/main.yml Executable file
- name: get uid/gid of service account
  getent:
    database: passwd
    key: "{{ node_account }}"

- set_fact:
    uid: "{{ getent_passwd[node_account][1] }}"
    gid: "{{ getent_passwd[node_account][2] }}"

- name: create docker data directory
  become: yes
  file:
    path: /var/bookstack
    state: directory
    mode: '0755'

- name: render docker compose
  template:
    src: docker-compose.j2
    dest: ~/docker-compose.yaml

# if we were to continue:
#
# finish this play - upload compose - create systemd to run compose
# add a docker network to compose
# add minio for s3 backend to bookstack pictures/artifacts
# traefik - rproxy endpoints for bookstack and minio - maybe LE
# systemd timer job to run container to do dbdump and s3 backup to cloud

bookstack/templates/docker-compose.j2 Executable file
---
version: "2"
services:
  bookstack:
    image: linuxserver/bookstack
    container_name: bookstack
    environment:
      - PUID={{ uid }}
      - PGID={{ gid }}
      - DB_HOST=bookstack_db
      - DB_USER=bookstack
      - DB_PASS={{ node_account_password }}
      - DB_DATABASE=bookstackapp
    volumes:
      - /opt/bookstack:/config
    ports:
      - 80:80
    restart: unless-stopped
    depends_on:
      - bookstack_db
  bookstack_db:
    image: linuxserver/mariadb
    container_name: bookstack_db
    environment:
      - PUID={{ uid }}
      - PGID={{ gid }}
      - MYSQL_ROOT_PASSWORD=<yourdbpass>
      - TZ=Europe/London
      - MYSQL_DATABASE=bookstackapp
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD={{ node_account_password }}
    volumes:
      - /var/bookstack:/config
    restart: unless-stopped

fcct-x86_64-unknown-linux-gnu Executable file

get_cloud-init_image/tasks/main.yml Executable file
- set_fact:
    image_name: "{{ ((image_url | urlsplit).path).split('/')[-1] }}"

- name: check local image present
  stat:
    path: ./{{ image_name }}
  register: img_local_present

- name: check site is available
  uri:
    url: "{{ image_url }}"
    follow_redirects: none
    method: HEAD
  register: _result
  until: _result.status == 200
  retries: 2
  delay: 5 # seconds
  when: img_local_present.stat.exists == false

- name: download image
  get_url:
    url: "{{ image_url }}"
    dest: .
  when: img_local_present.stat.exists == false

- name: check local image present
  stat:
    path: ./{{ image_name }}
  register: img_local_present

- name: report image downloaded
  fail:
    msg: "image {{ image_name }} not present, download failed"
  when: img_local_present.stat.exists == false

group_vars/all Executable file
---
# Pmox host
proxmox_host: 192.168.1.20
# Pmox API creds
proxmox_node: pve
proxmox_user: root@pam
proxmox_pass: <some-password>
# Pmox ssh creds
proxmox_ssh_user: root
proxmox_ssh_pass: <some-password>
# Pmox storage
proxmox_vm_datastore: local-lvm
proxmox_img_datastore: local
proxmox_img_datastore_path: /var/lib/vz
proxmox_node_disk_size: 10G
# Pmox Network
proxmox_vmbr: vmbr1
proxmox_vlan: 2
# node image
# testing release allows setting static ip by moving network manager to initramfs, FCOS is very new
#image_url: https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/31.20200310.3.0/x86_64/fedora-coreos-31.20200310.3.0-qemu.x86_64.qcow2.xz
image_url: https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/31.20200323.2.1/x86_64/fedora-coreos-31.20200323.2.1-qemu.x86_64.qcow2.xz
# FCOS tool
fcct_url: https://github.com/coreos/fcct/releases/download/v0.5.0/fcct-x86_64-unknown-linux-gnu
# node attributes
node_name: wiki
node_type: wiki
node_ip: 192.168.1.80
node_gateway: 192.168.1.1
node_subnet: 255.255.255.0
node_dns: 192.168.1.1
domain: terratech.internal
node_account: ocfadmin
node_account_password: Password0

inventory/nodes.ini Executable file
[ungrouped]
[wiki]
wiki ansible_host=192.168.140.81
wiki1 ansible_host=192.168.140.81
[all]
proxmox_server ansible_host=192.168.140.11
wiki ansible_host=192.168.140.81
wiki1 ansible_host=192.168.140.81
[proxmox]
proxmox_server ansible_host=192.168.140.11
[proxmox:vars]
ansible_ssh_user=root
ansible_ssh_pass=W1ck3rm@n
become=true
[all:vars]
ansible_password=Password0
ansible_user=ocfadmin
ansible_ssh_extra_args="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
ansible_python_interpreter=/usr/bin/python3

proxmox_node_provision/tasks/main.yml Executable file
---
- name: add proxmox host to in-memory inventory
  add_host: >
    name=proxmox_server
    groups=proxmox
    ansible_ssh_host={{ proxmox_host }}
    ansible_host={{ proxmox_host }}
    ansible_ssh_user={{ proxmox_ssh_user }}
    ansible_user={{ proxmox_ssh_user }}
    ansible_ssh_pass="{{ proxmox_ssh_pass }}"

- set_fact:
    image_name: "{{ (((image_url | urlsplit).path).split('/')[-1]).split('.')[:-1] | join('.') }}"

- name: get remote image attributes
  stat:
    path: "{{ proxmox_img_datastore_path }}/template/iso/{{ image_name }}"
  register: img_remote_attributes
  delegate_to: proxmox_server

- name: report image not present
  fail:
    msg: "image {{ image_name }} not present on proxmox node {{ proxmox_node }} at storage {{ proxmox_img_datastore }}"
  when: img_remote_attributes.stat.exists == false

- name: generate password hash for ignition
  set_fact:
    node_account_password_hash: "{{ node_account_password | password_hash('sha512') }}"

- name: get ssh pub key for ignition
  set_fact:
    node_account_pub_ssh: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

- name: get proxmox API auth cookie
  uri:
    url: "https://{{ proxmox_host }}:8006/api2/json/access/ticket"
    validate_certs: no
    method: POST
    body_format: form-urlencoded
    body:
      username: "{{ proxmox_user }}"
      password: "{{ proxmox_pass }}"
    status_code: 200
  register: login

- name: query proxmox next free vmid
  uri:
    url: "https://{{ proxmox_host }}:8006/api2/json/cluster/nextid"
    validate_certs: no
    method: GET
    headers:
      Cookie: "PVEAuthCookie={{ login.json.data.ticket }}"
      CSRFPreventionToken: "{{ login.json.data.CSRFPreventionToken }}"
  register: next_pvid

# https://forum.proxmox.com/threads/howto-startup-vm-using-an-ignition-file.63782/
- name: provision nodes on proxmox kvm host
  proxmox_kvm:
    api_user: "{{ proxmox_user }}"
    api_password: "{{ proxmox_pass }}"
    api_host: "{{ proxmox_host }}"
    node: "{{ proxmox_node }}"
    vmid: "{{ next_pvid.json.data }}"
    boot: c  # n network, d cdrom, c harddisk; combine in any order, e.g. cdn
    kvm: yes
    agent: yes
    name: "{{ node_name }}"
    sockets: 1
    cores: 2
    memory: 2048
    serial: '{"serial0": "socket"}'
    vga: serial0
    scsihw: virtio-scsi-single
    net: '{"net0":"virtio,bridge={{ proxmox_vmbr }},tag={{ proxmox_vlan }},firewall=0"}'
    #net: '{"net0":"virtio,bridge={{ proxmox_vmbr }},firewall=0"}'
    args: "-fw_cfg name=opt/com.coreos/config,file={{ proxmox_img_datastore_path }}/snippets/{{ next_pvid.json.data }}-ignition.ign"
    ostype: l26
    state: present
  register: _result

- name: end run with failure where host(s) pre-exist
  fail:
    msg: "node {{ node_name }} already exists"
  when: _result.msg == "VM with name <{{ node_name }}> already exists"

- name: get node MAC
  uri:
    url: "https://{{ proxmox_host }}:8006/api2/json/nodes/{{ proxmox_node }}/qemu/{{ next_pvid.json.data }}/config"
    validate_certs: no
    method: GET
    headers:
      Cookie: "PVEAuthCookie={{ login.json.data.ticket }}"
      CSRFPreventionToken: "{{ login.json.data.CSRFPreventionToken }}"
  register: _result

- name: register node MAC
  set_fact:
    node_mac: "{{ ((_result.json.data.net0).split(',')[0]).split('=')[1] | lower }}"

# requires python-netaddr: apt-get install python-netaddr
- name: register netmask CIDR prefix
  set_fact:
    node_cidr: "{{ ip | ipaddr('prefix') }}"
  vars:
    ip: "{{ node_ip }}/{{ node_subnet }}"

- set_fact:
    fcct_binary: "{{ ((fcct_url | urlsplit).path).split('/')[-1] }}"

- name: check if fcct is present
  stat:
    path: "{{ fcct_binary }}"
  register: fcct_present

- name: download fcct
  get_url:
    url: "{{ fcct_url }}"
    dest: .
    mode: '0755'
  when: fcct_present.stat.exists == false

# fcct won't accept string input with redirection?
# leaving this here to have another go at input redirection with fcct to negate the need for an intermediate file + copy operation
# - name: template FCOS ignition yaml config
#   set_fact:
#     fcos_rendered: "{{ lookup('template', 'FCOS-ignition.j2') }}"
- name: render FCOS ignition yaml config
  template:
    src: FCOS-ignition.j2
    dest: FCOS-ignition.yaml

- name: run FCCT to get ignition file output
  command: "./{{ fcct_binary }} FCOS-ignition.yaml"
  register: fcos_json

- name: write ignition config to proxmox server
  copy:
    content: "{{ fcos_json.stdout | from_json }}"
    dest: "{{ proxmox_img_datastore_path }}/snippets/{{ next_pvid.json.data }}-ignition.ign"
    owner: "{{ proxmox_ssh_user }}"
    group: "{{ proxmox_ssh_user }}"
    mode: '0755'
  delegate_to: proxmox_server

- name: remove fcct source
  file:
    path: FCOS-ignition.yaml
    state: absent

- name: create primary disk using cloud-init ready image
  shell:
    cmd: |
      qemu-img create -f qcow2 -b "{{ image_name }}" backing-"{{ image_name }}"
      qm importdisk "{{ next_pvid.json.data }}" backing-"{{ image_name }}" "{{ proxmox_vm_datastore }}"
      rm -f backing-"{{ image_name }}"
      qm set "{{ next_pvid.json.data }}" --scsihw virtio-scsi-pci --scsi0 "{{ proxmox_vm_datastore }}":vm-"{{ next_pvid.json.data }}"-disk-0
      qm set "{{ next_pvid.json.data }}" --boot c --bootdisk scsi0
    chdir: "{{ proxmox_img_datastore_path }}/template/iso/"
  register: _result
  delegate_to: proxmox_server

- name: resize node disk
  uri:
    url: "https://{{ proxmox_host }}:8006/api2/json/nodes/{{ proxmox_node }}/qemu/{{ next_pvid.json.data }}/resize"
    validate_certs: no
    method: PUT
    headers:
      Cookie: "PVEAuthCookie={{ login.json.data.ticket }}"
      CSRFPreventionToken: "{{ login.json.data.CSRFPreventionToken }}"
    body_format: form-urlencoded
    body:
      disk: scsi0
      size: "{{ proxmox_node_disk_size }}"

- name: start nodes on proxmox kvm host
  proxmox_kvm:
    api_user: "{{ proxmox_user }}"
    api_password: "{{ proxmox_pass }}"
    api_host: "{{ proxmox_host }}"
    node: "{{ proxmox_node }}"
    name: "{{ node_name }}"
    state: started
  register: _result

- name: add node to in-memory inventory
  add_host: >
    name="{{ node_name }}"
    groups="{{ node_type }}"
    ansible_host="{{ node_ip }}"
    ansible_ssh_user="{{ node_account }}"
    ansible_ssh_pass="{{ node_account_password }}"
    ansible_ssh_extra_args="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"

# inventory is only read at runtime
- name: create ansible inventory
  template:
    src: ansible-hosts.j2
    dest: "inventory/nodes.ini"

- name: wait for node to become available
  local_action: command sshpass -p "{{ node_account_password | default('') }}" ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" "{{ node_account }}@{{ node_ip }}" "sleep 5; ls /var/{{ node_account }}/traefik/traefik-dynamic.yaml"
  changed_when: False
  register: ready
  until: ready.rc == 0
  retries: 100

- debug:
    msg:
      - "{{ node_name }} is ready"
      - "wiki @ http://{{ node_ip }} u:admin@admin.com p:password"
      - "dashboard @ http://{{ node_ip }}:8080"
  when: ready.rc == 0

proxmox_node_provision/templates/FCOS-ignition.j2 Executable file
variant: fcos
version: 1.0.0
passwd:
  groups:
    - name: {{node_account}}
      gid: 1001
  users:
    # core user password is auto generated on boot, we want consistent credentials
    # core user has gid/uid 1000, choosing lower uids may be troublesome, start all gid/uid at 1001
    - name: {{node_account}}
      uid: 1001
      primary_group: {{node_account}}
      groups:
        - sudo
      password_hash: "{{node_account_password_hash}}"
      ssh_authorized_keys:
        - "{{node_account_pub_ssh}}"
storage:
  disks:
    - device: /dev/sda
      wipe_table: false
      partitions:
        # size 0 will expand to all available space
        - size_mib: 0
          start_mib: 0
          label: var
  filesystems:
    # we cannot create a directory off / that doesn't already exist during the boot process
    # we will use /var to host podman containers
    - path: /var
      device: /dev/disk/by-partlabel/var
      format: xfs
  files:
    # fcos stable build runs NetworkManager after initramfs; use the dev fcos build to override networking before system boot
    # the goal is to start containers in a bare-metal like environment without a dhcp service
    - path: /etc/NetworkManager/system-connections/eth0.nmconnection
      mode: 0600
      overwrite: true
      contents:
        inline: |
          [connection]
          type=ethernet
          interface-name=eth0
          [ethernet]
          mac-address={{node_mac}}
          [ipv4]
          method=manual
          addresses={{node_ip}}/{{node_cidr}}
          gateway={{node_gateway}}
          dns={{node_dns}};1.1.1.1;8.8.8.8
          dns-search={{domain}}
    - path: /etc/hostname
      mode: 420
      overwrite: true
      contents:
        inline: |
          {{node_name}}.{{ domain }}
    - path: /root/traefik.yaml
      mode: 0640
      overwrite: true
      contents:
        inline: |
          global:
            checkNewVersion: true
          log:
            level: "DEBUG"
            filePath: "/etc/traefik/log-file.log"
          accessLog:
            filePath: "/etc/traefik/log-access.log"
            bufferingSize: 100
          api:
            insecure: true
            dashboard: true
          entryPoints:
            web:
              address: ":80"
            websecure:
              address: ":443"
          providers:
            file:
              filename: /etc/traefik/traefik-dynamic.yaml
              watch: true
    - path: /root/traefik-dynamic.yaml
      mode: 0640
      overwrite: true
      contents:
        inline: |
          http:
            routers:
              bookstack-http-route:
                rule: "PathPrefix(`/`)"
                entryPoints:
                  - web
                middlewares:
                  - https-redirect
                service: bookstack-service
              bookstack-https-route:
                rule: "PathPrefix(`/`)"
                entryPoints:
                  - websecure
                service: bookstack-service
                tls: {}
            middlewares:
              https-redirect:
                redirectScheme:
                  scheme: https
                  permanent: true
            services:
              bookstack-service:
                loadBalancer:
                  servers:
                    - url: "http://127.0.0.1:9080"
systemd:
  units:
    - name: var.mount
      enabled: true
      contents: |
        [Unit]
        Before=local-fs.target
        [Mount]
        Where=/var
        What=/dev/disk/by-partlabel/var
        [Install]
        WantedBy=local-fs.target
    - name: sshd.service
      # allow ssh password login
      dropins:
        - name: allowpasswordauth.conf
          contents: |
            [Service]
            Environment=OPTIONS='-oPasswordAuthentication=yes'
    - name: {{node_account}}_data.service
      # would like to use RuntimeDirectory= to create folders, however this implementation of systemd does not play well
      enabled: true
      contents: |
        [Unit]
        Description=create podman data directory for {{node_account}}
        After=var.mount
        Requires=var.mount
        [Service]
        Type=oneshot
        ExecStartPre=/usr/bin/install -d -o {{node_account}} -g {{node_account}} -v /var/{{node_account}}/bookstack
        ExecStartPre=/usr/bin/install -d /var/{{node_account}}/traefik
        ExecStartPre=/usr/bin/install -d /var/{{node_account}}/backup
        ExecStart=/bin/true
        RemainAfterExit=yes
        [Install]
        WantedBy=multi-user.target
    - name: bookstack_pod.service
      # all containers in the pod share the exposed port(s): both the web and db container could have 9080:80 exposed, but the db container doesn't listen on 80 so there is no clash
      # bookstack listens on 80 + 443 (-p 9080:80/tcp,9081:443/tcp); our pod only exposes 80 on loopback (via 9080) as traefik will expose 80 + 443
      enabled: false
      contents: |
        [Unit]
        Description=bookstack_pod
        Wants=network-online.target
        [Service]
        User={{node_account}}
        Group={{node_account}}
        TimeoutStartSec=30
        ExecStartPre=-/usr/bin/podman pod kill bookstack-pod
        ExecStartPre=-/usr/bin/podman pod rm bookstack-pod
        ExecStart=/usr/bin/podman pod create --name bookstack-pod -p 127.0.0.1:9080:80/tcp
        ExecStop=/usr/bin/podman pod stop bookstack-pod
        [Install]
        WantedBy=multi-user.target
    - name: bookstack_pod.timer
      # the bookstack_pod service is disabled and a timer is enabled to add a boot delay before running the service; this ensures the pod always assigns and binds ports
      # if the service executes too quickly (redhat eager systemd networking) the pod may intermittently fail to bind ports on boot
      # pod networking must wait for network.target and network-online.target; this is a redhat thing with the way they order networking systemd unit files for parallel execution
      enabled: true
      contents: |
        [Unit]
        Description=bookstack_pod timer
        Wants=network-online.target
        After=network.target network-online.target
        [Timer]
        OnBootSec=10
        [Install]
        WantedBy=timers.target
    - name: bookstack_db.service
      # note that the container is privileged; this container is docker compliant but maybe not oci compliant, or podman is just too new
      # timeout set high to allow initial database creation in the container entrypoint
      # in k8s using a ci pipeline you'd likely have a oneshot job to seed (and back up) the database; this could be achieved with systemd service dependency, firstrun logic and a systemd timer
      enabled: true
      contents: |
        [Unit]
        Description=bookstack_db
        Requires=bookstack_pod.service
        After=bookstack_pod.service
        [Service]
        User={{node_account}}
        Group={{node_account}}
        TimeoutStartSec=30
        Restart=always
        ExecStartPre=-/usr/bin/podman kill bookstack-db
        ExecStartPre=-/usr/bin/podman rm bookstack-db
        ExecStartPre=/usr/bin/podman pull docker.io/linuxserver/mariadb:latest
        ExecStart=/usr/bin/podman run \
          --privileged=true \
          --name bookstack-db \
          --pod bookstack-pod \
          -v /var/{{node_account}}/bookstack:/config \
          -e PUID=1001 \
          -e PGID=1001 \
          -e MYSQL_ROOT_PASSWORD={{node_account_password}} \
          -e TZ=Europe/London \
          -e MYSQL_DATABASE=bookstackapp \
          -e MYSQL_USER=bookstack \
          -e MYSQL_PASSWORD={{node_account_password}} \
          docker.io/linuxserver/mariadb:latest
        ExecStop=/usr/bin/podman stop bookstack-db
        [Install]
        WantedBy=multi-user.target
    - name: bookstack_app.service
      # this shouldn't require a base URL but it does; we are just proxying ports not URLs? APP_URL=
      enabled: true
      contents: |
        [Unit]
        Description=bookstack-app
        Requires=bookstack_db.service
        After=bookstack_db.service
        [Service]
        User={{node_account}}
        Group={{node_account}}
        TimeoutStartSec=30
        Restart=always
        ExecStartPre=-/usr/bin/podman kill bookstack-app
        ExecStartPre=-/usr/bin/podman rm bookstack-app
        ExecStartPre=/usr/bin/podman pull docker.io/linuxserver/bookstack:latest
        ExecStart=/usr/bin/podman run \
          --privileged=true \
          --name bookstack-app \
          --pod bookstack-pod \
          -v /var/{{node_account}}/bookstack:/config \
          -e PUID=1001 \
          -e PGID=1001 \
          -e DB_HOST=bookstack-pod \
          -e DB_USER=bookstack \
          -e DB_PASS={{ node_account_password }} \
          -e DB_DATABASE=bookstackapp \
          -e APP_URL=https://{{node_ip}} \
          docker.io/linuxserver/bookstack:latest
        ExecStop=/usr/bin/podman stop bookstack-app
        [Install]
        WantedBy=multi-user.target
    - name: traefik.service
      # run privileged for ports <1024; traefik has many params to minimise attack vectors when run as root (not shown)
      # run on host network to attach to pods @ host loopback
      # must run after bookstack_pod.service - podman gets its knickers in a twist mixing networking on pods and networking on privileged containers
      enabled: true
      contents: |
        [Unit]
        Description=traefik
        Wants=network-online.target
        After=network.target network-online.target bookstack_pod.service
        [Service]
        TimeoutStartSec=30
        Restart=always
        ExecStartPre=-/usr/bin/podman kill traefik
        ExecStartPre=-/usr/bin/podman rm traefik
        ExecStartPre=/usr/bin/podman pull docker.io/library/traefik:latest
        ExecStartPre=-/usr/bin/install -m 640 -o root -g root /root/traefik.yaml /var/{{node_account}}/traefik
        ExecStartPre=-/usr/bin/install -m 640 -o root -g root /root/traefik-dynamic.yaml /var/{{node_account}}/traefik
        ExecStart=/usr/bin/podman run \
          --privileged=true \
          --name=traefik \
          --net=host \
          -v /var/{{node_account}}/traefik:/etc/traefik \
          -p 8080:8080/tcp,80:80/tcp,443:443/tcp \
          docker.io/library/traefik:latest
        ExecStop=/usr/bin/podman stop traefik
        [Install]
        WantedBy=multi-user.target

(second FCOS Ignition template; filename not preserved in this view)
variant: fcos
version: 1.0.0
passwd:
  groups:
    - name: {{node_account}}
      gid: 10000
  users:
    - name: {{node_account}}
      uid: 10000
      primary_group: {{node_account}}
      groups:
        - sudo
      password_hash: "{{node_account_password_hash}}"
      ssh_authorized_keys:
        - "{{node_account_pub_ssh}}"
storage:
  files:
    - path: /etc/NetworkManager/NetworkManager.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [main]
          no-auto-default=*
          ignore-carrier=*
    - path: /etc/NetworkManager/system-connections/eth0.nmconnection
      mode: 0600
      overwrite: true
      contents:
        inline: |
          [connection]
          type=ethernet
          interface-name=eth0
          [ethernet]
          mac-address={{node_mac}}
          [ipv4]
          method=manual
          addresses={{node_ip}}/{{node_cidr}}
          gateway={{node_gateway}}
          dns={{node_dns}};1.1.1.1;8.8.8.8
          dns-search={{domain}}
    - path: /etc/hostname
      mode: 420
      overwrite: true
      contents:
        inline: |
          {{node_name}}.{{ domain }}
systemd:
  units:
    - name: sshd.service
      dropins:
        - name: allowpasswordauth.conf
          contents: |
            [Service]
            Environment=OPTIONS='-oPasswordAuthentication=yes'

proxmox_node_provision/templates/ansible-hosts.j2 Executable file
{% for item in groups %}
[{{item}}]
{% for entry in groups[item] %}
{% set ip = hostvars[entry].ansible_host -%}
{{ entry }} ansible_host={{ ip }}
{% endfor %}
{% endfor %}
[proxmox:vars]
ansible_ssh_user={{ hostvars.proxmox_server.proxmox_ssh_user }}
ansible_ssh_pass={{ hostvars.proxmox_server.proxmox_ssh_pass }}
become=true
[all:vars]
ansible_password={{ node_account_password }}
ansible_user={{ node_account }}
ansible_ssh_extra_args="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
ansible_python_interpreter=/usr/bin/python3

proxmox_upload/tasks/main.yml Executable file
---
- name: add proxmox host to in-memory inventory
  add_host: >
    name=proxmox_server
    groups=proxmox
    ansible_ssh_host={{ proxmox_host }}
    ansible_host={{ proxmox_host }}
    ansible_ssh_user={{ proxmox_ssh_user }}
    ansible_user={{ proxmox_ssh_user }}
    ansible_ssh_pass="{{ proxmox_ssh_pass }}"

- set_fact:
    image_name: "{{ ((image_url | urlsplit).path).split('/')[-1] }}"

- name: get local image attributes
  stat:
    path: ./{{ image_name }}
  register: img_local_present

- fail:
    msg: "{{ image_name }} not present"
  when: img_local_present.stat.exists == false

# get qcow2 image name by removing the .xz suffix, an extraction-file-name dependent step
- set_fact:
    extracted_image_name: "{{ (((image_url | urlsplit).path).split('/')[-1]).split('.')[:-1] | join('.') }}"

- name: get local extracted image attributes
  stat:
    path: ./{{ extracted_image_name }}
  register: img_extracted_local_present

# requires a local install of xz-utils
- name: extract coreos image
  command: xz -kd "{{ image_name }}"
  when: img_extracted_local_present.stat.exists == false

- name: get local extracted image attributes
  stat:
    path: ./{{ extracted_image_name }}
  register: img_extracted_local_present

- fail:
    msg: "problem extracting image {{ image_name }}"
  when: img_extracted_local_present.stat.exists == false

- set_fact:
    local_img_size: "{{ img_extracted_local_present.stat.size }}"

- name: get remote image attributes
  stat:
    path: "{{ proxmox_img_datastore_path }}/template/iso/{{ extracted_image_name }}"
  register: img_remote_attributes
  delegate_to: proxmox_server

- name: upload image, proxmox API does not support upload of qcow2 or plain xz
  copy:
    src: "{{ extracted_image_name }}"
    dest: "{{ proxmox_img_datastore_path }}/template/iso/"
    owner: "{{ proxmox_ssh_user }}"
    group: "{{ proxmox_ssh_user }}"
    mode: '0640'
  delegate_to: proxmox_server
  when: img_remote_attributes.stat.exists == false

- name: get remote image attributes
  stat:
    path: "{{ proxmox_img_datastore_path }}/template/iso/{{ extracted_image_name }}"
  register: img_remote_attributes
  delegate_to: proxmox_server

- fail:
    msg: "problem uploading image {{ image_name }} to proxmox server"
  when: img_remote_attributes.stat.exists == false

- set_fact:
    remote_img_size: "{{ img_remote_attributes.stat.size }}"
  when: img_remote_attributes.stat.exists == true

- name: compare local and remote image size
  fail:
    msg: "image {{ extracted_image_name }} present on proxmox node {{ proxmox_node }} storage {{ proxmox_img_datastore }} has a different size {{ remote_img_size }} to local image {{ local_img_size }}, upload failed"
  when: local_img_size|int != remote_img_size|int

site.yml Executable file
---
- hosts: localhost
  gather_facts: false
  become: false
  roles:
    - get_cloud-init_image
    - proxmox_upload
    - proxmox_node_provision
    # - bookstack # not used, only writes a docker compose file - bookstack is provisioned via ignition