# Foreword
All operations performed via the CLI can generally also be achieved through the web admin console; the subset of commands listed in this document is here to provide context and assist with understanding the usage model, as the web interface can be confusing.
When creating objects via the CLI, check back in the web console for the new item to clarify how items are created, used and navigated.
The CLI commands can then be used in a scripted manner to quickly create projects, networks, instances, security groups and user access patterns, getting the end user up and running with a new environment without much admin overhead.
# Load environment variables to use the Overcloud CLI
Just like the environment required for the undercloud (~/stackrc), the overcloud requires its own variables (~/overcloudrc).
```sh
[stack@undercloud ~]$ source ~/stackrc
(undercloud) [stack@undercloud ~]$ source ~/overcloudrc
(overcloud) [stack@undercloud ~]$
```
# Domains, Projects, Roles, Users and Groups
> https://docs.openstack.org/security-guide/identity/domains.html
Components of the access model.
- (keystone) Domains are high-level containers for projects, users and groups. The keystone authentication provider can manage multiple domains for top-level logical segregation of a cluster, allowing a different authentication backend (e.g. LDAP) per domain. A fresh cluster deployment has one domain, 'Default', unless it has been joined to a directory service.
- Projects in OpenStack (also known as tenants or accounts) are organizational units in the cluster to which you can assign zero or more users. A user can be a member of one or more projects.
- Roles define which actions a user can perform on one or more projects; they are the glue between users and projects, limiting the scope of permissions (access) to compute/network/storage resources. This permissions model is known as role-based access control (RBAC): put simply, roles define which actions users can perform.
There are three main predefined roles in OpenStack.
- admin: an administrative role that grants a user administrative control over the environment.
- member: the default role assigned to new users; it is attached to a project (tenant).
- reader: mostly used for read-only APIs and operations.
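To confirm which roles exist in a deployment, list them (a quick check; additional service roles may also appear in the output):
```sh
# list roles known to keystone; expect at least admin, member and reader
openstack role list
```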
> Many of the following steps can be performed in the web admin console; however, it is easy to script your unique access model with the CLI commands. Often when adding a user you will want to add them to multiple projects and set quotas in one shot.
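A minimal sketch of that 'oneshot' pattern (the user 'jbloggs' and the 'research' project are hypothetical; the role and quota commands are demonstrated individually later in this document):
```sh
# add a new user to several projects and apply a project quota in one pass
NEW_USER='jbloggs'
for p in guest research; do
  openstack role add --project "$p" --user "$NEW_USER" member
done
# multiple quota values can be set in a single call
openstack quota set --instances 5 --cores 10 --ram 20480 research
```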
## List domains
```sh
openstack domain list
+----------------------------------+------------+---------+--------------------+
| ID | Name | Enabled | Description |
+----------------------------------+------------+---------+--------------------+
| 08e3b578ac4042838f05149543813d94 | heat_stack | True | |
| bdea557c7baf43ad92239a420255d7ec | ldap | True | |
| default | Default | True | The default domain |
+----------------------------------+------------+---------+--------------------+
# check users available in default domain
openstack user list --domain 'default' | head -n 10
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| e2ea49d4ae1d4670b8546aab65deba2b | admin |
| 23a89a9d1e394a2ebd46a472ffda5246 | cinder |
| b35722d148bd41a68dcdc02b5819096d | cinderv2 |
| d8889253c81441fb9c4b6ed092aaf387 | cinderv3 |
| af34adf270d1489d8a778e3b590e4ffc | glance |
| 3386b28253814d0cb885810464bd7c81 | heat |
| 15a87bed8b1646888911e19cb7bc2d0c | heat-cfn |
# check users available in the 'ldap' domain
openstack user list --domain 'ldap' | head -n 10
+------------------------------------------------------------------+----------+
| ID | Name |
+------------------------------------------------------------------+----------+
| 9bf2aa8c4fc59c5c58cb3269444676e213f490a03953bfa32bc071b188db7069 | ptfrost |
| cb6e3861d3f0958d1f921d4c24cd55710bc7e62583b3b8c0ce70e76a1e016c55 | mcw204 |
| 2d6ae76eecf2ab0352d00c8ebfd02a19df42858201d677d939e8225dd9bd7eac | snfieldi |
| 30c37dfef0e95e5aeab3a2c20aaa34cfe9211b5dfb705ed093dca9b2b7a83dcb | jh288 |
| b81e267c54ec6ae8c4d3bd678cdc95d74ddb249649e4d736dd2a1771c5060f28 | rnb203 |
| 591c1fed34ebc33fb7d2fe7a27be732904bda493c059af0c2cf26e4384b0660a | kebrown |
| 190e55d7af652fdac505463ed3beedc8f40e97235c21c7321fa87705ffe20bdb | arichard |
```
## Create project
Create a project in the 'ldap' domain to enable AD user access.
The default 'service' and 'admin' projects were created in the deployment.
```sh
openstack project create --domain 'ldap' --description "University Guest Project" guest
openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 45e6f96ee6cc4ba3a348c38a212fd8b8 | guest |
| 98df2c2796ba41c09f314be1a83c9aa9 | service |
| 9c7f7d54441841a6b990e928c8e08b8a | admin |
+----------------------------------+---------+
openstack project list --domain 'ldap'
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 45e6f96ee6cc4ba3a348c38a212fd8b8 | guest |
+----------------------------------+-------+
```
### Useful project commands
```sh
## rename
openstack project set <PROJECT_ID> --name newprojectname
## disable
openstack project set <PROJECT_ID> --disable
openstack project set <PROJECT_ID> --enable
## delete the project and its associated instances (untested - verify before use)
openstack project delete <PROJECT_ID>
```
## Testing - Create local user/group
To assist with testing access control, create a native keystone user (one that is not in AD/LDAP) with access to the guest project; for AD users this step is not required.
A local keystone user can interact with resources owned by AD users/groups - think of keystone as an AD sync/caching layer.
> It is important to add valid email addresses, both for functionality and to chase owners of virtual machines.
> The --project parameter does not grant access to the project, it only sets it as the user's default project; in a self-service environment you would likely set a default/guest project for all users.
```sh
# inline password
#openstack user create \
# --project guest \
# --password 'Password0' \
# --email toby.n.seed@gmail.com \
#toby.n.seed
# interactive password with no project (can add later)
openstack user create \
--password-prompt \
--email tseed@ocf.co.uk \
tseed
# change password for a local user (will not work for a domain user)
openstack user set --domain 'default' --password-prompt tseed
# get users ID
openstack user list --domain 'default' | tail -n 2
| 0c4c66edb7ca4f899620a500af1546c9 | tseed |
+----------------------------------+-----------+
# set the default project in the web console for the user tseed
openstack user set --project guest tseed
openstack project show $(openstack user show tseed --domain 'default' -f json | jq -r .default_project_id) -f json | jq -r .description
University Guest Project
```
### Assign role to the local user
Give user 'tseed' member access to:
- The default 'admin' project
- The new 'guest' project
```sh
openstack role add --project 'guest' --user 'tseed' 'member'
openstack role add --project 'admin' --user 'tseed' 'member'
```
The projects are in different domains but the user is able to switch between projects using the toggle at the top right of the web console.
Browse to `https://stack.university.ac.uk/dashboard`.
- user: tseed
- password: Password0
- domain: default
The local user can also be given a role for an entire domain, encompassing all projects in the domain; typically this would only be done with the 'admin' role.
```sh
# Where objects have the same name, the unique ID can be used; when operating with admin permissions, generally use the object ID for safety
# get ID of the 'ldap' openstack domain
#openstack domain show 'ldap' -f json | jq -r .id
#c0543515d22f45f88a69008b5b884ebf
# get ID of the 'tseed' user
#openstack user list --domain 'default' -f json | jq -r '.[] | select(."Name" == "tseed") | .ID'
#fa1fc5885a074a64b2d41958d3fc9dcf
# get the ID of the 'admin' role
#openstack role list -f json | jq -r '.[] | select(."Name" == "admin") | .ID'
#5730ea7153a84c77adb9350293ea1ed9
# bind the 'tseed' local user to the 'admin' role for the entire 'ldap' domain
#openstack role add --domain c0543515d22f45f88a69008b5b884ebf --user fa1fc5885a074a64b2d41958d3fc9dcf 5730ea7153a84c77adb9350293ea1ed9
```
Remove the roles, ready to use group-based role assignment instead.
```sh
openstack role remove --project 'guest' --user 'tseed' 'member'
openstack role remove --project 'admin' --user 'tseed' 'member'
```
### Assign role to local group (assigning roles to groups rather than users)
Create local group in the default domain.
```sh
# (underscores in object names are generally more compatible with AD in unix-type environments)
openstack group create --domain 'Default' --description 'local group access to guest project' guest_member
openstack group list --long
+----------------------------------+--------------+-----------+-------------------------------------+
| ID | Name | Domain ID | Description |
+----------------------------------+--------------+-----------+-------------------------------------+
| 1e0cc781c0684920a020d1f57d5f2f60 | guest_member | default | local group access to guest project |
+----------------------------------+--------------+-----------+-------------------------------------+
# add local user to group
# note the group-domain and user-domain parameters; these allow users from one domain to access resources in a parallel keystone domain
openstack group add user --group-domain 'Default' --user-domain 'Default' guest_member tseed
# find groups that a user belongs to
openstack group list --user tseed
+----------------------------------+--------------+
| ID | Name |
+----------------------------------+--------------+
| 1e0cc781c0684920a020d1f57d5f2f60 | guest_member |
+----------------------------------+--------------+
# simple group membership check that can be easily incorporated into scripts
openstack group contains user guest_member tseed
tseed in group guest_member
# add role for group members to access the guest and admin projects
openstack role add --group-domain 'Default' --group guest_member --project guest --project-domain 'ldap' member
openstack role add --group-domain 'Default' --group guest_member --project admin --project-domain 'Default' member
```
Selecting an AD user from the 'ldap' domain should show automatic AD group membership; the limitation is that the user object ID must be used in the query.
```sh
openstack group list --user $(openstack user show kmgoodin --domain 'ldap' -f json | jq -r .id) --domain 'ldap'
+------------------------------------------------------------------+----------------------+
| ID | Name |
+------------------------------------------------------------------+----------------------+
| 7052afb8e616072c4f30e989b381e1a9e9cb012d19851774e6fa96ccd618a12f | ISCA-Openstack-Users |
+------------------------------------------------------------------+----------------------+
```
## Assign roles - AD/LDAP
Users can be individually assigned roles (admin/member/reader) for domains or projects as illustrated above.
Alternatively a group can be created and assigned the role with users being members of the group.
Use of groups is more convenient where the keystone service uses an LDAP/AD back end.
> With LDAP/AD you would typically create an AD group, add AD members to it, create a project (for ease, named the same as the AD group), create an internal network + router for the project, and finally add a 'member' role to bind the AD group to the project. You may select an AD user (typically, but not necessarily, in the AD group) to also hold the 'admin' role for the associated project and act as caretaker; a consolidated sketch follows.
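Pulled together, that workflow looks roughly like the sketch below; the AD group 'ISCA-Research', the project 'research' and its address range are hypothetical, and the network/router commands mirror those in the provider network sections later in this document.
```sh
# 1. create the project, named after the AD group for ease
openstack project create --domain 'ldap' --description "Research Project" research
# 2. create an internal network + router for the project
openstack network create research-net --internal --no-share --project research
openstack subnet create research-subnet --project research --network research-net \
  --gateway 172.17.0.1 --subnet-range 172.17.0.0/16 --dhcp
openstack router create research-router --project research
openstack router set research-router --external-gateway provider
openstack router add subnet research-router research-subnet
# 3. bind the AD group to the project with the 'member' role
openstack role add --group-domain 'ldap' --group 'ISCA-Research' --project-domain 'ldap' --project research member
# 4. optionally give one AD user the 'admin' role as project caretaker
openstack role add --user-domain 'ldap' --user someuser --project-domain 'ldap' --project research admin
```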
Individual user role assignments: add an AD user to a new project.
```sh
# find user in the 'ldap' domain
openstack user list --domain 'ldap' | head -n 10
# add users as members of the guest project
openstack role add --user-domain 'ldap' --user kmgoodin --project-domain 'ldap' --project guest member
# check role assignment
openstack role assignment list --user kmgoodin --user-domain 'ldap' --names
+--------+---------------+-------+------------+--------+--------+-----------+
| Role | User | Group | Project | Domain | System | Inherited |
+--------+---------------+-------+------------+--------+--------+-----------+
| member | kmgoodin@ldap | | guest@ldap | | | False |
+--------+---------------+-------+------------+--------+--------+-----------+
# remove, we will likely want to use AD group based role assignment
openstack role remove --user-domain 'ldap' --user kmgoodin --project-domain 'ldap' --project guest member
```
Group based role assignments.
- Note that groups are searched from the AD tree at a specific level, set by the parameter 'group\_tree\_dn' in the environment file 'keystone\_domain\_specific\_ldap\_backend.yaml'.
- group\_tree\_dn: OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk
- Create your Openstack groups at this location in the AD tree.
- With large ADs you must set 'group\_tree\_dn' for performance; with a directory the size of the University's, lookup queries will time out and never enumerate groups when the parameter is not set.
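Before (re)configuring keystone it is worth confirming that the group OU can be enumerated directly; a minimal sketch with placeholder bind credentials, using the same search base that the quota script later in this document uses:
```sh
# requires openldap-clients; should return the CNs of groups under the OU
ldapsearch -LLL -o ldif-wrap=no -x \
  -H 'ldaps://secureprodad.university.ac.uk' \
  -D 'BIND_USER@university.ac.uk' -w 'BIND_PASSWORD' \
  -b 'OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk' \
  '(objectClass=group)' cn
```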
```sh
# check groups in 'ldap' domain
openstack group list --domain 'ldap'
+------------------------------------------------------------------+----------------------+
| ID | Name |
+------------------------------------------------------------------+----------------------+
| 7052afb8e616072c4f30e989b381e1a9e9cb012d19851774e6fa96ccd618a12f | ISCA-Openstack-Users |
| 6f7f42957f5919ba70b407328ee7695d2c9100e2debb95fb3ea82f4d1ad73693 | ISCA-Admins |
| 824be73c53019246778e3f22f4a77895a4755abe6b6620df3ee80715a5a42471 | ISCA-Users |
| 91979c969386a984b70e63c81c2779e01c05cafa44c33f989c498a6655f94c06 | ISCA-module-stata |
+------------------------------------------------------------------+----------------------+
# add group with 'member' role assignment to the 'guest' project
# the 'ISCA-Openstack-Users' AD group contains all the AD users with potential access to the Openstack cluster
# we want to add every user as a member to the guest project, they will each be able to create a small VM instance
openstack role add --group-domain 'ldap' --group 'ISCA-Openstack-Users' --project-domain 'ldap' --project guest member
# check role assignment
openstack role assignment list --group 'ISCA-Openstack-Users' --group-domain 'ldap' --names
+--------+------+---------------------------+------------+--------+--------+-----------+
| Role | User | Group | Project | Domain | System | Inherited |
+--------+------+---------------------------+------------+--------+--------+-----------+
| member | | ISCA-Openstack-Users@ldap | guest@ldap | | | False |
+--------+------+---------------------------+------------+--------+--------+-----------+
# add the default Openstack 'admin' user with an admin role to the guest project (to assist with housekeeping)
openstack role add --user-domain 'Default' --user admin --project-domain 'ldap' --project guest admin
openstack role assignment list --user admin --names
+-------+---------------+-------+---------------+--------+--------+-----------+
| Role | User | Group | Project | Domain | System | Inherited |
+-------+---------------+-------+---------------+--------+--------+-----------+
| admin | admin@Default | | admin@Default | | | False |
| admin | admin@Default | | guest@ldap | | | False |
| admin | admin@Default | | | | all | False |
+-------+---------------+-------+---------------+--------+--------+-----------+
```
A small 'flavour' (the spec of a VM instance) will be made available to members of the 'guest' project; their instances will reside on an internal/private Openstack network created for the project.
# Provider network
The provider network serves as the external traffic route for VM instances (by various methods); it is a routable range within the customer network - likely a private network, but it could be a public/DMZ network.
Typically a single provider network is required, but multiple provider networks are often used.
In a vanilla Openstack deployment, multiple provider networks are served via VLANs on the same physical interface(s), because the underlying OVS bridge named 'br-ex' carries the special 'datacentre' tag that denotes placement of provider networks. When creating a network with the `--external` parameter, the bridge interface for the network is placed on the OVS bridge with the 'datacentre' tag. Changing this behaviour to bind a *new/separate* physical interface to an OVS bridge for external access not only considerably changes the network interface templates but also falls outside the RHOSP support model.
Provider networks assign routable IPs to the following objects:
- Routers (virtual): for the best segregation you will have a virtual router with one interface on the provider network (routable to the wider customer network) and another interface on an Openstack internal network; multiple internal networks can be linked (routed) to a single router with an interface on the provider network. VM instances in the internal network use the virtual router as a gateway for egress traffic.
- Floating IPs: typically you assign a floating IP to a VM instance (1:1 NAT) to gain direct access to the VM instance from the customer estate networks.
- VM instances: a provider network IP can be assigned directly to their network interfaces.
Provider networks can be assigned to serve the following functions where additional parameters are used:
- virtual router only
(use parameter `--service-type=network:router_gateway` when creating the subnet for the provider network)
- virtual router + floating IP (1:1 NAT is performed on a virtual router in the provider network, so floating IP cannot be offered on its own)
(use parameters `--service-type=network:router_gateway --service-type=network:floatingip` when creating the subnet for the provider network)
- VM instances only
(use parameter `--service-type=compute:nova`)
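A sketch of the second case, using a hypothetical 'provider2' network and address range (the flags are described in the subnet-service-types spec linked later in this section):
```sh
# reserve this provider subnet for router gateways and floating IPs only
# (no direct VM instance ports will be allocated from it)
openstack subnet create provider2-subnet \
  --network provider2 \
  --subnet-range 10.122.4.0/24 --gateway 10.122.4.1 \
  --service-type=network:router_gateway \
  --service-type=network:floatingip
```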
Provider networks can be 'shared', allowing virtual routers (and floating IPs or directly attached VM instances) from any project to bind an interface into the network.
Likewise, provider networks can be allocated to a domain or a project.
- Domain = top-level organizational unit bound to the keystone authentication zone; the default domain is used without LDAP, and when using LDAP a new domain is automatically created.
- Project = formerly referred to as a tenant; an organizational unit under a domain, holding virtual networks and VM instances.
Provider network IPs are valuable and likely limited in the customer environment, especially within a DMZ.
Provider networks can be scoped, e.g. 10.121.4.130-254; however they cannot be carved up into smaller CIDR ranges where the provider network gateway would lie outside the address range of the CIDR.
The scarcity of external/provider IPs and strategies to manage are highlighted in the following spec post:
> https://specs.openstack.org/openstack/neutron-specs/specs/newton/subnet-service-types.html
A provider network with a /24 range can become crowded quickly; it is best to have a large provider network (/16) or, ideally, multiple correctly sized provider networks dedicated to classes of usage/departments on a project basis. The reasoning is that many customers will want to build at least 1 VM per user that is routable by the wider network, as well as the typical (internal) virtual network per project.
Q: How do you access VM instances on an internal Openstack network (one that does not have an IP routable from the customer estate via a floating IP or an address assigned directly from a provider network)?
A: You likely have a jump host with a floating/native IP on the provider network that is dual-homed to the Openstack internal network(s) hosting the VM instances; note that in this model multiple internal networks cannot have overlapping IP ranges.
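A minimal sketch of such a jump host, assuming the 'provider' and 'guest' networks created in the following sections, the image and flavour created later in this document, and an existing keypair:
```sh
# dual-homed jump host: first NIC on the provider network, second on the internal guest network
openstack server create jumphost \
  --flavor m1.small \
  --image rocky_8.6 \
  --key-name tseed \
  --network provider \
  --network guest
```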
## Create the University provider network
Create the provider network on VLAN 1214, the external network allocated and routed by the network team.
- --share allows the network to be used by any project.
- --external denotes the network can route to outbound networks.
```sh
# VLAN network using the external bridge
openstack network create provider --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 1214 --share
```
Create provider subnet.
- --dns-nameserver allows use of an estate-wide DNS service, which will be key for permanent service identification and/or when issuing CA/SSL certificates.
- By default the DNS server handed out by the subnet's inbuilt DHCP service is the gateway IP; when a DNS server is specified, DHCP will present the --dns-nameserver entries first and then the gateway IP.
- The range 10.121.4.30-254 is used; IPs 1-30 are reserved for access to proxmox/undercloud/ceph/switches, and the remaining IPs in the range are free for Openstack to use.
```sh
# create the subnet for the virtual router(s) external interface using an external DNS service
openstack subnet create provider-subnet --network provider --dhcp --allocation-pool start=10.121.4.30,end=10.121.4.254 --gateway 10.121.4.1 --subnet-range 10.121.4.0/24 --dns-nameserver=144.173.6.71 --dns-nameserver=1.1.1.1
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | provider | ce3acd5d-606e-4b59-9a16-8966b4ab9d3c |
+--------------------------------------+----------+--------------------------------------+
openstack subnet list
+--------------------------------------+-----------------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+-----------------+--------------------------------------+---------------+
| ce3acd5d-606e-4b59-9a16-8966b4ab9d3c | provider-subnet | 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | 10.121.4.0/24 |
+--------------------------------------+-----------------+--------------------------------------+---------------+
```
## Create the University default guest network
Create a virtual router to link the provider and internal networks.
```sh
# create a virtual router for the provider network
openstack router create guest-router --project guest
openstack router list
+--------------------------------------+--------------+--------+-------+----------------------------------+
| ID | Name | Status | State | Project |
+--------------------------------------+--------------+--------+-------+----------------------------------+
| 66643aa6-ae44-4f7e-a3ca-afacda8c3acc | guest-router | ACTIVE | UP | 45e6f96ee6cc4ba3a348c38a212fd8b8 |
+--------------------------------------+--------------+--------+-------+----------------------------------+
# add gateway interface to the provider network
openstack router set guest-router --external-gateway provider
# check the IP of the router
openstack router show guest-router -f json | jq .external_gateway_info
{
"network_id": "4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc",
"external_fixed_ips": [
{
"subnet_id": "ce3acd5d-606e-4b59-9a16-8966b4ab9d3c",
"ip_address": "10.121.4.88"
}
],
"enable_snat": true
}
```
Create isolated internal guest network.
```sh
# create an isolated virtual network and subnet named 'guest' and 'guest-subnet' for the virtual machines that will use this router
openstack network create guest --internal --no-share --project guest
openstack subnet create guest-subnet --project guest --network guest --gateway 172.16.0.1 --subnet-range 172.16.0.0/16 --dhcp
```
Attach the guest subnet to the virtual router.
```sh
# add router interface on 'guest-router' to subnet 'guest-subnet'
openstack router add subnet guest-router guest-subnet
# Get interface IPs of the router for the provider network subnet and guest network subnet
openstack router show guest-router -f json | jq -r .external_gateway_info.external_fixed_ips[].ip_address
10.121.4.88
openstack router show guest-router -f json | jq -r .interfaces_info[].ip_address
172.16.0.1
# list all network objects
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 2c1b7587-94f2-43f9-97ab-ae3b80ab59be | guest | 3917c7de-2855-41fd-acbb-63cc87d65fc7 |
| 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | provider | ce3acd5d-606e-4b59-9a16-8966b4ab9d3c |
+--------------------------------------+----------+--------------------------------------+
openstack subnet list
+--------------------------------------+-----------------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+-----------------+--------------------------------------+---------------+
| 3917c7de-2855-41fd-acbb-63cc87d65fc7 | guest-subnet | 2c1b7587-94f2-43f9-97ab-ae3b80ab59be | 172.16.0.0/16 |
| ce3acd5d-606e-4b59-9a16-8966b4ab9d3c | provider-subnet | 4e0f4ffc-c480-4679-9893-d2e8a2a7d0fc | 10.121.4.0/24 |
+--------------------------------------+-----------------+--------------------------------------+---------------+
openstack router list
+--------------------------------------+--------------+--------+-------+----------------------------------+
| ID | Name | Status | State | Project |
+--------------------------------------+--------------+--------+-------+----------------------------------+
| 66643aa6-ae44-4f7e-a3ca-afacda8c3acc | guest-router | ACTIVE | UP | 45e6f96ee6cc4ba3a348c38a212fd8b8 |
+--------------------------------------+--------------+--------+-------+----------------------------------+
```
Remove router, subnet and network.
```sh
#openstack router remove subnet guest-router guest-subnet
#openstack subnet delete guest-subnet
#openstack network delete guest
#openstack router delete guest-router
```
# Quotas
Quotas can be set project wide to ensure resource usage has a hard limit.
User-specific quotas per project can also be set for Nova (compute).
Block storage quotas can be set per project, but unfortunately not per user per project.
## Example guest project
- The guest project is a proof-of-concept area for each Openstack user to create a single small VM instance; the project has many limits to enforce this.
- The guest project is available to members of the AD group 'ISCA-Openstack-Users'; this group has ~1800 user accounts, some of which are service accounts and some disabled, so limits will be set for 2000 small VM instances (with no backups or snapshots allowed).
- The VM instance will be provided by a VM flavour; the spec of the flavour will be 1 core, 2GB ram, 5GB disk.
- For production this example likely sets aside far too much resource on the cluster, as it allocates 4TB of the available 6TB; an AD 'guest' group containing a subset of the users from the AD group 'ISCA-Openstack-Users' would likely be used instead.
Project based quotas:
```sh
# show default quotas for 'guest' project
openstack quota show --fit-width --default guest
+-----------------------+----------------------------------------------------------------------------+
| Field | Value |
+-----------------------+----------------------------------------------------------------------------+
| backup-gigabytes | 1000 |
| backups | 10 |
| cores | 20 |
| floating-ips | 50 |
| gigabytes | 1000 |
# set project wide quotas, RAM = Megabytes
#openstack quota set --QUOTA_NAME QUOTA_VALUE PROJECT_NAME
openstack quota set --instances 2000 guest ;\
openstack quota set --cores 4000 guest ;\
openstack quota set --ram 4096000 guest ;\
openstack quota set --gigabytes 10000 guest ;\
openstack quota set --volumes 2000 guest ;\
openstack quota set --backups 0 guest ;\
openstack quota set --snapshots 0 guest ;\
openstack quota set --key-pairs 6000 guest ;\
openstack quota set --floating-ips 2000 guest ;\
openstack quota set --networks 1 guest ;\
openstack quota set --routers 1 guest ;\
openstack quota set --subnets 1 guest ;\
openstack quota set --secgroups 250 guest ;\
openstack quota set --secgroup-rules 2000 guest
# show applied quotas for 'guest' project
## NOTE: no --default parameter
openstack quota show --fit-width guest
```
User-per-project quotas:
- These quotas are set via the nova API; the user and tenant (project) objects must be specified by unique ID rather than name.
- There does not seem to be an inbuilt/dynamic way to set a predefined user quota template for a project (quota classes are not yet fully supported).
```sh
# the nova cli has a slightly different syntax for help
nova help quota-update
# show user specific quota per project, notice the project wide quotas are shown
nova quota-show --user $(openstack user show tseed -f json | jq -r .id) --tenant $(openstack project show guest -f json | jq -r .id)
+----------------------+---------+
| Quota | Limit |
+----------------------+---------+
| instances | 2000 |
| cores | 4000 |
| ram | 4096000 |
| metadata_items | 128 |
| key_pairs | 6000 |
| server_groups | 10 |
| server_group_members | 10 |
+----------------------+---------+
# set user specific quotas per project
#nova quota-update --user $projectUser --instance 12 $project
nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --instance 1 $(openstack project show guest -f json | jq -r .id)
nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --cores 2 $(openstack project show guest -f json | jq -r .id)
nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --ram 2048 $(openstack project show guest -f json | jq -r .id) # Megabytes
nova quota-update --user $(openstack user show tseed -f json | jq -r .id) --key-pairs 1 $(openstack project show guest -f json | jq -r .id)
#check quotas
nova quota-show --user $(openstack user show tseed -f json | jq -r .id) --tenant $(openstack project show guest -f json | jq -r .id)
+----------------------+-------+
| Quota | Limit |
+----------------------+-------+
| instances | 1 |
| cores | 2 |
| ram | 2048 |
| metadata_items | 128 |
| key_pairs | 1 |
| server_groups | 10 |
| server_group_members | 10 |
+----------------------+-------+
# user tseed is also a member of the 'admin' project, no user quotas have changed for this project
nova quota-show --user $(openstack user show tseed -f json | jq -r .id) --tenant $(openstack project show admin -f json | jq -r .id)
+----------------------+-------+
| Quota | Limit |
+----------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| metadata_items | 128 |
| key_pairs | 100 |
| server_groups | 10 |
| server_group_members | 10 |
+----------------------+-------+
```
## Apply user-per-project quotas for each user in the AD group
There does not seem to be a dynamic way of applying per-user quotas for a project as users are added to a group/project.
Unfortunately, per-user quotas cannot exceed the per-project quota; for example, with a project quota of 500 instances and 2000 users each to be given a quota of 1 instance, the API will return an error after 500 users have had a quota applied.
The following is a rough script that updates the per-user quota for each member of an LDAP group; you could run it periodically as new users are added to the group.
```sh
sudo dnf install openldap-clients -y
touch project_quota_per_user.sh
chmod +x project_quota_per_user.sh
nano -cw project_quota_per_user.sh
#!/bin/bash
# ldapsearch required: sudo dnf install openldap-clients
#set -x
source /home/stack/overcloudrc
LDAP_SEARCH_BIND_PASS="3gB=dR=gAfu6CXxx"
LDAP_SEARCH_BASE="OU=ISCA-Groups,OU=HPC,OU=Member Servers,DC=isad,DC=isadroot,DC=university,DC=ac,DC=uk"
LDAP_SEARCH_BIND_DN="svc_iscalookup@university.ac.uk"
LDAP_SEARCH_HOST="ldaps://secureprodad.university.ac.uk"
LDAP_SEARCH_FILTER="(&(objectClass=group)(cn=ISCA-Openstack-Users))"
LDAP_SEARCH_FIELDS="member"
OPENSTACK_DOMAIN="ldap"
OPENSTACK_PROJECT="guest"
USERS=()
function search () {
for i in $(echo -e $1 | awk -F "member:" '{for (i = 1; i <= NF; i++) print $i}' \
| grep -v ^dn\: \
| awk -F "," '{gsub(/CN=/,"", $1); print $1}')
do
USERS+=($i)
done
}
search "$( ldapsearch -LLL -o ldif-wrap=no -x \
-w "$LDAP_SEARCH_BIND_PASS" \
-b "$LDAP_SEARCH_BASE" \
-D "$LDAP_SEARCH_BIND_DN" \
-H "$LDAP_SEARCH_HOST" \
$LDAP_SEARCH_FILTER \
$LDAP_SEARCH_FIELDS)"
function quota () {
PROJECT_ID=$(openstack project show $OPENSTACK_PROJECT -f json | jq -r .id)
for i in "${USERS[@]}"
do
USER_ID=$(openstack user show --domain $OPENSTACK_DOMAIN $i -f json | jq -r .id)
if [ ! -z "$USER_ID" ]
then
nova quota-update --user $USER_ID --instance 1 $PROJECT_ID
nova quota-update --user $USER_ID --cores 2 $PROJECT_ID
nova quota-update --user $USER_ID --ram 2048 $PROJECT_ID
nova quota-update --user $USER_ID --key-pairs 1 $PROJECT_ID
nova quota-show --user $USER_ID --tenant $PROJECT_ID
fi
done
}
quota
```
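The script only updates quotas for users it can resolve, so it is safe to re-run; one option is to schedule it from the stack user's crontab (a sketch, with hypothetical paths):
```sh
# example crontab entry: run the quota sync nightly at 02:00 and keep a log
# 0 2 * * * /home/stack/project_quota_per_user.sh >> /home/stack/project_quota_per_user.log 2>&1
crontab -e
```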
# Import disk images to Glance image service
- The images have cloud-init enabled, ensuring the ssh key can be pushed to the image and any metadata can be accessed and used to perform custom bootstrap actions.
- Images are uploaded with public status, meaning any user can use the image; an image could instead be private or shared, carry metadata, or be scoped to a single project.
```sh
# download the ubuntu image and make available to all projects
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./bionic-server-cloudimg-amd64.img ubuntu_18.04
# download the alma image and make available to all projects
wget https://repo.almalinux.org/almalinux/8/cloud/x86_64/images/AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2
openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2 alma_8.6
# download the rocky image and make available to all projects
wget https://download.rockylinux.org/pub/rocky/8.6/images/Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2
openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ./Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2 rocky_8.6
# download the cirros test image (only useful to ping/traceroute/curl) to the admin project
wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
openstack image create --disk-format qcow2 --container-format bare --private --project admin --property os_type=linux --file ./cirros-0.5.1-x86_64-disk.img cirros-0.5.1
# check the format of the images to determine if they are qcow format
file cirros-0.5.1-x86_64-disk.img
cirros-0.5.1-x86_64-disk.img: QEMU QCOW Image (v3), 117440512 bytes
file bionic-server-cloudimg-amd64.img
bionic-server-cloudimg-amd64.img: QEMU QCOW Image (v2), 2361393152 bytes
file AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2
AlmaLinux-8-GenericCloud-8.6-20220513.x86_64.qcow2: QEMU QCOW Image (v3), 10737418240 bytes
file Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2
Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2: QEMU QCOW Image (v3), 3492806656 bytes
# list image attributes
openstack image list --long --fit-width
+------------------------------+--------------+-------------+------------------+-----------+------------------------------+--------+------------+-----------+--------------------------------+------+
| ID | Name | Disk Format | Container Format | Size | Checksum | Status | Visibility | Protected | Project | Tags |
+------------------------------+--------------+-------------+------------------+-----------+------------------------------+--------+------------+-----------+--------------------------------+------+
| 633641ac-6686-4a2e-bfec-0459 | alma_8.6 | qcow2 | bare | 555876352 | c7c15ec93e48399187783be828cc | active | public | False | 9c7f7d54441841a6b990e928c8e08b | |
| b41c1e65 | | | | | 1be2 | | | | 8a | |
| 26c0b4ac-0de2-448d-b695-1f43 | cirros-0.5.1 | qcow2 | bare | 16338944 | 1d3062cd89af34e419f7100277f3 | active | private | False | 9c7f7d54441841a6b990e928c8e08b | |
| c2612efb | | | | | 8b2b | | | | 8a | |
| 6535678f-37b3-49a0-ae10-3a5f | rocky_8.6 | qcow2 | bare | 857604096 | 062b60cb6f7cdfe4c5e4d4624b0b | active | public | False | 9c7f7d54441841a6b990e928c8e08b | |
| 15742607 | | | | | a8c3 | | | | 8a | |
| db826067-0bf8-4494-8837-b707 | ubuntu_18.04 | qcow2 | bare | 389808128 | 3cdb7bbbabdcd466002ff23cdd94 | active | public | False | 9c7f7d54441841a6b990e928c8e08b | |
| 0bb8f1c1 | | | | | 8e2b | | | | 8a | |
+------------------------------+--------------+-------------+------------------+-----------+------------------------------+--------+------------+-----------+--------------------------------+------+
```
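If an image should be limited to selected projects rather than made public, its visibility can be changed after upload; a sketch using the names above (the final accept step would be run as a user in the receiving project):
```sh
# change the image from public to shared and grant the 'guest' project access
openstack image set rocky_8.6 --shared
openstack image add project rocky_8.6 guest
# as a user in the guest project, accept the shared image
#openstack image set rocky_8.6 --accept
```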
# Create instance sizes (flavours)
> [https://access.redhat.com/documentation/en-us/red\_hat\_openstack\_platform/16.1/html/director\_installation\_and\_usage/assembly_performing-overcloud-post-installation-tasks#sect-Creating-basic-overcloud-flavors](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/assembly_performing-overcloud-post-installation-tasks#sect-Creating-basic-overcloud-flavors)
Create a single flavour for only the guest project.
- The flavour is set private and bound to a single project
```sh
openstack flavor create guest.tiny --ram 2048 --disk 5 --vcpus 2 --private --project guest
openstack flavor list --all
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| afbb704c-41dd-4165-9c92-c7af79f44d8b | guest.tiny | 2048 | 5 | 0 | 2 | False |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
```
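Access to a private flavour can be extended to further projects later; a sketch, where 'research' is a hypothetical project:
```sh
# grant a second project access to the private flavour
openstack flavor set --project research guest.tiny
# and to revoke it again
#openstack flavor unset --project research guest.tiny
```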
To create instance flavours for all projects, the following seems like a good sizing scheme.
```sh
#openstack flavor create m1.tiny --ram 512 --disk 5 --vcpus 1
#openstack flavor create m1.smaller --ram 1024 --disk 5 --vcpus 1
#openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1
#openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2
#openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4
#openstack flavor create m1.xlarge --ram 8192 --disk 10 --vcpus 8
openstack flavor list
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| 0b4b8b07-7ff3-4d75-974d-899e19fa5a8b | m1.small | 2048 | 10 | 0 | 1 | True |
| 2ba59dbb-c2f8-40f3-90ee-a29a634280e3 | m1.medium | 3072 | 10 | 0 | 2 | True |
| 78b79341-6c28-440a-9f04-c6b0f81e8ac6 | m1.tiny | 512 | 5 | 0 | 1 | True |
| 7c9600d8-7b95-4749-a74b-75033cc94bbd | m1.xlarge | 8192 | 10 | 0 | 8 | True |
| a91ab2d7-5412-4adc-b3cf-824902814098 | m1.smaller | 1024 | 5 | 0 | 1 | True |
| c90fe5c6-a5a1-4ba5-8fe3-7343d2942858 | m1.large | 8192 | 10 | 0 | 4 | True |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
```
# Delete disk volumes
Storage capacity maintenance can be a hands-on task in Openstack, owing to the following:
- Note the 'available' volumes below; these are leftover volumes that should be deleted. Preferably, in a self-service environment, users will select 'delete disk on termination' when creating a VM instance.
- Determining disk content after termination becomes difficult, especially where there is no description or tag.
- Adding descriptions is good end-user practice; encourage users to add a description to their disks, perhaps including their email address, especially for any disk intended to outlive a VM instance.
- Admins may have a policy that a disk is deleted if it is not 'in-use' (thus 'available') and has no description set, as this is generally the state of a disk automatically created with a now-decommissioned VM instance.
- Project-based quotas can mitigate wider capacity issues for a cluster with many orphaned disks.
```sh
openstack volume list --project bioinformatics --long
+--------------------------------------+------+-----------+------+---------+----------+---------------------------------------------------------------+------------+
| ID | Name | Status | Size | Type | Bootable | Attached to | Properties |
+--------------------------------------+------+-----------+------+---------+----------+---------------------------------------------------------------+------------+
| 4f26d90b-4aa1-4150-96b6-aa019761bedd | | in-use | 100 | tripleo | true | Attached to b3692490-39af-45dc-8c4c-d9679ae51fca on /dev/vda | |
| 5dd9ca41-90de-4658-81a4-adfffea99deb | | in-use | 100 | tripleo | true | Attached to 451ccb19-979a-40f9-94da-804bb94d4e04 on /dev/vda | |
| bad3afd7-db09-4a14-bb87-903d4361fa55 | | available | 100 | tripleo | true | | |
| 8ad76e95-b46f-4a49-8517-081f78f14997 | | available | 100 | tripleo | true | | |
| e5061dbb-2f12-4e66-81ca-7900baa24570 | | available | 100 | tripleo | true | | |
+--------------------------------------+------+-----------+------+---------+----------+---------------------------------------------------------------+------------+
```
The basis of a script to run periodically to remove orphaned disks.
```sh
touch delete_orphaned_disk.sh
chmod +x delete_orphaned_disk.sh
nano -cw delete_orphaned_disk.sh
#!/bin/bash
#set -x
source /home/stack/overcloudrc
older_than_days=17
# key 'Status' with value 'available' indicates a disk is not attached to a VM instance
# key 'Name' with an empty value indicates the disk was created while provisioning a VM instance; users should add a meaningful name to their disk if they value the data
# when a disk is provisioned independently of a VM instance, its name is chosen by the user rather than being left empty
# you could use any of these fields and behaviours to qualify whether a disk should be selected for deletion
for i in $(openstack volume list --project bioinformatics -f json | jq -r '.[] | select((.Status == "available") and .Name == "").ID')
do
doc="$doc $(openstack volume show $i -f json | jq '. | {"volume_id": .id, "last_used": .updated_at, "status": .status, "user_id": .user_id}')"
done
doc=$(echo $doc | jq -s .)
list_items=$(echo $doc | jq '. | length')
for ((i=0; i<list_items; i++))
do
#echo $doc | jq .[$i]
user=$(echo $doc | jq -r .[$i].user_id)
get_last_used=$(echo $doc | jq -r .[$i].last_used)
# get last used date in iso8601 format, convert to unixtime
unixtime_last_used=$(date -d $get_last_used +"%s")
unixtime_older_than=$(date -d "now -$older_than_days days" +"%s")
user_info=$(openstack user show $user -f json | jq '. | {"name": .name, "email": .email}')
get_name=$(echo $user_info | jq -r .name)
get_email=$(echo $user_info | jq -r .email)
if [ $unixtime_last_used -lt $unixtime_older_than ]
then
schedule_removal=true
else
schedule_removal=false
fi
doc1="$doc1 $(echo $doc | jq .[$i] | jq --argjson input1 '{ "name":"'$get_name'", "email":"'$get_email'", "unixtime_last_used":"'$unixtime_last_used'", "schedule_removal":"'$schedule_removal'" }' '. = $input1 + .')"
done
doc=$(echo $doc1 | jq -s .)
# add your logic here to print or email a report; add logic to accept an input file and delete orphaned disks based on 'schedule_removal'
# document content
#[
# {
# "name": "tseed",
# "email": "tseed@ocf.co.uk",
# "unixtime_last_used": "1657093948",
# "schedule_removal": "true",
# "volume_id": "339a923e-4861-4654-9f25-a729b03c7f86",
# "last_used": "2022-07-06T08:52:28.000000",
# "status": "available",
# "user_id": "fa1fc5885a074a64b2d41958d3fc9dcf"
# }
#]
volumes=$(echo $doc | jq -r '.[] | select(.schedule_removal == "true").volume_id')
for i in $volumes
do
echo $i
openstack volume delete --purge $i
done
```
# More CLI commands
## ssh key commands
SSH keys can be created in the web console; on creation a .pem private keyfile is downloaded automatically by the web browser.
Create a new key; note the private key is presented once and cannot be retrieved later, so ensure you copy it to a safe location.
```sh
# when issuing this command the private key will be displayed, make note of this as it cannot be retrieved
openstack keypair create test
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEArNhyzyD2/ZA5WgnkNp9dSDEl0XjoAx/yfF77dt6NO6iXB3Os
vqAUVJsnPz8faDm1i8qYM7P61ZrUD4FvnK9SfyIIU/jZuByaNi2/M3DL1Cyj5NCH
ORcRYDyz66X2uIJkPTFr6XVXOEQYWTv7dpuxVAoPle3sQ0UbG9pjD7w2sIEjxtbN
DVcLJfXb7L6ZYHKP/AYQTJNSOJTYSemUkAc7lQvR8q9RZnNcXjLC6j5Boy6VXZuX
+MsDdiWdkeebsc0of64xUK8UjY3eVP0VfOufZejrsCiOjqZhSvgP/AJMt0SnCYZW
1Dl5LlQ+t3RTCcYkTrWjcmuf/dNzewIguCb3RQIDAQABAoIBAQCpY/qoGTNVbll2
bwkzqty9WkUo06f1IAMBdghVB2g8Fk3k5K1fp/wkqmU9K3x5JU1RMXwV94WUfwbi
J0SdtohPxaeJu/CK6aUMAatHG3z2c8UvAlnzTjMeMH9XKq/vRQI9okiSZAfVQY7n
LMyVAaI4rR93HNOVXY1ir5SzoA2szVV/vP6Ki0WlUZ6AIULX2uAD8PPrDCdMr9Qc
HUjdXHHX6hBjN7UFcE4uYdDPXc3TvFSG4q5PEn3fXGY9D0NyNcvXxmC4w16zb0d3
8hxVFxcwFcVTvlsTORnTKJ91DBN6jiSY5ABpLriZgZij9T3i0qsouJZ7k2hsb3q/
zGR4GzMFAoGBAOBBOoEtlWdvhOAaLbXr6iTmiAMLudIXjIKofokjBdgPXucK/r0S
rp/uS68g4s9id6RPOUj9mWq0k4JOtRLb5nXCkQ8eF84PJ6XCnJZL2xcnFfYErqED
CL1BFGhJ7i/ExiHWtD8Ew4oYiWlvfvWxVbrgJsElJ1VZHNDmM8uVIWuDAoGBAMVQ
MrCZQFA+Cxb9vOBn0rYYOhDCyNKAYsZHesTZe8IGieysc5UyEA/7Z2Owvv0L3Y03
qi5KSDJfMtR82M+L/oykwFc5l/2wUoLjJexpVdZX/KqDq7VERKTtK1qysr+RY141
a8pof1JN5ojHOTl9BvEnJf/K5clqFfPHuIhmpq+XAoGAFcshBWbJqziyQBkrMg/Q
PG/O7gTYtSsms5fuXCN0MPAld+ygnv1OzSoaXtWiVScrm2M7nPVQUIdmAnblsASA
3BbhhAeXpqXgY4KLNyv+Cbz5rGP+GJWz5riJZC0zIZ9M5gL4l1s+KZCC4iU8wGHQ
hA2+lmym6utzGnYUuIcwrUMCgYBwJi9JlTGq6jjfboVmf1ySx55pXG1MyFBcJtCv
BnaDR7gpX7Oqf3QFwX14ekN0DMR2ucbu3KXAi7+WawfIn+elBReV/FRZi1i6sGUj
xJNXa1dfi8uTEiR6IZvcx2k13Ws/ZtnHiDGmFEUORT5PYLMLapb8ltSY8MVddI18
aewgLQKBgQCCTYwfPuo67Ujla5CxXr9POZCPQXNnPNJsAOAK3IYoizHUHpUmge/G
sQs+IQY774LKv4ZxT5o1qrNQ491oLk6vamyXTBa59cECTTcvIiZW5stWI5j2zWgm
2XE7Am3MnghnLJdyZ7HA/MT9GGrVHyinojmtM9FWEsKwQ1PJWMQwMQ==
-----END RSA PRIVATE KEY-----
```
Import an existing public key; ensure the key is followed by an identifier such as your email. Do not use user@server unless it is a single-use key for a specifically named VM instance.
```sh
pubkey="ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAm+l9n70tSvow56eOLhDZT8VLCmU9MCjUa7d2v0fH2ix/mdWy+RUo9c24U9WJmBlxpAmMDpSxlFcOpBwk1y+tWC/24YJ+m0/6YGWTzbl84GCjdBfrWcTuV5MFYvkYfq8lx3VESyZrYVmoC9Shwtj825YjfVpWqWvFw2kJznyOHWSGv60j6AJyzoT8rWCt4tSusEVzwup7UWF8TDIB6GXO3hqBZcCo3mfyuWkAswkEbX8SKIXqlNUZWMsxdS5ZpodigG6pj9fIsob8P+PxXF7YQiPo4W1uDHGoh0033oLb2fQULs4VjwqNVUE4dKkruFdNupBNCY3BJWHMT/mDOnUiww== tseed@ocf.co.uk"
echo $pubkey > /tmp/pubkey.txt
openstack keypair create --public-key /tmp/pubkey.txt tseed
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 8b:ae:ed:4c:63:12:cb:5b:a4:7a:5a:bc:08:83:fc:6c |
| name | tseed |
| type | ssh |
| user_id | e2ea49d4ae1d4670b8546aab65deba2b |
+-------------+-------------------------------------------------+
rm -f /tmp/pubkey.txt
```
Keypair operations.
```sh
openstack keypair -h
Command "keypair" matches:
keypair create
keypair delete
keypair list
keypair show
openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| test | 79:e7:10:53:13:fd:ec:47:0e:3e:61:19:3b:84:2b:0a |
| tseed | 8b:ae:ed:4c:63:12:cb:5b:a4:7a:5a:bc:08:83:fc:6c |
+-------+-------------------------------------------------+
openstack keypair show test / openstack keypair show test -f json
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| created_at | 2022-10-12T09:08:59.000000 |
| deleted | False |
| deleted_at | None |
| fingerprint | 79:e7:10:53:13:fd:ec:47:0e:3e:61:19:3b:84:2b:0a |
| id | 2 |
| name | test |
| type | ssh |
| updated_at | None |
| user_id | e2ea49d4ae1d4670b8546aab65deba2b |
+-------------+-------------------------------------------------+
openstack keypair show --public-key test
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCs2HLPIPb9kDlaCeQ2n11IMSXReOgDH/J8Xvt23o07qJcHc6y+oBRUmyc/Px9oObWLypgzs/rVmtQPgW+cr1J/IghT+Nm4HJo2Lb8zcMvULKPk0Ic5FxFgPLPrpfa4gmQ9MWvpdVc4RBhZO/t2m7FUCg+V7exDRRsb2mMPvDawgSPG1s0NVwsl9dvsvplgco/8BhBMk1I4lNhJ6ZSQBzuVC9Hyr1Fmc1xeMsLqPkGjLpVdm5f4ywN2JZ2R55uxzSh/rjFQrxSNjd5U/RV8659l6OuwKI6OpmFK+A/8Aky3RKcJhlbUOXkuVD63dFMJxiROtaNya5/903N7AiC4JvdF Generated-by-Nova
```
## security group commands (firewall rules)
The SG (security group) mechanism is very flexible and intuitive to use.
```sh
# list security groups
# on a fresh system you will see one 'default' SG per project: there is the 'service' project (whose builtin SG covers functional resources like routers) and a default project named 'admin'; until we add our own projects we will use the admin project
openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| cc3e3172-66ff-48ae-8b92-96bd43fbbc65 | default | Default security group | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
| ff5c38eb-96fd-40f9-b90a-4c1b31745438 | default | Default security group | 9c7f7d54441841a6b990e928c8e08b8a | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
openstack project list # note we also have the guest project created in the example above
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 45e6f96ee6cc4ba3a348c38a212fd8b8 | guest |
| 98df2c2796ba41c09f314be1a83c9aa9 | service |
| 9c7f7d54441841a6b990e928c8e08b8a | admin |
+----------------------------------+---------+
openstack security group list --project admin
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ff5c38eb-96fd-40f9-b90a-4c1b31745438 | default | Default security group | 9c7f7d54441841a6b990e928c8e08b8a | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
# check a security group, json output is easier to read
openstack security group show ff5c38eb-96fd-40f9-b90a-4c1b31745438 -f json
# find the rules associated to the security group
openstack security group show ff5c38eb-96fd-40f9-b90a-4c1b31745438 -f json | jq -r .rules[].id
0dbe030c-d556-4553-bf2f-86b2d8f003a3
2de6e7cb-67b8-4df8-9cbd-35de055490b7
72b479d8-e52e-4e3c-ab52-c9645bedb267
f59c6050-ba70-4567-a094-8d026f0be586
# list all rules associated with a security group
# notice we can bind rules with ingress/egress and other security groups; much like AWS, we can attach VM instances to SGs and inherit rules this way
openstack security group rule list ff5c38eb-96fd-40f9-b90a-4c1b31745438
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| 0dbe030c-d556-4553-bf2f-86b2d8f003a3 | None | IPv6 | ::/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 |
| 2de6e7cb-67b8-4df8-9cbd-35de055490b7 | None | IPv4 | 0.0.0.0/0 | | None |
| 72b479d8-e52e-4e3c-ab52-c9645bedb267 | None | IPv6 | ::/0 | | None |
| f59c6050-ba70-4567-a094-8d026f0be586 | None | IPv4 | 0.0.0.0/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
# add a simple ssh access rule to the SG 'default' in the 'admin' project
openstack security group rule create \
--ingress \
--protocol tcp \
--ethertype IPv4 \
--remote-ip '0.0.0.0/0' \
--dst-port 22 \
ff5c38eb-96fd-40f9-b90a-4c1b31745438
# the output of our last command showed a rule created with ID 8e78f3ea-7e07-4db7-ab22-6e59935f76a9
openstack security group rule list ff5c38eb-96fd-40f9-b90a-4c1b31745438
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| 0dbe030c-d556-4553-bf2f-86b2d8f003a3 | None | IPv6 | ::/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 |
| 2de6e7cb-67b8-4df8-9cbd-35de055490b7 | None | IPv4 | 0.0.0.0/0 | | None |
| 72b479d8-e52e-4e3c-ab52-c9645bedb267 | None | IPv6 | ::/0 | | None |
| 8e78f3ea-7e07-4db7-ab22-6e59935f76a9 | tcp | IPv4 | 0.0.0.0/0 | 22:22 | None |
| f59c6050-ba70-4567-a094-8d026f0be586 | None | IPv4 | 0.0.0.0/0 | | ff5c38eb-96fd-40f9-b90a-4c1b31745438 |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
```
Create your own security group for the 'guest' project. Any non-default security group must be explicitly bound to a new VM instance; typically, when a user creates a VM instance, they select the security group from a dropdown menu in the web console.
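The binding can also be done from the CLI for an instance that already exists; a sketch, where 'myvm' is a hypothetical instance in the guest project:
```sh
# attach the MYAPP security group to a running instance
openstack server add security group myvm MYAPP
# and to detach it again
#openstack server remove security group myvm MYAPP
```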
```sh
# create a new SG for your custom application in the guest project
openstack security group create --project guest MYAPP
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2022-10-12T10:00:04Z |
| description | MYAPP |
| id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 |
| location | cloud='', project.domain_id=, project.domain_name=, project.id='45e6f96ee6cc4ba3a348c38a212fd8b8', project.name=, region_name='regionOne', zone= |
| name | MYAPP |
| project_id | 45e6f96ee6cc4ba3a348c38a212fd8b8 |
| revision_number | 1 |
| rules | created_at='2022-10-12T10:00:04Z', direction='egress', ethertype='IPv6', id='9c64924f-644a-4234-8026-8239fac14c16', updated_at='2022-10-12T10:00:04Z' |
| | created_at='2022-10-12T10:00:04Z', direction='egress', ethertype='IPv4', id='db49e625-da33-4c1e-aab8-9ce4a10cf4f9', updated_at='2022-10-12T10:00:04Z' |
| tags | [] |
| updated_at | 2022-10-12T10:00:04Z |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
# add some rules to the 'MYAPP' SG
# inbound access from anywhere to port 2000
openstack security group rule create \
--ingress \
--protocol tcp \
--ethertype IPv4 \
--remote-ip '0.0.0.0/0' \
--dst-port 2000 \
03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2022-10-12T10:00:35Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | bfaa42da-8573-4490-8098-45e7befa57f4 |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='9c7f7d54441841a6b990e928c8e08b8a', project.name='admin', region_name='regionOne', zone= |
| name | None |
| port_range_max | 2000 |
| port_range_min | 2000 |
| project_id | 9c7f7d54441841a6b990e928c8e08b8a |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 |
| tags | [] |
| updated_at | 2022-10-12T10:00:35Z |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
# inbound access to port range 3000-4000 only from VM instances on the local network; the IP range is the guest network subnet
openstack security group rule create \
--ingress \
--protocol tcp \
--ethertype IPv4 \
--remote-ip '172.16.0.0/16' \
--dst-port 3000:4000 \
03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2022-10-12T10:01:05Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 7685e615-3e1b-4ea1-82e9-1131daf11f69 |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='9c7f7d54441841a6b990e928c8e08b8a', project.name='admin', region_name='regionOne', zone= |
| name | None |
| port_range_max | 4000 |
| port_range_min | 3000 |
| project_id | 9c7f7d54441841a6b990e928c8e08b8a |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 172.16.0.0/16 |
| revision_number | 0 |
| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 |
| tags | [] |
| updated_at | 2022-10-12T10:01:05Z |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
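# as an alternative to a CIDR, a rule can reference a remote security group: traffic is then
# allowed from any instance bound to that group, wherever its address happens to be
# (a sketch using the same MYAPP SG UUID for both sides; not run here)
#openstack security group rule create \
#  --ingress \
#  --protocol tcp \
#  --remote-group 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 \
#  --dst-port 3000:4000 \
#  03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3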
# list new security group
openstack security group list --project guest
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | MYAPP | MYAPP | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
| cc3e3172-66ff-48ae-8b92-96bd43fbbc65 | default | Default security group | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
# list rules in security group
openstack security group rule list 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
+--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
| 7685e615-3e1b-4ea1-82e9-1131daf11f69 | tcp | IPv4 | 172.16.0.0/16 | 3000:4000 | None |
| 9c64924f-644a-4234-8026-8239fac14c16 | None | IPv6 | ::/0 | | None |
| bfaa42da-8573-4490-8098-45e7befa57f4 | tcp | IPv4 | 0.0.0.0/0 | 2000:2000 | None |
| db49e625-da33-4c1e-aab8-9ce4a10cf4f9 | None | IPv4 | 0.0.0.0/0 | | None |
+--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
# show rule
openstack security group rule show 7685e615-3e1b-4ea1-82e9-1131daf11f69
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2022-10-12T10:01:05Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 7685e615-3e1b-4ea1-82e9-1131daf11f69 |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='9c7f7d54441841a6b990e928c8e08b8a', project.name='admin', region_name='regionOne', zone= |
| name | None |
| port_range_max | 4000 |
| port_range_min | 3000 |
| project_id | 9c7f7d54441841a6b990e928c8e08b8a |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 172.16.0.0/16 |
| revision_number | 0 |
| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 |
| tags | [] |
| updated_at | 2022-10-12T10:01:05Z |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
# notice that when a SG is created some default outbound egress rules are created allowing access to anywhere; the same rules are present in the 'default' security group so they typically do not need to be included
# they are present in case this is the only SG applied to the VM instance
# where multiple SGs are bound to a host this default outbound rule is often duplicated, which is not a problem
# however, if you want to control egress traffic it may be easier to keep all egress rules in a single SG, as sketched below
openstack security group rule show db49e625-da33-4c1e-aab8-9ce4a10cf4f9
+-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2022-10-12T10:00:04Z |
| description | None |
| direction | egress |
| ether_type | IPv4 |
| id | db49e625-da33-4c1e-aab8-9ce4a10cf4f9 |
| location | cloud='', project.domain_id=, project.domain_name=, project.id='45e6f96ee6cc4ba3a348c38a212fd8b8', project.name=, region_name='regionOne', zone= |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 45e6f96ee6cc4ba3a348c38a212fd8b8 |
| protocol | None |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 |
| tags | [] |
| updated_at | 2022-10-12T10:00:04Z |
+-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
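# to restrict egress, delete the default egress rules from this SG and add back only what is
# needed, e.g. outbound DNS (illustrative policy only; commented out so it is not run here)
#openstack security group rule delete db49e625-da33-4c1e-aab8-9ce4a10cf4f9
#openstack security group rule delete 9c64924f-644a-4234-8026-8239fac14c16
#openstack security group rule create --egress --protocol udp --remote-ip '0.0.0.0/0' --dst-port 53 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3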
# delete the rules and security group
# if there are multiple SGs named MYAPP it may be hard to determine the correct SG; using UUID values is safer
#openstack security group show MYAPP -f json | jq -r .id
#03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
openstack security group list --project guest
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 | MYAPP | MYAPP | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
| cc3e3172-66ff-48ae-8b92-96bd43fbbc65 | default | Default security group | 45e6f96ee6cc4ba3a348c38a212fd8b8 | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
#openstack security group show 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 -f json | jq -r .rules[].id
#7685e615-3e1b-4ea1-82e9-1131daf11f69
#9c64924f-644a-4234-8026-8239fac14c16
#bfaa42da-8573-4490-8098-45e7befa57f4
#db49e625-da33-4c1e-aab8-9ce4a10cf4f9
openstack security group rule list 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
+--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
| 7685e615-3e1b-4ea1-82e9-1131daf11f69 | tcp | IPv4 | 172.16.0.0/16 | 3000:4000 | None |
| 9c64924f-644a-4234-8026-8239fac14c16 | None | IPv6 | ::/0 | | None |
| bfaa42da-8573-4490-8098-45e7befa57f4 | tcp | IPv4 | 0.0.0.0/0 | 2000:2000 | None |
| db49e625-da33-4c1e-aab8-9ce4a10cf4f9 | None | IPv4 | 0.0.0.0/0 | | None |
+--------------------------------------+-------------+-----------+---------------+------------+-----------------------+
# remove rules
openstack security group rule delete 7685e615-3e1b-4ea1-82e9-1131daf11f69
openstack security group rule delete 9c64924f-644a-4234-8026-8239fac14c16
openstack security group rule delete bfaa42da-8573-4490-8098-45e7befa57f4
openstack security group rule delete db49e625-da33-4c1e-aab8-9ce4a10cf4f9
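# or, scripted, using the jq pattern shown above (assumes jq is installed)
#openstack security group show 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3 -f json | jq -r '.rules[].id' | xargs -n1 openstack security group rule delete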
# remove SG
openstack security group delete 03e90cde-a2c9-4a5f-bcfc-8ea93726ebc3
```
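A security group only takes effect once it is bound to a VM instance, either at boot time or attached to a running instance. A minimal sketch, assuming the 'MYAPP' group still exists and an instance named 'myvm' (a hypothetical name) is already running in the guest project:
```sh
# attach the SG to a running instance; port bindings are updated live, no reboot needed
openstack server add security group myvm MYAPP
# verify the bindings on the instance
openstack server show myvm -f json | jq -r '.security_groups'
# detach it again
openstack server remove security group myvm MYAPP
# alternatively, bind at boot time with: openstack server create --security-group MYAPP ...
```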