initial commit

main
tseed 2022-10-26 17:49:20 +01:00
commit 204e41c729
8 changed files with 309 additions and 0 deletions

GCP CLI_SDK Account access.md Executable file

@ -0,0 +1,22 @@
# CLI/SDK Account access
This is useful for pulling, editing and then running the helm charts and kubectl commands without going through all the usual kubectl auth token stuff over the internet; we want to connect to the Google Cloud Shell over ssh rather than through the web console.
The project is already set up and bound to a billing account.
> https://cloud.google.com/sdk/docs/install#deb
```sh
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get install apt-transport-https ca-certificates gnupg
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
# generate the auth token
gcloud init --console-only
# select account toby.n.seed@gmail.com
# select project influenzanet-321116
# select region europe-west2
# select zone europe-west2-b
# the following command will generate ssh keys
gcloud cloud-shell ssh --authorize-session
```
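Once the session is authorized, edited files such as helm chart values can be copied between the local machine and the cloud shell home directory with `gcloud cloud-shell scp`. A minimal sketch; the chart path `mychart/values.yaml` is illustrative:
```sh
# copy a locally edited chart values file up to cloud shell
# (the target directory must already exist there)
gcloud cloud-shell scp localhost:./mychart/values.yaml cloudshell:~/mychart/values.yaml
# then run helm/kubectl from within the ssh session
gcloud cloud-shell ssh --authorize-session
```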


@ -0,0 +1,136 @@
# Ingress controller (layer4) with certmanager using letsencrypt
These notes focus on the cert-manager handshake methods used to provision an SSL certificate from an ACME provider; the ingress controller configuration itself is missing (probably deleted in a cloud shell).
This should work on other cloud providers too; the nginx ingress controller should install bindings for cloud providers to automatically provision a cloud layer 4 loadbalancer.
Using the nginx ingress controller with a cloud layer 4 loadbalancer is more cost effective: multiple ports can be shunted through the same cloud loadbalancer, and multiple URI paths can point to different services.
GKE provides its own ingress controller that brings up a layer 7 loadbalancer - it looks a little more involved and provisions a new loadbalancer for each public endpoint.
## connect to the cluster in gcloud shell
```
gcloud config set project influenzanet-321116
gcloud config set compute/zone europe-west2-b
gcloud container clusters list
gcloud config set container/cluster influenzanet
gcloud container clusters get-credentials influenzanet
```
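A quick check that the fetched credentials actually work before moving on:
```sh
# confirm kubectl is now pointed at the influenzanet cluster
kubectl config current-context
kubectl get nodes -o wide
```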
## install the latest version of cert-manager
```
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.4.2 --set installCRDs=true
```
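Before creating any issuers it's worth a quick sanity check that the cert-manager pods and CRDs came up:
```sh
# all three cert-manager deployments should be Running
kubectl get pods --namespace cert-manager
# the CRDs pulled in by installCRDs=true
kubectl get crds | grep cert-manager.io
```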
### additional DNS setting
During DNS01 solver validation the cert-manager container will check that it can resolve a DNS A record before signalling the ACME provider to start the challenge.
You can set cert-manager to query specific DNS servers for this; it won't help the ACME provider do its own validation, but it can speed up certificate provisioning if you have only just created the DNS record for the load balancer (or have split-horizon DNS where a record points to a private IP internally).
```
#using the nameserver for our zone in google cloudDNS
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.4.2 --set installCRDs=true --set 'extraArgs={--dns01-recursive-nameservers=ns-cloud-e1.googledomains.com:53,8.8.8.8:53,1.1.1.1:53}'
#using the link-local instance metadata / internal service (inc. DNS) cloud endpoint
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.4.2 --set installCRDs=true --set 'extraArgs={--dns01-recursive-nameservers=169.254.169.254:53,8.8.8.8:53,1.1.1.1:53}'
```
## set up the dns zone
> https://cloud.google.com/dns/docs/
> https://cloud.google.com/dns/docs/update-name-servers
We are using Google cloudDNS to have control over records in our zone, and Google Domains for ocftest.com.
A newly provisioned domain likely won't have the same nameservers as your cloudDNS zone; the domain record needs its nameservers configuring from the cloudDNS zone.
google domains default nameservers for ocftest.com (not in use)
![ab6a38c3672bc6dbef70f5e2c2929ea5.png](_resources/ab6a38c3672bc6dbef70f5e2c2929ea5.png)
google domains custom name servers for ocftest.com
![f68c1acefbdad559296f4fb58a042eb6.png](_resources/f68c1acefbdad559296f4fb58a042eb6.png)
cloudDNS zone nameservers
![2425a439caf8bc74923c2bae14f641a0.png](_resources/2425a439caf8bc74923c2bae14f641a0.png)
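The same nameservers shown in the screenshot can be read from the CLI; a sketch, assuming the cloudDNS zone is named `ocftest`:
```sh
# find the zone name, then list its nameservers to copy into google domains
gcloud dns managed-zones list
gcloud dns managed-zones describe ocftest --format="value(nameServers)"
```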
### DNSSEC
This is not tested but believed to be necessary if the zone has DNSSEC enabled.
Use the CAA generator to find settings for a CAA record.
> CAA generator: https://sslmate.com/caa/
Create a record in your zone with the settings from the CAA generator.
- type: CAA
- DNS name: influenzanet.ocftest.com
- auth entries:
  - `0 issue "letsencrypt.org"`
  - `0 issuewild ";"`
  - `0 iodef "mailto:toby.n.seed@gmail.com"`
![52ba061d0241cf0827ae930727cf86f4.png](_resources/52ba061d0241cf0827ae930727cf86f4.png)
validate the CAA record with dig.
```
dig caa influenzanet.ocftest.com @8.8.8.8
```
## Selector method
> for both selector methods (HTTP01/DNS01) in the issuer configuration, the ACME provider will require a valid DNS A record.
The issuer configuration for the ACME provider will validate domain ownership using one of two methods, DNS01 or HTTP01.
In the HTTP01 method the ingress controller is prompted by cert-manager to serve a challenge key (derived from a random secret) at a well-known URL, something like http://influenzanet.ocftest.com/.well-known/acme-challenge/<token>.
In the DNS01 method, cert-manager connects to cloudDNS via the API and creates a TXT record containing the challenge key; this method requires a cloud service account with RBAC capable of creating and deleting DNS records.
Which is best? The DNS method is quicker and less intrusive if you have a complex ingress controller ruleset (you don't want to alter endpoints with an additional path because of regex matching etc.), and it allows wildcard certificate generation and subdomain certificates (a.b.example.com), but it does need access to the cloudDNS API, and the RBAC rules should probably be finely tuned for security.
The HTTP01 method is the simplest to set up.
Both types of selector can be installed as a ClusterIssuer or an Issuer, the former being able to generate certificates for any ingress controller in any namespace. When using the ClusterIssuer type with the DNS01 selector, the secret holding the cloudDNS API credentials (Google cloudDNS in this example) must live in the cert-manager namespace; with the Issuer type this secret may reside in the same namespace as the issuer.
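As an illustration of the HTTP01 variant, a minimal ClusterIssuer sketch; the name `letsencrypt-http01` is illustrative and the Let's Encrypt staging endpoint is used to avoid production rate limits while testing:
```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-http01   # illustrative name
spec:
  acme:
    # swap in the production endpoint once the flow works
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: toby.n.seed@gmail.com
    privateKeySecretRef:
      name: letsencrypt-http01-account-key
    solvers:
    - http01:
        ingress:
          class: nginx   # the nginx ingress controller installed below
EOF
```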
## install the latest (stable/default) version of nginx ingress controller
```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx
```
## Install a sample web application to serve content via the ingress controller
The contents of this manifest have been lost; the following resources should fill the gaps.
> https://platform9.com/learn/v1.0/tutorials/nginix-controller-helm
> https://kosyfrances.com/ingress-gce-letsencrypt/
> https://acloudguru.com/hands-on-labs/configuring-the-nginx-ingress-controller-on-gke
The manifest will contain sections for:
- Issuer or ClusterIssuer - this will contain the Let's Encrypt (ACME) configuration with an HTTP01 or DNS01 solver section
- Ingress controller - this will provision the cloud loadbalancer, with an ingress.class annotation plus another annotation to use the cert-manager issuer configuration, and a rules section pointing to the nodeport of the sample web application pod via a serviceName config item (a sketch of this section follows the apply command below)
- Sample hello world web application
```
kubectl apply -f sampleapp-deployment.yaml
```
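The lost ingress section probably looked something like the sketch below; the hostname comes from these notes, while the service name `sampleapp` and port 8080 are assumptions:
```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sampleapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # points at the cert-manager issuer configuration
    cert-manager.io/cluster-issuer: letsencrypt-http01
spec:
  tls:
  - hosts:
    - influenzanet.ocftest.com
    secretName: influenzanet-ocftest-com-tls   # cert-manager stores the cert here
  rules:
  - host: influenzanet.ocftest.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sampleapp    # assumed service name
            port:
              number: 8080     # assumed service port
EOF
```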
## DNS01 selector method
### Create service account and RBAC rules capable of creation/deletion of DNS TXT records
> in production a new role with an attached RBAC policy should be defined with just enough access; the dns.admin role offers full write access to the DNS API. The RBAC should ideally have a limited scope, able to create and delete TXT records in one specific zone.
```
GCP_PROJECT=$(gcloud config get-value project)
#create service account
gcloud iam service-accounts create dns01-solver --display-name "dns01-solver" --project=$GCP_PROJECT
#attach predefined RBAC role dns.admin to the service account
gcloud projects add-iam-policy-binding $GCP_PROJECT --member serviceAccount:dns01-solver@$GCP_PROJECT.iam.gserviceaccount.com --role roles/dns.admin
#create a json access key file (credentials) to store in a secret for cert-manager
gcloud iam service-accounts keys create key.json --iam-account dns01-solver@$GCP_PROJECT.iam.gserviceaccount.com --project=$GCP_PROJECT
```
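To finish the DNS01 wiring, the key file is stored as a secret in the cert-manager namespace and referenced from a ClusterIssuer. A sketch; the secret and issuer names are illustrative and the staging ACME endpoint is assumed:
```sh
# store the service account key where the ClusterIssuer can find it
kubectl create secret generic clouddns-dns01-solver-sa \
  --from-file=key.json --namespace cert-manager
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01   # illustrative name
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: toby.n.seed@gmail.com
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
    - dns01:
        cloudDNS:
          project: $GCP_PROJECT
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-sa
            key: key.json
EOF
```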


@ -0,0 +1,148 @@
# GKE cluster creation and cloud shell access
> https://cloud.google.com/kubernetes-engine/docs/quickstart
> https://rafay.co/the-kubernetes-current/getting-started-with-google-kubernetes-engine-gke-0/
## connect with cloud shell, configure the environment
```
tseed@NieX0:~$ gcloud cloud-shell ssh --authorize-session
Starting your Cloud Shell machine...
Waiting for your Cloud Shell machine to start...done.
Warning: Permanently added '[34.76.250.222]:6000' (RSA) to the list of known hosts.
Welcome to Cloud Shell! Type "help" to get started.
Your Cloud Platform project in this session is set to influenzanet-321116.
Use “gcloud config set project [PROJECT_ID]” to change to a different project.
toby_n_seed@cloudshell:~ (influenzanet-321116)$
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config list project
[core]
project = influenzanet-321116
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config set project influenzanet-321116
Updated property [core/project].
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config set compute/zone europe-west2-b
Updated property [compute/zone].
```
## build quick cluster
> API reference
> https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
> available GKE versions
> https://cloud.google.com/kubernetes-engine/versioning
> single zone, multi-zone and regional cluster - we will create a single zone cluster for ease
> https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster
#### Find the default version, and the default version for a channel
```
gcloud container get-server-config --format="yaml(defaultClusterVersion)" --zone europe-west2-b
gcloud container get-server-config --flatten="channels" --filter="channels.channel=REGULAR" --format="yaml(channels.channel,channels.validVersions)" --zone europe-west2-b
```
#### Find versions in the rapid channel; there are also regular and stable channels
```
gcloud container get-server-config --flatten="channels" --filter="channels.channel=RAPID" --format="yaml(channels.channel,channels.validVersions)" --zone europe-west2-b
```
#### Find valid image types and default image
```
gcloud container get-server-config --format="yaml(validImageTypes)" --zone europe-west2-b
gcloud container get-server-config --format="yaml(defaultImageType)" --zone europe-west2-b
```
#### Find instance types
e2-medium is the smallest recommended size for k8s nodes; it is also the default.
```
gcloud compute machine-types list --filter="zone:( europe-west2-a europe-west2-b europe-west2-c )"
gcloud compute machine-types list --filter="zone:( europe-west2-b )"
```
### Create the cluster
The command is much as if you'd created a cluster with defaults in a single zone, except for a smaller SSD disk and only a single node; no scaling by node or pod is enabled.
```
gcloud container clusters create influenzanet \
--release-channel=regular \
--cluster-version=1.20.8-gke.900 \
--image-type=COS \
--num-nodes=1 \
--machine-type=e2-medium \
--disk-size=50GB \
--disk-type=pd-ssd \
--zone=europe-west2-b
```
### Delete the cluster
```
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config set compute/zone europe-west2-b
Updated property [compute/zone].
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
influenzanet europe-west2-b 1.20.8-gke.900 35.234.142.111 e2-medium 1.20.8-gke.900 1 RUNNING
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud container clusters delete influenzanet
The following clusters will be deleted.
- [influenzanet] in [europe-west2-b]
Do you want to continue (Y/n)? y
Deleting cluster influenzanet...⠼
```
## Connect to the cluster and test ability to create workload
```
gcloud cloud-shell ssh --authorize-session
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config list project
[core]
project = influenzanet-321116
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config set project influenzanet-321116
Updated property [core/project].
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
influenzanet europe-west2-b 1.19.9-gke.1900 34.105.199.155 n1-standard-1 1.19.9-gke.1900 2 RUNNING
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud container clusters describe influenzanet
ERROR: (gcloud.container.clusters.describe) One of [--zone, --region] must be supplied: Please specify location.
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud config set compute/zone europe-west2-b
Updated property [compute/zone].
#display cluster info
toby_n_seed@cloudshell:~ (influenzanet-321116)$ gcloud container clusters describe influenzanet
#set as default cluster
toby_n_seed@cloudshell:~/cluster-management (influenzanet-321116)$ gcloud config set container/cluster influenzanet
Updated property [container/cluster].
#this is where the kubectl json creds file is auto created - very handy
toby_n_seed@cloudshell:~/cluster-management (influenzanet-321116)$ gcloud container clusters get-credentials influenzanet
Fetching cluster endpoint and auth data.
kubeconfig entry generated for influenzanet.
#test connectivity with kubectl
toby_n_seed@cloudshell:~ (influenzanet-321116)$ kubectl cluster-info
Kubernetes control plane is running at https://35.197.223.199
GLBCDefaultBackend is running at https://35.197.223.199/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://35.197.223.199/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.197.223.199/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
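The transcript above stops at cluster-info; to actually test creating a workload, the hello-app sample from the GKE quickstart linked at the top works well:
```sh
# create a throwaway deployment and expose it
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port 80 --target-port 8080
# wait for an EXTERNAL-IP to appear, then curl it
kubectl get service hello-server --watch
```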

README.md Normal file

@ -0,0 +1,3 @@
# What is this?
Some notes on GKE setup and usage, plus notes on cert-manager, ACME and the nginx ingress controller.
