# AWX Operator

An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible.
## Table of Contents

- [Purpose](#purpose)
- [Usage](#usage)
- [Network and TLS Configuration](#network-and-tls-configuration)
- [Database Configuration](#database-configuration)
- [Advanced Configuration](#advanced-configuration)
- [Upgrade Notes](#upgrade-notes)
- [Development](#development)
- [Release Process](#release-process)
- [Author](#author)
## Purpose
This operator is meant to provide a more Kubernetes-native installation method for AWX via an AWX Custom Resource Definition (CRD).
Note that the operator is not supported by Red Hat, and is in alpha status. For now, use it at your own risk!
## Usage

### Basic Install
This Kubernetes Operator is meant to be deployed in your Kubernetes cluster(s) and can manage one or more AWX instances in any namespace.
First, you need to deploy AWX Operator into your cluster. Start by going to https://github.com/ansible/awx-operator/releases and making note of the latest release.
Replace `<tag>` in the URL below with the version you are deploying:

```
#> kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/<tag>/deploy/awx-operator.yaml
```
Then create a file named `my-awx.yml` with the following contents:

```yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
```
The `metadata.name` you provide will be the name of the resulting AWX deployment. If you deploy more than one AWX instance to the same namespace, be sure to use unique names.
Finally, use `kubectl` to create the AWX instance in your cluster:

```
#> kubectl apply -f my-awx.yml
```
After a few minutes, the new AWX instance will be deployed. To follow the installation progress, watch the operator pod logs:

```
#> kubectl logs -f deployments/awx-operator
```
Once deployed, the AWX instance will be accessible at http://awx.mycompany.com/ (assuming your cluster has an Ingress controller configured).
By default, the admin user is `admin` and the password is available in the `<resourcename>-admin-password` secret. To retrieve the admin password, run:

```
#> kubectl get secret <resourcename>-admin-password -o jsonpath="{.data.password}" | base64 --decode
```
You just completed the most basic install of an AWX instance via this operator. Congratulations!
### Admin user account configuration

There are three customizable variables for the admin user account creation:
| Name | Description | Default |
|---|---|---|
| tower_admin_user | Name of the admin user | admin |
| tower_admin_email | Email of the admin user | test@example.com |
| tower_admin_password_secret | Secret that contains the admin user password | Empty string |
⚠️ `tower_admin_password_secret` must reference a Kubernetes Secret, not a clear-text password.
If tower_admin_password_secret is not provided, the operator will look for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator will generate a password and create a Secret from it named <resourcename>-admin-password.
To retrieve the admin password, run:

```
#> kubectl get secret <resourcename>-admin-password -o jsonpath="{.data.password}" | base64 --decode
```
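The trailing `base64 --decode` is needed because Kubernetes stores Secret data base64-encoded, and the `jsonpath` query returns that encoded form. A minimal local sketch of that round trip, using a sample value only (no cluster required):

```shell
# Encode a sample password the way the Kubernetes API server stores it,
# then decode it exactly as the kubectl command above does.
encoded=$(printf 'mysuperlongpassword' | base64)
printf '%s\n' "$encoded"                  # bXlzdXBlcmxvbmdwYXNzd29yZA==
printf '%s' "$encoded" | base64 --decode  # mysuperlongpassword
```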
The secret that is expected to be passed should be formatted as follows:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-admin-password
  namespace: <target namespace>
stringData:
  password: mysuperlongpassword
```
## Network and TLS Configuration

### Ingress Type
By default, the AWX operator is not opinionated and won't force a specific ingress type on you. So, if `tower_ingress_type` is not specified as part of the Custom Resource specification, it will default to `none` and nothing ingress-wise will be created.
The AWX operator provides support for three kinds of Ingress to access AWX: Ingress, Route, and LoadBalancer. To toggle between these options, add the following to your AWX CR:

- Route

  ```yaml
  ---
  spec:
    ...
    tower_ingress_type: Route
  ```

- Ingress

  ```yaml
  ---
  spec:
    ...
    tower_ingress_type: Ingress
    tower_hostname: awx.mycompany.com
  ```

- LoadBalancer

  ```yaml
  ---
  spec:
    ...
    tower_ingress_type: LoadBalancer
    tower_loadbalancer_protocol: http
  ```
### TLS Termination

- Route

The following variables are customizable to specify the TLS termination procedure when Route is picked as an Ingress:
| Name | Description | Default |
|---|---|---|
| tower_route_host | Common name the route answers for | Empty string |
| tower_route_tls_termination_mechanism | TLS Termination mechanism (Edge, Passthrough) | Edge |
| tower_route_tls_secret | Secret that contains the TLS information | Empty string |
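Putting these variables together, a hypothetical CR that serves AWX through an edge-terminated Route with a custom certificate might look like the sketch below (the hostname and secret name are placeholders, not defaults; the secret must exist in the target namespace):

```yaml
---
spec:
  ...
  tower_ingress_type: Route
  tower_route_host: awx.mycompany.com
  tower_route_tls_termination_mechanism: Edge
  tower_route_tls_secret: custom-route-tls-secret
```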
- Ingress

The following variables are customizable to specify the TLS termination procedure when Ingress is picked as an Ingress:
| Name | Description | Default |
|---|---|---|
| tower_ingress_annotations | Ingress annotations | Empty string |
| tower_ingress_tls_secret | Secret that contains the TLS information | Empty string |
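As a sketch, a CR that enables TLS on the Ingress could combine these variables with the hostname setting shown earlier (the secret name below is a placeholder and must exist in the target namespace):

```yaml
---
spec:
  ...
  tower_ingress_type: Ingress
  tower_hostname: awx.mycompany.com
  tower_ingress_tls_secret: custom-ingress-tls-secret
```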
- LoadBalancer

The following variables are customizable to specify the TLS termination procedure when LoadBalancer is picked as an Ingress:
| Name | Description | Default |
|---|---|---|
| tower_loadbalancer_annotations | LoadBalancer annotations | Empty string |
| tower_loadbalancer_protocol | Protocol to use for Loadbalancer ingress | http |
| tower_loadbalancer_port | Port used for Loadbalancer ingress | 80 |
When setting up a Load Balancer for HTTPS, you will need to set `tower_loadbalancer_port` to a port other than 80. The HTTPS Load Balancer performs SSL termination at the Load Balancer level and offloads traffic to AWX over HTTP.
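For example, an HTTPS Load Balancer configuration might look like the sketch below. The annotation shown assumes an AWS environment and is illustrative only; the exact annotation keys depend on your cloud provider:

```yaml
---
spec:
  ...
  tower_ingress_type: LoadBalancer
  tower_loadbalancer_protocol: https
  tower_loadbalancer_port: 443
  tower_loadbalancer_annotations: |
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate ARN>
```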
## Database Configuration

### External PostgreSQL Service
In order for the AWX instance to rely on an external database, the Custom Resource needs to know about the connection details. Those connection details should be stored as a secret and either specified as `tower_postgres_configuration_secret` at the CR spec level, or simply be present in the namespace under the name `<resourcename>-postgres-configuration`.
The secret should be formatted as follows:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-postgres-configuration
  namespace: <target namespace>
stringData:
  host: <external ip or url resolvable by the cluster>
  port: <external port, this usually defaults to 5432>
  database: <desired database name>
  username: <username to connect as>
  password: <password to connect with>
type: Opaque
```
### Migrating data from an old AWX instance
For instructions on how to migrate from an older version of AWX, see migration.md.
### Managed PostgreSQL Service

If you don't have access to an external PostgreSQL service, the AWX operator can deploy one for you alongside the AWX instance itself.

The following variables are customizable for the managed PostgreSQL service:
| Name | Description | Default |
|---|---|---|
| tower_postgres_image | Path of the image to pull | postgres:12 |
| tower_postgres_resource_requirements | PostgreSQL container resource requirements | requests: {storage: 8Gi} |
| tower_postgres_storage_class | PostgreSQL PV storage class | Empty string |
| tower_postgres_data_path | PostgreSQL data path | /var/lib/postgresql/data/pgdata |
Example of customization could be:

```yaml
---
spec:
  ...
  tower_postgres_resource_requirements:
    requests:
      memory: 2Gi
      storage: 8Gi
    limits:
      memory: 4Gi
      storage: 50Gi
  tower_postgres_storage_class: fast-ssd
```
Note: If `tower_postgres_storage_class` is not defined, PostgreSQL will store its data on a volume using the default storage class for your cluster.
## Advanced Configuration

### Deploying a specific version of AWX

There are a few customizable variables for AWX image management:
| Name | Description |
|---|---|
| tower_image | Path of the image to pull |
| tower_image_pull_policy | The pull policy to adopt |
| tower_image_pull_secret | The pull secret to use |
| tower_ee_images | A list of EEs to register |
Example of customization could be:

```yaml
---
spec:
  ...
  tower_image: myorg/my-custom-awx
  tower_image_pull_policy: Always
  tower_image_pull_secret: pull_secret_name
  tower_ee_images:
    - name: my-custom-awx-ee
      image: myorg/my-custom-awx-ee
```
### Privileged Tasks

Depending on the type of tasks that you'll be running, you may find that you need the task pod to run as privileged. This can open you up to a variety of security concerns, so you should be aware of the risks (and verify that you have the required privileges) before enabling it. To toggle this feature, add the following to your custom resource:

```yaml
---
spec:
  ...
  tower_task_privileged: true
```
If you are attempting to do this on an OpenShift cluster, you will need to grant the `awx` ServiceAccount the `privileged` SCC, which can be done with:

```
#> oc adm policy add-scc-to-user privileged -z awx
```
Again, this is the most relaxed SCC that is provided by OpenShift, so be sure to familiarize yourself with the security concerns that accompany this action.
### Containers Resource Requirements

The resource requirements for both the task and the web containers are configurable - both the lower end (requests) and the upper end (limits).
| Name | Description | Default |
|---|---|---|
| tower_web_resource_requirements | Web container resource requirements | requests: {cpu: 1000m, memory: 2Gi} |
| tower_task_resource_requirements | Task container resource requirements | requests: {cpu: 500m, memory: 1Gi} |
Example of customization could be:

```yaml
---
spec:
  ...
  tower_web_resource_requirements:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  tower_task_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
```
### Assigning AWX pods to specific nodes

You can constrain the AWX pods created by the operator to run on a certain subset of nodes. `tower_node_selector` and `tower_postgres_selector` constrain the AWX and PostgreSQL pods, respectively, to run only on the nodes that match all the specified key/value pairs. `tower_tolerations` and `tower_postgres_tolerations` allow the pods to be scheduled onto nodes with matching taints.
| Name | Description | Default |
|---|---|---|
| tower_node_selector | AWX pods' nodeSelector | '' |
| tower_tolerations | AWX pods' tolerations | '' |
| tower_postgres_selector | Postgres pods' nodeSelector | '' |
| tower_postgres_tolerations | Postgres pods' tolerations | '' |
Example of customization could be:

```yaml
---
spec:
  ...
  tower_node_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  tower_tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX"
      effect: "NoSchedule"
  tower_postgres_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  tower_postgres_tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX"
      effect: "NoSchedule"
```
### LDAP Certificate Authority

If the variable `ldap_cacert_secret` is provided, the operator will look for the data field `ldap-ca.crt` in the specified secret.
| Name | Description | Default |
|---|---|---|
| ldap_cacert_secret | LDAP Certificate Authority secret name | '' |
Example of customization could be:

```yaml
---
spec:
  ...
  ldap_cacert_secret: <resourcename>-ldap-ca-cert
```
To create the secret, you can use the command below:

```
#> kubectl create secret generic <resourcename>-ldap-ca-cert --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE>
```
### Persisting Projects Directory

In cases where you want to persist the `/var/lib/projects` directory, there are a few customizable variables for the awx-operator.
| Name | Description | Default |
|---|---|---|
| tower_projects_persistence | Whether or not the /var/lib/projects directory will be persistent | false |
| tower_projects_storage_class | Define the PersistentVolume storage class | '' |
| tower_projects_storage_size | Define the PersistentVolume size | 8Gi |
| tower_projects_storage_access_mode | Define the PersistentVolume access mode | ReadWriteMany |
| tower_projects_existing_claim | Define an existing PersistentVolumeClaim to use (cannot be combined with tower_projects_storage_*) | '' |
Example of customization when the awx-operator automatically handles the persistent volume could be:

```yaml
---
spec:
  ...
  tower_projects_persistence: true
  tower_projects_storage_class: rook-ceph
  tower_projects_storage_size: 20Gi
```
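Alternatively, if you already manage a PersistentVolumeClaim yourself, a sketch using `tower_projects_existing_claim` could look like this (the claim name below is a placeholder and must exist in the target namespace):

```yaml
---
spec:
  ...
  tower_projects_persistence: true
  tower_projects_existing_claim: awx-projects-claim
```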
### Custom Volume and Volume Mount Options

Custom volumes and volume mounts can be specified to override defaults or to mount configuration files into the containers.
| Name | Description | Default |
|---|---|---|
| tower_extra_volumes | Specify extra volumes to add to the application pod | '' |
| tower_web_extra_volume_mounts | Specify volume mounts to be added to Web container | '' |
| tower_task_extra_volume_mounts | Specify volume mounts to be added to Task container | '' |
| tower_ee_extra_volume_mounts | Specify volume mounts to be added to Execution container | '' |
Example configuration for a ConfigMap:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: <resourcename>-extra-config
  namespace: <target namespace>
data:
  ansible.cfg: |
    [defaults]
    remote_tmp = /tmp
    [ssh_connection]
    ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
  custom.py: |
    INSIGHTS_URL_BASE = "example.org"
    AWX_CLEANUP_PATHS = True
```
Example spec file for volumes and volume mounts:

```yaml
---
spec:
  ...
  tower_ee_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  tower_task_extra_volume_mounts: |
    - name: custom-py
      mountPath: /etc/tower/conf.d/custom.py
      subPath: custom.py
  tower_extra_volumes: |
    - name: ansible-cfg
      configMap:
        defaultMode: 420
        items:
          - key: ansible.cfg
            path: ansible.cfg
        name: <resourcename>-extra-config
    - name: custom-py
      configMap:
        defaultMode: 420
        items:
          - key: custom.py
            path: custom.py
        name: <resourcename>-extra-config
```
⚠️ Volume and VolumeMount names cannot contain underscores (`_`).
## Upgrade Notes

### From Older Versions

For AWX instances created by awx-operator < 0.0.8, both the PostgreSQL StatefulSet and the AWX Deployment resources must be deleted and recreated. This is required because of new labels added to both resources, and because the Kubernetes API enforces that `selector.matchLabels` attributes are read-only.

The awx-operator will handle upgrading both resources. Note that only the StatefulSet and Deployment will be recreated; any persistent volumes used by these two resources will not be deleted.
### Exporting Environment Variables to Containers

If you need to export custom environment variables to your containers, the following variables are customizable:
| Name | Description | Default |
|---|---|---|
| tower_task_extra_env | Environment variables to be added to Task container | '' |
| tower_web_extra_env | Environment variables to be added to Web container | '' |
Example configuration of environment variables:

```yaml
spec:
  tower_task_extra_env: |
    - name: MYCUSTOMVAR
      value: foo
  tower_web_extra_env: |
    - name: MYCUSTOMVAR
      value: foo
```
## Development

### Testing
This Operator includes a Molecule-based test environment, which can be executed standalone in Docker (e.g. in CI or in a single Docker container anywhere), or inside any kind of Kubernetes cluster (e.g. Minikube).
You need to make sure you have Molecule installed before running the following commands. You can install Molecule with:

```
#> pip install 'molecule[docker]'
```
Running `molecule test` sets up a clean environment, builds the operator, runs all configured tests on an example operator instance, then tears down the environment (at least in the case of Docker).

If you want to actively develop the operator, use `molecule converge`, which does everything but tear down the environment at the end.
#### Testing in Docker

```
#> molecule test -s test-local
```
This environment is meant for headless testing (e.g. in a CI environment, or when making smaller changes which don't need to be verified through a web interface). It is difficult to test things like AWX's web UI or to connect other applications on your local machine to the services running inside the cluster, since it is inside a Docker container with no static IP address.
#### Testing in Minikube

```
#> minikube start --memory 8g --cpus 4
#> minikube addons enable ingress
#> molecule test -s test-minikube
```
Minikube is a more full-featured test environment running inside a full VM on your computer, with an assigned IP address. This makes it easier to test things like NodePort services and Ingress from outside the Kubernetes cluster (e.g. in a browser on your computer).
Once the operator is deployed, you can visit the AWX UI in your browser by following these steps:
- Make sure you have an entry like `IP_ADDRESS example-awx.test` in your `/etc/hosts` file. (Get the IP address with `minikube ip`.)
- Visit `http://example-awx.test/` in your browser. (Default admin login is `test`/`changeme`.)
Alternatively, you can also update the service `awx-service` in your namespace to use the type `NodePort` and use the following command to get the URL to access your AWX instance:

```
#> minikube service <serviceName> -n <namespaceName> --url
```
### Generating a bundle

⚠️ operator-sdk version 0.19.4 is needed to run the following commands.

If you have the Operator Lifecycle Manager (OLM) installed, the following steps generate a bundle that will display nicely in the OLM interface.
At the root of this directory:
- Build and publish the operator

  ```
  #> operator-sdk build registry.example.com/ansible/awx-operator:mytag
  #> podman push registry.example.com/ansible/awx-operator:mytag
  ```

- Build and publish the bundle

  ```
  #> podman build . -f bundle.Dockerfile -t registry.example.com/ansible/awx-operator-bundle:mytag
  #> podman push registry.example.com/ansible/awx-operator-bundle:mytag
  ```

- Build and publish an index with your bundle in it

  ```
  #> opm index add --bundles registry.example.com/ansible/awx-operator-bundle:mytag --tag registry.example.com/ansible/awx-operator-catalog:mytag
  #> podman push registry.example.com/ansible/awx-operator-catalog:mytag
  ```

- In your Kubernetes cluster, create a new CatalogSource pointing to `registry.example.com/ansible/awx-operator-catalog:mytag`:
```yaml
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: <catalogsource-name>
  namespace: <namespace>
spec:
  displayName: 'myoperatorhub'
  image: registry.example.com/ansible/awx-operator-catalog:mytag
  publisher: 'myoperatorhub'
  sourceType: grpc
```
Applying this template will do it. Once the CatalogSource is in a READY state, the bundle should be available on the OperatorHub tab (as part of the custom CatalogSource that was just added).
- Enjoy
## Release Process

There are a few moving parts to this project:

- The Docker image which powers AWX Operator.
- The `awx-operator.yaml` Kubernetes manifest file, which initially deploys the Operator into a cluster.

Each of these must be appropriately built in preparation for a new tag:
### Verify Functionality

Run the following command inside this directory:

```
#> operator-sdk build quay.io/<user>/awx-operator:test
```

Then push the generated image to Quay:

```
#> docker push quay.io/<user>/awx-operator:test
```
After it is built, test it on a local cluster:

```
#> minikube start --memory 6g --cpus 4
#> minikube addons enable ingress
#> ansible-playbook ansible/deploy-operator.yml -e operator_image=quay.io/<user>/awx-operator -e operator_version=test
#> kubectl create namespace example-awx
#> ansible-playbook ansible/instantiate-awx-deployment.yml -e tower_namespace=example-awx
#> <test everything>
#> minikube delete
```
### Update version

Update the awx-operator version in:

- `ansible/group_vars/all`
Once the version has been updated, run the following from the root of the repo:

```
#> ansible-playbook ansible/chain-operator-files.yml
```
### Commit / Create Release

If everything works, commit the updated version, then publish a new release using the same version you used in `ansible/group_vars/all`.
After creating the release, this GitHub Workflow will run and publish the new image to quay.io.
## Author

This operator was originally built in 2019 by Jeff Geerling and is now maintained by the Ansible Team.