Compare commits


46 Commits

Author SHA1 Message Date
jamesmarshall24
b333026226 refactor: remove proxy env var CRD fields and CSV specDescriptors (#2114)
Proxy configuration is injected into the operator pod environment
and propagated to containers via the existing ConfigMap mechanism.

Assisted by: Claude

Signed-off-by: James Marshall <jamarsha@redhat.com>
2026-05-06 17:40:54 -04:00
jamesmarshall24
7745848ba5 feat: add proxy env var support for AWX containers (#2113)
Add http_proxy, https_proxy, and no_proxy CRD fields to the AWX spec
and inject them into all application containers via a shared proxy-env
ConfigMap, with automatic rollouts when proxy values change.

Assisted by: Claude

Signed-off-by: James Marshall <jamarsha@redhat.com>
2026-04-29 11:22:17 -04:00
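As a sketch of the mechanism these two commits describe (resource and key names are illustrative, not necessarily the operator's actual names), a shared ConfigMap carries the proxy values and is injected into each application container:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: awx-proxy-env        # illustrative name
data:
  HTTP_PROXY: http://proxy.example.com:3128
  HTTPS_PROXY: http://proxy.example.com:3128
  NO_PROXY: localhost,.svc,.cluster.local
---
# In each application container spec the ConfigMap is consumed wholesale:
#   envFrom:
#     - configMapRef:
#         name: awx-proxy-env
# Hashing the ConfigMap contents into a pod-template annotation is a common
# way to get the "automatic rollout when proxy values change" behavior.
```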
Lucas Benedito
9c3f521514 Standardize dev workflow with Makefile includes and developer documentation (#2111)
* Add standardized Makefile includes and developer documentation

Introduce modular Makefile system (common.mk + operator.mk) for
consistent dev workflows. Standardize CONTRIBUTING.md and
docs/development.md to follow community conventions with clear
separation: contributing guidelines for process, development
guide for technical setup.

- Add common.mk with shared dev workflow targets (make up/down)
- Add operator.mk with AWX-specific variables and targets
- Restructure CONTRIBUTING.md: process, testing requirements, community links
- Expand docs/development.md: customization options table, teardown options,
  Molecule testing, bundle generation via make targets
- Simplify README.md contributing section

Assisted-by: Claude
Signed-off-by: Lucas Benedito <lbenedit@redhat.com>

* Fix DEV_IMG docs example to avoid double-tag issue

Assisted-by: Claude
Signed-off-by: Lucas Benedito <lbenedit@redhat.com>

---------

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-04-14 12:30:17 +01:00
Christian Adams
60fc7d856c Add use_db_compression option for backup database dumps (#2106)
* Add use_db_compression option for backup database dumps

Enable optional pg_dump compression (-Z 9) via use_db_compression
boolean flag. Restore auto-detects compressed (.db.gz) or
uncompressed (.db) backups for backward compatibility.

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude

* Add CRD field, CSV descriptor, and restore auto-detection for use_db_compression

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude
2026-03-24 20:03:44 +00:00
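A minimal sketch of the restore-side auto-detection described above (function and file names are illustrative; the role's actual tasks differ, and the compressed dump itself would come from `pg_dump` with `-Z 9`):

```shell
# Pick the restore handling from the backup file's extension, so old
# uncompressed .db backups keep working alongside new .db.gz ones.
detect_backup() {
  case "$1" in
    *.db.gz) echo "compressed" ;;
    *.db)    echo "uncompressed" ;;
    *)       echo "unknown" ;;
  esac
}

detect_backup tower.db.gz    # prints: compressed
detect_backup tower.db       # prints: uncompressed
```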
Lucas Benedito
5697feea57 Fix unquoted timestamps in backup/restore event templates (#2110)
Quote {{ now }} in firstTimestamp and lastTimestamp to prevent
YAML parser from converting the value to a datetime object.

Assisted-by: Claude

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-03-23 14:11:54 -04:00
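The fix boils down to quoting the Jinja expression so the rendered value stays a string (template shown schematically; the actual templates may format `now` differently):

```yaml
# Before: YAML parses the rendered value as a datetime object
firstTimestamp: {{ now() }}

# After: quoted, so it survives as a string in the Event manifest
firstTimestamp: "{{ now() }}"
lastTimestamp: "{{ now() }}"
```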
aknochow
56f10cf966 Fix custom backup PVC name not used with create_backup_pvc (#2105)
Use backup_pvc for custom backup PVC name in templates

When backup_pvc is specified with create_backup_pvc: true, the PVC
template and ownerReference removal used the hardcoded default name
(deployment_name-backup-claim) instead of the user-specified name.
This caused the management pod to reference a PVC that didn't exist.

Replace backup_claim variable with backup_pvc throughout the backup
role so the resolved PVC name is used consistently in all templates.

Authored By: Adam Knochowski <aknochow@redhat.com>
Assisted By: Claude
2026-03-05 07:22:22 -05:00
Christian M. Adams
c996c88178 Fix config/testing overlay to use new metrics patch
The testing kustomization overlay still referenced the deleted
manager_auth_proxy_patch.yaml. Update to use manager_metrics_patch.yaml
and add metrics_service.yaml resource.

Ref: AAP-65254

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude
2026-03-04 13:45:50 -05:00
Christian M. Adams
5fb6bb7519 Upgrade operator-sdk to v1.40.0 and remove kube-rbac-proxy
Bump operator-sdk, ansible-operator, and OPM binaries to align with
the OCP 4.20 / AAP 2.7 target. Replace the deprecated kube-rbac-proxy
sidecar (removed in operator-sdk v1.38.0) with controller-runtime's
built-in WithAuthenticationAndAuthorization for metrics endpoint
protection.

Changes:
- Makefile: operator-sdk v1.36.1 → v1.40.0, OPM v1.26.0 → v1.55.0
- Dockerfile: ansible-operator base image v1.36.1 → v1.40.0
- Remove kube-rbac-proxy sidecar and auth_proxy_* RBAC manifests
- Add metrics_auth_role, metrics_reader, and metrics_service resources
- Add --metrics-secure, --metrics-require-rbac, --metrics-bind-address
  flags via JSON patch to serve metrics directly from the manager on
  port 8443 with TLS and RBAC authentication

Ref: AAP-65254

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude
2026-03-04 13:45:50 -05:00
Lucas Benedito
0b4b5dd7fd Fix AWXRestore multiple bugs
- Move force_drop_db from vars/main.yml to defaults/main.yml so CR spec
values are not overridden by Ansible variable precedence
- Grant CREATEDB priv to database user before DROP/CREATE and revoke
it after restore, following the containerized-installer pattern
- Omit --clean --if-exists from pg_restore when force_drop_db is true
since the database is freshly created and empty, avoiding partition
index dependency errors

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-02-27 14:05:13 -05:00
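A schematic of the restore sequence the second and third bullets describe (user, database, and file names illustrative; the role runs equivalent tasks, not these literal commands):

```shell
# Temporarily allow the application DB user to recreate its database.
psql -c "ALTER USER awx WITH CREATEDB;"
dropdb awx
createdb awx

# The database is freshly created and empty, so --clean --if-exists is
# omitted; including it can trip over partition index dependencies.
pg_restore --dbname=awx tower.db

# Revoke the privilege once the restore completes.
psql -c "ALTER USER awx WITH NOCREATEDB;"
```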
aknochow
d4b295e8b4 Add automatic backup PVC creation with create_backup_pvc option (#2097)
When users specify a custom backup_pvc name, the operator now
automatically creates the PVC instead of failing with
"does not exist, please create this pvc first."

Changes:
- Add create_backup_pvc variable (default: true) to backup defaults
- Update error condition to check create_backup_pvc before failing
- Update PVC creation condition to include create_backup_pvc
- Add create_backup_pvc field to AWXBackup CRD

Users who want the previous behavior can set create_backup_pvc: false.
2026-02-24 16:06:24 -05:00
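An AWXBackup spec using these options might look like the following (values illustrative):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-example
spec:
  deployment_name: awx
  backup_pvc: my-existing-backup-pvc
  create_backup_pvc: false   # opt out: fail instead of creating the PVC
```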
Hao Liu
e0ce3ef71d [AAP-64061] Add nginx log markers for direct API access detection (#2100)
Add map directives for X-Trusted-Proxy and X-DAB-JW-TOKEN headers to
log the presence of these headers as trusted_proxy_present and
dab_jwt_present fields in the nginx access log.

These markers enable the detection tool (aap-detect-direct-component-access)
to identify direct API access that bypasses AAP Gateway.

Also add explicit error_log /dev/stderr warn; instead of relying on
container base image symlinks.

Part of ANSTRAT-1840: Remove direct API access to platform components.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-02-17 17:25:36 -05:00
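A hedged sketch of what such nginx `map` directives look like (variable names as described in the commit; exact values and log-format wiring illustrative):

```nginx
# Record whether the request carried headers normally set by AAP Gateway.
map $http_x_trusted_proxy $trusted_proxy_present {
    ""       0;
    default  1;
}
map $http_x_dab_jw_token $dab_jwt_present {
    ""       0;
    default  1;
}

# Explicit, instead of relying on base-image symlinks.
error_log /dev/stderr warn;
```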
Christian Adams
fcf9a0840b Remove OperatorHub automation and documentation (#2101)
AWX Operator is no longer published to OperatorHub. Remove the
publish-operator-hub GHA workflow, the hack/publish-to-operator-hub.sh
script, the OperatorHub section from the release process docs, and the
OperatorHub-specific resource list from the debugging guide.

Author: Christian M. Adams
Assisted By: Claude
2026-02-16 22:52:04 +00:00
Christian Adams
f9c05a5698 ci: Update DOCKER_API_VERSION to 1.44 (#2102)
The Docker daemon on ubuntu-latest runners now requires minimum API
version 1.44, causing molecule kind tests to fail during cluster
teardown.

Author: Christian M. Adams
Assisted By: Claude
2026-02-16 17:27:07 -05:00
jamesmarshall24
bfc4d8e37f Add CRD validation for images and image version (#2096) 2026-02-12 13:46:24 -05:00
Dimitri Savineau
f04ab1878c web: Update python path for redirect page
The application container image is now using python3.12 so we need
to update the associated volume mount for the redirect page.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2026-01-27 19:25:51 -05:00
Dimitri Savineau
eeed2b8ae5 django: Add --no-imports option
With Django updated to 5.2, the django shell command loads imports
at startup, which floods stdout with logs and breaks workflows.

https://docs.djangoproject.com/en/dev/releases/5.2/#automatic-models-import-in-the-shell

Adding --no-imports to the cli call solves the issue.

https://docs.djangoproject.com/en/5.2/ref/django-admin/#cmdoption-shell-no-imports

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2026-01-19 13:36:08 -05:00
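For example, a non-interactive shell invocation would gain the flag like this (command shown schematically; the operator's actual call may differ):

```shell
# Django >= 5.2 auto-imports models on shell startup and logs the imports;
# --no-imports restores the quiet pre-5.2 behavior for scripted use.
echo "print('ok')" | awx-manage shell --no-imports
```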
Lucas Benedito
a47b06f937 devel: Update development guide
- Update the development.md file
- Allow builds from macos automatically
- implement podman-buildx

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-01-15 15:17:24 -05:00
Alan Rominger
605b46d83c Collect logs with greater determination (#2087) 2025-11-04 13:08:35 -05:00
Rebeccah Hunter
7ead166ca0 set client_request_timeout from annotation in the CR (#2077)
Add the ability to set an annotation on the AWX CR that overrides the default client_request_timeout value.

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
2025-10-15 18:13:12 -04:00
Christian M. Adams
c5533f47c1 Use --no-acl flag when restoring to exclude GRANT and REVOKE commands
This avoids running into the following error when pg_restore is run as
  the application db user from the db-management pod:

  pg_restore: error: could not execute query: ERROR: must be member of role postgres
  Command was: ALTER SCHEMA public OWNER TO postgres;
2025-10-15 13:54:21 -04:00
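Schematically, the restore invocation gains the flag (other options elided; names illustrative):

```shell
# --no-acl skips GRANT/REVOKE statements, which fail when pg_restore
# runs as the application DB user rather than the postgres superuser.
pg_restore --no-acl --dbname=awx tower.db
```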
lucas-benedito
78864b3653 fix: Correct the image_version conditional (#2082)
* fix: Correct the image_version conditional

When image is set and image_version is unset, the conditional fails
because the unset variable causes an error.
Implemented the correct conditional and added an assert to validate that
both variables are set properly when image is set.

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2025-10-09 18:34:50 +01:00
Sharvesh
bed4aff4cc Fix: Redis ERR max number of clients reached (#2041)
Add timeout to Redis Config

Co-authored-by: Christian Adams <chadams@redhat.com>
2025-09-10 09:44:30 -04:00
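A sketch of the redis configuration change (the value is illustrative; the commit does not state the exact number):

```
# redis.conf: close idle client connections after N seconds.
# The default of 0 (never) lets leaked connections accumulate until
# maxclients is exhausted, producing "ERR max number of clients reached".
timeout 300
```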
jamesmarshall24
e0a8a88243 Add postgres_extra_settings (#2071)
* Add hacking/ directory to .gitignore as it is commonly used for dev scripts
* Add postgres_extra_settings
* Add postgres_configuration_secret checksum to DB statefulset
* Docs for postgres_extra_settings, CI coverage, and examples
---------
Co-authored-by: Christian M. Adams <chadams@redhat.com>
2025-09-03 12:36:34 -04:00
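An AWX spec using the new field might look like this (shape assumed from the operator's similar `extra_settings` pattern; verify against the docs added in this PR):

```yaml
spec:
  postgres_extra_settings:
    - setting: max_connections
      value: "300"
    - setting: shared_buffers
      value: "512MB"
```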
Christian Adams
1c3c5d430d Guard against missing version status on existing CR (#2076) 2025-08-27 16:53:01 -04:00
Joel
6e47dc62c2 Fix installer update-ca-trust command (#1985)
The latest release of the update-ca-trust requires the --output param
if you run as non-root user.

See: 81a090f89a
And: https://github.com/ansible/awx-ee/issues/258#issuecomment-2439742296

Fixes: https://github.com/ansible/awx-ee/issues/258
2025-08-25 14:38:18 +02:00
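Schematically, the non-root invocation becomes (output path illustrative):

```shell
# Newer update-ca-trust releases refuse to write to the system
# /etc/pki/ca-trust/extracted location when run as a non-root user,
# so an explicit output directory is required.
update-ca-trust extract --output "$HOME/.cache/ca-trust-extracted"
```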
Christian Adams
2e9615aa1e Add configurable pull secret file support to up.sh (#2073)
- Applies a pull-secret yaml file if it exists at hacking/awx-cr.yml
- The operator will look for a pull secret called
  redhat-operators-pull-secret
- This makes it possible to use a private operator image on your quay.io
  registry out of the box with the up.sh
- Add PULL_SECRET_FILE environment variable with default hacking/pull-secret.yml
2025-08-19 11:50:19 -04:00
lucas-benedito
e2aef8330e Update the default crd example for the up.sh (#2061) 2025-08-13 17:09:31 -04:00
Ricardo Carrillo Cruz
883baeb16b Revert "Run import_auth_config_to_gateway when public_url is defined … (#2068)
Revert "Run import_auth_config_to_gateway when public_url is defined (#2066)"

This reverts commit ba1bb878f1.
2025-07-31 12:59:43 -04:00
Dimitri Savineau
ba1bb878f1 Run import_auth_config_to_gateway when public_url is defined (#2066)
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Co-authored-by: Ricardo Carrillo Cruz <ricarril@redhat.com>
2025-07-30 23:23:49 -04:00
aknochow
45ce8185df Reverting #2064 and Updating descriptions in backup and restore roles (#2060)
* updating task descriptions in backup and restore roles

* Revert "Run import_auth_config_to_gateway when public_url is defined (#2064)"

This reverts commit 54293a0efb.
2025-07-29 23:21:38 +00:00
lucas-benedito
a55829e5d5 Fixes for passwords for FIPS compliance (#2062)
Set password_encryption to scram-sha-256 and re-encrypt db user passwords for FIPS compliance

(cherry picked from commit 0e76404357a77a5f773aee6e2b3a5b85d1f514b7)

Co-authored-by: Christian M. Adams <chadams@redhat.com>
2025-07-28 18:52:59 +01:00
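The change amounts to switching the server's password hashing method and re-saving each password so it is re-hashed (commands schematic; user and password illustrative):

```shell
psql -c "ALTER SYSTEM SET password_encryption = 'scram-sha-256';"
psql -c "SELECT pg_reload_conf();"
# Re-setting the password stores it with the new SCRAM hash instead of md5.
psql -c "ALTER USER awx WITH PASSWORD 'example-password';"
```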
Ricardo Carrillo Cruz
54293a0efb Run import_auth_config_to_gateway when public_url is defined (#2064) 2025-07-24 10:25:07 +02:00
Rebeccah Hunter
e506466d08 set api timeout to match proxy timeout (#2056)
feat: set api timeout to match proxy timeout

Time out before the OpenShift route does. Not timing out first undercuts
the usefulness of the log-traceback-middleware in django-ansible-base,
which logs a traceback for requests that get timed out, because uwsgi or
gunicorn has to send the timeout signal to the worker handling the
request. It also leads to requests that envoy has already timed out
filling up the queues of the components' workers.

Also, configure nginx to return a 503 if WSGI server doesn't respond.

Co-authored-by: Elijah DeLee <kdelee@redhat.com>
2025-07-03 20:19:50 +00:00
Albert Daunis
e9750b489e Update migrate_schema to use check_migrations (#2025)
Update migrate schema showmigrations conditional
2025-06-25 15:59:23 -04:00
Christian Adams
0a89fc87a6 Update kubernetes.core to 3.2.0 and sdk to v1.36.1 (#2052)
* Update collections to match the other ansible operators

* Update the ansible-operator base image to v1.36.1
2025-06-18 18:12:58 -04:00
Dimitri Savineau
65a82f706c Fix jquery version in redirect page
The other installer uses 3.7.1, and the file on disk from the rest
framework directory is also 3.7.1.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2025-06-04 12:17:08 -04:00
Sharvari Khedkar
e8f0306ec2 Add route_annotations feature to mesh ingress CRD (#2045)
* Add route_annotations feature to mesh ingress CRD
* update route_annotations type to string
* display Route Annotations only when ingress_type=route
2025-05-12 18:07:21 -04:00
Bruno Rocha
f1660c8bd1 Address review comments 2025-05-09 15:08:17 -04:00
Bruno Rocha
f967c7d341 fix: explicitly import ldap on config file
File "/etc/tower/conf.d/ldap.py", line 2, in <module>
ldap.OPT_X_TLS_REQUIRE_CERT: True,
^^^^
NameError: name 'ldap' is not defined
2025-05-09 15:08:17 -04:00
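The generated settings fragment needed the module import it references; schematically:

```python
# /etc/tower/conf.d/ldap.py (schematic): without the import, referencing
# ldap.OPT_X_TLS_REQUIRE_CERT raises "NameError: name 'ldap' is not defined".
import ldap

AUTH_LDAP_CONNECTION_OPTIONS = {
    ldap.OPT_X_TLS_REQUIRE_CERT: True,
}
```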
aknochow
54072d6a46 fixing backup pvc namespace quotes (#2042) 2025-04-28 08:14:50 -04:00
Christian Adams
fb13011aad Check if pg_isready before trying to restore to new postgresql pod (#2039) 2025-04-24 17:08:50 -04:00
Ricardo Carrillo Cruz
24cb6006f6 Grant postgres to awx user on migrate_data (#2038)
This is needed in case customers move to the
operator platform.

Fixes https://issues.redhat.com/browse/AAP-41592
2025-04-24 09:58:48 +02:00
Christian Adams
4c05137fb8 Update kubernetes.core to 2.4.2 to fix k8s_cp module usage against OCP with virt (#2031) 2025-03-17 12:12:32 -04:00
aknochow
07540c29da fixing quotes on namespace to support namespace names with only numbers (#2030) 2025-03-17 09:19:02 -04:00
jamesmarshall24
5bb2b2ac87 Add deployment type shortname for legacy API url (#2026)
* Add deployment type shortname for legacy API url

* Add trailing slash to legacy API url

Co-authored-by: Christian Adams <rooftopcellist@gmail.com>

---------

Co-authored-by: Christian Adams <rooftopcellist@gmail.com>
2025-03-05 15:04:01 -05:00
shellclear
039157d070 Parameterization of the client_max_body_size directive in Nginx (#2014)
Enables users to customize client_max_body_size in Nginx conf to allow
for larger file uploads. This is useful in cases when users need to upload
large subscription manifest files.

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2025-02-24 12:50:08 -05:00
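Schematically, the nginx directive this parameterizes (value illustrative):

```nginx
# Allow uploads up to 100 MB, e.g. large subscription manifest files.
client_max_body_size 100m;
```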
75 changed files with 1301 additions and 765 deletions


@@ -16,7 +16,7 @@ jobs:
       - --skip-tags=replicas
       - -t replicas
     env:
-      DOCKER_API_VERSION: "1.41"
+      DOCKER_API_VERSION: "1.44"
       DEBUG_OUTPUT_DIR: /tmp/awx_operator_molecule_test
     steps:
       - uses: actions/checkout@v4


@@ -1,86 +0,0 @@
-name: Publish AWX Operator on operator-hub
-on:
-  release:
-    types: [published]
-  workflow_dispatch:
-    inputs:
-      tag_name:
-        description: 'Name for the tag of the release.'
-        required: true
-      operator_hub_fork:
-        description: 'Fork of operator-hub where the PR will be created from. default: awx-auto'
-        required: true
-        default: 'awx-auto'
-      image_registry:
-        description: 'Image registry where the image is published to. default: quay.io'
-        required: true
-        default: 'quay.io'
-      image_registry_organization:
-        description: 'Image registry organization where the image is published to. default: ansible'
-        required: true
-        default: 'ansible'
-      community_operator_github_org:
-        description: 'Github organization for community-opeartor project. default: k8s-operatorhub'
-        required: true
-        default: 'k8s-operatorhub'
-      community_operator_prod_github_org:
-        description: 'GitHub organization for community-operator-prod project. default: redhat-openshift-ecosystem'
-        required: true
-        default: 'redhat-openshift-ecosystem'
-jobs:
-  promote:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Set GITHUB_ENV from workflow_dispatch event
-        if: ${{ github.event_name == 'workflow_dispatch' }}
-        run: |
-          echo "VERSION=${{ github.event.inputs.tag_name }}" >> $GITHUB_ENV
-          echo "IMAGE_REGISTRY=${{ github.event.inputs.image_registry }}" >> $GITHUB_ENV
-          echo "IMAGE_REGISTRY_ORGANIZATION=${{ github.event.inputs.image_registry_organization }}" >> $GITHUB_ENV
-          echo "COMMUNITY_OPERATOR_GITHUB_ORG=${{ github.event.inputs.community_operator_github_org }}" >> $GITHUB_ENV
-          echo "COMMUNITY_OPERATOR_PROD_GITHUB_ORG=${{ github.event.inputs.community_operator_prod_github_org }}" >> $GITHUB_ENV
-      - name: Set GITHUB_ENV for release event
-        if: ${{ github.event_name == 'release' }}
-        run: |
-          echo "VERSION=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
-          echo "IMAGE_REGISTRY=quay.io" >> $GITHUB_ENV
-          echo "IMAGE_REGISTRY_ORGANIZATION=ansible" >> $GITHUB_ENV
-          echo "COMMUNITY_OPERATOR_GITHUB_ORG=k8s-operatorhub" >> $GITHUB_ENV
-          echo "COMMUNITY_OPERATOR_PROD_GITHUB_ORG=redhat-openshift-ecosystem" >> $GITHUB_ENV
-      - name: Log in to image registry
-        run: |
-          echo ${{ secrets.QUAY_TOKEN }} | docker login ${{ env.IMAGE_REGISTRY }} -u ${{ secrets.QUAY_USER }} --password-stdin
-      - name: Checkout awx-operator at workflow branch
-        uses: actions/checkout@v4
-        with:
-          path: awx-operator
-      - name: Checkout awx-opearator at ${{ env.VERSION }}
-        uses: actions/checkout@v4
-        with:
-          fetch-tags: true
-          ref: ${{ env.VERSION }}
-          path: awx-operator-${{ env.VERSION }}
-          fetch-depth: 0 # fetch all history so that git describe works
-      - name: Copy scripts to awx-operator-${{ env.VERSION }}
-        run: |
-          cp -f \
-            awx-operator/hack/publish-to-operator-hub.sh \
-            awx-operator-${{ env.VERSION }}/hack/publish-to-operator-hub.sh
-          cp -f \
-            awx-operator/Makefile \
-            awx-operator-${{ env.VERSION }}/Makefile
-      - name: Build and publish bundle to operator-hub
-        working-directory: awx-operator-${{ env.VERSION }}
-        env:
-          IMG_REPOSITORY: ${{ env.IMAGE_REGISTRY }}/${{ env.IMAGE_REGISTRY_ORGANIZATION }}
-          GITHUB_TOKEN: ${{ secrets.AWX_AUTO_GITHUB_TOKEN }}
-        run: |
-          git config --global user.email "awx-automation@redhat.com"
-          git config --global user.name "AWX Automation"
-          ./hack/publish-to-operator-hub.sh

.gitignore

@@ -11,3 +11,4 @@ gh-pages/
 __pycache__
 /site
 venv/*
+hacking/


@@ -1,147 +1,58 @@
-# AWX-Operator Contributing Guidelines
+# Contributing to AWX Operator
 
 Hi there! We're excited to have you as a contributor.
 
-Have questions about this document or anything not covered here? Please file a new at [https://github.com/ansible/awx-operator/issues](https://github.com/ansible/awx-operator/issues).
+Have questions about this document or anything not covered here? Please file an issue at [https://github.com/ansible/awx-operator/issues](https://github.com/ansible/awx-operator/issues).
 
-## Table of contents
-
-- [AWX-Operator Contributing Guidelines](#awx-operator-contributing-guidelines)
-  - [Table of contents](#table-of-contents)
-  - [Things to know prior to submitting code](#things-to-know-prior-to-submitting-code)
-  - [Submmiting your work](#submmiting-your-work)
-  - [Development](#development)
-  - [Testing](#testing)
-    - [Testing in Kind](#testing-in-kind)
-    - [Testing in Minikube](#testing-in-minikube)
-  - [Generating a bundle](#generating-a-bundle)
-  - [Reporting Issues](#reporting-issues)
-
 ## Things to know prior to submitting code
 
 - All code submissions are done through pull requests against the `devel` branch.
 - All PRs must have a single commit. Make sure to `squash` any changes into a single commit.
 - Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
-- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt
-- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
+- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push --force-with-lease](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
+- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).
 
-## Submmiting your work
+## Setting up your development environment
 
-1. From your fork `devel` branch, create a new branch to stage your changes.
+See [docs/development.md](docs/development.md) for prerequisites, build/deploy instructions, and available Makefile targets.
+
+## Submitting your work
+
+1. From your fork's `devel` branch, create a new branch to stage your changes.
 
    ```sh
-   #> git checkout -b <branch-name>
+   git checkout -b <branch-name>
   ```
 2. Make your changes.
-3. Test your changes according described on the Testing section.
-4. If everything looks correct, commit your changes.
+3. Test your changes (see [Testing](#testing) below).
+4. Commit your changes.
   ```sh
-   #> git add <FILES>
-   #> git commit -m "My message here"
+   git add <FILES>
+   git commit -m "My message here"
   ```
-5. Create your [pull request](https://github.com/ansible/awx-operator/pulls)
+5. Create your [pull request](https://github.com/ansible/awx-operator/pulls).
 
-**Note**: If you have multiple commits, make sure to `squash` your commits into a single commit which will facilitate our release process.
+> **Note**: If you have multiple commits, make sure to `squash` them into a single commit before submitting.
 
-## Development
-
-The development environment consists of running an [`up.sh`](./up.sh) and a [`down.sh`](./down.sh) script, which applies or deletes yaml on the Openshift or K8s cluster you are connected to. See the [development.md](docs/development.md) for information on how to deploy and test changes from your branch.
-
 ## Testing
 
-This Operator includes a [Molecule](https://ansible.readthedocs.io/projects/molecule/)-based test environment, which can be executed standalone in Docker (e.g. in CI or in a single Docker container anywhere), or inside any kind of Kubernetes cluster (e.g. Minikube).
-
-You need to make sure you have Molecule installed before running the following commands. You can install Molecule with:
-
-```sh
-#> python -m pip install molecule-plugins[docker]
-```
-
-Running `molecule test` sets up a clean environment, builds the operator, runs all configured tests on an example operator instance, then tears down the environment (at least in the case of Docker).
-
-If you want to actively develop the operator, use `molecule converge`, which does everything but tear down the environment at the end.
-
-#### Testing in Kind
-
-Testing with a kind cluster is the recommended way to test the awx-operator locally. First, you need to install kind if you haven't already. Please see these docs for setting that up:
-
-* https://kind.sigs.k8s.io/docs/user/quick-start/
-
-To run the tests, from the root of your checkout, run the following command:
-
-```sh
-#> molecule test -s kind
-```
-
-#### Testing in Minikube
-
-```sh
-#> minikube start --memory 8g --cpus 4
-#> minikube addons enable ingress
-#> molecule test -s test-minikube
-```
-
-[Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) is a more full-featured test environment running inside a full VM on your computer, with an assigned IP address. This makes it easier to test things like NodePort services and Ingress from outside the Kubernetes cluster (e.g. in a browser on your computer).
-
-Once the operator is deployed, you can visit the AWX UI in your browser by following these steps:
-
-1. Make sure you have an entry like `IP_ADDRESS example-awx.test` in your `/etc/hosts` file. (Get the IP address with `minikube ip`.)
-2. Visit `http://example-awx.test/` in your browser. (Default admin login is `test`/`changeme`.)
-
-Alternatively, you can also update the service `awx-service` in your namespace to use the type `NodePort` and use following command to get the URL to access your AWX instance:
-
-```sh
-#> minikube service <serviceName> -n <namespaceName> --url
-```
-
-## Generating a bundle
-
-> :warning: operator-sdk version 0.19.4 is needed to run the following commands
-
-If one has the Operator Lifecycle Manager (OLM) installed, the following steps is the process to generate the bundle that would nicely display in the OLM interface.
-
-At the root of this directory:
-
-1. Build and publish the operator
-
-   ```
-   #> operator-sdk build registry.example.com/ansible/awx-operator:mytag
-   #> podman push registry.example.com/ansible/awx-operator:mytag
-   ```
-2. Build and publish the bundle
-
-   ```
-   #> podman build . -f bundle.Dockerfile -t registry.example.com/ansible/awx-operator-bundle:mytag
-   #> podman push registry.example.com/ansible/awx-operator-bundle:mytag
-   ```
-3. Build and publish an index with your bundle in it
-
-   ```
-   #> opm index add --bundles registry.example.com/ansible/awx-operator-bundle:mytag --tag registry.example.com/ansible/awx-operator-catalog:mytag
-   #> podman push registry.example.com/ansible/awx-operator-catalog:mytag
-   ```
-4. In your Kubernetes create a new CatalogSource pointing to `registry.example.com/ansible/awx-operator-catalog:mytag`
-
-   ```
-   ---
-   apiVersion: operators.coreos.com/v1alpha1
-   kind: CatalogSource
-   metadata:
-     name: <catalogsource-name>
-     namespace: <namespace>
-   spec:
-     displayName: 'myoperatorhub'
-     image: registry.example.com/ansible/awx-operator-catalog:mytag
-     publisher: 'myoperatorhub'
-     sourceType: grpc
-   ```
-
-   Applying this template will do it. Once the CatalogSource is in a READY state, the bundle should be available on the OperatorHub tab (as part of the custom CatalogSource that just got added)
-5. Enjoy
+All changes must be tested before submission:
+
+- **Linting** (required for all PRs): `make lint`
+- **Molecule tests** (recommended): The operator includes a [Molecule](https://ansible.readthedocs.io/projects/molecule/)-based test environment for integration testing. See the [Testing section in docs/development.md](docs/development.md#testing) for detailed instructions on running tests locally.
 
 ## Reporting Issues
 
-We welcome your feedback, and encourage you to file an issue when you run into a problem.
+We welcome your feedback, and encourage you to file an issue when you run into a problem at [https://github.com/ansible/awx-operator/issues](https://github.com/ansible/awx-operator/issues).
+
+## Getting Help
+
+### Forum
+
+Join the [Ansible Forum](https://forum.ansible.com) for questions, help, and development discussions. Search for posts tagged with [`awx-operator`](https://forum.ansible.com/tag/awx-operator) or start a new discussion.
+
+### Matrix
+
+For real-time conversations:
+
+* [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) — AWX and AWX Operator discussions
+* [#docs:ansible.im](https://matrix.to/#/#docs:ansible.im) — Documentation discussions


@@ -1,8 +1,8 @@
-FROM quay.io/operator-framework/ansible-operator:v1.34.2
+FROM quay.io/operator-framework/ansible-operator:v1.40.0
 
 USER root
-RUN dnf update --security --bugfix -y && \
-    dnf install -y openssl
+RUN dnf update --security --bugfix -y --disableplugin=subscription-manager && \
+    dnf install -y --disableplugin=subscription-manager openssl
 USER 1001

Makefile

@@ -3,10 +3,7 @@
 # To re-generate a bundle for another specific version without changing the standard setup, you can:
 # - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2)
 # - use environment variables to overwrite this value (e.g export VERSION=0.0.2)
-VERSION ?= $(shell git describe --tags)
-PREV_VERSION ?= $(shell git describe --abbrev=0 --tags $(shell git rev-list --tags --skip=1 --max-count=1))
-CONTAINER_CMD ?= docker
+# VERSION ?= 0.0.1 # Set in operator.mk
 
 # CHANNELS define the bundle channels used in the bundle.
 # Add a new line here if you would like to change its default config. (E.g CHANNELS = "candidate,fast,stable")
@@ -31,8 +28,8 @@ BUNDLE_METADATA_OPTS ?= $(BUNDLE_CHANNELS) $(BUNDLE_DEFAULT_CHANNEL)
 # This variable is used to construct full image tags for bundle and catalog images.
 #
 # For example, running 'make bundle-build bundle-push catalog-build catalog-push' will build and push both
-# ansible.com/awx-operator-bundle:$VERSION and ansible.com/awx-operator-catalog:$VERSION.
-IMAGE_TAG_BASE ?= quay.io/ansible/awx-operator
+# example.com/temp-operator-bundle:$VERSION and example.com/temp-operator-catalog:$VERSION.
+# IMAGE_TAG_BASE ?= quay.io/<org>/<operator-name> # Set in operator.mk
 
 # BUNDLE_IMG defines the image:tag used for the bundle.
 # You can use it as an arg. (E.g make bundle-build BUNDLE_IMG=<some-registry>/<project-name-bundle>:<tag>)
@@ -49,9 +46,13 @@
 	BUNDLE_GEN_FLAGS += --use-image-digests
 endif
 
+# Set the Operator SDK version to use. By default, what is installed on the system is used.
+# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
+OPERATOR_SDK_VERSION ?= v1.40.0
+
+CONTAINER_TOOL ?= podman
+
 # Image URL to use all building/pushing image targets
 IMG ?= $(IMAGE_TAG_BASE):$(VERSION)
+NAMESPACE ?= awx
 
 .PHONY: all
 all: docker-build
@@ -73,23 +74,20 @@ all: docker-build
 help: ## Display this help.
 	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n  make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf "  \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
 
-.PHONY: print-%
-print-%: ## Print any variable from the Makefile. Use as `make print-VARIABLE`
-	@echo $($*)
-
 ##@ Build
 
 .PHONY: run
-ANSIBLE_ROLES_PATH?="$(shell pwd)/roles"
 run: ansible-operator ## Run against the configured Kubernetes cluster in ~/.kube/config
-	ANSIBLE_ROLES_PATH="$(ANSIBLE_ROLES_PATH):$(shell pwd)/roles" $(ANSIBLE_OPERATOR) run
+	$(ANSIBLE_OPERATOR) run
 
 .PHONY: docker-build
 docker-build: ## Build docker image with the manager.
-	${CONTAINER_CMD} build $(BUILD_ARGS) -t ${IMG} .
+	docker build $(BUILD_ARGS) -t ${IMG} .
 
 .PHONY: docker-push
 docker-push: ## Push docker image with the manager.
-	${CONTAINER_CMD} push ${IMG}
+	docker push ${IMG}
 
 # PLATFORMS defines the target platforms for the manager image be build to provide support to multiple
 # architectures. (i.e. make docker-buildx IMG=myregistry/mypoperator:0.0.1). To use this option you need to:
@@ -97,7 +95,6 @@ docker-push: ## Push docker image with the manager.
# - have enable BuildKit, More info: https://docs.docker.com/develop/develop-images/build_enhancements/ # - have enable BuildKit, More info: https://docs.docker.com/develop/develop-images/build_enhancements/
# - be able to push the image for your registry (i.e. if you do not inform a valid value via IMG=<myregistry/image:<tag>> than the export will fail) # - be able to push the image for your registry (i.e. if you do not inform a valid value via IMG=<myregistry/image:<tag>> than the export will fail)
# To properly provided solutions that supports more than one platform you should use this option. # To properly provided solutions that supports more than one platform you should use this option.
PLATFORMS ?= linux/arm64,linux/amd64,linux/s390x,linux/ppc64le
.PHONY: docker-buildx .PHONY: docker-buildx
docker-buildx: ## Build and push docker image for the manager for cross-platform support docker-buildx: ## Build and push docker image for the manager for cross-platform support
- docker buildx create --name project-v3-builder - docker buildx create --name project-v3-builder
@@ -105,37 +102,34 @@ docker-buildx: ## Build and push docker image for the manager for cross-platform
- docker buildx build --push $(BUILD_ARGS) --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile . - docker buildx build --push $(BUILD_ARGS) --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile .
- docker buildx rm project-v3-builder - docker buildx rm project-v3-builder
##@ Deployment ##@ Deployment
ifndef ignore-not-found
ignore-not-found = false
endif
.PHONY: install .PHONY: install
install: kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config. install: kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/crd | kubectl apply -f - $(KUSTOMIZE) build config/crd | kubectl apply -f -
.PHONY: uninstall .PHONY: uninstall
uninstall: kustomize ## Uninstall CRDs from the K8s cluster specified in ~/.kube/config. uninstall: kustomize ## Uninstall CRDs from the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/crd | kubectl delete -f - $(KUSTOMIZE) build config/crd | kubectl delete --ignore-not-found=$(ignore-not-found) -f -
.PHONY: gen-resources
gen-resources: kustomize ## Generate resources for controller and print to stdout
@cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
@cd config/default && $(KUSTOMIZE) edit set namespace ${NAMESPACE}
@$(KUSTOMIZE) build config/default
.PHONY: deploy .PHONY: deploy
deploy: kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config. deploy: kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
@cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG} cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
@cd config/default && $(KUSTOMIZE) edit set namespace ${NAMESPACE} $(KUSTOMIZE) build config/default | kubectl apply -f -
@$(KUSTOMIZE) build config/default | kubectl apply -f -
.PHONY: undeploy .PHONY: undeploy
undeploy: ## Undeploy controller from the K8s cluster specified in ~/.kube/config. undeploy: ## Undeploy controller from the K8s cluster specified in ~/.kube/config.
@cd config/default && $(KUSTOMIZE) edit set namespace ${NAMESPACE} $(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=$(ignore-not-found) -f -
$(KUSTOMIZE) build config/default | kubectl delete -f -
## Location for locally installed tools
LOCALBIN ?= $(shell pwd)/bin
OS := $(shell uname -s | tr '[:upper:]' '[:lower:]') OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
ARCHA := $(shell uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/') ARCH := $(shell uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')
ARCHX := $(shell uname -m | sed -e 's/amd64/x86_64/' -e 's/aarch64/arm64/')
.PHONY: kustomize .PHONY: kustomize
KUSTOMIZE = $(shell pwd)/bin/kustomize KUSTOMIZE = $(shell pwd)/bin/kustomize
@@ -145,7 +139,7 @@ ifeq (,$(shell which kustomize 2>/dev/null))
@{ \ @{ \
set -e ;\ set -e ;\
mkdir -p $(dir $(KUSTOMIZE)) ;\ mkdir -p $(dir $(KUSTOMIZE)) ;\
curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.0.1/kustomize_v5.0.1_$(OS)_$(ARCHA).tar.gz | \ curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.6.0/kustomize_v5.6.0_$(OS)_$(ARCH).tar.gz | \
tar xzf - -C bin/ ;\ tar xzf - -C bin/ ;\
} }
else else
@@ -153,22 +147,6 @@ KUSTOMIZE = $(shell which kustomize)
endif endif
endif endif
.PHONY: operator-sdk
OPERATOR_SDK = $(shell pwd)/bin/operator-sdk
operator-sdk: ## Download operator-sdk locally if necessary, preferring the $(pwd)/bin path over global if both exist.
ifeq (,$(wildcard $(OPERATOR_SDK)))
ifeq (,$(shell which operator-sdk 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(OPERATOR_SDK)) ;\
curl -sSLo $(OPERATOR_SDK) https://github.com/operator-framework/operator-sdk/releases/download/v1.34.2/operator-sdk_$(OS)_$(ARCHA) ;\
chmod +x $(OPERATOR_SDK) ;\
}
else
OPERATOR_SDK = $(shell which operator-sdk)
endif
endif
.PHONY: ansible-operator .PHONY: ansible-operator
ANSIBLE_OPERATOR = $(shell pwd)/bin/ansible-operator ANSIBLE_OPERATOR = $(shell pwd)/bin/ansible-operator
ansible-operator: ## Download ansible-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist. ansible-operator: ## Download ansible-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
@@ -177,7 +155,7 @@ ifeq (,$(shell which ansible-operator 2>/dev/null))
@{ \ @{ \
set -e ;\ set -e ;\
mkdir -p $(dir $(ANSIBLE_OPERATOR)) ;\ mkdir -p $(dir $(ANSIBLE_OPERATOR)) ;\
curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/ansible-operator-plugins/releases/download/v1.34.0/ansible-operator_$(OS)_$(ARCHA) ;\ curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/ansible-operator-plugins/releases/download/$(OPERATOR_SDK_VERSION)/ansible-operator_$(OS)_$(ARCH) ;\
chmod +x $(ANSIBLE_OPERATOR) ;\ chmod +x $(ANSIBLE_OPERATOR) ;\
} }
else else
@@ -185,30 +163,47 @@ ANSIBLE_OPERATOR = $(shell which ansible-operator)
endif endif
endif endif
.PHONY: operator-sdk
OPERATOR_SDK ?= $(LOCALBIN)/operator-sdk
operator-sdk: ## Download operator-sdk locally if necessary.
ifeq (,$(wildcard $(OPERATOR_SDK)))
ifeq (, $(shell which operator-sdk 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(OPERATOR_SDK)) ;\
curl -sSLo $(OPERATOR_SDK) https://github.com/operator-framework/operator-sdk/releases/download/$(OPERATOR_SDK_VERSION)/operator-sdk_$(OS)_$(ARCH) ;\
chmod +x $(OPERATOR_SDK) ;\
}
else
OPERATOR_SDK = $(shell which operator-sdk)
endif
endif
.PHONY: bundle .PHONY: bundle
bundle: kustomize operator-sdk ## Generate bundle manifests and metadata, then validate generated files. bundle: kustomize operator-sdk ## Generate bundle manifests and metadata, then validate generated files.
$(OPERATOR_SDK) generate kustomize manifests -q $(OPERATOR_SDK) generate kustomize manifests -q
cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG) cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG)
$(KUSTOMIZE) build config/manifests | $(OPERATOR_SDK) generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS) $(KUSTOMIZE) build config/manifests | $(OPERATOR_SDK) generate bundle $(BUNDLE_GEN_FLAGS)
$(OPERATOR_SDK) bundle validate ./bundle $(OPERATOR_SDK) bundle validate ./bundle
.PHONY: bundle-build .PHONY: bundle-build
bundle-build: ## Build the bundle image. bundle-build: ## Build the bundle image.
${CONTAINER_CMD} build -f bundle.Dockerfile -t $(BUNDLE_IMG) . $(CONTAINER_TOOL) build -f bundle.Dockerfile -t $(BUNDLE_IMG) .
.PHONY: bundle-push .PHONY: bundle-push
bundle-push: ## Push the bundle image. bundle-push: ## Push the bundle image.
$(MAKE) docker-push IMG=$(BUNDLE_IMG) $(MAKE) docker-push IMG=$(BUNDLE_IMG)
.PHONY: opm .PHONY: opm
OPM = ./bin/opm OPM = $(LOCALBIN)/opm
opm: ## Download opm locally if necessary. opm: ## Download opm locally if necessary.
ifeq (,$(wildcard $(OPM))) ifeq (,$(wildcard $(OPM)))
ifeq (,$(shell which opm 2>/dev/null)) ifeq (,$(shell which opm 2>/dev/null))
@{ \ @{ \
set -e ;\ set -e ;\
mkdir -p $(dir $(OPM)) ;\ mkdir -p $(dir $(OPM)) ;\
curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.26.0/$(OS)-$(ARCHA)-opm ;\ curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.55.0/$(OS)-$(ARCH)-opm ;\
chmod +x $(OPM) ;\ chmod +x $(OPM) ;\
} }
else else
@@ -233,9 +228,15 @@ endif
# https://github.com/operator-framework/community-operators/blob/7f1438c/docs/packaging-operator.md#updating-your-existing-operator # https://github.com/operator-framework/community-operators/blob/7f1438c/docs/packaging-operator.md#updating-your-existing-operator
.PHONY: catalog-build .PHONY: catalog-build
catalog-build: opm ## Build a catalog image. catalog-build: opm ## Build a catalog image.
$(OPM) index add --container-tool ${CONTAINER_CMD} --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT) $(OPM) index add --container-tool $(CONTAINER_TOOL) --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT)
# Push the catalog image. # Push the catalog image.
.PHONY: catalog-push .PHONY: catalog-push
catalog-push: ## Push a catalog image. catalog-push: ## Push a catalog image.
$(MAKE) docker-push IMG=$(CATALOG_IMG) $(MAKE) docker-push IMG=$(CATALOG_IMG)
##@ Includes
# Operator-specific targets and variables
-include makefiles/operator.mk
# Shared dev workflow targets (synced across all operator repos)
-include makefiles/common.mk

View File

@@ -16,11 +16,7 @@ The AWX Operator documentation is available at <https://ansible.readthedocs.io/p
 ## Contributing
-Please visit [our contributing guidelines](https://github.com/ansible/awx-operator/blob/devel/CONTRIBUTING.md).
-For docs changes, create PRs on the appropriate files in the `/docs` folder.
-The development environment consists of running an [`up.sh`](https://github.com/ansible/awx-operator/blob/devel/up.sh) and a [`down.sh`](https://github.com/ansible/awx-operator/blob/devel/down.sh) script, which applies or deletes yaml on the Openshift or K8s cluster you are connected to. See the [development.md](https://github.com/ansible/awx-operator/blob/devel/docs/development.md) for information on how to deploy and test changes from your branch.
+Please visit our [contributing guidelines](https://github.com/ansible/awx-operator/blob/devel/CONTRIBUTING.md) and [development guide](https://github.com/ansible/awx-operator/blob/devel/docs/development.md) for information on how to set up your environment, build and deploy the operator, and submit changes.
 ## Author

View File

@@ -37,6 +37,9 @@ spec:
 metadata:
   type: object
 spec:
+  x-kubernetes-validations:
+    - rule: "has(self.postgres_image) && has(self.postgres_image_version) || !has(self.postgres_image) && !has(self.postgres_image_version)"
+      message: "Both postgres_image and postgres_image_version must be set when required"
   type: object
   x-kubernetes-preserve-unknown-fields: true
 required:
@@ -48,6 +51,10 @@ spec:
 backup_pvc:
   description: Name of the backup PVC
   type: string
+create_backup_pvc:
+  description: If true (default), automatically create the backup PVC if it does not exist
+  type: boolean
+  default: true
 backup_pvc_namespace:
   description: (Deprecated) Namespace the PVC is in
   type: string
@@ -81,6 +88,10 @@ spec:
 pg_dump_suffix:
   description: Additional parameters for the pg_dump command
   type: string
+use_db_compression:
+  description: Enable compression for database dumps using pg_dump built-in compression.
+  type: boolean
+  default: true
 postgres_label_selector:
   description: Label selector used to identify postgres pod for backing up data
   type: string
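For context, the new field would be used on a backup resource roughly like this; a minimal hedged sketch (`deployment_name` and the kind's exact shape come from outside this diff and are assumptions):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-demo
spec:
  deployment_name: awx        # AWX instance to back up (assumed field)
  use_db_compression: false   # opt out of the new pg_dump compression default
```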

View File

@@ -69,6 +69,9 @@ spec:
 ingress_annotations:
   description: Annotations to add to the Ingress Controller
   type: string
+route_annotations:
+  description: Annotations to add to the OpenShift Route
+  type: string
 ingress_class_name:
   description: The name of ingress class to use instead of the cluster default.
   type: string

View File

@@ -37,6 +37,9 @@ spec:
 metadata:
   type: object
 spec:
+  x-kubernetes-validations:
+    - rule: "has(self.postgres_image) && has(self.postgres_image_version) || !has(self.postgres_image) && !has(self.postgres_image_version)"
+      message: "Both postgres_image and postgres_image_version must be set when required"
   type: object
   x-kubernetes-preserve-unknown-fields: true
 required:

View File

@@ -36,6 +36,17 @@ spec:
 metadata:
   type: object
 spec:
+  x-kubernetes-validations:
+    - rule: "has(self.image) && has(self.image_version) || !has(self.image) && !has(self.image_version)"
+      message: "Both image and image_version must be set when required"
+    - rule: "has(self.redis_image) && has(self.redis_image_version) || !has(self.redis_image) && !has(self.redis_image_version)"
+      message: "Both redis_image and redis_image_version must be set when required"
+    - rule: "has(self.postgres_image) && has(self.postgres_image_version) || !has(self.postgres_image) && !has(self.postgres_image_version)"
+      message: "Both postgres_image and postgres_image_version must be set when required"
+    - rule: >-
+        has(self.metrics_utility_image) && has(self.metrics_utility_image_version) ||
+        !has(self.metrics_utility_image) && !has(self.metrics_utility_image_version)
+      message: "Both metrics_utility_image and metrics_utility_image_version must be set when required"
   properties:
     deployment_type:
       description: Name of the deployment type
@@ -1730,15 +1741,15 @@ spec:
 uwsgi_listen_queue_size:
   description: Set the socket listen queue size for uwsgi
   type: integer
-uwsgi_timeout:
-  description: Set the timeout for requests served by uwsgi. (note, graceful exit signal sent 2 seconds prior to timeout)
-  type: integer
 nginx_worker_processes:
   description: Set the number of workers for nginx
   type: integer
 nginx_worker_connections:
   description: Set the number of connections per worker for nginx
   type: integer
+nginx_client_max_body_size:
+  description: Sets the maximum allowed size of the client request body in megabytes (defaults to 5M)
+  type: integer
 nginx_worker_cpu_affinity:
   description: Set the CPU affinity for nginx workers
   type: string
@@ -1828,9 +1839,25 @@ spec:
   description: Assign a preexisting priority class to the postgres pod
   type: string
 postgres_extra_args:
+  description: "(Deprecated, use postgres_extra_settings parameter) Define postgres configuration arguments to use"
   type: array
   items:
     type: string
+postgres_extra_settings:
+  description: "PostgreSQL configuration settings to be added to postgresql.conf"
+  type: array
+  items:
+    type: object
+    properties:
+      setting:
+        description: "PostgreSQL configuration parameter name"
+        type: string
+      value:
+        description: "PostgreSQL configuration parameter value"
+        type: string
+    required:
+      - setting
+      - value
 postgres_data_volume_init:
   description: Sets permissions on the /var/lib/pgdata/data for postgres container using an init container (not Openshift)
   type: boolean

View File

@@ -20,11 +20,11 @@ resources:
 - ../manager
 # [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
 #- ../prometheus
+- metrics_service.yaml
-# Protect the /metrics endpoint by putting it behind auth.
-# If you want your controller-manager to expose the /metrics
-# endpoint w/o any authn/z, please comment the following line.
 apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 patches:
-- path: manager_auth_proxy_patch.yaml
+- path: manager_metrics_patch.yaml
+  target:
+    kind: Deployment

View File

@@ -1,40 +0,0 @@
-# This patch inject a sidecar container which is a HTTP proxy for the
-# controller manager, it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: controller-manager
-  namespace: system
-spec:
-  template:
-    spec:
-      containers:
-      - name: kube-rbac-proxy
-        securityContext:
-          allowPrivilegeEscalation: false
-          capabilities:
-            drop:
-              - "ALL"
-        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0
-        args:
-        - "--secure-listen-address=0.0.0.0:8443"
-        - "--upstream=http://127.0.0.1:8080/"
-        - "--logtostderr=true"
-        - "--v=0"
-        ports:
-        - containerPort: 8443
-          protocol: TCP
-          name: https
-        resources:
-          limits:
-            cpu: 500m
-            memory: 128Mi
-          requests:
-            cpu: 5m
-            memory: 64Mi
-      - name: awx-manager
-        args:
-        - "--health-probe-bind-address=:6789"
-        - "--metrics-bind-address=127.0.0.1:8080"
-        - "--leader-elect"
-        - "--leader-election-id=awx-operator"

View File

@@ -0,0 +1,12 @@
+# This patch adds the args to allow exposing the metrics endpoint using HTTPS
+- op: add
+  path: /spec/template/spec/containers/0/args/0
+  value: --metrics-bind-address=:8443
+# This patch adds the args to allow securing the metrics endpoint
+- op: add
+  path: /spec/template/spec/containers/0/args/0
+  value: --metrics-secure
+# This patch adds the args to allow RBAC-based authn/authz for the metrics endpoint
+- op: add
+  path: /spec/template/spec/containers/0/args/0
+  value: --metrics-require-rbac

View File

@@ -3,6 +3,8 @@ kind: Service
 metadata:
   labels:
     control-plane: controller-manager
+    app.kubernetes.io/name: awx-operator
+    app.kubernetes.io/managed-by: kustomize
   name: controller-manager-metrics-service
   namespace: system
 spec:
@@ -10,6 +12,7 @@ spec:
   - name: https
     port: 8443
     protocol: TCP
-    targetPort: https
+    targetPort: 8443
   selector:
     control-plane: controller-manager
+    app.kubernetes.io/name: awx-operator

View File

@@ -38,6 +38,7 @@ spec:
 - args:
   - --leader-elect
   - --leader-election-id=awx-operator
+  - --health-probe-bind-address=:6789
   image: controller:latest
   imagePullPolicy: IfNotPresent
   name: awx-manager

View File

@@ -50,6 +50,12 @@ spec:
   path: ingress_annotations
   x-descriptors:
     - urn:alm:descriptor:com.tectonic.ui:text
+- displayName: Route Annotations
+  path: route_annotations
+  x-descriptors:
+    - 'urn:alm:descriptor:com.tectonic.ui:advanced'
+    - 'urn:alm:descriptor:com.tectonic.ui:text'
+    - 'urn:alm:descriptor:com.tectonic.ui:fieldDependency:ingress_type:Route'
 - displayName: Ingress Class Name
   path: ingress_class_name
   x-descriptors:
@@ -169,6 +175,12 @@ spec:
   path: additional_labels
   x-descriptors:
     - urn:alm:descriptor:com.tectonic.ui:advanced
+- description: Enable compression for database dumps using pg_dump built-in compression
+  displayName: Use DB Compression
+  path: use_db_compression
+  x-descriptors:
+    - urn:alm:descriptor:com.tectonic.ui:advanced
+    - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
 - displayName: Node Selector for backup management pod
   path: db_management_pod_node_selector
   x-descriptors:
@@ -554,12 +566,6 @@ spec:
     - urn:alm:descriptor:com.tectonic.ui:advanced
     - urn:alm:descriptor:com.tectonic.ui:number
     - urn:alm:descriptor:com.tectonic.ui:hidden
-- displayName: Uwsgi Timeout
-  path: uwsgi_timeout
-  x-descriptors:
-    - urn:alm:descriptor:com.tectonic.ui:advanced
-    - urn:alm:descriptor:com.tectonic.ui:number
-    - urn:alm:descriptor:com.tectonic.ui:hidden
 - displayName: Uwsgi Processes
   path: uwsgi_processes
   x-descriptors:
@@ -590,6 +596,11 @@ spec:
     - urn:alm:descriptor:com.tectonic.ui:advanced
     - urn:alm:descriptor:com.tectonic.ui:number
     - urn:alm:descriptor:com.tectonic.ui:hidden
+- displayName: Set the maximum allowed size of the client request body in megabytes for nginx
+  path: nginx_client_max_body_size
+  x-descriptors:
+    - urn:alm:descriptor:com.tectonic.ui:advanced
+    - urn:alm:descriptor:com.tectonic.ui:number
 - displayName: Task Replicas
   path: task_replicas
   x-descriptors:
@@ -692,11 +703,16 @@ spec:
   x-descriptors:
     - urn:alm:descriptor:io.kubernetes:StorageClass
     - urn:alm:descriptor:com.tectonic.ui:advanced
-- displayName: Postgres Extra Arguments
+- displayName: Postgres Extra Arguments (Deprecated)
   path: postgres_extra_args
   x-descriptors:
     - urn:alm:descriptor:com.tectonic.ui:advanced
     - urn:alm:descriptor:com.tectonic.ui:hidden
+- displayName: Postgres Extra Settings
+  path: postgres_extra_settings
+  x-descriptors:
+    - urn:alm:descriptor:com.tectonic.ui:advanced
+    - urn:alm:descriptor:com.tectonic.ui:hidden
 - description: Specify extra volumes to add to the postgres pod
   displayName: Postgres Extra Volumes
   path: postgres_extra_volumes

View File

@@ -9,10 +9,6 @@ resources:
 - role_binding.yaml
 - leader_election_role.yaml
 - leader_election_role_binding.yaml
-# Comment the following 4 lines if you want to disable
-# the auth proxy (https://github.com/brancz/kube-rbac-proxy)
-# which protects your /metrics endpoint.
-- auth_proxy_service.yaml
-- auth_proxy_role.yaml
-- auth_proxy_role_binding.yaml
-- auth_proxy_client_clusterrole.yaml
+- metrics_auth_role.yaml
+- metrics_auth_role_binding.yaml
+- metrics_reader_role.yaml

View File

@@ -1,7 +1,7 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
-  name: proxy-role
+  name: metrics-auth-role
 rules:
 - apiGroups:
   - authentication.k8s.io

View File

@@ -1,11 +1,11 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
-  name: proxy-rolebinding
+  name: metrics-auth-rolebinding
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: proxy-role
+  name: metrics-auth-role
 subjects:
 - kind: ServiceAccount
   name: controller-manager

View File

@@ -14,10 +14,13 @@ resources:
 - ../crd
 - ../rbac
 - ../manager
+- ../default/metrics_service.yaml
 images:
 - name: testing
   newName: testing-operator
 patches:
 - path: manager_image.yaml
 - path: debug_logs_patch.yaml
-- path: ../default/manager_auth_proxy_patch.yaml
+- path: ../default/manager_metrics_patch.yaml
+  target:
+    kind: Deployment

View File

@@ -0,0 +1,30 @@
+---
+apiVersion: awx.ansible.com/v1beta1
+kind: AWX
+metadata:
+  name: awx
+spec:
+  service_type: clusterip
+  ingress_type: Route
+  postgres_extra_settings:
+    - setting: max_connections
+      value: "999"
+    - setting: ssl_ciphers
+      value: "HIGH:!aNULL:!MD5"
+  # requires custom-postgres-configuration secret to be pre-created
+  # postgres_configuration_secret: custom-postgres-configuration
+  postgres_resource_requirements:
+    requests:
+      cpu: 100m
+      memory: 256Mi
+    limits:
+      cpu: 800m
+      memory: 1Gi
+  postgres_storage_requirements:
+    requests:
+      storage: 20Gi
+    limits:
+      storage: 100Gi

View File

@@ -7,7 +7,7 @@ spec:
 service_type: clusterip
 ingress_type: Route
 # Secrets
-admin_password_secret: custom-admin-password
-postgres_configuration_secret: custom-pg-configuration
-secret_key_secret: custom-secret-key
+# admin_password_secret: custom-admin-password
+# postgres_configuration_secret: custom-pg-configuration
+# secret_key_secret: custom-secret-key

View File

@@ -8,20 +8,3 @@ After the draft release is created, publish it and the [Promote AWX Operator ima
 - Publish image to Quay
 - Release Helm chart
-After the GHA is complete, the final step is to run the [publish-to-operator-hub.sh](https://github.com/ansible/awx-operator/blob/devel/hack/publish-to-operator-hub.sh) script, which will create a PR in the following repos to add the new awx-operator bundle version to OperatorHub:
-- <https://github.com/k8s-operatorhub/community-operators> (community operator index)
-- <https://github.com/redhat-openshift-ecosystem/community-operators-prod> (operator index shipped with Openshift)
-!!! note
-    The usage is documented in the script itself, but here is an example of how you would use the script to publish the 2.5.3 awx-operator bundle to OperatorHub.
-    Note that you need to specify the version being released, as well as the previous version. This is because the bundle has a pointer to the previous version that it is being upgraded from. This is used by OLM to create a dependency graph.
-    ```bash
-    VERSION=2.5.3 PREV_VERSION=2.5.2 ./hack/publish-to-operator-hub.sh
-    ```
-There are some quirks with running this on OS X that still need to be fixed, but the script runs smoothly on linux.
-As soon as CI completes successfully, the PRs will be auto-merged. Please remember to monitor those PRs to make sure that CI passes; sometimes it needs a retry.
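As an aside, the previous tag the script needs does not have to be typed by hand: it can be derived from git history. A hedged sketch (the `prev_tag` helper name is hypothetical; it assumes the repository has at least two tags):

```shell
# Hypothetical helper: find the most recent tag before the current release tag.
prev_tag() {
  # second-newest tagged commit, then the nearest tag describing it
  git describe --abbrev=0 --tags "$(git rev-list --tags --skip=1 --max-count=1)"
}
# e.g. VERSION=2.5.3 PREV_VERSION=$(prev_tag) ./hack/publish-to-operator-hub.sh
```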

View File

@@ -1,58 +1,229 @@
 # Development Guide
-There are development scripts and yaml examples in the [`dev/`](../dev) directory that, along with the up.sh and down.sh scripts in the root of the repo, can be used to build, deploy and test changes made to the awx-operator.
+There are development yaml examples in the [`dev/`](../dev) directory and Makefile targets that can be used to build, deploy and test changes made to the awx-operator.
+Run `make help` to see all available targets and options.
+## Prerequisites
+You will need to have the following tools installed:
+* [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+* [podman](https://podman.io/docs/installation) or [docker](https://docs.docker.com/get-docker/)
+* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+* [oc](https://docs.openshift.com/container-platform/4.11/cli_reference/openshift_cli/getting-started-cli.html) (if using OpenShift)
+You will also need a container registry account. This guide uses [quay.io](https://quay.io), but any container registry will work.
+## Registry Setup
+### Create a Quay.io Repository
+1. Go to [quay.io](https://quay.io) and create a repository named `awx-operator` under your username.
+2. Login at the CLI:
+   ```sh
+   podman login quay.io
+   ```
+### Pull Secret (optional)
+If your repository is private, you'll need to configure a pull secret so the cluster can pull your operator image:
+1. In your Quay.io repository, go to Settings → Robot Accounts.
+2. Create a robot account with write permissions.
+3. Click the robot account name, then click "Kubernetes Secret" and copy the YAML.
+4. Save it to `hacking/pull-secret.yml` in your checkout (this path is in `.gitignore`).
+5. Change the `name` field to `redhat-operators-pull-secret`.
+Example:
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: redhat-operators-pull-secret
+type: kubernetes.io/dockerconfigjson
+data:
+  .dockerconfigjson: <base64-encoded-credentials>
+```
+If a pull secret file is found at `hacking/pull-secret.yml` (or the path set by `PULL_SECRET_FILE`), `make up` will apply it automatically. Otherwise, you can make your quay.io repos public or create a global pull secret on your cluster.
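The same secret can also be generated from an existing registry auth file instead of copying YAML from the Quay UI. A hedged sketch (the `make_pull_secret` helper is hypothetical; the secret name matches the step above; assumes GNU coreutils `base64` for `-w0`):

```shell
# Hypothetical helper: wrap an existing dockerconfigjson auth file in the
# Secret manifest that the dev workflow looks for.
make_pull_secret() {
  auth_file=$1; out=$2
  mkdir -p "$(dirname "$out")"
  cat > "$out" <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: redhat-operators-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: $(base64 -w0 < "$auth_file")
EOF
}
# e.g. make_pull_secret ~/.docker/config.json hacking/pull-secret.yml
```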
## Build and Deploy ## Build and Deploy
Make sure you are logged into your cluster (`oc login` or `kubectl` configured), then run:
```sh
QUAY_USER=username make up
```
This will:
1. Login to container registries
2. Create the target namespace
3. Build the operator image and push it to your registry
4. Deploy the operator via kustomize
5. Apply dev secrets and create a dev AWX instance
### Customization Options
| Variable | Default | Description |
|----------|---------|-------------|
| `QUAY_USER` | _(required)_ | Your quay.io username |
| `NAMESPACE` | `awx` | Target namespace |
| `DEV_TAG` | `dev` | Image tag for dev builds |
| `CONTAINER_TOOL` | `podman` | Container engine (`podman` or `docker`) |
| `PLATFORM` | _(auto-detected)_ | Target platform (e.g., `linux/amd64`) |
| `MULTI_ARCH` | `false` | Build multi-arch image (`linux/arm64,linux/amd64`) |
| `DEV_IMG` | `quay.io/<QUAY_USER>/awx-operator` | Override full image path (skips QUAY_USER) |
| `BUILD_IMAGE` | `true` | Set to `false` to skip image build (use existing image) |
| `CREATE_CR` | `true` | Set to `false` to skip creating the dev AWX instance |
| `CREATE_SECRETS` | `true` | Set to `false` to skip creating dev secrets |
| `IMAGE_PULL_POLICY` | `Always` | Set to `Never` for local builds without push |
| `BUILD_ARGS` | _(empty)_ | Extra args passed to container build (e.g., `--no-cache`) |
| `DEV_CR` | `dev/awx-cr/awx-openshift-cr.yml` | Path to the dev CR to apply |
| `PULL_SECRET_FILE` | `dev/pull-secret.yml` | Path to pull secret YAML |
| `PODMAN_CONNECTION` | _(empty)_ | Remote podman connection name |
Examples:
```bash
# Use a specific namespace and tag
QUAY_USER=username NAMESPACE=awx DEV_TAG=mytag make up
# Use docker instead of podman
CONTAINER_TOOL=docker QUAY_USER=username make up
# Build for a specific platform (e.g., when on ARM building for x86)
PLATFORM=linux/amd64 QUAY_USER=username make up
# Deploy without building (use an existing image)
BUILD_IMAGE=false DEV_IMG=quay.io/myuser/awx-operator DEV_TAG=latest make up
# Build without pushing (local cluster like kind/minikube)
IMAGE_PULL_POLICY=Never QUAY_USER=username make up
```
### Accessing the Deployment
On **OpenShift**:
```sh
oc get route
```
On **k8s with ingress**:
```sh
kubectl get ing
```
On **k8s with nodeport**:
```sh
kubectl get svc
```
The URL is then `http://<Node-IP>:<NodePort>`.
> **Note**: NodePort will only work if you expose that port on your underlying k8s node, or are accessing it from localhost.
### Default Credentials
The dev CR pre-creates an admin password secret. Default credentials are:
- **Username**: `admin`
- **Password**: `password`
Without the dev CR, a password would be generated and stored in a secret named `<deployment-name>-admin-password`.
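For reference, a pre-created admin password secret looks roughly like the following sketch (the `awx-demo-admin-password` name is illustrative; per the AWX operator docs, the CR references it via its `admin_password_secret` field and the key must be `password`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: awx-demo-admin-password
stringData:
  password: password
```

The AWX spec then points at it with `admin_password_secret: awx-demo-admin-password`.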
## Clean up
To tear down your development deployment:
```sh
make down
```
### Teardown Options
| Variable | Default | Description |
|----------|---------|-------------|
| `KEEP_NAMESPACE` | `false` | Set to `true` to keep the namespace for reuse |
| `DELETE_PVCS` | `true` | Set to `false` to preserve PersistentVolumeClaims |
| `DELETE_SECRETS` | `true` | Set to `false` to preserve secrets |
Examples:
```bash
# Keep the namespace for faster redeploy
KEEP_NAMESPACE=true make down
# Keep PVCs (preserve database data between deploys)
DELETE_PVCS=false make down
```
## Testing
### Linting
Run linting checks (required for all PRs):
```sh
make lint
```
This runs `ansible-lint` on roles, playbooks, and config samples, and checks that `no_log` statements use the `{{ no_log }}` variable.
### Molecule Tests
The operator includes a [Molecule](https://ansible.readthedocs.io/projects/molecule/)-based test environment for integration testing. Molecule can run standalone in Docker or inside a Kubernetes cluster.
Install Molecule:
```sh
python -m pip install molecule-plugins[docker]
```
#### Testing in Kind (recommended)
[Kind](https://kind.sigs.k8s.io/docs/user/quick-start/) is the recommended way to test locally:
```sh
molecule test -s kind
```
#### Testing in Minikube
```sh
minikube start --memory 8g --cpus 4
minikube addons enable ingress
molecule test -s test-minikube
```
[Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) runs a full VM with an assigned IP address, making it easier to test NodePort services and Ingress from outside the cluster.
Once deployed, access the AWX UI:
1. Add `<minikube-ip> example-awx.test` to your `/etc/hosts` file (get the IP with `minikube ip`).
2. Visit `http://example-awx.test/` (default login: `test`/`changeme`).
#### Active Development
Use `molecule converge` instead of `molecule test` to keep the environment running after tests complete — useful for iterating on changes.
## Bundle Generation
If you have the Operator Lifecycle Manager (OLM) installed, you can generate and deploy an operator bundle:
```bash
# Generate bundle manifests and validate
make bundle
# Build and push the bundle image
make bundle-build bundle-push
# Build and push a catalog image
make catalog-build catalog-push
```
After pushing the catalog, create a `CatalogSource` in your cluster pointing to the catalog image. Once the CatalogSource is in a READY state, the operator will be available in OperatorHub.
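A minimal `CatalogSource` sketch pointing at the pushed catalog image (the name, namespace, and image path are illustrative; use the `openshift-marketplace` namespace on OpenShift):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: awx-operator-dev-catalog   # illustrative name
  namespace: olm                   # openshift-marketplace on OpenShift
spec:
  sourceType: grpc
  image: quay.io/<your-user>/awx-operator-catalog:dev  # your pushed catalog image
  displayName: AWX Operator (dev)
```

You can watch `.status.connectionState.lastObservedState` on the CatalogSource until it reports `READY`.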


@@ -24,13 +24,6 @@ Past that, it is often useful to inspect various resources the AWX Operator mana
* secrets
* serviceaccount
To inspect these resources, you can use these commands:
```sh


@@ -70,16 +70,15 @@ spec:
## Custom UWSGI Configuration
We allow the customization of two UWSGI parameters:
* [processes](https://uwsgi-docs.readthedocs.io/en/latest/Options.html#processes) with `uwsgi_processes` (default 5)
* [listen](https://uwsgi-docs.readthedocs.io/en/latest/Options.html#listen) with `uwsgi_listen_queue_size` (default 128)
**Note:** Increasing the listen queue beyond 128 requires that the sysctl setting net.core.somaxconn be set to an equal value or higher.
The operator will set the appropriate securityContext sysctl value for you, but it is required that this sysctl be added to an allowlist at the kubelet level. [See kubernetes docs about allowing this sysctl setting](https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls).
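On a dev cluster, allowing this sysctl is typically done in the kubelet configuration; a sketch of the relevant `KubeletConfiguration` fragment (flag-based setups use `--allowed-unsafe-sysctls` instead):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
allowedUnsafeSysctls:
  - "net.core.somaxconn"
```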
These vars relate to the vertical and horizontal scalability of the web service.
Increasing the number of processes allows more requests to be actively handled
per web pod, but will consume more CPU and Memory and the resource requests
@@ -90,12 +89,6 @@ requests (more than 128) tend to come in a short period of time, but can all be
handled before any other timeouts may apply. Also see related nginx
configuration.
## Custom Nginx Configuration
Using the [extra_volumes feature](#custom-volume-and-volume-mount-options), it is possible to extend the nginx.conf.
@@ -122,6 +115,7 @@ configuration.
* [worker_cpu_affinity](http://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity) with `nginx_worker_cpu_affinity` (default "auto")
* [worker_connections](http://nginx.org/en/docs/ngx_core_module.html#worker_connections) with `nginx_worker_connections` (minimum of 1024)
* [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) with `nginx_listen_queue_size` (default same as uwsgi listen queue size)
* [client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) with `nginx_client_max_body_size` (default of 5M)
## Custom Logos


@@ -69,6 +69,7 @@ The following variables are customizable for the managed PostgreSQL service
| postgres_storage_requirements | PostgreSQL container storage requirements | requests: {storage: 8Gi} |
| postgres_storage_class | PostgreSQL PV storage class | Empty string |
| postgres_priority_class | Priority class used for PostgreSQL pod | Empty string |
| postgres_extra_settings | PostgreSQL configuration settings to be added to postgresql.conf | `[]` |
An example customization could be:
@@ -89,14 +90,78 @@ spec:
      limits:
        storage: 50Gi
    postgres_storage_class: fast-ssd
    postgres_extra_settings:
      - setting: max_connections
        value: "1000"
```
!!! note
    If `postgres_storage_class` is not defined, PostgreSQL will store its data on a volume using the default storage class for your cluster.
## PostgreSQL Extra Settings
!!! warning "Deprecation Notice"
The `postgres_extra_args` parameter is **deprecated** and should no longer be used. Use `postgres_extra_settings` instead for configuring PostgreSQL parameters. The `postgres_extra_args` parameter will be removed in a future version of the AWX operator.
You can customize PostgreSQL configuration by adding settings to the `postgresql.conf` file using the `postgres_extra_settings` parameter. This allows you to tune PostgreSQL performance, security, and behavior according to your specific requirements.
The `postgres_extra_settings` parameter accepts an array of setting objects, where each object contains a `setting` name and its corresponding `value`.
!!! note
The `postgres_extra_settings` parameter replaces the deprecated `postgres_extra_args` parameter and provides a more structured way to configure PostgreSQL settings.
### Configuration Format
```yaml
spec:
postgres_extra_settings:
- setting: max_connections
value: "499"
- setting: ssl_ciphers
value: "HIGH:!aNULL:!MD5"
```
**Common PostgreSQL settings you might want to configure:**
| Setting | Description | Example Value |
|---------|-------------|---------------|
| `max_connections` | Maximum number of concurrent connections | `"200"` |
| `ssl_ciphers` | SSL cipher suites to use | `"HIGH:!aNULL:!MD5"` |
| `shared_buffers` | Amount of memory for shared memory buffers | `"256MB"` |
| `effective_cache_size` | Planner's assumption about effective cache size | `"1GB"` |
| `work_mem` | Amount of memory for internal sort operations | `"4MB"` |
| `maintenance_work_mem` | Memory for maintenance operations | `"64MB"` |
| `checkpoint_completion_target` | Target for checkpoint completion | `"0.9"` |
| `wal_buffers` | Amount of memory for WAL buffers | `"16MB"` |
### Important Notes
!!! warning
- Changes to `postgres_extra_settings` require a PostgreSQL pod restart to take effect.
- Some settings may require specific PostgreSQL versions or additional configuration.
- Always test configuration changes in a non-production environment first.
!!! tip
- String values should be quoted in the YAML configuration.
- Numeric values can be provided as strings or numbers.
- Boolean values should be provided as strings ("on"/"off" or "true"/"false").
For a complete list of available PostgreSQL configuration parameters, refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config.html).
**Verification:**
You can verify that your settings have been applied by connecting to the PostgreSQL database and running:
```bash
kubectl exec -it <postgres-pod-name> -n <namespace> -- psql
```
Then run the following query:
```sql
SELECT name, setting FROM pg_settings;
```
## Note about overriding the postgres image
We recommend you use the default sclorg image. If you are coming from a deployment using the old postgres image from dockerhub (postgres:13), upgrading from awx-operator version 2.12.2 and below to 2.15.0+ will handle migrating your data to the new postgresql image (postgresql-15-c9s).

down.sh
@@ -1,36 +0,0 @@
#!/bin/bash
# AWX Operator down.sh
# Purpose:
# Cleanup and delete the namespace you deployed in
# -- Usage
# NAMESPACE=awx ./down.sh
# -- Variables
TAG=${TAG:-dev}
AWX_CR=${AWX_CR:-awx}
CLEAN_DB=${CLEAN_DB:-false}
# -- Check for required variables
# Set the following environment variables
# export NAMESPACE=awx
if [ -z "$NAMESPACE" ]; then
echo "Error: NAMESPACE env variable is not set. Run the following with your namespace:"
echo " export NAMESPACE=developer"
exit 1
fi
# -- Delete Backups
kubectl delete awxbackup --all
# -- Delete Restores
kubectl delete awxrestore --all
# Deploy Operator
make undeploy NAMESPACE=$NAMESPACE
# Remove PVCs
kubectl delete pvc postgres-15-$AWX_CR-postgres-15-0

hack/publish-to-operator-hub.sh

@@ -1,123 +0,0 @@
#!/bin/bash
# Create PR to Publish to community-operators and community-operators-prod
#
# * Create upstream awx-operator release
# * Check out tag (1.1.2).
# * Run VERSION=1.1.2 make bundle
# * Clone https://github.com/k8s-operatorhub/community-operators --branch main
# * mkdir -p operators/awx-operator/0.31.0/
# * Copy in manifests/ metadata/ and tests/ directories into operators/awx-operator/1.1.2/
# * Use sed to add in a replaces or skip entry. replace by default.
# * No need to update config.yaml
# * Build and Push operator and bundle images
# * Open PR or at least push to a branch so that a PR can be manually opened from it.
#
# Usage:
# First, check out awx-operator tag you intend to release, in this case, 1.0.0
# $ VERSION=1.1.2 PREV_VERSION=1.1.1 FORK=<your-fork> ./hack/publish-to-operator-hub.sh
#
# Remember to change update the VERSION and PREV_VERSION before running!!!
set -e
VERSION=${VERSION:-$(make print-VERSION)}
PREV_VERSION=${PREV_VERSION:-$(make print-PREV_VERSION)}
BRANCH=publish-awx-operator-$VERSION
FORK=${FORK:-awx-auto}
GITHUB_TOKEN=${GITHUB_TOKEN:-$AWX_AUTO_GITHUB_TOKEN}
IMG_REPOSITORY=${IMG_REPOSITORY:-quay.io/ansible}
OPERATOR_IMG=$IMG_REPOSITORY/awx-operator:$VERSION
CATALOG_IMG=$IMG_REPOSITORY/awx-operator-catalog:$VERSION
BUNDLE_IMG=$IMG_REPOSITORY/awx-operator-bundle:$VERSION
COMMUNITY_OPERATOR_GITHUB_ORG=${COMMUNITY_OPERATOR_GITHUB_ORG:-k8s-operatorhub}
COMMUNITY_OPERATOR_PROD_GITHUB_ORG=${COMMUNITY_OPERATOR_PROD_GITHUB_ORG:-redhat-openshift-ecosystem}
# Build bundle directory
make bundle IMG=$OPERATOR_IMG
# Build bundle and catalog images
make bundle-build bundle-push BUNDLE_IMG=$BUNDLE_IMG IMG=$OPERATOR_IMG
make catalog-build catalog-push CATALOG_IMG=$CATALOG_IMG BUNDLE_IMGS=$BUNDLE_IMG BUNDLE_IMG=$BUNDLE_IMG IMG=$OPERATOR_IMG
# Set containerImage & namespace variables in CSV
sed -i.bak -e "s|containerImage: quay.io/ansible/awx-operator:devel|containerImage: ${OPERATOR_IMG}|g" bundle/manifests/awx-operator.clusterserviceversion.yaml
sed -i.bak -e "s|namespace: placeholder|namespace: awx|g" bundle/manifests/awx-operator.clusterserviceversion.yaml
# Add replaces to dependency graph for upgrade path
if ! grep -qF 'replaces: awx-operator.v${PREV_VERSION}' bundle/manifests/awx-operator.clusterserviceversion.yaml; then
sed -i.bak -e "/version: ${VERSION}/a \\
replaces: awx-operator.v$PREV_VERSION" bundle/manifests/awx-operator.clusterserviceversion.yaml
fi
# Rename CSV to contain version in name
mv bundle/manifests/awx-operator.clusterserviceversion.yaml bundle/manifests/awx-operator.v${VERSION}.clusterserviceversion.yaml
# Set Openshift Support Range (bump minKubeVersion in CSV when changing)
if ! grep -qF 'openshift.versions' bundle/metadata/annotations.yaml; then
sed -i.bak -e "/annotations:/a \\
com.redhat.openshift.versions: v4.11" bundle/metadata/annotations.yaml
fi
# Remove .bak files from bundle result from sed commands
find bundle -name "*.bak" -type f -delete
echo "-- Create branch on community-operators fork --"
git clone https://github.com/$COMMUNITY_OPERATOR_GITHUB_ORG/community-operators.git
mkdir -p community-operators/operators/awx-operator/$VERSION/
cp -r bundle/* community-operators/operators/awx-operator/$VERSION/
pushd community-operators/operators/awx-operator/$VERSION/
git checkout -b $BRANCH
git add ./
git status
message='operator [N] [CI] awx-operator'
commitMessage="${message} ${VERSION}"
git commit -m "$commitMessage" -s
git remote add upstream https://$GITHUB_TOKEN@github.com/$FORK/community-operators.git
git push upstream --delete $BRANCH || true
git push upstream $BRANCH
gh pr create \
--title "operator awx-operator (${VERSION})" \
--body "operator awx-operator (${VERSION})" \
--base main \
--head $FORK:$BRANCH \
--repo $COMMUNITY_OPERATOR_GITHUB_ORG/community-operators
popd
echo "-- Create branch on community-operators-prod fork --"
git clone https://github.com/$COMMUNITY_OPERATOR_PROD_GITHUB_ORG/community-operators-prod.git
mkdir -p community-operators-prod/operators/awx-operator/$VERSION/
cp -r bundle/* community-operators-prod/operators/awx-operator/$VERSION/
pushd community-operators-prod/operators/awx-operator/$VERSION/
git checkout -b $BRANCH
git add ./
git status
message='operator [N] [CI] awx-operator'
commitMessage="${message} ${VERSION}"
git commit -m "$commitMessage" -s
git remote add upstream https://$GITHUB_TOKEN@github.com/$FORK/community-operators-prod.git
git push upstream --delete $BRANCH || true
git push upstream $BRANCH
gh pr create \
--title "operator awx-operator (${VERSION})" \
--body "operator awx-operator (${VERSION})" \
--base main \
--head $FORK:$BRANCH \
--repo $COMMUNITY_OPERATOR_PROD_GITHUB_ORG/community-operators-prod
popd

makefiles/common.mk

@@ -0,0 +1,439 @@
# common.mk — Shared dev workflow targets for AAP operators
#
# Synced across all operator repos via GHA.
# Operator-specific customization goes in operator.mk.
#
# Usage:
# make up # Full dev deploy
# make down # Full dev undeploy
#
# Required variables (set in operator.mk):
# NAMESPACE — target namespace
# DEPLOYMENT_NAME — operator deployment name
# VERSION — operator version
#
# Optional overrides:
# CONTAINER_TOOL=docker make up # use docker instead of podman (default in Makefile)
# QUAY_USER=myuser make up
# DEV_TAG=mytag make up
# DEV_IMG=registry.example.com/my-operator make up # override image (skips QUAY_USER)
# IMAGE_PULL_POLICY=Never make up # set imagePullPolicy (e.g. for local builds)
# PODMAN_CONNECTION=aap-lab make up # use remote podman connection
# KEEP_NAMESPACE=true make down # undeploy but keep namespace
# PLATFORM=linux/amd64 make up # build for specific platform (auto-detected from cluster)
# MULTI_ARCH=true make up # build multi-arch image (PLATFORMS=linux/arm64,linux/amd64)
# Suppress "Entering/Leaving directory" messages from recursive make calls
MAKEFLAGS += --no-print-directory
#@ Common Variables
# Kube CLI auto-detect (oc preferred, kubectl fallback)
KUBECTL ?= $(shell command -v oc 2>/dev/null || command -v kubectl 2>/dev/null)
# Dev workflow
QUAY_USER ?=
REGISTRIES ?= registry.redhat.io $(if $(QUAY_USER),quay.io/$(QUAY_USER))
DEV_TAG ?= dev
PULL_SECRET_FILE ?= dev/pull-secret.yml
CREATE_PULL_SECRET ?= true
IMAGE_PULL_POLICY ?=
PODMAN_CONNECTION ?=
# Dev image: defaults to quay.io/<user>/<operator-name>, overridable via DEV_IMG
_OPERATOR_NAME = $(notdir $(IMAGE_TAG_BASE))
DEV_IMG ?= $(if $(QUAY_USER),quay.io/$(QUAY_USER)/$(_OPERATOR_NAME),$(IMAGE_TAG_BASE))
# Build platform (auto-detected from cluster, override with PLATFORM=linux/amd64)
MULTI_ARCH ?= false
PLATFORMS ?= linux/arm64,linux/amd64
# Auto-detect registry auth config
REGISTRY_AUTH_CONFIG ?= $(shell \
if [ "$(CONTAINER_TOOL)" = "podman" ]; then \
for f in "$${XDG_RUNTIME_DIR}/containers/auth.json" \
"$${HOME}/.config/containers/auth.json" \
"$${HOME}/.docker/config.json"; do \
[ -f "$$f" ] && echo "$$f" && break; \
done; \
else \
[ -f "$${HOME}/.docker/config.json" ] && echo "$${HOME}/.docker/config.json"; \
fi)
# Container tool with optional remote connection (podman only)
_CONTAINER_CMD = $(CONTAINER_TOOL)$(if $(and $(filter podman,$(CONTAINER_TOOL)),$(PODMAN_CONNECTION)), --connection $(PODMAN_CONNECTION),)
# Portable sed -i (GNU vs BSD)
_SED_I = $(shell if sed --version >/dev/null 2>&1; then echo 'sed -i'; else echo 'sed -i ""'; fi)
# Custom configs to apply during post-deploy (secrets, configmaps, etc.)
DEV_CUSTOM_CONFIG ?=
# Dev CR to apply after deployment (set in operator.mk)
DEV_CR ?=
CREATE_CR ?= true
# Teardown configuration (set in operator.mk)
TEARDOWN_CR_KINDS ?=
TEARDOWN_BACKUP_KINDS ?=
TEARDOWN_RESTORE_KINDS ?=
OLM_SUBSCRIPTIONS ?=
DELETE_PVCS ?= true
DELETE_SECRETS ?= true
KEEP_NAMESPACE ?= false
##@ Dev Workflow
.PHONY: up
up: _require-img _require-namespace ## Full dev deploy
@$(MAKE) registry-login
@$(MAKE) ns-wait
@$(MAKE) ns-create
@$(MAKE) ns-security
@$(MAKE) pull-secret
@$(MAKE) patch-pull-policy
@$(MAKE) operator-up
.PHONY: down
down: _require-namespace ## Full dev undeploy
@echo "=== Tearing down dev environment ==="
@$(MAKE) _teardown-restores
@$(MAKE) _teardown-backups
@$(MAKE) _teardown-operands
@$(MAKE) _teardown-pvcs
@$(MAKE) _teardown-secrets
@$(MAKE) _teardown-olm
@$(MAKE) _teardown-namespace
#@ Operator Deploy Building Blocks
#
# Composable targets for operator-up. Each operator.mk wires these
# together in its own operator-up target, adding repo-specific steps.
#
# Kustomize repos:
# operator-up: _operator-build-and-push _operator-deploy _operator-wait-ready _operator-post-deploy
#
# OLM repos (gateway):
# operator-up: _olm-cleanup _olm-deploy _operator-build-and-inject _operator-wait-ready <custom> _operator-post-deploy
.PHONY: _operator-build-and-push
_operator-build-and-push:
@if [ "$(BUILD_IMAGE)" != "true" ]; then \
echo "Skipping image build (BUILD_IMAGE=false)"; \
exit 0; \
fi; \
$(MAKE) dev-build; \
echo "Pushing $(DEV_IMG):$(DEV_TAG)..."; \
$(_CONTAINER_CMD) push $(DEV_IMG):$(DEV_TAG)
.PHONY: _operator-deploy
_operator-deploy:
@$(MAKE) pre-deploy-cleanup
@cd config/default && $(KUSTOMIZE) edit set namespace $(NAMESPACE)
@$(MAKE) deploy IMG=$(DEV_IMG):$(DEV_TAG)
.PHONY: _operator-wait-ready
_operator-wait-ready:
@echo "Waiting for operator pods to be ready..."
@ATTEMPTS=0; \
while [ $$ATTEMPTS -lt 30 ]; do \
READY=$$($(KUBECTL) get deployment $(DEPLOYMENT_NAME) -n $(NAMESPACE) \
-o jsonpath='{.status.readyReplicas}' 2>/dev/null); \
DESIRED=$$($(KUBECTL) get deployment $(DEPLOYMENT_NAME) -n $(NAMESPACE) \
-o jsonpath='{.status.replicas}' 2>/dev/null); \
if [ -n "$$READY" ] && [ -n "$$DESIRED" ] && [ "$$READY" = "$$DESIRED" ] && [ "$$READY" -gt 0 ]; then \
echo "All pods ready ($$READY/$$DESIRED)."; \
break; \
fi; \
echo "Pods not ready ($$READY/$$DESIRED). Waiting..."; \
ATTEMPTS=$$((ATTEMPTS + 1)); \
sleep 10; \
done; \
if [ $$ATTEMPTS -ge 30 ]; then \
echo "ERROR: Timed out waiting for operator pods to be ready (5 minutes)." >&2; \
exit 1; \
fi
@$(KUBECTL) config set-context --current --namespace=$(NAMESPACE)
.PHONY: _operator-post-deploy
_operator-post-deploy:
@# Apply dev custom configs (secrets, configmaps, etc.) from DEV_CUSTOM_CONFIG
@$(MAKE) _apply-custom-config
@if [ "$(CREATE_CR)" = "true" ] && [ -f "$(DEV_CR)" ]; then \
echo "Applying dev CR: $(DEV_CR)"; \
$(KUBECTL) apply -n $(NAMESPACE) -f $(DEV_CR); \
fi
#@ Teardown
.PHONY: _teardown-restores
_teardown-restores:
@for kind in $(TEARDOWN_RESTORE_KINDS); do \
echo "Deleting $$kind resources..."; \
$(KUBECTL) delete $$kind -n $(NAMESPACE) --all --wait=true --ignore-not-found=true || true; \
done
.PHONY: _teardown-backups
_teardown-backups:
@for kind in $(TEARDOWN_BACKUP_KINDS); do \
echo "Deleting $$kind resources..."; \
$(KUBECTL) delete $$kind -n $(NAMESPACE) --all --wait=true --ignore-not-found=true || true; \
done
.PHONY: _teardown-operands
_teardown-operands:
@for kind in $(TEARDOWN_CR_KINDS); do \
echo "Deleting $$kind resources..."; \
$(KUBECTL) delete $$kind -n $(NAMESPACE) --all --wait=true --ignore-not-found=true || true; \
done
.PHONY: _teardown-pvcs
_teardown-pvcs:
@if [ "$(DELETE_PVCS)" = "true" ]; then \
echo "Deleting PVCs..."; \
$(KUBECTL) delete pvc -n $(NAMESPACE) --all --ignore-not-found=true; \
else \
echo "Keeping PVCs (DELETE_PVCS=false)"; \
fi
.PHONY: _teardown-secrets
_teardown-secrets:
@if [ "$(DELETE_SECRETS)" = "true" ]; then \
echo "Deleting secrets..."; \
$(KUBECTL) delete secrets -n $(NAMESPACE) --all --ignore-not-found=true; \
else \
echo "Keeping secrets (DELETE_SECRETS=false)"; \
fi
.PHONY: _teardown-olm
_teardown-olm:
@for sub in $(OLM_SUBSCRIPTIONS); do \
echo "Deleting subscription $$sub..."; \
$(KUBECTL) delete subscription $$sub -n $(NAMESPACE) --ignore-not-found=true || true; \
done
@CSV=$$($(KUBECTL) get csv -n $(NAMESPACE) --no-headers -o custom-columns=":metadata.name" 2>/dev/null | grep aap-operator || true); \
if [ -n "$$CSV" ]; then \
echo "Deleting CSV: $$CSV"; \
$(KUBECTL) delete csv $$CSV -n $(NAMESPACE) --ignore-not-found=true; \
fi
.PHONY: _teardown-namespace
_teardown-namespace:
@if [ "$(KEEP_NAMESPACE)" != "true" ]; then \
echo "Deleting namespace $(NAMESPACE)..."; \
$(KUBECTL) delete namespace $(NAMESPACE) --ignore-not-found=true; \
else \
echo "Keeping namespace $(NAMESPACE) (KEEP_NAMESPACE=true)"; \
fi
##@ Registry
.PHONY: registry-login
registry-login: ## Login to container registries
@for registry in $(REGISTRIES); do \
echo "Logging into $$registry..."; \
$(_CONTAINER_CMD) login $$registry; \
done
##@ Namespace
.PHONY: ns-wait
ns-wait: ## Wait for namespace to finish terminating
@if $(KUBECTL) get namespace $(NAMESPACE) 2>/dev/null | grep -q 'Terminating'; then \
echo "Namespace $(NAMESPACE) is terminating. Waiting..."; \
while $(KUBECTL) get namespace $(NAMESPACE) 2>/dev/null | grep -q 'Terminating'; do \
sleep 5; \
done; \
echo "Namespace $(NAMESPACE) terminated."; \
fi
.PHONY: ns-create
ns-create: ## Create namespace if it does not exist
@if ! $(KUBECTL) get namespace $(NAMESPACE) --no-headers 2>/dev/null | grep -q .; then \
echo "Creating namespace $(NAMESPACE)"; \
$(KUBECTL) create namespace $(NAMESPACE); \
else \
echo "Namespace $(NAMESPACE) already exists"; \
fi
.PHONY: ns-security
ns-security: ## Configure namespace security for OLM bundle unpacking
@if ! oc get scc anyuid >/dev/null 2>&1; then \
echo "No SCC support detected (vanilla Kubernetes), applying pod security labels..."; \
$(KUBECTL) label namespace "$(NAMESPACE)" \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/audit=privileged \
pod-security.kubernetes.io/warn=privileged --overwrite; \
elif $(KUBECTL) get namespace openshift-apiserver >/dev/null 2>&1; then \
echo "Full OpenShift detected — skipping SCC grants (OLM handles bundle unpacking)"; \
else \
echo "MicroShift detected — granting SCCs for bundle unpack pods in $(NAMESPACE)..."; \
oc adm policy add-scc-to-user privileged -z default -n "$(NAMESPACE)" 2>/dev/null || true; \
oc adm policy add-scc-to-user anyuid -z default -n "$(NAMESPACE)" 2>/dev/null || true; \
fi
##@ Secrets
.PHONY: pull-secret
pull-secret: ## Apply pull secret from file or create from auth config
@if [ "$(CREATE_PULL_SECRET)" != "true" ]; then \
echo "Pull secret creation disabled (CREATE_PULL_SECRET=false)"; \
exit 0; \
fi; \
if [ -f "$(PULL_SECRET_FILE)" ]; then \
echo "Applying pull secret from $(PULL_SECRET_FILE)"; \
$(KUBECTL) apply -n $(NAMESPACE) -f $(PULL_SECRET_FILE); \
elif [ -n "$(REGISTRY_AUTH_CONFIG)" ] && [ -f "$(REGISTRY_AUTH_CONFIG)" ]; then \
if ! $(KUBECTL) get secret redhat-operators-pull-secret -n $(NAMESPACE) 2>/dev/null | grep -q .; then \
echo "Creating pull secret from $(REGISTRY_AUTH_CONFIG)"; \
$(KUBECTL) create secret generic redhat-operators-pull-secret \
--from-file=.dockerconfigjson="$(REGISTRY_AUTH_CONFIG)" \
--type=kubernetes.io/dockerconfigjson \
-n $(NAMESPACE); \
else \
echo "Pull secret already exists"; \
fi; \
else \
echo "No pull secret file or registry auth config found, skipping"; \
exit 0; \
fi; \
echo "Linking pull secret to default service account..."; \
$(KUBECTL) patch serviceaccount default -n $(NAMESPACE) \
-p '{"imagePullSecrets": [{"name": "redhat-operators-pull-secret"}]}' 2>/dev/null \
&& echo "Pull secret linked to default SA" \
|| echo "Warning: could not link pull secret to default SA"
##@ Build
.PHONY: podman-build
podman-build: ## Build image with podman
$(_CONTAINER_CMD) build $(BUILD_ARGS) -t ${IMG} .
.PHONY: podman-push
podman-push: ## Push image with podman
$(_CONTAINER_CMD) push ${IMG}
.PHONY: podman-buildx
podman-buildx: ## Build multi-arch image with podman
$(_CONTAINER_CMD) build $(BUILD_ARGS) --platform=$(PLATFORMS) --manifest ${IMG} -f Dockerfile .
.PHONY: podman-buildx-push
podman-buildx-push: podman-buildx ## Build and push multi-arch image with podman
$(_CONTAINER_CMD) manifest push --all ${IMG}
.PHONY: dev-build
dev-build: ## Build dev image (auto-detects arch of connected cluster, cross-compiles if needed)
@HOST_ARCH=$$(uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/'); \
CLUSTER_ARCH=$$($(KUBECTL) get nodes -o jsonpath='{.items[0].status.nodeInfo.architecture}' 2>/dev/null); \
if [ -z "$$CLUSTER_ARCH" ]; then \
echo "WARNING: Could not detect cluster architecture. Is the cluster reachable?"; \
echo " Falling back to host architecture ($$HOST_ARCH)"; \
CLUSTER_ARCH="$$HOST_ARCH"; \
fi; \
echo "Building $(DEV_IMG):$(DEV_TAG) with $(CONTAINER_TOOL)..."; \
echo " Host arch: $$HOST_ARCH"; \
echo " Cluster arch: $$CLUSTER_ARCH"; \
if [ "$(MULTI_ARCH)" = "true" ]; then \
echo " Build mode: multi-arch ($(PLATFORMS))"; \
$(MAKE) $(CONTAINER_TOOL)-buildx IMG=$(DEV_IMG):$(DEV_TAG) PLATFORMS=$(PLATFORMS); \
elif [ -n "$(PLATFORM)" ]; then \
echo " Build mode: cross-arch ($(PLATFORM))"; \
$(MAKE) $(CONTAINER_TOOL)-buildx IMG=$(DEV_IMG):$(DEV_TAG) PLATFORMS=$(PLATFORM); \
elif [ "$$HOST_ARCH" != "$$CLUSTER_ARCH" ]; then \
echo " Build mode: cross-arch (linux/$$CLUSTER_ARCH)"; \
$(MAKE) $(CONTAINER_TOOL)-buildx-push IMG=$(DEV_IMG):$(DEV_TAG) PLATFORMS=linux/$$CLUSTER_ARCH; \
else \
echo " Build mode: local ($$HOST_ARCH)"; \
$(MAKE) $(CONTAINER_TOOL)-build IMG=$(DEV_IMG):$(DEV_TAG); \
if [ "$(IMAGE_PULL_POLICY)" != "Never" ]; then \
echo "WARNING: Local build without push. Set IMAGE_PULL_POLICY=Never or the kubelet"; \
echo " will attempt to pull $(DEV_IMG):$(DEV_TAG) from a registry and fail."; \
fi; \
fi
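The `uname -m` normalization that `dev-build` relies on can be exercised in isolation; a minimal sketch (the `map_arch` helper name is illustrative, not part of the Makefile):

```shell
# Normalize `uname -m` output to the Go/OCI-style arch names used for image platforms.
map_arch() { printf '%s\n' "$1" | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/'; }

map_arch x86_64    # amd64
map_arch aarch64   # arm64
map_arch ppc64le   # ppc64le (passed through unchanged)
```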
##@ Deployment Helpers
.PHONY: patch-pull-policy
patch-pull-policy: ## Patch imagePullPolicy in manager config (default: Always, override with IMAGE_PULL_POLICY)
@_POLICY="$(if $(IMAGE_PULL_POLICY),$(IMAGE_PULL_POLICY),Always)"; \
for file in config/manager/manager.yaml; do \
if [ -f "$$file" ] && grep -q 'imagePullPolicy: IfNotPresent' "$$file"; then \
echo "Patching imagePullPolicy to $$_POLICY in $$file"; \
$(_SED_I) "s|imagePullPolicy: IfNotPresent|imagePullPolicy: $$_POLICY|g" "$$file"; \
fi; \
done
.PHONY: pre-deploy-cleanup
pre-deploy-cleanup: ## Delete existing operator deployment (safe)
@if [ -n "$(DEPLOYMENT_NAME)" ]; then \
echo "Cleaning up deployment $(DEPLOYMENT_NAME)..."; \
$(KUBECTL) delete deployment $(DEPLOYMENT_NAME) \
-n $(NAMESPACE) --ignore-not-found=true; \
fi
.PHONY: _apply-custom-config
_apply-custom-config: ## Apply custom configs (secrets, configmaps, etc.)
@for f in $(DEV_CUSTOM_CONFIG); do \
if [ -f "$$f" ]; then \
echo "Applying custom config: $$f"; \
$(KUBECTL) apply -n $(NAMESPACE) -f $$f; \
else \
echo "WARNING: Custom config not found: $$f"; \
fi; \
done
#@ Validation
.PHONY: _require-img
_require-img:
@if [ -z "$(DEV_IMG)" ]; then \
echo "Error: Set QUAY_USER or DEV_IMG."; \
echo " export QUAY_USER=<your-quay-username>"; \
echo " or: DEV_IMG=registry.example.com/my-operator make up"; \
exit 1; \
fi
@if echo "$(DEV_IMG)" | grep -q '^registry\.redhat\.io'; then \
echo "Error: Cannot push to registry.redhat.io (production registry)."; \
echo " Set QUAY_USER or DEV_IMG to use a personal registry."; \
exit 1; \
fi
@if echo "$(DEV_IMG)" | grep -q '^quay\.io/'; then \
if [ -z "$(QUAY_USER)" ]; then \
echo "Error: Cannot push to quay.io without QUAY_USER."; \
echo " export QUAY_USER=<your-quay-username>"; \
echo " or: DEV_IMG=<your-registry>/<image> make up"; \
exit 1; \
fi; \
if ! echo "$(DEV_IMG)" | grep -q '^quay\.io/$(QUAY_USER)/'; then \
echo "Error: DEV_IMG ($(DEV_IMG)) does not match QUAY_USER ($(QUAY_USER))."; \
echo " Expected: quay.io/$(QUAY_USER)/<image>"; \
echo " Either fix QUAY_USER or set DEV_IMG explicitly."; \
exit 1; \
fi; \
fi
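The ownership rules `_require-img` enforces reduce to a small predicate; a sketch under the same rules (the `check_dev_img` function name is illustrative):

```shell
# Classify a DEV_IMG/QUAY_USER pair the way _require-img does:
# the production registry is blocked, and quay.io pushes must live under QUAY_USER.
check_dev_img() {
  img=$1; user=$2
  case "$img" in
    registry.redhat.io/*) echo blocked ;;
    quay.io/*)
      if [ -z "$user" ]; then echo need-user
      elif [ "${img#quay.io/$user/}" != "$img" ]; then echo ok
      else echo mismatch
      fi ;;
    *) echo ok ;;   # any other personal registry is accepted
  esac
}

check_dev_img quay.io/alice/awx-operator alice       # ok
check_dev_img quay.io/alice/awx-operator bob         # mismatch
check_dev_img registry.redhat.io/ansible/awx alice   # blocked
```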
.PHONY: _require-namespace
_require-namespace:
@if [ -z "$(NAMESPACE)" ]; then \
echo "Error: NAMESPACE is required. Set it in operator.mk or run: export NAMESPACE=<namespace>"; \
exit 1; \
fi
##@ Linting
LINT_PATHS ?= roles/ playbooks/ config/samples/ config/manager/
.PHONY: lint
lint: ## Run ansible-lint and check no_log usage
@echo "Checking if ansible-lint is installed..."
@which ansible-lint > /dev/null || (echo "ansible-lint not found, installing..." && pip install --user ansible-lint)
@echo "Running ansible-lint..."
@ansible-lint $(LINT_PATHS)
@if [ -d "roles/" ]; then \
echo "Checking for no_log instances that need to use the variable..."; \
if grep -nr ' no_log:' roles | grep -qv '"{{ no_log }}"'; then \
echo 'Please update the following no_log statement(s) with the "{{ no_log }}" value'; \
grep -nr ' no_log:' roles | grep -v '"{{ no_log }}"'; \
exit 1; \
fi; \
fi
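The `no_log` check above is plain grep; it can be reproduced against a throwaway `roles/` tree to see what an offending task looks like (the scratch paths below are examples):

```shell
# Build a scratch roles/ tree with one offending and one compliant no_log line,
# then run the same pipeline the lint target uses.
tmp=$(mktemp -d)
mkdir -p "$tmp/roles"
cat > "$tmp/roles/tasks.yml" <<'EOF'
- name: hardcoded value the check should flag
  no_log: true
- name: compliant pattern
  no_log: "{{ no_log }}"
EOF
offenders=$(grep -nr ' no_log:' "$tmp/roles" | grep -v '"{{ no_log }}"')
echo "$offenders"   # only the hardcoded no_log line is reported
rm -rf "$tmp"
```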

makefiles/operator.mk (new file, 39 lines)

@@ -0,0 +1,39 @@
# operator.mk — AWX Operator specific targets and variables
#
# This file is NOT synced across repos. Each operator maintains its own.
#@ Operator Variables
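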
VERSION ?= $(shell git describe --tags 2>/dev/null || echo 0.0.1)
PREV_VERSION ?= $(shell git describe --abbrev=0 --tags $(shell git rev-list --tags --skip=1 --max-count=1) 2>/dev/null)
IMAGE_TAG_BASE ?= quay.io/ansible/awx-operator
NAMESPACE ?= awx
DEPLOYMENT_NAME ?= awx-operator-controller-manager
# Dev CR to apply after deployment
DEV_CR ?= dev/awx-cr/awx-openshift-cr.yml
# Custom configs to apply during post-deploy (secrets, configmaps, etc.)
DEV_CUSTOM_CONFIG ?= dev/secrets/custom-secret-key.yml dev/secrets/admin-password-secret.yml
# Feature flags
BUILD_IMAGE ?= true
CREATE_CR ?= true
# Teardown configuration
TEARDOWN_CR_KINDS ?= awx
TEARDOWN_BACKUP_KINDS ?= awxbackup
TEARDOWN_RESTORE_KINDS ?= awxrestore
OLM_SUBSCRIPTIONS ?=
##@ AWX Operator
.PHONY: operator-up
operator-up: _operator-build-and-push _operator-deploy _operator-wait-ready _operator-post-deploy ## AWX-specific deploy
@:
##@ Utilities
.PHONY: print-%
print-%: ## Print any variable from the Makefile. Use as `make print-VARIABLE`
@echo $($*)


@@ -49,3 +49,8 @@ spec:
 {% if additional_fields is defined %}
 {{ additional_fields | to_nice_yaml | indent(2) }}
 {% endif %}
+  postgres_extra_settings:
+    - setting: max_connections
+      value: "499"
+    - setting: ssl_ciphers
+      value: "HIGH:!aNULL:!MD5"


@@ -5,10 +5,21 @@
     name: '{{ item.metadata.name }}'
     all_containers: true
   register: all_container_logs
+  ignore_errors: yes

 - name: Store logs in file
   ansible.builtin.copy:
-    content: "{{ all_container_logs.log_lines | join('\n') }}"
+    content: |-
+      {% if all_container_logs is failed %}
+      Failed to retrieve logs for pod {{ item.metadata.name }}:
+      {{ all_container_logs.msg | default(all_container_logs.stderr | default('No additional details provided.')) }}
+      {% elif all_container_logs.log_lines is defined %}
+      {{ all_container_logs.log_lines | join('\n') }}
+      {% elif all_container_logs.log is defined %}
+      {{ all_container_logs.log }}
+      {% else %}
+      No log content returned by kubernetes.core.k8s_log.
+      {% endif %}
     dest: '{{ debug_output_dir }}/{{ item.metadata.name }}.log'
 # TODO: all_containser option dump all of the output in a single output make it hard to read we probably should iterate through each of the container to get specific logs


@@ -1,6 +1,6 @@
 ---
 collections:
-  - name: kubernetes.core
-    version: '>=2.3.2'
   - name: operator_sdk.util
    version: "0.5.0"
+  - name: kubernetes.core
+    version: "3.2.0"


@@ -8,6 +8,9 @@ api_version: '{{ deployment_type }}.ansible.com/v1beta1'
 backup_pvc: ''
 backup_pvc_namespace: "{{ ansible_operator_meta.namespace }}"

+# If true (default), automatically create the backup PVC if it does not exist
+create_backup_pvc: true

 # Size of backup PVC if created dynamically
 backup_storage_requirements: ''
@@ -39,6 +42,9 @@ backup_resource_requirements:
 # Allow additional parameters to be added to the pg_dump backup command
 pg_dump_suffix: ''

+# Enable compression for database dumps (pg_dump -F custom built-in compression)
+use_db_compression: true

 # Labels defined on the resource, which should be propagated to child resources
 additional_labels: []


@@ -22,17 +22,18 @@
     block:
       - name: Set error message
         set_fact:
-          error_msg: "{{ backup_pvc }} does not exist, please create this pvc first."
+          error_msg: "{{ backup_pvc }} does not exist, please create this pvc first or ensure create_backup_pvc is set to true (default) for automatic backup_pvc creation."
       - name: Handle error
         import_tasks: error_handling.yml
       - name: Fail early if pvc is defined but does not exist
         fail:
-          msg: "{{ backup_pvc }} does not exist, please create this pvc first."
+          msg: "{{ backup_pvc }} does not exist, please create this pvc first or ensure create_backup_pvc is set to true (default) for automatic backup_pvc creation."
         when:
           - backup_pvc != ''
           - provided_pvc.resources | length == 0
+          - not create_backup_pvc | bool

 # If backup_pvc is defined, use in management-pod.yml.j2
 - name: Set default pvc name
@@ -42,7 +43,7 @@
 # by default, it will re-use the old pvc if already created (unless a pvc is provided)
 - name: Set PVC to use for backup
   set_fact:
-    backup_claim: "{{ backup_pvc | default(_default_backup_pvc, true) }}"
+    backup_pvc: "{{ backup_pvc | default(_default_backup_pvc, true) }}"

 - block:
     - name: Create PVC for backup
@@ -56,11 +57,11 @@
         apiVersion: v1
         kind: PersistentVolumeClaim
         metadata:
-          name: "{{ deployment_name }}-backup-claim"
+          name: "{{ backup_pvc }}"
           namespace: "{{ backup_pvc_namespace }}"
           ownerReferences: null
   when:
-    - backup_pvc == '' or backup_pvc is not defined
+    - (backup_pvc == '' or backup_pvc is not defined) or (create_backup_pvc | bool)

 - name: Set default postgres image
   set_fact:


@@ -72,7 +72,7 @@
   command: >-
     touch {{ backup_dir }}/tower.db

-- name: Set full resolvable host name for postgres pod
+- name: Set resolvable_db_host
   set_fact:
     resolvable_db_host: '{{ (awx_postgres_type == "managed") | ternary(awx_postgres_host + "." + ansible_operator_meta.namespace + ".svc", awx_postgres_host) }}' # yamllint disable-line rule:line-length
   no_log: "{{ no_log }}"
@@ -121,6 +121,7 @@
     -d {{ awx_postgres_database }}
     -p {{ awx_postgres_port }}
     -F custom
+    {{ use_db_compression | bool | ternary('', '-Z 0') }}
     {{ pg_dump_suffix }}
   no_log: "{{ no_log }}"
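The ternary added to the pg_dump command resolves to nothing when `use_db_compression` is true (the custom format compresses by default) and to `-Z 0` when false; the effect in shell terms (the `dump_flags` helper name is illustrative):

```shell
# Reproduce how use_db_compression shapes the pg_dump flags.
dump_flags() {
  if [ "$1" = true ]; then
    echo "-F custom"        # built-in compression stays on
  else
    echo "-F custom -Z 0"   # compression explicitly disabled
  fi
}

dump_flags true    # -F custom
dump_flags false   # -F custom -Z 0
```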


@@ -9,5 +9,5 @@
     namespace: "{{ ansible_operator_meta.namespace }}"
   status:
     backupDirectory: "{{ backup_dir }}"
-    backupClaim: "{{ backup_claim }}"
+    backupClaim: "{{ backup_pvc }}"
   when: backup_complete


@@ -2,8 +2,8 @@
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
-  name: {{ deployment_name }}-backup-claim
-  namespace: {{ backup_pvc_namespace }}
+  name: {{ backup_pvc }}
+  namespace: "{{ backup_pvc_namespace }}"
   ownerReferences: null
   labels:
     {{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}


@@ -3,15 +3,15 @@ apiVersion: v1
 kind: Event
 metadata:
   name: backup-error.{{ now }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 involvedObject:
   apiVersion: awx.ansible.com/v1beta1
   kind: {{ kind }}
   name: {{ ansible_operator_meta.name }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 message: {{ error_msg }}
 reason: BackupFailed
 type: Warning
-firstTimestamp: {{ now }}
-lastTimestamp: {{ now }}
+firstTimestamp: "{{ now }}"
+lastTimestamp: "{{ now }}"
 count: 1


@@ -3,7 +3,7 @@ apiVersion: v1
 kind: Pod
 metadata:
   name: {{ ansible_operator_meta.name }}-db-management
-  namespace: {{ backup_pvc_namespace }}
+  namespace: "{{ backup_pvc_namespace }}"
   labels:
     {{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}
 spec:
@@ -27,6 +27,6 @@ spec:
   volumes:
     - name: {{ ansible_operator_meta.name }}-backup
       persistentVolumeClaim:
-        claimName: {{ backup_claim }}
+        claimName: {{ backup_pvc }}
       readOnly: false
   restartPolicy: Never


@@ -1,5 +1,6 @@
 ---
 deployment_type: awx
+deployment_type_shortname: awx
 kind: 'AWX'
 api_version: '{{ deployment_type }}.ansible.com/v1beta1'
@@ -421,14 +422,20 @@ projects_persistence: false
 # Define an existing PersistentVolumeClaim to use
 projects_existing_claim: ''
 #
-# Define postgres configuration arguments to use
+# Define postgres configuration arguments to use (Deprecated)
 postgres_extra_args: ''
+#
+# Define postgresql.conf configurations
+postgres_extra_settings: []

 postgres_data_volume_init: false
 postgres_init_container_commands: |
   chown 26:0 /var/lib/pgsql/data
   chmod 700 /var/lib/pgsql/data
+# Enable PostgreSQL SCRAM-SHA-256 migration
+postgres_scram_migration_enabled: true

 # Configure postgres connection keepalive
 postgres_keepalives: true
 postgres_keepalives_idle: 5
@@ -452,6 +459,12 @@ ldap_password_secret: ''
 # Secret to lookup that provides the custom CA trusted bundle
 bundle_cacert_secret: ''

+# Proxy environment variables for AWX containers.
+# Values are inherited from the operator pod environment.
+http_proxy: "{{ lookup('env', 'http_proxy') or lookup('env', 'HTTP_PROXY') or '' }}"
+https_proxy: "{{ lookup('env', 'https_proxy') or lookup('env', 'HTTPS_PROXY') or '' }}"
+no_proxy: "{{ lookup('env', 'no_proxy') or lookup('env', 'NO_PROXY') or '' }}"

 # Set false for basic install without operator
 update_status: true
@@ -488,8 +501,12 @@ ipv6_disabled: false
 #   - hostname
 host_aliases: ''

+# receptor default values
 receptor_log_level: info

+# common default values
+client_request_timeout: 30

 # UWSGI default values
 uwsgi_processes: 5
 # NOTE: to increase this value, net.core.somaxconn must also be increased
@@ -497,13 +514,17 @@ uwsgi_processes: 5
 # Also see https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls for how
 # to allow setting this sysctl, which requires kubelet configuration to add to allowlist
 uwsgi_listen_queue_size: 128
-uwsgi_timeout: 30
+uwsgi_timeout: "{{ (([(client_request_timeout | int), 10] | max) / 3) | int }}"
+uwsgi_timeout_grace_period: 2

 # NGINX default values
 nginx_worker_processes: 1
 nginx_worker_connections: "{{ uwsgi_listen_queue_size }}"
 nginx_worker_cpu_affinity: 'auto'
 nginx_listen_queue_size: "{{ uwsgi_listen_queue_size }}"
+nginx_client_max_body_size: 5
+nginx_read_timeout: "{{ (([(client_request_timeout | int), 10] | max) / 2) | int }}" # used in nginx config

 extra_settings_files: {}
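The derived timeout defaults are integer arithmetic over `client_request_timeout`: with the default of 30, `uwsgi_timeout` comes out to 10 and `nginx_read_timeout` to 15. The same computation sketched in shell:

```shell
# Mirror the Jinja2 expressions: floor(max(client_request_timeout, 10) / 3) and / 2.
client_request_timeout=30
base=$(( client_request_timeout > 10 ? client_request_timeout : 10 ))
uwsgi_timeout=$(( base / 3 ))
nginx_read_timeout=$(( base / 2 ))
echo "uwsgi_timeout=$uwsgi_timeout nginx_read_timeout=$nginx_read_timeout"
# uwsgi_timeout=10 nginx_read_timeout=15
```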


@@ -25,7 +25,10 @@
 - name: Set previous_version version based on AWX CR version status
   ansible.builtin.set_fact:
     previous_version: "{{ existing_cr.resources[0].status.version }}"
-  when: existing_cr['resources'] | length
+  when:
+    - existing_cr.resources | length
+    - existing_cr.resources[0].status is defined
+    - existing_cr.resources[0].status.version is defined

 - name: If previous_version is less than or equal to gating_version, set upgraded_from to previous_version
   ansible.builtin.set_fact:

@@ -2,6 +2,12 @@
 - name: Get database configuration
   include_tasks: database_configuration.yml

+- name: Create postgresql.conf ConfigMap
+  k8s:
+    apply: true
+    definition: "{{ lookup('template', 'configmaps/postgres_extra_settings.yaml.j2') }}"
+  when: postgres_extra_settings | length

 # It is possible that N-2 postgres pods may still be present in the namespace from previous upgrades.
 # So we have to take that into account and preferentially set the most recent one.
 - name: Get the old postgres pod (N-1)
@@ -70,6 +76,22 @@
 - debug:
     msg: "--- Upgrading from {{ old_postgres_pod['metadata']['name'] | default('NONE')}} Pod ---"

+- name: Migrate from md5 to scram-sha-256
+  k8s_exec:
+    namespace: "{{ ansible_operator_meta.namespace }}"
+    pod: "{{ old_postgres_pod['metadata']['name'] }}"
+    command: |
+      bash -c "
+      psql -U postgres -c \"ALTER SYSTEM SET password_encryption = 'scram-sha-256';\" &&
+      psql -U postgres -c \"SELECT pg_reload_conf();\" &&
+      psql -U postgres -c \"ALTER USER \\\"{{ awx_postgres_user }}\\\" WITH PASSWORD '{{ awx_postgres_pass }}';\"
+      "
+  register: _migration_output
+  no_log: "{{ no_log }}"
+  when:
+    - postgres_scram_migration_enabled
+    - (_old_pg_version.stdout | default(0) | int ) == 13

 - name: Upgrade data dir from old Postgres to {{ supported_pg_version }} if applicable
   include_tasks: upgrade_postgres.yml
   when:


@@ -8,7 +8,7 @@
     bash -c "echo 'from django.contrib.auth.models import User;
     nsu = User.objects.filter(is_superuser=True, username=\"{{ admin_user }}\").count();
     exit(0 if nsu > 0 else 1)'
-    | awx-manage shell"
+    | awx-manage shell --no-imports"
   ignore_errors: true
   register: users_result
   changed_when: users_result.return_code > 0


@@ -50,6 +50,12 @@
     definition: "{{ lookup('template', 'configmaps/redirect-page.configmap.html.j2') }}"
   when: public_base_url is defined

+- name: Apply proxy environment ConfigMap
+  k8s:
+    apply: true
+    definition: "{{ lookup('template', 'configmaps/proxy-env.configmap.yaml.j2') }}"
+    state: "{{ 'present' if (http_proxy or https_proxy or no_proxy) else 'absent' }}"

 - name: Load LDAP CAcert certificate (Deprecated)
   include_tasks: load_ldap_cacert_secret.yml
   when:


@@ -3,6 +3,31 @@
   include_tasks: idle_deployment.yml
   when: idle_deployment | bool

+- name: Look up details for this deployment
+  k8s_info:
+    api_version: "{{ api_version }}"
+    kind: "{{ kind }}"
+    name: "{{ ansible_operator_meta.name }}"
+    namespace: "{{ ansible_operator_meta.namespace }}"
+  register: this_awx
+
+- name: set annotations based on this_awx
+  set_fact:
+    this_annotations: "{{ this_awx['resources'][0]['metadata']['annotations'] | default({}) }}"
+
+- name: set client_request_timeout based on annotation
+  set_fact:
+    client_request_timeout: "{{ (this_annotations['aap.ansible.io/client-request-timeout'][:-1]) | int }}"
+    client_request_timeout_overidden: true
+  when:
+    - "'aap.ansible.io/client-request-timeout' in this_annotations"
+    - this_annotations['aap.ansible.io/client-request-timeout'] is match('^\\d+s$')
+
+- name: client_request_timeout has been changed
+  debug:
+    msg: "client_request_timeout's default 30s value has been overriden by the annotation 'aap.ansible.io/client-request-timeout' to {{ client_request_timeout }}s"
+  when: client_request_timeout_overidden | default(false)

 - name: Check for presence of old awx Deployment
   k8s_info:
     api_version: apps/v1
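The annotation handler above only accepts values matching `^\d+s$` and strips the trailing `s` before casting to int; that validation can be sketched as (the `parse_timeout` helper name and its fallback are illustrative):

```shell
# Accept "NNs" values for aap.ansible.io/client-request-timeout, else keep the default 30.
parse_timeout() {
  if printf '%s' "$1" | grep -Eq '^[0-9]+s$'; then
    printf '%s\n' "${1%s}"
  else
    printf '%s\n' 30
  fi
}

parse_timeout 45s      # 45
parse_timeout 45       # 30 (no trailing s, rejected)
parse_timeout banana   # 30
```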


@@ -77,7 +77,9 @@
       trap 'end_keepalive \"$keepalive_file\" \"$keepalive_pid\"' EXIT SIGINT SIGTERM
       echo keepalive_pid: $keepalive_pid
       set -e -o pipefail
+      psql -c 'GRANT postgres TO {{ awx_postgres_user }}'
       PGPASSWORD=\"$PGPASSWORD_OLD\" {{ pgdump }} | PGPASSWORD=\"$POSTGRES_PASSWORD\" {{ pg_restore }}
+      psql -c 'REVOKE postgres FROM {{ awx_postgres_user }}'
       set +e +o pipefail
       echo 'Successful'
       "


@@ -6,7 +6,7 @@
   pod: "{{ awx_web_pod_name }}"
   container: "{{ ansible_operator_meta.name }}-web"
   command: >-
-    bash -c "awx-manage showmigrations | grep -v '[X]' | grep '[ ]' | wc -l"
+    bash -c "awx-manage showmigrations | grep -v '(no migrations)' | grep -v '[X]' | grep '[ ]' | wc -l"
   changed_when: false
   when: awx_web_pod_name != ''
   register: database_check


@@ -224,7 +224,7 @@
     _custom_image: "{{ image }}:{{ image_version }}"
   when:
     - image | default([]) | length
-    - image_version is defined or image_version != ''
+    - image_version is defined and image_version != ''

 - name: Set AWX app image URL
   set_fact:
@@ -239,7 +239,7 @@
     _custom_redis_image: "{{ redis_image }}:{{ redis_image_version }}"
   when:
     - redis_image | default([]) | length
-    - redis_image_version is defined or redis_image_version != ''
+    - redis_image_version is defined and redis_image_version != ''

 - name: Set Redis image URL
   set_fact:


@@ -72,7 +72,7 @@
     - "app.kubernetes.io/managed-by={{ deployment_type }}-operator"
   register: old_postgres_svc

-- name: Set full resolvable host name for postgres pod
+- name: Set resolvable_db_host
   set_fact:
     resolvable_db_host: "{{ old_postgres_svc['resources'][0]['metadata']['name'] }}.{{ ansible_operator_meta.namespace }}.svc" # yamllint disable-line rule:line-length
   no_log: "{{ no_log }}"


@@ -109,13 +109,25 @@ data:
     include /etc/nginx/mime.types;
     default_type application/octet-stream;
     server_tokens off;
-    client_max_body_size 5M;
+    client_max_body_size {{ nginx_client_max_body_size }}M;
+
+    map $http_x_trusted_proxy $trusted_proxy_present {
+      default "trusted-proxy";
+      "" "-";
+    }
+
+    map $http_x_dab_jw_token $dab_jwt_present {
+      default "dab-jwt";
+      "" "-";
+    }

     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
     '$status $body_bytes_sent "$http_referer" '
-    '"$http_user_agent" "$http_x_forwarded_for"';
+    '"$http_user_agent" "$http_x_forwarded_for" '
+    '$trusted_proxy_present $dab_jwt_present';

     access_log /dev/stdout main;
+    error_log /dev/stderr warn;

     map $http_upgrade $connection_upgrade {
         default upgrade;
@@ -229,7 +241,7 @@ data:
     location {{ ingress_path }} {
       # Add trailing / if missing
       rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;
-      uwsgi_read_timeout 125s;
+      uwsgi_read_timeout {{ nginx_read_timeout }}s;
       uwsgi_pass uwsgi;
       include /etc/nginx/uwsgi_params;
       include /etc/nginx/conf.d/*.conf;
@@ -243,6 +255,23 @@ data:
       add_header Cache-Control "no-cache, no-store, must-revalidate";
       add_header Expires "0";
       add_header Pragma "no-cache";
+      # Return 503 Service Unavailable with JSON response if uWSGI fails to respond
+      error_page 504 =503 /json_503;
+      error_page 502 =503 /json_503; # Optional, in case uWSGI is completely down
+    }
+
+    location = /json_503 {
+      # Custom JSON response for 503 Service Unavailable
+      internal;
+      add_header Content-Type application/json;
+      # Check if X-Request-ID is set and include it in the response
+      if ($http_x_request_id) {
+        return 503 '{"status": "error", "message": "Service Unavailable", "code": 503, "request_id": "$http_x_request_id"}';
+      }
+      # If X-Request-ID is not set, just return the basic JSON response
+      return 503 '{"status": "error", "message": "Service Unavailable", "code": 503}';
     }
   }
 }
@@ -251,6 +280,7 @@ data:
     unixsocketperm 777
     port 0
     bind 127.0.0.1
+    timeout 300
   receptor_conf: |
     ---
     - log-level: {{ receptor_log_level }}
@@ -304,8 +334,8 @@ data:
     max-requests = 1000
     buffer-size = 32768
-    harakiri = {{ uwsgi_timeout|int }}
-    harakiri-graceful-timeout = {{ [(uwsgi_timeout|int - 2), 1] | max }}
+    harakiri = {{ uwsgi_timeout }}
+    harakiri-graceful-timeout = {{ uwsgi_timeout_grace_period }}
     harakiri-graceful-signal = 6
     py-call-osafterfork = true


@@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: '{{ ansible_operator_meta.name }}-postgres-extra-settings'
  namespace: '{{ ansible_operator_meta.namespace }}'
  labels:
    {{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}
data:
  99-overrides.conf: |
    {% for pg_setting in postgres_extra_settings %}
    {% if pg_setting.value is string %}
    {{ pg_setting.setting }} = '{{ pg_setting.value }}'
    {% else %}
    {{ pg_setting.setting }} = {{ pg_setting.value }}
    {% endif %}
    {% endfor %}
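Given the sample `postgres_extra_settings` from the CR diff earlier (max_connections and ssl_ciphers, both quoted strings in YAML), this template would render `99-overrides.conf` roughly as:

```
max_connections = '499'
ssl_ciphers = 'HIGH:!aNULL:!MD5'
```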


@@ -0,0 +1,19 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: '{{ ansible_operator_meta.name }}-proxy-env'
  namespace: '{{ ansible_operator_meta.namespace }}'
data:
{% if http_proxy %}
  HTTP_PROXY: '{{ http_proxy }}'
  http_proxy: '{{ http_proxy }}'
{% endif %}
{% if https_proxy %}
  HTTPS_PROXY: '{{ https_proxy }}'
  https_proxy: '{{ https_proxy }}'
{% endif %}
{% if no_proxy %}
  NO_PROXY: '{{ no_proxy }}'
  no_proxy: '{{ no_proxy }}'
{% endif %}
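As an illustration, for a CR named `awx-demo` in namespace `awx` (names hypothetical) with only `https_proxy` set, the template would render roughly:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: 'awx-demo-proxy-env'
  namespace: 'awx'
data:
  HTTPS_PROXY: 'http://proxy.example.com:3128'
  https_proxy: 'http://proxy.example.com:3128'
```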


@@ -2,7 +2,7 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: {{ ansible_operator_meta.name }}-redirect-page
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 data:
   redirect-page.html: |
     <!DOCTYPE html>
@@ -66,11 +66,11 @@ data:
     <p class="doc-note">
       The API endpoints for this platform service will temporarily remain available at the URL for this service.
       Please use the Ansible Automation Platform API endpoints corresponding to this component in the future.
-      These can be found at <a href="{{ public_base_url }}/api/{{ deployment_type }}" target="_blank">{{ public_base_url }}/api/{{ deployment_type }}</a>.
+      These can be found at <a href="{{ public_base_url }}/api/{{ deployment_type_shortname }}" target="_blank">{{ public_base_url }}/api/{{ deployment_type_shortname }}/</a>.
     </p>
     <!-- Include any additional scripts if needed -->
-    <script src="static/rest_framework/js/jquery-3.5.1.min.js"></script>
+    <script src="static/rest_framework/js/jquery-3.7.1.min.js"></script>
     <script src="static/rest_framework/js/bootstrap.min.js"></script>
   </body>
 </html>


@@ -48,6 +48,9 @@ spec:
 {{ task_annotations | indent(width=8) }}
 {% elif annotations %}
 {{ annotations | indent(width=8) }}
 {% endif %}
+{% if http_proxy or https_proxy or no_proxy %}
+        checksum-configmaps-proxy-env: "{{ lookup('template', 'configmaps/proxy-env.configmap.yaml.j2') | sha1 }}"
+{% endif %}
     spec:
       serviceAccountName: '{{ ansible_operator_meta.name }}'
@@ -84,7 +87,7 @@ spec:
             - -c
             - |
               mkdir -p /etc/pki/ca-trust/extracted/{java,pem,openssl,edk2}
-              update-ca-trust extract
+              update-ca-trust extract --output /etc/pki/ca-trust/extracted
           volumeMounts:
             - name: "ca-trust-extracted"
               mountPath: "/etc/pki/ca-trust/extracted"
@@ -351,6 +354,10 @@ spec:
 {% if task_extra_env -%}
 {{ task_extra_env | indent(width=12, first=True) }}
 {% endif %}
+          envFrom:
+            - configMapRef:
+                name: '{{ ansible_operator_meta.name }}-proxy-env'
+                optional: true
           resources: {{ task_resource_requirements }}
         - image: '{{ _control_plane_ee_image }}'
           name: '{{ ansible_operator_meta.name }}-ee'
@@ -414,6 +421,10 @@ spec:
 {% if ee_extra_env -%}
 {{ ee_extra_env | indent(width=12, first=True) }}
 {% endif %}
+          envFrom:
+            - configMapRef:
+                name: '{{ ansible_operator_meta.name }}-proxy-env'
+                optional: true
         - image: '{{ _image }}'
           name: '{{ ansible_operator_meta.name }}-rsyslog'
 {% if rsyslog_command %}
@@ -475,6 +486,10 @@ spec:
 {% if rsyslog_extra_env -%}
 {{ rsyslog_extra_env | indent(width=12, first=True) }}
 {% endif %}
+          envFrom:
+            - configMapRef:
+                name: '{{ ansible_operator_meta.name }}-proxy-env'
+                optional: true
 {% if task_node_selector %}
       nodeSelector:
 {{ task_node_selector | indent(width=8) }}
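The checksum annotation added above uses the standard trick for rolling pods when a ConfigMap changes: the annotation value is the sha1 of the rendered proxy-env ConfigMap, so any change to the proxy settings changes the pod template and forces a Deployment rollout. A minimal sketch of the idea, with a hypothetical render function standing in for the operator's `lookup('template', ...) | sha1` pipeline:

```python
import hashlib

# Hypothetical stand-in for lookup('template', 'configmaps/proxy-env.configmap.yaml.j2') | sha1:
# a different rendered ConfigMap body yields a different checksum annotation,
# which changes the pod template hash and triggers a rollout.
def proxy_env_checksum(http_proxy: str, https_proxy: str, no_proxy: str) -> str:
    rendered = f"http_proxy={http_proxy}\nhttps_proxy={https_proxy}\nno_proxy={no_proxy}\n"
    return hashlib.sha1(rendered.encode()).hexdigest()

before = proxy_env_checksum("http://proxy.example.com:3128", "", "localhost")
after = proxy_env_checksum("http://proxy.example.com:3128", "", "localhost,.svc")
print(before != after)  # True: changing no_proxy produces a new annotation value
```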


@@ -51,6 +51,9 @@ spec:
 {{ web_annotations | indent(width=8) }}
 {% elif annotations %}
 {{ annotations | indent(width=8) }}
 {% endif %}
+{% if http_proxy or https_proxy or no_proxy %}
+        checksum-configmaps-proxy-env: "{{ lookup('template', 'configmaps/proxy-env.configmap.yaml.j2') | sha1 }}"
+{% endif %}
     spec:
 {% if uwsgi_listen_queue_size is defined and uwsgi_listen_queue_size|int > 128 %}
@@ -93,7 +96,7 @@ spec:
             - -c
             - |
               mkdir -p /etc/pki/ca-trust/extracted/{java,pem,openssl,edk2}
-              update-ca-trust extract
+              update-ca-trust extract --output /etc/pki/ca-trust/extracted
           volumeMounts:
             - name: "ca-trust-extracted"
               mountPath: "/etc/pki/ca-trust/extracted"
@@ -202,7 +205,7 @@ spec:
           volumeMounts:
 {% if public_base_url is defined %}
             - name: redirect-page
-              mountPath: '/var/lib/awx/venv/awx/lib/python3.11/site-packages/awx/ui/build/index.html'
+              mountPath: '/var/lib/awx/venv/awx/lib/python3.12/site-packages/awx/ui/build/index.html'
               subPath: redirect-page.html
 {% endif %}
 {% if bundle_ca_crt %}
@@ -300,6 +303,10 @@ spec:
 {% if web_extra_env -%}
 {{ web_extra_env | indent(width=12, first=True) }}
 {% endif %}
+          envFrom:
+            - configMapRef:
+                name: '{{ ansible_operator_meta.name }}-proxy-env'
+                optional: true
           resources: {{ web_resource_requirements }}
         - image: '{{ _image }}'
           name: '{{ ansible_operator_meta.name }}-rsyslog'
@@ -349,6 +356,10 @@ spec:
 {% if rsyslog_extra_env -%}
 {{ rsyslog_extra_env | indent(width=12, first=True) }}
 {% endif %}
+          envFrom:
+            - configMapRef:
+                name: '{{ ansible_operator_meta.name }}-proxy-env'
+                optional: true
           resources: {{ rsyslog_resource_requirements }}
 {% if web_node_selector %}
       nodeSelector:


@@ -24,7 +24,7 @@ spec:
         - -c
         - |
           mkdir -p /etc/pki/ca-trust/extracted/{java,pem,openssl,edk2}
-          update-ca-trust extract
+          update-ca-trust extract --output /etc/pki/ca-trust/extracted
       volumeMounts:
         - name: "ca-trust-extracted"
           mountPath: "/etc/pki/ca-trust/extracted"


@@ -1,9 +1,13 @@
-AUTH_LDAP_GLOBAL_OPTIONS = {
 {% if ldap_cacert_ca_crt %}
+import ldap
+AUTH_LDAP_GLOBAL_OPTIONS = {
     ldap.OPT_X_TLS_REQUIRE_CERT: True,
     ldap.OPT_X_TLS_CACERTFILE: "/etc/openldap/certs/ldap-ca.crt"
-{% endif %}
 }
+{% else %}
+AUTH_LDAP_GLOBAL_OPTIONS = {}
+{% endif %}
 # Load LDAP BIND password from Kubernetes secret if define
 {% if ldap_password_secret -%}
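The fix above matters because the old template emitted the dict braces unconditionally while the `ldap` option keys (and the `import ldap` they require) lived inside the conditional. A sketch of the two outputs the corrected template should render (illustrative only, not the operator's actual rendering code):

```python
# Illustrative rendering of the corrected template: with a CA cert the settings
# import ldap and set the TLS options; without one they fall back to an empty
# AUTH_LDAP_GLOBAL_OPTIONS dict rather than referencing ldap without importing it.
def render_ldap_global_options(ldap_cacert_ca_crt: bool) -> str:
    if ldap_cacert_ca_crt:
        return (
            "import ldap\n"
            "AUTH_LDAP_GLOBAL_OPTIONS = {\n"
            "    ldap.OPT_X_TLS_REQUIRE_CERT: True,\n"
            '    ldap.OPT_X_TLS_CACERTFILE: "/etc/openldap/certs/ldap-ca.crt"\n'
            "}\n"
        )
    return "AUTH_LDAP_GLOBAL_OPTIONS = {}\n"

print(render_ldap_global_options(False).strip())  # AUTH_LDAP_GLOBAL_OPTIONS = {}
```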


@@ -34,6 +34,11 @@ spec:
         app.kubernetes.io/component: 'database'
         app.kubernetes.io/part-of: '{{ ansible_operator_meta.name }}'
         app.kubernetes.io/managed-by: '{{ deployment_type }}-operator'
+      annotations:
+{% if postgres_extra_settings | length > 0 %}
+        checksum-postgres_extra_settings: "{{ lookup('template', 'configmaps/postgres_extra_settings.yaml.j2') | sha1 }}"
+{% endif %}
+        checksum-secret-postgres_configuration_secret: "{{ lookup('ansible.builtin.vars', 'pg_config', default='')["resources"][0]["data"] | default('') | sha1 }}"
 {% if postgres_annotations %}
 {{ postgres_annotations | indent(width=8) }}
 {% endif %}
@@ -137,6 +142,11 @@ spec:
             - name: postgres-{{ supported_pg_version }}
               mountPath: '{{ _postgres_data_path | dirname }}'
               subPath: '{{ _postgres_data_path | dirname | basename }}'
+{% if postgres_extra_settings | length > 0 %}
+            - name: pg-overrides
+              mountPath: /opt/app-root/src/postgresql-cfg
+              readOnly: true
+{% endif %}
 {% if postgres_extra_volume_mounts %}
 {{ postgres_extra_volume_mounts | indent(width=12, first=True) }}
 {% endif %}
@@ -149,9 +159,19 @@ spec:
       tolerations:
 {{ postgres_tolerations | indent(width=8) }}
 {% endif %}
-{% if postgres_extra_volumes %}
+{% if (postgres_extra_volumes | length + postgres_extra_settings | length) > 0 %}
       volumes:
+{% if postgres_extra_volumes %}
 {{ postgres_extra_volumes | indent(width=8, first=False) }}
+{% endif %}
+{% if postgres_extra_settings | length > 0 %}
+        - name: pg-overrides
+          configMap:
+            name: '{{ ansible_operator_meta.name }}-postgres-extra-settings'
+            items:
+              - key: 99-overrides.conf
+                path: 99-overrides.conf
+{% endif %}
 {% endif %}
   volumeClaimTemplates:
     - metadata:
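The `pg-overrides` volume above mounts a generated `99-overrides.conf` into the image's config-include directory so extra settings are applied without rebuilding the pod spec. Assuming `postgres_extra_settings` is a list of `{setting, value}` entries (the key names here are an assumption for illustration), the rendered file would look roughly like:

```python
# Hedged sketch: assumes postgres_extra_settings entries carry "setting" and
# "value" keys, rendered into the 99-overrides.conf file that the StatefulSet
# mounts read-only at /opt/app-root/src/postgresql-cfg.
def render_overrides(settings: list) -> str:
    return "".join(f"{s['setting']} = {s['value']}\n" for s in settings)

conf = render_overrides([
    {"setting": "max_connections", "value": "500"},
    {"setting": "shared_buffers", "value": "'512MB'"},
])
print(conf, end="")
```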


@@ -3,7 +3,7 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: {{ _metrics_utility_pvc_claim }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
   ownerReferences: null
   labels:
     {{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}


@@ -6,6 +6,7 @@ ingress_api_version: 'networking.k8s.io/v1'
 ingress_annotations: ''
 ingress_class_name: ''
 ingress_controller: ''
+route_annotations: ''
 set_self_owneref: true


@@ -2,7 +2,7 @@ apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: {{ ansible_operator_meta.name }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 spec:
   selector:
     matchLabels:


@@ -6,7 +6,7 @@ apiVersion: '{{ ingress_api_version }}'
 kind: Ingress
 metadata:
   name: {{ ansible_operator_meta.name }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
   annotations:
 {% if ingress_annotations %}
 {{ ingress_annotations | indent(width=4) }}
@@ -41,7 +41,7 @@ apiVersion: '{{ ingress_api_version }}'
 kind: IngressRouteTCP
 metadata:
   name: {{ ansible_operator_meta.name }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
   annotations:
 {% if ingress_annotations %}
 {{ ingress_annotations | indent(width=4) }}
@@ -67,8 +67,11 @@ kind: Route
 metadata:
   annotations:
     openshift.io/host.generated: "true"
+{% if route_annotations %}
+{{ route_annotations | indent(width=4) }}
+{% endif %}
   name: {{ ansible_operator_meta.name }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 spec:
 {% if external_hostname is defined %}
   host: {{ external_hostname }}


@@ -3,7 +3,7 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: {{ ansible_operator_meta.name }}-receptor-config
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 data:
   receptor_conf: |
     ---


@@ -40,5 +40,8 @@ additional_labels: []
 # Maintain some of the recommended `app.kubernetes.io/*` labels on the resource (self)
 set_self_labels: true
+# If set to true, the restore process will drop and recreate the database schema before restoring
+force_drop_db: false
 spec_overrides: {}
 ...


@@ -5,7 +5,7 @@
     postgres_configuration_secret: "{{ spec['postgres_configuration_secret'] | default(postgres_configuration_secret) }}"

 - name: Check for specified PostgreSQL configuration
-  k8s_info:
+  kubernetes.core.k8s_info:
     kind: Secret
     namespace: '{{ ansible_operator_meta.namespace }}'
     name: '{{ postgres_configuration_secret }}'
@@ -29,7 +29,7 @@
 - block:
     - name: Get the postgres pod information
-      k8s_info:
+      kubernetes.core.k8s_info:
         kind: Pod
         namespace: '{{ ansible_operator_meta.namespace }}'
         label_selectors:
@@ -47,7 +47,7 @@
   when: awx_postgres_type == 'managed'

 - name: Check for presence of AWX Deployment
-  k8s_info:
+  kubernetes.core.k8s_info:
     api_version: apps/v1
     kind: Deployment
     name: "{{ deployment_name }}-task"
@@ -55,7 +55,7 @@
   register: this_deployment

 - name: Scale down Deployment for migration
-  k8s_scale:
+  kubernetes.core.k8s_scale:
     api_version: apps/v1
     kind: Deployment
     name: "{{ item }}"
@@ -67,21 +67,40 @@
     - "{{ deployment_name }}-web"
   when: this_deployment['resources'] | length

-- name: Set full resolvable host name for postgres pod
+- name: Set resolvable_db_host
   set_fact:
     resolvable_db_host: '{{ (awx_postgres_type == "managed") | ternary(awx_postgres_host + "." + ansible_operator_meta.namespace + ".svc." + cluster_name, awx_postgres_host) }}'  # yamllint disable-line rule:line-length
   no_log: "{{ no_log }}"

+- name: Set pg_isready command
+  ansible.builtin.set_fact:
+    pg_isready: >-
+      pg_isready
+      -h {{ resolvable_db_host }}
+      -p {{ awx_postgres_port }}
+  no_log: "{{ no_log }}"
+
 - name: Set pg_restore command
   set_fact:
     pg_restore: >-
-      pg_restore --clean --if-exists
+      pg_restore {{ force_drop_db | bool | ternary('', '--clean --if-exists') }} --no-owner --no-acl
       -U {{ awx_postgres_user }}
       -h {{ resolvable_db_host }}
       -d {{ awx_postgres_database }}
       -p {{ awx_postgres_port }}
   no_log: "{{ no_log }}"

+- name: Grant CREATEDB privilege to database user for force_drop_db
+  kubernetes.core.k8s_exec:
+    namespace: "{{ ansible_operator_meta.namespace }}"
+    pod: "{{ postgres_pod_name }}"
+    container: postgres
+    command: >-
+      psql -c "ALTER USER {{ awx_postgres_user }} CREATEDB;"
+  when:
+    - force_drop_db | bool
+    - awx_postgres_type == 'managed'
+
 - name: Force drop and create database if force_drop_db is true
   block:
     - name: Set drop db command
@@ -111,8 +130,8 @@
           {{ pg_create_db }}
   when: force_drop_db

-- name: Restore database dump to the new postgresql container
-  k8s_exec:
+- name: Restore Postgres database
+  kubernetes.core.k8s_exec:
     namespace: "{{ backup_pvc_namespace }}"
     pod: "{{ ansible_operator_meta.name }}-db-management"
     container: "{{ ansible_operator_meta.name }}-db-management"
@@ -126,6 +145,11 @@
         exit $rc
       }
       keepalive_file=\"$(mktemp)\"
+      until {{ pg_isready }} &> /dev/null
+      do
+        echo \"Waiting until Postgres is accepting connections...\"
+        sleep 2
+      done
       while [[ -f \"$keepalive_file\" ]]; do
         echo 'Migrating data from old database...'
         sleep 60
@@ -142,3 +166,14 @@
       "
   register: data_migration
   no_log: "{{ no_log }}"
+
+- name: Revoke CREATEDB privilege from database user
+  kubernetes.core.k8s_exec:
+    namespace: "{{ ansible_operator_meta.namespace }}"
+    pod: "{{ postgres_pod_name }}"
+    container: postgres
+    command: >-
+      psql -c "ALTER USER {{ awx_postgres_user }} NOCREATEDB;"
+  when:
+    - force_drop_db | bool
+    - awx_postgres_type == 'managed'
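The restore flow above now waits for Postgres with `pg_isready`, optionally grants `CREATEDB` so the database can be dropped and recreated, and then runs `pg_restore`, omitting `--clean --if-exists` when the database was just recreated. A sketch of how the `force_drop_db` ternary shapes the final command (user, host, and database names are placeholders):

```python
# Mirrors the Jinja ternary in the tasks above:
#   pg_restore {{ force_drop_db | bool | ternary('', '--clean --if-exists') }} --no-owner --no-acl ...
def build_pg_restore(user: str, host: str, db: str, port: int, force_drop_db: bool = False) -> str:
    # When the database was just dropped and recreated there is nothing to clean,
    # so the destructive --clean --if-exists flags are omitted.
    clean = "" if force_drop_db else "--clean --if-exists "
    return f"pg_restore {clean}--no-owner --no-acl -U {user} -h {host} -d {db} -p {port}"

print(build_pg_restore("awx", "awx-postgres-15", "awx", 5432))
print(build_pg_restore("awx", "awx-postgres-15", "awx", 5432, force_drop_db=True))
```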


@@ -3,15 +3,15 @@ apiVersion: v1
 kind: Event
 metadata:
   name: restore-error.{{ now }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 involvedObject:
   apiVersion: awx.ansible.com/v1beta1
   kind: {{ kind }}
   name: {{ ansible_operator_meta.name }}
-  namespace: {{ ansible_operator_meta.namespace }}
+  namespace: "{{ ansible_operator_meta.namespace }}"
 message: {{ error_msg }}
 reason: RestoreFailed
 type: Warning
-firstTimestamp: {{ now }}
-lastTimestamp: {{ now }}
+firstTimestamp: "{{ now }}"
+lastTimestamp: "{{ now }}"
 count: 1


@@ -3,7 +3,7 @@ apiVersion: v1
 kind: Pod
 metadata:
   name: {{ ansible_operator_meta.name }}-db-management
-  namespace: {{ backup_pvc_namespace }}
+  namespace: "{{ backup_pvc_namespace }}"
   labels:
     {{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}
 spec:


@@ -14,7 +14,4 @@ broadcast_websocket_secret: '{{ deployment_name }}-broadcast-websocket'
 postgres_configuration_secret: '{{ deployment_name }}-postgres-configuration'
 supported_pg_version: 15
 image_pull_policy: IfNotPresent
-# If set to true, the restore process will delete the existing database and create a new one
-force_drop_db: false
 pg_drop_create: ''

up.sh

@@ -1,134 +0,0 @@
#!/bin/bash
# AWX Operator up.sh
# Purpose:
# Build operator image from your local checkout, push to quay.io/youruser/awx-operator:dev, and deploy operator
# -- Usage
# NAMESPACE=awx TAG=dev QUAY_USER=developer ./up.sh
# -- User Variables
NAMESPACE=${NAMESPACE:-awx}
QUAY_USER=${QUAY_USER:-developer}
TAG=${TAG:-$(git rev-parse --short HEAD)}
DEV_TAG=${DEV_TAG:-dev}
DEV_TAG_PUSH=${DEV_TAG_PUSH:-true}
# -- Check for required variables
# Set the following environment variables
# export NAMESPACE=awx
# export QUAY_USER=developer
if [ -z "$QUAY_USER" ]; then
echo "Error: QUAY_USER env variable is not set."
echo " export QUAY_USER=developer"
exit 1
fi
if [ -z "$NAMESPACE" ]; then
echo "Error: NAMESPACE env variable is not set. Run the following with your namespace:"
echo " export NAMESPACE=developer"
exit 1
fi
# -- Container Build Engine (podman or docker)
ENGINE=${ENGINE:-podman}
# -- Variables
IMG=quay.io/$QUAY_USER/awx-operator
KUBE_APPLY="kubectl apply -n $NAMESPACE -f"
# -- Wait for existing project to be deleted
# Function to check if the namespace is in terminating state
is_namespace_terminating() {
kubectl get namespace $NAMESPACE 2>/dev/null | grep -q 'Terminating'
return $?
}
# Check if the namespace exists and is in terminating state
if kubectl get namespace $NAMESPACE 2>/dev/null; then
echo "Namespace $NAMESPACE exists."
if is_namespace_terminating; then
echo "Namespace $NAMESPACE is in terminating state. Waiting for it to be fully terminated..."
while is_namespace_terminating; do
sleep 5
done
echo "Namespace $NAMESPACE has been terminated."
fi
fi
# -- Create namespace
kubectl create namespace $NAMESPACE
# -- Prepare
# Set imagePullPolicy to Always
files=(
config/manager/manager.yaml
)
for file in "${files[@]}"; do
if grep -qF 'imagePullPolicy: IfNotPresent' ${file}; then
sed -i -e "s|imagePullPolicy: IfNotPresent|imagePullPolicy: Always|g" ${file};
fi
done
# Delete old operator deployment
kubectl delete deployment awx-operator-controller-manager
# Create secrets
$KUBE_APPLY dev/secrets/custom-secret-key.yml
$KUBE_APPLY dev/secrets/admin-password-secret.yml
# (Optional) Create external-pg-secret
# $KUBE_APPLY dev/secrets/external-pg-secret.yml
# -- Login to Quay.io
$ENGINE login quay.io
if [ $ENGINE = 'podman' ]; then
if [ -f "$XDG_RUNTIME_DIR/containers/auth.json" ] ; then
REGISTRY_AUTH_CONFIG=$XDG_RUNTIME_DIR/containers/auth.json
echo "Found registry auth config: $REGISTRY_AUTH_CONFIG"
elif [ -f $HOME/.config/containers/auth.json ] ; then
REGISTRY_AUTH_CONFIG=$HOME/.config/containers/auth.json
echo "Found registry auth config: $REGISTRY_AUTH_CONFIG"
elif [ -f "/home/$USER/.docker/config.json" ] ; then
REGISTRY_AUTH_CONFIG=/home/$USER/.docker/config.json
echo "Found registry auth config: $REGISTRY_AUTH_CONFIG"
else
echo "No Podman configuration files were found."
fi
fi
if [ $ENGINE = 'docker' ]; then
if [ -f "/home/$USER/.docker/config.json" ] ; then
REGISTRY_AUTH_CONFIG=/home/$USER/.docker/config.json
echo "Found registry auth config: $REGISTRY_AUTH_CONFIG"
else
echo "No Docker configuration files were found."
fi
fi
# -- Build & Push Operator Image
echo "Preparing to build $IMG:$TAG ($IMG:$DEV_TAG) with $ENGINE..."
sleep 3
make docker-build docker-push IMG=$IMG:$TAG
# Tag and Push DEV_TAG Image when DEV_TAG_PUSH is 'True'
if $DEV_TAG_PUSH ; then
$ENGINE tag $IMG:$TAG $IMG:$DEV_TAG
make docker-push IMG=$IMG:$DEV_TAG
fi
# -- Deploy Operator
make deploy IMG=$IMG:$TAG NAMESPACE=$NAMESPACE
# -- Create CR
# uncomment the CR you want to use
$KUBE_APPLY dev/awx-cr/awx-openshift-cr.yml
# $KUBE_APPLY dev/awx-cr/awx-cr-settings.yml
# $KUBE_APPLY dev/awx-cr/awx-k8s-ingress.yml
