Compare commits


47 Commits

Author SHA1 Message Date
Christian Adams
60fc7d856c Add use_db_compression option for backup database dumps (#2106)
* Add use_db_compression option for backup database dumps

Enable optional pg_dump compression (-Z 9) via use_db_compression
boolean flag. Restore auto-detects compressed (.db.gz) or
uncompressed (.db) backups for backward compatibility.

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude

* Add CRD field, CSV descriptor, and restore auto-detection for use_db_compression

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude
2026-03-24 20:03:44 +00:00
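The dump/restore behavior this commit describes can be sketched roughly as follows. This is an illustrative sketch only — the operator implements the logic in Ansible role templates, and the function names and file paths here are made up:

```shell
dump_cmd() {
  # use_db_compression=true enables pg_dump's built-in compression (-Z 9)
  if [ "$1" = "true" ]; then
    echo "pg_dump -Z 9 -f /backups/tower.db.gz"
  else
    echo "pg_dump -f /backups/tower.db"
  fi
}

restore_cmd() {
  # Restore auto-detects compressed (.db.gz) vs uncompressed (.db)
  # backups by extension, keeping older uncompressed backups restorable
  case "$1" in
    *.db.gz) echo "compressed" ;;
    *.db)    echo "uncompressed" ;;
  esac
}
```

The extension check is what provides the backward compatibility the commit message mentions: old backups simply take the uncompressed path.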
Lucas Benedito
5697feea57 Fix unquoted timestamps in backup/restore event templates (#2110)
Quote {{ now }} in firstTimestamp and lastTimestamp to prevent the
YAML parser from converting the value to a datetime object.

Assisted-by: Claude

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-03-23 14:11:54 -04:00
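The fix can be seen in miniature in an event-template snippet like the following (a sketch, not the operator's actual template):

```yaml
# Without quotes, the rendered timestamp is parsed by YAML as a
# datetime object; quoting keeps it a string, which is what the
# Kubernetes events API expects.
firstTimestamp: "{{ now }}"
lastTimestamp: "{{ now }}"
```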
aknochow
56f10cf966 Fix custom backup PVC name not used with create_backup_pvc (#2105)
Use backup_pvc for custom backup PVC name in templates

When backup_pvc is specified with create_backup_pvc: true, the PVC
template and ownerReference removal used the hardcoded default name
(deployment_name-backup-claim) instead of the user-specified name.
This caused the management pod to reference a PVC that didn't exist.

Replace backup_claim variable with backup_pvc throughout the backup
role so the resolved PVC name is used consistently in all templates.

Authored By: Adam Knochowski <aknochow@redhat.com>
Assisted By: Claude
2026-03-05 07:22:22 -05:00
Christian M. Adams
c996c88178 Fix config/testing overlay to use new metrics patch
The testing kustomization overlay still referenced the deleted
manager_auth_proxy_patch.yaml. Update to use manager_metrics_patch.yaml
and add metrics_service.yaml resource.

Ref: AAP-65254

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude
2026-03-04 13:45:50 -05:00
Christian M. Adams
5fb6bb7519 Upgrade operator-sdk to v1.40.0 and remove kube-rbac-proxy
Bump operator-sdk, ansible-operator, and OPM binaries to align with
the OCP 4.20 / AAP 2.7 target. Replace the deprecated kube-rbac-proxy
sidecar (removed in operator-sdk v1.38.0) with controller-runtime's
built-in WithAuthenticationAndAuthorization for metrics endpoint
protection.

Changes:
- Makefile: operator-sdk v1.36.1 → v1.40.0, OPM v1.26.0 → v1.55.0
- Dockerfile: ansible-operator base image v1.36.1 → v1.40.0
- Remove kube-rbac-proxy sidecar and auth_proxy_* RBAC manifests
- Add metrics_auth_role, metrics_reader, and metrics_service resources
- Add --metrics-secure, --metrics-require-rbac, --metrics-bind-address
  flags via JSON patch to serve metrics directly from the manager on
  port 8443 with TLS and RBAC authentication

Ref: AAP-65254

Authored By: Christian M. Adams <chadams@redhat.com>
Assisted By: Claude
2026-03-04 13:45:50 -05:00
Lucas Benedito
0b4b5dd7fd Fix AWXRestore multiple bugs
- Move force_drop_db from vars/main.yml to defaults/main.yml so CR spec
values are not overridden by Ansible variable precedence
- Grant CREATEDB priv to database user before DROP/CREATE and revoke
it after restore, following the containerized-installer pattern
- Omit --clean --if-exists from pg_restore when force_drop_db is true
since the database is freshly created and empty, avoiding partition
index dependency errors

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-02-27 14:05:13 -05:00
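The force_drop_db flag logic can be sketched as shell pseudologic (the flag name is from the commit; the actual command is assembled in the restore role's templates, and the database name here is illustrative):

```shell
restore_flags() {
  # When force_drop_db is true, the database was just dropped and
  # recreated empty, so pg_restore's --clean --if-exists is unnecessary
  # and can trigger partition index dependency errors.
  if [ "$1" = "true" ]; then
    echo "pg_restore -d awx"
  else
    echo "pg_restore -d awx --clean --if-exists"
  fi
}
```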
aknochow
d4b295e8b4 Add automatic backup PVC creation with create_backup_pvc option (#2097)
When users specify a custom backup_pvc name, the operator now
automatically creates the PVC instead of failing with
"does not exist, please create this pvc first."

Changes:
- Add create_backup_pvc variable (default: true) to backup defaults
- Update error condition to check create_backup_pvc before failing
- Update PVC creation condition to include create_backup_pvc
- Add create_backup_pvc field to AWXBackup CRD

Users who want the previous behavior can set create_backup_pvc: false.
2026-02-24 16:06:24 -05:00
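For example, a user who prefers the old fail-fast behavior might write a CR like this (the metadata name and PVC name are illustrative):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-demo
spec:
  deployment_name: awx
  backup_pvc: my-existing-claim
  # Fail instead of auto-creating the PVC if it does not exist
  create_backup_pvc: false
```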
Hao Liu
e0ce3ef71d [AAP-64061] Add nginx log markers for direct API access detection (#2100)
Add map directives for X-Trusted-Proxy and X-DAB-JW-TOKEN headers to
log the presence of these headers as trusted_proxy_present and
dab_jwt_present fields in the nginx access log.

These markers enable the detection tool (aap-detect-direct-component-access)
to identify direct API access that bypasses AAP Gateway.

Also add explicit error_log /dev/stderr warn; instead of relying on
container base image symlinks.

Part of ANSTRAT-1840: Remove direct API access to platform components.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-02-17 17:25:36 -05:00
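A minimal sketch of such map directives (the operator's actual nginx template may differ in values and defaults; nginx exposes the headers as `$http_x_trusted_proxy` and `$http_x_dab_jw_token`):

```nginx
# Log "1" when the header is present, "0" otherwise; the resulting
# variables can then be referenced in the access log format.
map $http_x_trusted_proxy $trusted_proxy_present {
    ""      "0";
    default "1";
}
map $http_x_dab_jw_token $dab_jwt_present {
    ""      "0";
    default "1";
}
error_log /dev/stderr warn;
```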
Christian Adams
fcf9a0840b Remove OperatorHub automation and documentation (#2101)
AWX Operator is no longer published to OperatorHub. Remove the
publish-operator-hub GHA workflow, the hack/publish-to-operator-hub.sh
script, the OperatorHub section from the release process docs, and the
OperatorHub-specific resource list from the debugging guide.

Author: Christian M. Adams
Assisted By: Claude
2026-02-16 22:52:04 +00:00
Christian Adams
f9c05a5698 ci: Update DOCKER_API_VERSION to 1.44 (#2102)
The Docker daemon on ubuntu-latest runners now requires minimum API
version 1.44, causing molecule kind tests to fail during cluster
teardown.

Author: Christian M. Adams
Assisted By: Claude
2026-02-16 17:27:07 -05:00
jamesmarshall24
bfc4d8e37f Add CRD validation for images and image version (#2096) 2026-02-12 13:46:24 -05:00
Dimitri Savineau
f04ab1878c web: Update python path for redirect page
The application container image now uses python3.12, so we need
to update the associated volume mount for the redirect page.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2026-01-27 19:25:51 -05:00
Dimitri Savineau
eeed2b8ae5 django: Add --no-imports option
With Django updated to 5.2, the django shell command loads model
imports at startup, which floods stdout with logs and breaks workflows.

https://docs.djangoproject.com/en/dev/releases/5.2/#automatic-models-import-in-the-shell

Adding --no-imports to the CLI call solves the issue.

https://docs.djangoproject.com/en/5.2/ref/django-admin/#cmdoption-shell-no-imports

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2026-01-19 13:36:08 -05:00
Lucas Benedito
a47b06f937 devel: Update development guide
- Update the development.md file
- Allow builds from macOS automatically
- Implement podman-buildx

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2026-01-15 15:17:24 -05:00
Alan Rominger
605b46d83c Collect logs with greater determination (#2087) 2025-11-04 13:08:35 -05:00
Rebeccah Hunter
7ead166ca0 set client_request_timeout from annotation in the CR (#2077)
Add the ability to override the default client_request_timeout value via an annotation on the AWX CR.

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
2025-10-15 18:13:12 -04:00
Christian M. Adams
c5533f47c1 Use --no-acl flag when restoring to exclude GRANT and REVOKE commands
This avoids running into the following error when pg_restore is run as
  the application db user from the db-management pod:

  pg_restore: error: could not execute query: ERROR: must be member of role postgres
  Command was: ALTER SCHEMA public OWNER TO postgres;
2025-10-15 13:54:21 -04:00
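The effect of the flag can be sketched as the restore invocation below (database and file names are illustrative; the real command lives in the restore role):

```shell
restore_as_app_user() {
  # --no-acl excludes GRANT and REVOKE commands from the restore, so
  # the non-superuser application DB user is not asked to run
  # statements that require membership in the postgres role.
  echo "pg_restore --no-acl -d awx /backups/tower.db"
}
```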
lucas-benedito
78864b3653 fix: Correct the image_version conditional (#2082)
* fix: Correct the image_version conditional

When image is set and image_version is unset, the conditional fails
with an error because of the unset variable.
Implemented the correct conditional and added an assert to validate that
both variables are set properly when image is set.

Signed-off-by: Lucas Benedito <lbenedit@redhat.com>
2025-10-09 18:34:50 +01:00
Sharvesh
bed4aff4cc Fix: Redis ERR max number of clients reached (#2041)
Add timeout to Redis Config

Co-authored-by: Christian Adams <chadams@redhat.com>
2025-09-10 09:44:30 -04:00
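Conceptually, the fix amounts to a Redis setting like the following (the value is illustrative — the commit only states that a timeout was added to the Redis config):

```conf
# Close idle client connections after N seconds so leaked connections
# do not accumulate and exhaust maxclients
timeout 300
```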
jamesmarshall24
e0a8a88243 Add postgres_extra_settings (#2071)
* Add hacking/ directory to .gitignore as it is commonly used for dev scripts
* Add postgres_extra_settings
* Add postgres_configuration_secret checksum to DB statefulset
* Docs for postgres_extra_settings, CI coverage, and examples
---------
Co-authored-by: Christian M. Adams <chadams@redhat.com>
2025-09-03 12:36:34 -04:00
Christian Adams
1c3c5d430d Guard against missing version status on existing CR (#2076) 2025-08-27 16:53:01 -04:00
Joel
6e47dc62c2 Fix installer update-ca-trust command (#1985)
The latest release of update-ca-trust requires the --output param
if you run it as a non-root user.

See: 81a090f89a
And: https://github.com/ansible/awx-ee/issues/258#issuecomment-2439742296

Fixes: https://github.com/ansible/awx-ee/issues/258
2025-08-25 14:38:18 +02:00
Christian Adams
2e9615aa1e Add configurable pull secret file support to up.sh (#2073)
- Applies a pull-secret yaml file if it exists at hacking/pull-secret.yml
- The operator will look for a pull secret called
  redhat-operators-pull-secret
- This makes it possible to use a private operator image on your quay.io
  registry out of the box with the up.sh
- Add PULL_SECRET_FILE environment variable with default hacking/pull-secret.yml
2025-08-19 11:50:19 -04:00
lucas-benedito
e2aef8330e Update the default crd example for the up.sh (#2061) 2025-08-13 17:09:31 -04:00
Ricardo Carrillo Cruz
883baeb16b Revert "Run import_auth_config_to_gateway when public_url is defined … (#2068)
Revert "Run import_auth_config_to_gateway when public_url is defined (#2066)"

This reverts commit ba1bb878f1.
2025-07-31 12:59:43 -04:00
Dimitri Savineau
ba1bb878f1 Run import_auth_config_to_gateway when public_url is defined (#2066)
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Co-authored-by: Ricardo Carrillo Cruz <ricarril@redhat.com>
2025-07-30 23:23:49 -04:00
aknochow
45ce8185df Reverting #2064 and Updating descriptions in backup and restore roles (#2060)
* updating task descriptions in backup and restore roles

* Revert "Run import_auth_config_to_gateway when public_url is defined (#2064)"

This reverts commit 54293a0efb.
2025-07-29 23:21:38 +00:00
lucas-benedito
a55829e5d5 Fixes for passwords for FIPS compliance (#2062)
Set password_encryption to scram-sha-256 and re-encrypt db user passwords for FIPS compliance

(cherry picked from commit 0e76404357a77a5f773aee6e2b3a5b85d1f514b7)

Co-authored-by: Christian M. Adams <chadams@redhat.com>
2025-07-28 18:52:59 +01:00
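Conceptually, the change corresponds to SQL like this (shown for illustration — the role performs the equivalent via Ansible tasks, and the user name is illustrative):

```sql
-- Switch the server's password hashing to a FIPS-allowed algorithm
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
SELECT pg_reload_conf();
-- Re-setting an md5-hashed password re-encrypts it with scram-sha-256
ALTER USER awx WITH PASSWORD 'changeme';
```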
Ricardo Carrillo Cruz
54293a0efb Run import_auth_config_to_gateway when public_url is defined (#2064) 2025-07-24 10:25:07 +02:00
Rebeccah Hunter
e506466d08 set api timeout to match proxy timeout (#2056)
feat: set api timeout to match proxy timeout

Timeout is set to match the time before the OpenShift route times out.
Not timing out first undercuts the usefulness of our
log-traceback-middleware in django-ansible-base, which logs a traceback
from timed-out requests, because uwsgi or gunicorn has to send the
timeout signal to the worker handling the request. It also leads to
issues where requests that envoy has already timed out fill up the
queues of the components' workers.

Also, configure nginx to return a 503 if WSGI server doesn't respond.

Co-authored-by: Elijah DeLee <kdelee@redhat.com>
2025-07-03 20:19:50 +00:00
Albert Daunis
e9750b489e Update migrate_schema to use check_migrations (#2025)
Update migrate schema showmigrations conditional
2025-06-25 15:59:23 -04:00
Christian Adams
0a89fc87a6 Update kubernetes.core to 3.2.0 and sdk to v1.36.1 (#2052)
* Update collections to match the other ansible operators

* Update the ansible-operator base image to v1.36.1
2025-06-18 18:12:58 -04:00
Dimitri Savineau
65a82f706c Fix jquery version in redirect page
The other installer uses 3.7.1, and the file on disk also uses 3.7.1
from the rest_framework directory.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2025-06-04 12:17:08 -04:00
Sharvari Khedkar
e8f0306ec2 Add route_annotations feature to mesh ingress CRD (#2045)
* Add route_annotations feature to mesh ingress CRD
* update route_annotations type to string
* display Route Annotations only when ingress_type=route
2025-05-12 18:07:21 -04:00
Bruno Rocha
f1660c8bd1 Address review comments 2025-05-09 15:08:17 -04:00
Bruno Rocha
f967c7d341 fix: explicitly import ldap on config file
File "/etc/tower/conf.d/ldap.py", line 2, in <module>
ldap.OPT_X_TLS_REQUIRE_CERT: True,
^^^^
NameError: name 'ldap' is not defined
2025-05-09 15:08:17 -04:00
aknochow
54072d6a46 fixing backup pvc namespace quotes (#2042) 2025-04-28 08:14:50 -04:00
Christian Adams
fb13011aad Check if pg_isready before trying to restore to new postgresql pod (#2039) 2025-04-24 17:08:50 -04:00
Ricardo Carrillo Cruz
24cb6006f6 Grant postgres to awx user on migrate_data (#2038)
This is needed in case customers move to
the operator platform.

Fixes https://issues.redhat.com/browse/AAP-41592
2025-04-24 09:58:48 +02:00
Christian Adams
4c05137fb8 Update kubernetes.core to 2.4.2 to fix k8s_cp module usage against OCP with virt (#2031) 2025-03-17 12:12:32 -04:00
aknochow
07540c29da fixing quotes on namespace to support namespace names with only numbers (#2030) 2025-03-17 09:19:02 -04:00
jamesmarshall24
5bb2b2ac87 Add deployment type shortname for legacy API url (#2026)
* Add deployment type shortname for legacy API url

* Add trailing slash to legacy API url

Co-authored-by: Christian Adams <rooftopcellist@gmail.com>

---------

Co-authored-by: Christian Adams <rooftopcellist@gmail.com>
2025-03-05 15:04:01 -05:00
shellclear
039157d070 Parameterization of the client_max_body_size directive in Nginx (#2014)
Enables users to customize client_max_body_size in the Nginx conf to allow
larger file uploads. This is useful when users need to upload
large subscription manifest files.

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2025-02-24 12:50:08 -05:00
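Per the CRD change included in this compare, the value is an integer number of megabytes on the AWX spec, e.g.:

```yaml
spec:
  # Maximum client request body size in megabytes (defaults to 5M),
  # rendered into nginx's client_max_body_size directive
  nginx_client_max_body_size: 100
```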
Christian Adams
bb4f4c2eb4 Fail early if postgres_configuration_secret is specified but does not exist (#2015) 2025-02-17 12:38:06 -05:00
Christian Adams
97efcab2a2 Accepts new status conditions from the operator on the CR object (#2016) 2025-02-17 12:36:43 -05:00
aknochow
c08c1027a1 idle_deployment - Scale down deployments to put AWX into an idle state (#2012)
- Separating database_configuration and deployment tasks into separate files to add the ability to call configuration independently
2025-02-11 11:01:18 -05:00
Yuval Lahav
3d1ecc19f4 AAP-38745 Increase limits in manager.py (#2006)
* AAP-38745 Increase limits in manager.py

Closes https://issues.redhat.com/browse/AAP-38745

* Update manager.yaml
2025-01-20 11:32:49 -05:00
72 changed files with 780 additions and 543 deletions

View File

@@ -16,7 +16,7 @@ jobs:
- --skip-tags=replicas
- -t replicas
env:
-DOCKER_API_VERSION: "1.41"
+DOCKER_API_VERSION: "1.44"
DEBUG_OUTPUT_DIR: /tmp/awx_operator_molecule_test
steps:
- uses: actions/checkout@v4

View File

@@ -1,86 +0,0 @@
name: Publish AWX Operator on operator-hub
on:
release:
types: [published]
workflow_dispatch:
inputs:
tag_name:
description: 'Name for the tag of the release.'
required: true
operator_hub_fork:
description: 'Fork of operator-hub where the PR will be created from. default: awx-auto'
required: true
default: 'awx-auto'
image_registry:
description: 'Image registry where the image is published to. default: quay.io'
required: true
default: 'quay.io'
image_registry_organization:
description: 'Image registry organization where the image is published to. default: ansible'
required: true
default: 'ansible'
community_operator_github_org:
description: 'Github organization for community-opeartor project. default: k8s-operatorhub'
required: true
default: 'k8s-operatorhub'
community_operator_prod_github_org:
description: 'GitHub organization for community-operator-prod project. default: redhat-openshift-ecosystem'
required: true
default: 'redhat-openshift-ecosystem'
jobs:
promote:
runs-on: ubuntu-latest
steps:
- name: Set GITHUB_ENV from workflow_dispatch event
if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
echo "VERSION=${{ github.event.inputs.tag_name }}" >> $GITHUB_ENV
echo "IMAGE_REGISTRY=${{ github.event.inputs.image_registry }}" >> $GITHUB_ENV
echo "IMAGE_REGISTRY_ORGANIZATION=${{ github.event.inputs.image_registry_organization }}" >> $GITHUB_ENV
echo "COMMUNITY_OPERATOR_GITHUB_ORG=${{ github.event.inputs.community_operator_github_org }}" >> $GITHUB_ENV
echo "COMMUNITY_OPERATOR_PROD_GITHUB_ORG=${{ github.event.inputs.community_operator_prod_github_org }}" >> $GITHUB_ENV
- name: Set GITHUB_ENV for release event
if: ${{ github.event_name == 'release' }}
run: |
echo "VERSION=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
echo "IMAGE_REGISTRY=quay.io" >> $GITHUB_ENV
echo "IMAGE_REGISTRY_ORGANIZATION=ansible" >> $GITHUB_ENV
echo "COMMUNITY_OPERATOR_GITHUB_ORG=k8s-operatorhub" >> $GITHUB_ENV
echo "COMMUNITY_OPERATOR_PROD_GITHUB_ORG=redhat-openshift-ecosystem" >> $GITHUB_ENV
- name: Log in to image registry
run: |
echo ${{ secrets.QUAY_TOKEN }} | docker login ${{ env.IMAGE_REGISTRY }} -u ${{ secrets.QUAY_USER }} --password-stdin
- name: Checkout awx-operator at workflow branch
uses: actions/checkout@v4
with:
path: awx-operator
- name: Checkout awx-opearator at ${{ env.VERSION }}
uses: actions/checkout@v4
with:
fetch-tags: true
ref: ${{ env.VERSION }}
path: awx-operator-${{ env.VERSION }}
fetch-depth: 0 # fetch all history so that git describe works
- name: Copy scripts to awx-operator-${{ env.VERSION }}
run: |
cp -f \
awx-operator/hack/publish-to-operator-hub.sh \
awx-operator-${{ env.VERSION }}/hack/publish-to-operator-hub.sh
cp -f \
awx-operator/Makefile \
awx-operator-${{ env.VERSION }}/Makefile
- name: Build and publish bundle to operator-hub
working-directory: awx-operator-${{ env.VERSION }}
env:
IMG_REPOSITORY: ${{ env.IMAGE_REGISTRY }}/${{ env.IMAGE_REGISTRY_ORGANIZATION }}
GITHUB_TOKEN: ${{ secrets.AWX_AUTO_GITHUB_TOKEN }}
run: |
git config --global user.email "awx-automation@redhat.com"
git config --global user.name "AWX Automation"
./hack/publish-to-operator-hub.sh

View File

@@ -18,7 +18,7 @@ jobs:
- name: Check out repo
uses: actions/checkout@v4
- name: Setup nox
-uses: wntrblm/nox@2024.10.09
+uses: wntrblm/nox@2024.04.15
with:
python-versions: "${{ matrix.python-versions }}"
- name: "Run nox -s ${{ matrix.session }}"

.gitignore
View File

@@ -11,3 +11,4 @@ gh-pages/
__pycache__
/site
venv/*
hacking/

View File

@@ -1,8 +1,8 @@
-FROM quay.io/operator-framework/ansible-operator:v1.34.2
+FROM quay.io/operator-framework/ansible-operator:v1.40.0
USER root
-RUN dnf update --security --bugfix -y && \
-    dnf install -y openssl
+RUN dnf update --security --bugfix -y --disableplugin=subscription-manager && \
+    dnf install -y --disableplugin=subscription-manager openssl
USER 1001

View File

@@ -105,6 +105,10 @@ docker-buildx: ## Build and push docker image for the manager for cross-platform
- docker buildx build --push $(BUILD_ARGS) --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile .
- docker buildx rm project-v3-builder
.PHONY: podman-buildx
podman-buildx: ## Build and push podman image for the manager for cross-platform support
podman build --platform=$(PLATFORMS) $(BUILD_ARGS) --manifest ${IMG} -f Dockerfile .
podman manifest push --all ${IMG} ${IMG}
##@ Deployment
@@ -161,7 +165,7 @@ ifeq (,$(shell which operator-sdk 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(OPERATOR_SDK)) ;\
-curl -sSLo $(OPERATOR_SDK) https://github.com/operator-framework/operator-sdk/releases/download/v1.34.2/operator-sdk_$(OS)_$(ARCHA) ;\
+curl -sSLo $(OPERATOR_SDK) https://github.com/operator-framework/operator-sdk/releases/download/v1.40.0/operator-sdk_$(OS)_$(ARCHA) ;\
chmod +x $(OPERATOR_SDK) ;\
}
else
@@ -177,7 +181,7 @@ ifeq (,$(shell which ansible-operator 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(ANSIBLE_OPERATOR)) ;\
-curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/ansible-operator-plugins/releases/download/v1.34.0/ansible-operator_$(OS)_$(ARCHA) ;\
+curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/ansible-operator-plugins/releases/download/v1.40.0/ansible-operator_$(OS)_$(ARCHA) ;\
chmod +x $(ANSIBLE_OPERATOR) ;\
}
else
@@ -208,7 +212,7 @@ ifeq (,$(shell which opm 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(OPM)) ;\
-curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.26.0/$(OS)-$(ARCHA)-opm ;\
+curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.55.0/$(OS)-$(ARCHA)-opm ;\
chmod +x $(OPM) ;\
}
else

View File

@@ -37,6 +37,9 @@ spec:
metadata:
type: object
spec:
x-kubernetes-validations:
- rule: "has(self.postgres_image) && has(self.postgres_image_version) || !has(self.postgres_image) && !has(self.postgres_image_version)"
message: "Both postgres_image and postgres_image_version must be set when required"
type: object
x-kubernetes-preserve-unknown-fields: true
required:
@@ -48,6 +51,10 @@ spec:
backup_pvc:
description: Name of the backup PVC
type: string
create_backup_pvc:
description: If true (default), automatically create the backup PVC if it does not exist
type: boolean
default: true
backup_pvc_namespace:
description: (Deprecated) Namespace the PVC is in
type: string
@@ -81,6 +88,10 @@ spec:
pg_dump_suffix:
description: Additional parameters for the pg_dump command
type: string
use_db_compression:
description: Enable compression for database dumps using pg_dump built-in compression.
type: boolean
default: true
postgres_label_selector:
description: Label selector used to identify postgres pod for backing up data
type: string

View File

@@ -69,6 +69,9 @@ spec:
ingress_annotations:
description: Annotations to add to the Ingress Controller
type: string
route_annotations:
description: Annotations to add to the OpenShift Route
type: string
ingress_class_name:
description: The name of ingress class to use instead of the cluster default.
type: string

View File

@@ -37,6 +37,9 @@ spec:
metadata:
type: object
spec:
x-kubernetes-validations:
- rule: "has(self.postgres_image) && has(self.postgres_image_version) || !has(self.postgres_image) && !has(self.postgres_image_version)"
message: "Both postgres_image and postgres_image_version must be set when required"
type: object
x-kubernetes-preserve-unknown-fields: true
required:

View File

@@ -36,6 +36,17 @@ spec:
metadata:
type: object
spec:
x-kubernetes-validations:
- rule: "has(self.image) && has(self.image_version) || !has(self.image) && !has(self.image_version)"
message: "Both image and image_version must be set when required"
- rule: "has(self.redis_image) && has(self.redis_image_version) || !has(self.redis_image) && !has(self.redis_image_version)"
message: "Both redis_image and redis_image_version must be set when required"
- rule: "has(self.postgres_image) && has(self.postgres_image_version) || !has(self.postgres_image) && !has(self.postgres_image_version)"
message: "Both postgres_image and postgres_image_version must be set when required"
- rule: >-
has(self.metrics_utility_image) && has(self.metrics_utility_image_version) ||
!has(self.metrics_utility_image) && !has(self.metrics_utility_image_version)
message: "Both metrics_utility_image and metrics_utility_image_version must be set when required"
properties:
deployment_type:
description: Name of the deployment type
@@ -1736,6 +1747,9 @@ spec:
nginx_worker_connections:
description: Set the number of connections per worker for nginx
type: integer
nginx_client_max_body_size:
description: Sets the maximum allowed size of the client request body in megabytes (defaults to 5M)
type: integer
nginx_worker_cpu_affinity:
description: Set the CPU affinity for nginx workers
type: string
@@ -1825,9 +1839,25 @@ spec:
description: Assign a preexisting priority class to the postgres pod
type: string
postgres_extra_args:
description: "(Deprecated, use postgres_extra_settings parameter) Define postgres configuration arguments to use"
type: array
items:
type: string
postgres_extra_settings:
description: "PostgreSQL configuration settings to be added to postgresql.conf"
type: array
items:
type: object
properties:
setting:
description: "PostgreSQL configuration parameter name"
type: string
value:
description: "PostgreSQL configuration parameter value"
type: string
required:
- setting
- value
postgres_data_volume_init:
description: Sets permissions on the /var/lib/pgdata/data for postgres container using an init container (not Openshift)
type: boolean
@@ -1965,6 +1995,9 @@ spec:
description: Disable web container's nginx ipv6 listener
type: boolean
default: false
idle_deployment:
description: Scale down deployments to put AWX into an idle state
type: boolean
metrics_utility_enabled:
description: Enable metrics utility
type: boolean
@@ -2014,6 +2047,7 @@ spec:
type: string
type: object
status:
x-kubernetes-preserve-unknown-fields: true
properties:
URL:
description: URL to access the deployed instance
@@ -2062,5 +2096,6 @@ spec:
type: string
type: object
type: array
x-kubernetes-preserve-unknown-fields: true
type: object
type: object

View File

@@ -20,11 +20,11 @@ resources:
- ../manager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
- metrics_service.yaml
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
-  - path: manager_auth_proxy_patch.yaml
+  - path: manager_metrics_patch.yaml
target:
kind: Deployment

View File

@@ -1,40 +0,0 @@
# This patch inject a sidecar container which is a HTTP proxy for the
# controller manager, it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: kube-rbac-proxy
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- "ALL"
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=0"
ports:
- containerPort: 8443
protocol: TCP
name: https
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 5m
memory: 64Mi
- name: awx-manager
args:
- "--health-probe-bind-address=:6789"
- "--metrics-bind-address=127.0.0.1:8080"
- "--leader-elect"
- "--leader-election-id=awx-operator"

View File

@@ -0,0 +1,12 @@
# This patch adds the args to allow exposing the metrics endpoint using HTTPS
- op: add
path: /spec/template/spec/containers/0/args/0
value: --metrics-bind-address=:8443
# This patch adds the args to allow securing the metrics endpoint
- op: add
path: /spec/template/spec/containers/0/args/0
value: --metrics-secure
# This patch adds the args to allow RBAC-based authn/authz for the metrics endpoint
- op: add
path: /spec/template/spec/containers/0/args/0
value: --metrics-require-rbac

View File

@@ -3,6 +3,8 @@ kind: Service
metadata:
labels:
control-plane: controller-manager
app.kubernetes.io/name: awx-operator
app.kubernetes.io/managed-by: kustomize
name: controller-manager-metrics-service
namespace: system
spec:
@@ -10,6 +12,7 @@ spec:
- name: https
port: 8443
protocol: TCP
-    targetPort: https
+    targetPort: 8443
selector:
control-plane: controller-manager
app.kubernetes.io/name: awx-operator

View File

@@ -38,6 +38,7 @@ spec:
- args:
- --leader-elect
- --leader-election-id=awx-operator
- --health-probe-bind-address=:6789
image: controller:latest
imagePullPolicy: IfNotPresent
name: awx-manager
@@ -73,8 +74,8 @@ spec:
memory: "32Mi"
cpu: "50m"
limits:
-    memory: "960Mi"
-    cpu: "1500m"
+    memory: "4000Mi"
+    cpu: "2000m"
serviceAccountName: controller-manager
imagePullSecrets:
- name: redhat-operators-pull-secret

View File

@@ -50,6 +50,12 @@ spec:
path: ingress_annotations
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:text
- displayName: Route Annotations
path: route_annotations
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:advanced'
- 'urn:alm:descriptor:com.tectonic.ui:text'
- 'urn:alm:descriptor:com.tectonic.ui:fieldDependency:ingress_type:Route'
- displayName: Ingress Class Name
path: ingress_class_name
x-descriptors:
@@ -169,6 +175,12 @@ spec:
path: additional_labels
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:advanced
- description: Enable compression for database dumps using pg_dump built-in compression
displayName: Use DB Compression
path: use_db_compression
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:booleanSwitch
- displayName: Node Selector for backup management pod
path: db_management_pod_node_selector
x-descriptors:
@@ -584,6 +596,11 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:number
- urn:alm:descriptor:com.tectonic.ui:hidden
- displayName: Set the maximum allowed size of the client request body in megabytes for nginx
path: nginx_client_max_body_size
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:number
- displayName: Task Replicas
path: task_replicas
x-descriptors:
@@ -686,11 +703,16 @@ spec:
x-descriptors:
- urn:alm:descriptor:io.kubernetes:StorageClass
- urn:alm:descriptor:com.tectonic.ui:advanced
-  - displayName: Postgres Extra Arguments
+  - displayName: Postgres Extra Arguments (Deprecated)
path: postgres_extra_args
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:hidden
- displayName: Postgres Extra Settings
path: postgres_extra_settings
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:hidden
- description: Specify extra volumes to add to the postgres pod
displayName: Postgres Extra Volumes
path: postgres_extra_volumes
@@ -1155,6 +1177,13 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:booleanSwitch
- urn:alm:descriptor:com.tectonic.ui:fieldDependency:metrics_utility_enabled:true
- description: Scale down deployments to put AWX into an idle state
displayName: Idle AWX
path: idle_deployment
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:advanced
- urn:alm:descriptor:com.tectonic.ui:booleanSwitch
- urn:alm:descriptor:com.tectonic.ui:hidden
statusDescriptors:
- description: Route to access the instance deployed
displayName: URL

View File

@@ -9,10 +9,6 @@ resources:
- role_binding.yaml
- leader_election_role.yaml
- leader_election_role_binding.yaml
# Comment the following 4 lines if you want to disable
# the auth proxy (https://github.com/brancz/kube-rbac-proxy)
# which protects your /metrics endpoint.
- auth_proxy_service.yaml
- auth_proxy_role.yaml
- auth_proxy_role_binding.yaml
- auth_proxy_client_clusterrole.yaml
- metrics_auth_role.yaml
- metrics_auth_role_binding.yaml
- metrics_reader_role.yaml

View File

@@ -1,7 +1,7 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
-  name: proxy-role
+  name: metrics-auth-role
rules:
- apiGroups:
- authentication.k8s.io

View File

@@ -1,11 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
-  name: proxy-rolebinding
+  name: metrics-auth-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
-  name: proxy-role
+  name: metrics-auth-role
subjects:
- kind: ServiceAccount
name: controller-manager

View File

@@ -14,10 +14,13 @@ resources:
- ../crd
- ../rbac
- ../manager
- ../default/metrics_service.yaml
images:
- name: testing
newName: testing-operator
patches:
- path: manager_image.yaml
- path: debug_logs_patch.yaml
-  - path: ../default/manager_auth_proxy_patch.yaml
+  - path: ../default/manager_metrics_patch.yaml
target:
kind: Deployment

View File

@@ -0,0 +1,30 @@
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
name: awx
spec:
service_type: clusterip
ingress_type: Route
postgres_extra_settings:
- setting: max_connections
value: "999"
- setting: ssl_ciphers
value: "HIGH:!aNULL:!MD5"
# requires custom-postgres-configuration secret to be pre-created
# postgres_configuration_secret: custom-postgres-configuration
postgres_resource_requirements:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 800m
memory: 1Gi
postgres_storage_requirements:
requests:
storage: 20Gi
limits:
storage: 100Gi

View File

@@ -7,7 +7,7 @@ spec:
service_type: clusterip
ingress_type: Route
# Secrets
admin_password_secret: custom-admin-password
postgres_configuration_secret: custom-pg-configuration
secret_key_secret: custom-secret-key
# # Secrets
# admin_password_secret: custom-admin-password
# postgres_configuration_secret: custom-pg-configuration
# secret_key_secret: custom-secret-key

View File

@@ -8,20 +8,3 @@ After the draft release is created, publish it and the [Promote AWX Operator ima
- Publish image to Quay
- Release Helm chart
After the GHA is complete, the final step is to run the [publish-to-operator-hub.sh](https://github.com/ansible/awx-operator/blob/devel/hack/publish-to-operator-hub.sh) script, which will create a PR in the following repos to add the new awx-operator bundle version to OperatorHub:
- <https://github.com/k8s-operatorhub/community-operators> (community operator index)
- <https://github.com/redhat-openshift-ecosystem/community-operators-prod> (operator index shipped with Openshift)
!!! note
The usage is documented in the script itself, but here is an example of how you would use the script to publish the 2.5.3 awx-operator bundle to OperatorHub.
Note that you need to specify the version being released, as well as the previous version. This is because the bundle has a pointer to the previous version that it is being upgraded from. This is used by OLM to create a dependency graph.
```bash
VERSION=2.5.3 PREV_VERSION=2.5.2 ./hack/publish-to-operator-hub.sh
```
There are some quirks with running this on OS X that still need to be fixed, but the script runs smoothly on Linux.
As soon as CI completes successfully, the PRs will be auto-merged. Please remember to monitor those PRs to make sure that CI passes; sometimes it needs a retry.

View File

@@ -1,8 +1,54 @@
# Development Guide
There are development scripts and yaml exaples in the [`dev/`](../dev) directory that, along with the up.sh and down.sh scripts in the root of the repo, can be used to build, deploy and test changes made to the awx-operator.
There are development scripts and yaml examples in the [`dev/`](../dev) directory that, along with the up.sh and down.sh scripts in the root of the repo, can be used to build, deploy and test changes made to the awx-operator.
## Prerequisites
You will need to have the following tools installed:
* [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* [podman](https://podman.io/docs/installation) or [docker](https://docs.docker.com/get-docker/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [oc](https://docs.openshift.com/container-platform/4.11/cli_reference/openshift_cli/getting-started-cli.html) (if using Openshift)
You will also need to have a container registry account. This guide uses quay.io, but any container registry will work. You will need to create a robot account and log in at the CLI with `podman login` or `docker login`.
## Quay.io Setup for Development
Before using the development scripts, you'll need to set up a Quay.io repository and pull secret:
### 1. Create a Private Quay.io Repository
- Go to [quay.io](https://quay.io) and create a private repository named `awx-operator` under your username
- The repository URL should be `quay.io/username/awx-operator`
### 2. Create a Bot Account
- In your Quay.io repository, go to Settings → Robot Accounts
- Create a new robot account with write permissions to your repository
- Click on the robot account name to view its credentials
### 3. Generate Kubernetes Pull Secret
- In the robot account details, click "Kubernetes Secret"
- Copy the generated YAML content from the pop-up
### 4. Create Local Pull Secret File
- Create a file at `hacking/pull-secret.yml` in your awx-operator checkout
- Paste the Kubernetes secret YAML content into this file
- **Important**: Change the `name` field in the secret from the default to `redhat-operators-pull-secret`
- The `hacking/` directory is in `.gitignore`, so this file won't be committed to git
Example `hacking/pull-secret.yml`:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: redhat-operators-pull-secret # Change this name
namespace: awx
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: <base64-encoded-credentials>
```
## Build and Deploy
@@ -17,7 +63,7 @@ export TAG=test
You can add those variables to your .bashrc file so that you can just run `./up.sh` in the future.
> Note: the first time you run this, it will create quay.io repos on your fork. You will need to either make those public, or create a global pull secret on your Openshift cluster.
> Note: the first time you run this, it will create quay.io repos on your fork. If you followed the Quay.io setup steps above and created the `hacking/pull-secret.yml` file, the script will automatically handle the pull secret. Otherwise, you will need to either make those repos public, or create a global pull secret on your cluster.
To get the URL, if on **Openshift**, run:

View File

@@ -24,13 +24,6 @@ Past that, it is often useful to inspect various resources the AWX Operator mana
* secrets
* serviceaccount
And if installing via OperatorHub and OLM:
* subscription
* csv
* installPlan
* catalogSource
To inspect these resources, you can use these commands:
```sh

View File

@@ -115,6 +115,7 @@ configuration.
* [worker_cpu_affinity](http://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity) with `nginx_worker_cpu_affinity` (default "auto")
* [worker_connections](http://nginx.org/en/docs/ngx_core_module.html#worker_connections) with `nginx_worker_connections` (minimum of 1024)
* [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) with `nginx_listen_queue_size` (default same as uwsgi listen queue size)
* [client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) with `nginx_client_max_body_size` (default of 5M)
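The nginx tunables above map to fields on the AWX spec. A hypothetical snippet raising the upload limit (the CR name and the `10M` value are illustrative, not from the source):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  # Allow request bodies up to 10M instead of the 5M default
  nginx_client_max_body_size: 10M
```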
## Custom Logos

View File

@@ -69,6 +69,7 @@ The following variables are customizable for the managed PostgreSQL service
| postgres_storage_requirements | PostgreSQL container storage requirements | requests: {storage: 8Gi} |
| postgres_storage_class | PostgreSQL PV storage class | Empty string |
| postgres_priority_class | Priority class used for PostgreSQL pod | Empty string |
| postgres_extra_settings | PostgreSQL configuration settings to be added to postgresql.conf | `[]` |
Example of customization could be:
@@ -89,14 +90,78 @@ spec:
limits:
storage: 50Gi
postgres_storage_class: fast-ssd
postgres_extra_args:
- '-c'
- 'max_connections=1000'
postgres_extra_settings:
- setting: max_connections
value: "1000"
```
!!! note
If `postgres_storage_class` is not defined, PostgreSQL will store its data on a volume using the default storage class for your cluster.
## PostgreSQL Extra Settings
!!! warning "Deprecation Notice"
The `postgres_extra_args` parameter is **deprecated** and should no longer be used. Use `postgres_extra_settings` instead for configuring PostgreSQL parameters. The `postgres_extra_args` parameter will be removed in a future version of the AWX operator.
You can customize PostgreSQL configuration by adding settings to the `postgresql.conf` file using the `postgres_extra_settings` parameter. This allows you to tune PostgreSQL performance, security, and behavior according to your specific requirements.
The `postgres_extra_settings` parameter accepts an array of setting objects, where each object contains a `setting` name and its corresponding `value`.
!!! note
The `postgres_extra_settings` parameter replaces the deprecated `postgres_extra_args` parameter and provides a more structured way to configure PostgreSQL settings.
### Configuration Format
```yaml
spec:
postgres_extra_settings:
- setting: max_connections
value: "499"
- setting: ssl_ciphers
value: "HIGH:!aNULL:!MD5"
```
**Common PostgreSQL settings you might want to configure:**
| Setting | Description | Example Value |
|---------|-------------|---------------|
| `max_connections` | Maximum number of concurrent connections | `"200"` |
| `ssl_ciphers` | SSL cipher suites to use | `"HIGH:!aNULL:!MD5"` |
| `shared_buffers` | Amount of memory for shared memory buffers | `"256MB"` |
| `effective_cache_size` | Planner's assumption about effective cache size | `"1GB"` |
| `work_mem` | Amount of memory for internal sort operations | `"4MB"` |
| `maintenance_work_mem` | Memory for maintenance operations | `"64MB"` |
| `checkpoint_completion_target` | Target for checkpoint completion | `"0.9"` |
| `wal_buffers` | Amount of memory for WAL buffers | `"16MB"` |
### Important Notes
!!! warning
- Changes to `postgres_extra_settings` require a PostgreSQL pod restart to take effect.
- Some settings may require specific PostgreSQL versions or additional configuration.
- Always test configuration changes in a non-production environment first.
!!! tip
- String values should be quoted in the YAML configuration.
- Numeric values can be provided as strings or numbers.
- Boolean values should be provided as strings ("on"/"off" or "true"/"false").
For a complete list of available PostgreSQL configuration parameters, refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config.html).
**Verification:**
You can verify that your settings have been applied by connecting to the PostgreSQL database and running:
```bash
kubectl exec -it <postgres-pod-name> -n <namespace> -- psql
```
Then run the following query:
```sql
SELECT name, setting FROM pg_settings;
```
## Note about overriding the postgres image
We recommend you use the default sclorg image. If you are coming from a deployment using the old postgres image from dockerhub (postgres:13), upgrading from awx-operator version 2.12.2 and below to 2.15.0+ will handle migrating your data to the new postgresql image (postgresql-15-c9s).

View File

@@ -1,123 +0,0 @@
#!/bin/bash
# Create PR to Publish to community-operators and community-operators-prod
#
# * Create upstream awx-operator release
# * Check out tag (1.1.2).
# * Run VERSION=1.1.2 make bundle
# * Clone https://github.com/k8s-operatorhub/community-operators --branch main
# * mkdir -p operators/awx-operator/0.31.0/
# * Copy in manifests/ metadata/ and tests/ directories into operators/awx-operator/1.1.2/
# * Use sed to add in a replaces or skip entry. replace by default.
# * No need to update config.yaml
# * Build and Push operator and bundle images
# * Open PR or at least push to a branch so that a PR can be manually opened from it.
#
# Usage:
# First, check out awx-operator tag you intend to release, in this case, 1.0.0
# $ VERSION=1.1.2 PREV_VERSION=1.1.1 FORK=<your-fork> ./hack/publish-to-operator-hub.sh
#
# Remember to update the VERSION and PREV_VERSION before running!!!
set -e
VERSION=${VERSION:-$(make print-VERSION)}
PREV_VERSION=${PREV_VERSION:-$(make print-PREV_VERSION)}
BRANCH=publish-awx-operator-$VERSION
FORK=${FORK:-awx-auto}
GITHUB_TOKEN=${GITHUB_TOKEN:-$AWX_AUTO_GITHUB_TOKEN}
IMG_REPOSITORY=${IMG_REPOSITORY:-quay.io/ansible}
OPERATOR_IMG=$IMG_REPOSITORY/awx-operator:$VERSION
CATALOG_IMG=$IMG_REPOSITORY/awx-operator-catalog:$VERSION
BUNDLE_IMG=$IMG_REPOSITORY/awx-operator-bundle:$VERSION
COMMUNITY_OPERATOR_GITHUB_ORG=${COMMUNITY_OPERATOR_GITHUB_ORG:-k8s-operatorhub}
COMMUNITY_OPERATOR_PROD_GITHUB_ORG=${COMMUNITY_OPERATOR_PROD_GITHUB_ORG:-redhat-openshift-ecosystem}
# Build bundle directory
make bundle IMG=$OPERATOR_IMG
# Build bundle and catalog images
make bundle-build bundle-push BUNDLE_IMG=$BUNDLE_IMG IMG=$OPERATOR_IMG
make catalog-build catalog-push CATALOG_IMG=$CATALOG_IMG BUNDLE_IMGS=$BUNDLE_IMG BUNDLE_IMG=$BUNDLE_IMG IMG=$OPERATOR_IMG
# Set containerImage & namespace variables in CSV
sed -i.bak -e "s|containerImage: quay.io/ansible/awx-operator:devel|containerImage: ${OPERATOR_IMG}|g" bundle/manifests/awx-operator.clusterserviceversion.yaml
sed -i.bak -e "s|namespace: placeholder|namespace: awx|g" bundle/manifests/awx-operator.clusterserviceversion.yaml
# Add replaces to dependency graph for upgrade path
if ! grep -qF 'replaces: awx-operator.v${PREV_VERSION}' bundle/manifests/awx-operator.clusterserviceversion.yaml; then
sed -i.bak -e "/version: ${VERSION}/a \\
replaces: awx-operator.v$PREV_VERSION" bundle/manifests/awx-operator.clusterserviceversion.yaml
fi
# Rename CSV to contain version in name
mv bundle/manifests/awx-operator.clusterserviceversion.yaml bundle/manifests/awx-operator.v${VERSION}.clusterserviceversion.yaml
# Set Openshift Support Range (bump minKubeVersion in CSV when changing)
if ! grep -qF 'openshift.versions' bundle/metadata/annotations.yaml; then
sed -i.bak -e "/annotations:/a \\
com.redhat.openshift.versions: v4.11" bundle/metadata/annotations.yaml
fi
# Remove .bak files from bundle result from sed commands
find bundle -name "*.bak" -type f -delete
echo "-- Create branch on community-operators fork --"
git clone https://github.com/$COMMUNITY_OPERATOR_GITHUB_ORG/community-operators.git
mkdir -p community-operators/operators/awx-operator/$VERSION/
cp -r bundle/* community-operators/operators/awx-operator/$VERSION/
pushd community-operators/operators/awx-operator/$VERSION/
git checkout -b $BRANCH
git add ./
git status
message='operator [N] [CI] awx-operator'
commitMessage="${message} ${VERSION}"
git commit -m "$commitMessage" -s
git remote add upstream https://$GITHUB_TOKEN@github.com/$FORK/community-operators.git
git push upstream --delete $BRANCH || true
git push upstream $BRANCH
gh pr create \
--title "operator awx-operator (${VERSION})" \
--body "operator awx-operator (${VERSION})" \
--base main \
--head $FORK:$BRANCH \
--repo $COMMUNITY_OPERATOR_GITHUB_ORG/community-operators
popd
echo "-- Create branch on community-operators-prod fork --"
git clone https://github.com/$COMMUNITY_OPERATOR_PROD_GITHUB_ORG/community-operators-prod.git
mkdir -p community-operators-prod/operators/awx-operator/$VERSION/
cp -r bundle/* community-operators-prod/operators/awx-operator/$VERSION/
pushd community-operators-prod/operators/awx-operator/$VERSION/
git checkout -b $BRANCH
git add ./
git status
message='operator [N] [CI] awx-operator'
commitMessage="${message} ${VERSION}"
git commit -m "$commitMessage" -s
git remote add upstream https://$GITHUB_TOKEN@github.com/$FORK/community-operators-prod.git
git push upstream --delete $BRANCH || true
git push upstream $BRANCH
gh pr create \
--title "operator awx-operator (${VERSION})" \
--body "operator awx-operator (${VERSION})" \
--base main \
--head $FORK:$BRANCH \
--repo $COMMUNITY_OPERATOR_PROD_GITHUB_ORG/community-operators-prod
popd

View File

@@ -49,3 +49,8 @@ spec:
{% if additional_fields is defined %}
{{ additional_fields | to_nice_yaml | indent(2) }}
{% endif %}
postgres_extra_settings:
- setting: max_connections
value: "499"
- setting: ssl_ciphers
value: "HIGH:!aNULL:!MD5"

View File

@@ -5,10 +5,21 @@
name: '{{ item.metadata.name }}'
all_containers: true
register: all_container_logs
ignore_errors: yes
- name: Store logs in file
ansible.builtin.copy:
content: "{{ all_container_logs.log_lines | join('\n') }}"
content: |-
{% if all_container_logs is failed %}
Failed to retrieve logs for pod {{ item.metadata.name }}:
{{ all_container_logs.msg | default(all_container_logs.stderr | default('No additional details provided.')) }}
{% elif all_container_logs.log_lines is defined %}
{{ all_container_logs.log_lines | join('\n') }}
{% elif all_container_logs.log is defined %}
{{ all_container_logs.log }}
{% else %}
No log content returned by kubernetes.core.k8s_log.
{% endif %}
dest: '{{ debug_output_dir }}/{{ item.metadata.name }}.log'
# TODO: the all_containers option dumps every container's output together, which makes it hard to read; we should probably iterate through each container to get per-container logs

View File

@@ -1,6 +1,6 @@
---
collections:
- name: kubernetes.core
version: '>=2.3.2'
- name: operator_sdk.util
version: "0.5.0"
- name: kubernetes.core
version: "3.2.0"

View File

@@ -8,6 +8,9 @@ api_version: '{{ deployment_type }}.ansible.com/v1beta1'
backup_pvc: ''
backup_pvc_namespace: "{{ ansible_operator_meta.namespace }}"
# If true (default), automatically create the backup PVC if it does not exist
create_backup_pvc: true
# Size of backup PVC if created dynamically
backup_storage_requirements: ''
@@ -39,6 +42,9 @@ backup_resource_requirements:
# Allow additional parameters to be added to the pg_dump backup command
pg_dump_suffix: ''
# Enable compression for database dumps (pg_dump -F custom built-in compression)
use_db_compression: true
# Labels defined on the resource, which should be propagated to child resources
additional_labels: []

View File

@@ -22,17 +22,18 @@
block:
- name: Set error message
set_fact:
error_msg: "{{ backup_pvc }} does not exist, please create this pvc first."
error_msg: "{{ backup_pvc }} does not exist, please create this pvc first or ensure create_backup_pvc is set to true (default) for automatic backup_pvc creation."
- name: Handle error
import_tasks: error_handling.yml
- name: Fail early if pvc is defined but does not exist
fail:
msg: "{{ backup_pvc }} does not exist, please create this pvc first."
msg: "{{ backup_pvc }} does not exist, please create this pvc first or ensure create_backup_pvc is set to true (default) for automatic backup_pvc creation."
when:
- backup_pvc != ''
- provided_pvc.resources | length == 0
- not create_backup_pvc | bool
# If backup_pvc is defined, use in management-pod.yml.j2
- name: Set default pvc name
@@ -42,7 +43,7 @@
# by default, it will re-use the old pvc if already created (unless a pvc is provided)
- name: Set PVC to use for backup
set_fact:
backup_claim: "{{ backup_pvc | default(_default_backup_pvc, true) }}"
backup_pvc: "{{ backup_pvc | default(_default_backup_pvc, true) }}"
- block:
- name: Create PVC for backup
@@ -56,11 +57,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "{{ deployment_name }}-backup-claim"
name: "{{ backup_pvc }}"
namespace: "{{ backup_pvc_namespace }}"
ownerReferences: null
when:
- backup_pvc == '' or backup_pvc is not defined
- (backup_pvc == '' or backup_pvc is not defined) or (create_backup_pvc | bool)
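With the change above, a user-specified `backup_pvc` is honored even when the operator creates the PVC itself. A hypothetical AWXBackup CR exercising that path (resource names are illustrative):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-demo
spec:
  deployment_name: awx
  # Custom PVC name; created automatically because create_backup_pvc is true
  backup_pvc: my-backup-claim
  create_backup_pvc: true
```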
- name: Set default postgres image
set_fact:

View File

@@ -72,7 +72,7 @@
command: >-
touch {{ backup_dir }}/tower.db
- name: Set full resolvable host name for postgres pod
- name: Set resolvable_db_host
set_fact:
resolvable_db_host: '{{ (awx_postgres_type == "managed") | ternary(awx_postgres_host + "." + ansible_operator_meta.namespace + ".svc", awx_postgres_host) }}' # yamllint disable-line rule:line-length
no_log: "{{ no_log }}"
@@ -121,6 +121,7 @@
-d {{ awx_postgres_database }}
-p {{ awx_postgres_port }}
-F custom
{{ use_db_compression | bool | ternary('', '-Z 0') }}
{{ pg_dump_suffix }}
no_log: "{{ no_log }}"
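The ternary above either passes nothing (letting `pg_dump -F custom` use its built-in compression) or appends `-Z 0` to disable it. A minimal shell sketch of that toggle (host, user, and database names are illustrative):

```shell
use_db_compression=false  # mirrors the role variable
if [ "$use_db_compression" = "true" ]; then
  compress_flag=""        # -F custom compresses by default
else
  compress_flag="-Z 0"    # explicitly disable compression
fi
# The flag slots in after -F custom, just like the Jinja ternary
echo "pg_dump -h example-db -U awx -d awx -p 5432 -F custom $compress_flag"
```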

View File

@@ -9,5 +9,5 @@
namespace: "{{ ansible_operator_meta.namespace }}"
status:
backupDirectory: "{{ backup_dir }}"
backupClaim: "{{ backup_claim }}"
backupClaim: "{{ backup_pvc }}"
when: backup_complete

View File

@@ -2,8 +2,8 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ deployment_name }}-backup-claim
namespace: {{ backup_pvc_namespace }}
name: {{ backup_pvc }}
namespace: "{{ backup_pvc_namespace }}"
ownerReferences: null
labels:
{{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}

View File

@@ -3,15 +3,15 @@ apiVersion: v1
kind: Event
metadata:
name: backup-error.{{ now }}
namespace: {{ ansible_operator_meta.namespace }}
namespace: "{{ ansible_operator_meta.namespace }}"
involvedObject:
apiVersion: awx.ansible.com/v1beta1
kind: {{ kind }}
name: {{ ansible_operator_meta.name }}
namespace: {{ ansible_operator_meta.namespace }}
namespace: "{{ ansible_operator_meta.namespace }}"
message: {{ error_msg }}
reason: BackupFailed
type: Warning
firstTimestamp: {{ now }}
lastTimestamp: {{ now }}
firstTimestamp: "{{ now }}"
lastTimestamp: "{{ now }}"
count: 1
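The quoting matters because an unquoted ISO timestamp is resolved by the YAML parser as a datetime object, which then fails against the string-typed Event fields; schematically (the timestamp value is illustrative):

```yaml
# Unquoted: the YAML parser converts this scalar to a datetime object
firstTimestamp: 2026-03-23T14:11:54Z
# Quoted: it stays a plain string, which the Kubernetes API expects
firstTimestamp: "2026-03-23T14:11:54Z"
```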

View File

@@ -3,7 +3,7 @@ apiVersion: v1
kind: Pod
metadata:
name: {{ ansible_operator_meta.name }}-db-management
namespace: {{ backup_pvc_namespace }}
namespace: "{{ backup_pvc_namespace }}"
labels:
{{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}
spec:
@@ -27,6 +27,6 @@ spec:
volumes:
- name: {{ ansible_operator_meta.name }}-backup
persistentVolumeClaim:
claimName: {{ backup_claim }}
claimName: {{ backup_pvc }}
readOnly: false
restartPolicy: Never

View File

@@ -1,5 +1,6 @@
---
deployment_type: awx
deployment_type_shortname: awx
kind: 'AWX'
api_version: '{{ deployment_type }}.ansible.com/v1beta1'
@@ -421,14 +422,20 @@ projects_persistence: false
# Define an existing PersistentVolumeClaim to use
projects_existing_claim: ''
#
# Define postgres configuration arguments to use
# Define postgres configuration arguments to use (Deprecated)
postgres_extra_args: ''
#
# Define postgresql.conf configurations
postgres_extra_settings: []
postgres_data_volume_init: false
postgres_init_container_commands: |
chown 26:0 /var/lib/pgsql/data
chmod 700 /var/lib/pgsql/data
# Enable PostgreSQL SCRAM-SHA-256 migration
postgres_scram_migration_enabled: true
# Configure postgres connection keepalive
postgres_keepalives: true
postgres_keepalives_idle: 5
@@ -488,8 +495,12 @@ ipv6_disabled: false
# - hostname
host_aliases: ''
# receptor default values
receptor_log_level: info
# common default values
client_request_timeout: 30
# UWSGI default values
uwsgi_processes: 5
# NOTE: to increase this value, net.core.somaxconn must also be increased
@@ -497,11 +508,19 @@ uwsgi_processes: 5
# Also see https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls for how
# to allow setting this sysctl, which requires kubelet configuration to add to allowlist
uwsgi_listen_queue_size: 128
uwsgi_timeout: "{{ (([(client_request_timeout | int), 10] | max) / 3) | int }}"
uwsgi_timeout_grace_period: 2
# NGINX default values
nginx_worker_processes: 1
nginx_worker_connections: "{{ uwsgi_listen_queue_size }}"
nginx_worker_cpu_affinity: 'auto'
nginx_listen_queue_size: "{{ uwsgi_listen_queue_size }}"
nginx_client_max_body_size: 5
nginx_read_timeout: "{{ (([(client_request_timeout | int), 10] | max) / 2) | int }}" # used in nginx config
extra_settings_files: {}
# idle_deployment - Scale down deployments to put AWX into an idle state
idle_deployment: false
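The timeout defaults above are derived from `client_request_timeout`; a small sketch of that arithmetic, assuming the default of 30 seconds:

```shell
client_request_timeout=30
# floor(max(client_request_timeout, 10) / 3), as in the uwsgi_timeout default
base=$(( client_request_timeout > 10 ? client_request_timeout : 10 ))
uwsgi_timeout=$(( base / 3 ))
# floor(max(client_request_timeout, 10) / 2), as in the nginx_read_timeout default
nginx_read_timeout=$(( base / 2 ))
echo "$uwsgi_timeout $nginx_read_timeout"  # 10 15
```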

View File

@@ -25,7 +25,10 @@
- name: Set previous_version version based on AWX CR version status
ansible.builtin.set_fact:
previous_version: "{{ existing_cr.resources[0].status.version }}"
when: existing_cr['resources'] | length
when:
- existing_cr.resources | length
- existing_cr.resources[0].status is defined
- existing_cr.resources[0].status.version is defined
- name: If previous_version is less than or equal to gating_version, set upgraded_from to previous_version
ansible.builtin.set_fact:

View File

@@ -0,0 +1,189 @@
---
- name: Get database configuration
include_tasks: database_configuration.yml
- name: Create postgresql.conf ConfigMap
k8s:
apply: true
definition: "{{ lookup('template', 'configmaps/postgres_extra_settings.yaml.j2') }}"
when: postgres_extra_settings | length
# It is possible that N-2 postgres pods may still be present in the namespace from previous upgrades.
# So we have to take that into account and preferentially set the most recent one.
- name: Get the old postgres pod (N-1)
k8s_info:
kind: Pod
namespace: "{{ ansible_operator_meta.namespace }}"
field_selectors:
- status.phase=Running
register: _running_pods
- block:
- name: Filter pods by name
set_fact:
filtered_old_postgres_pods: "{{ _running_pods.resources |
selectattr('metadata.name', 'match', ansible_operator_meta.name + '-postgres.*-0') |
rejectattr('metadata.name', 'search', '-' + supported_pg_version | string + '-0') |
list }}"
# Sort pods by name in reverse order (most recent PG version first) and set
- name: Set info for previous postgres pod
set_fact:
sorted_old_postgres_pods: "{{ filtered_old_postgres_pods |
sort(attribute='metadata.name') |
reverse | list }}"
when: filtered_old_postgres_pods | length
- name: Set info for previous postgres pod
set_fact:
old_postgres_pod: "{{ sorted_old_postgres_pods | first }}"
when: filtered_old_postgres_pods | length
when: _running_pods.resources | length
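The selectattr/rejectattr/sort chain above picks the newest previous-version postgres pod. A rough shell equivalent (pod names and the supported version are made up for illustration):

```shell
supported_pg_version=15
pods='awx-postgres-12-0
awx-postgres-13-0
awx-postgres-15-0
awx-web-7f9c-xyz'
# Keep pods matching <name>-postgres*-0, drop the current supported version,
# then sort descending so the most recent old version comes first
old_pod=$(printf '%s\n' "$pods" \
  | grep -E '^awx-postgres.*-0$' \
  | grep -v -- "-${supported_pg_version}-0" \
  | sort -r | head -n1)
echo "$old_pod"
```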
- name: Look up details for this deployment
k8s_info:
api_version: "{{ api_version }}"
kind: "{{ kind }}"
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.namespace }}"
register: this_awx
# If this deployment has been upgraded before or if upgrade has already been started, set this var
- name: Set previous PG version var
set_fact:
_previous_upgraded_pg_version: "{{ this_awx['resources'][0]['status']['upgradedPostgresVersion'] | default(false) }}"
when:
- this_awx['resources'][0] is defined
- "'upgradedPostgresVersion' in this_awx['resources'][0]['status']"
- name: Check if postgres pod is running an older version
block:
- name: Get old PostgreSQL version
k8s_exec:
namespace: "{{ ansible_operator_meta.namespace }}"
pod: "{{ old_postgres_pod['metadata']['name'] }}"
command: |
bash -c """
if [ -f "{{ _postgres_data_path }}/PG_VERSION" ]; then
cat "{{ _postgres_data_path }}/PG_VERSION"
elif [ -f '/var/lib/postgresql/data/pgdata/PG_VERSION' ]; then
cat '/var/lib/postgresql/data/pgdata/PG_VERSION'
fi
"""
register: _old_pg_version
- debug:
msg: "--- Upgrading from {{ old_postgres_pod['metadata']['name'] | default('NONE')}} Pod ---"
- name: Migrate from md5 to scram-sha-256
k8s_exec:
namespace: "{{ ansible_operator_meta.namespace }}"
pod: "{{ old_postgres_pod['metadata']['name'] }}"
command: |
bash -c "
psql -U postgres -c \"ALTER SYSTEM SET password_encryption = 'scram-sha-256';\" &&
psql -U postgres -c \"SELECT pg_reload_conf();\" &&
psql -U postgres -c \"ALTER USER \\\"{{ awx_postgres_user }}\\\" WITH PASSWORD '{{ awx_postgres_pass }}';\"
"
register: _migration_output
no_log: "{{ no_log }}"
when:
- postgres_scram_migration_enabled
- (_old_pg_version.stdout | default(0) | int ) == 13
- name: Upgrade data dir from old Postgres to {{ supported_pg_version }} if applicable
include_tasks: upgrade_postgres.yml
when:
- (_old_pg_version.stdout | default(0) | int ) < supported_pg_version
when:
- managed_database
- (_previous_upgraded_pg_version | default(false)) | ternary(_previous_upgraded_pg_version | int < supported_pg_version, true)
- old_postgres_pod | length # If empty, then old pg pod has been removed and we can assume the upgrade is complete
- block:
- name: Create Database if no database is specified
k8s:
apply: true
definition: "{{ lookup('template', 'statefulsets/postgres.yaml.j2') }}"
register: create_statefulset_result
- name: Scale down Deployment for migration
include_tasks: scale_down_deployment.yml
when: create_statefulset_result.changed
rescue:
- name: Scale down Deployment for migration
include_tasks: scale_down_deployment.yml
- name: Scale down PostgreSQL statefulset for migration
kubernetes.core.k8s_scale:
api_version: apps/v1
kind: StatefulSet
name: "{{ ansible_operator_meta.name }}-postgres-{{ supported_pg_version }}"
namespace: "{{ ansible_operator_meta.namespace }}"
replicas: 0
wait: yes
- name: Remove PostgreSQL statefulset for upgrade
k8s:
state: absent
api_version: apps/v1
kind: StatefulSet
name: "{{ ansible_operator_meta.name }}-postgres-{{ supported_pg_version }}"
namespace: "{{ ansible_operator_meta.namespace }}"
wait: yes
when: create_statefulset_result.error == 422
- name: Recreate PostgreSQL statefulset with updated values
k8s:
apply: true
definition: "{{ lookup('template', 'statefulsets/postgres.yaml.j2') }}"
when: managed_database
- name: Set Default label selector for custom resource generated postgres
set_fact:
postgres_label_selector: "app.kubernetes.io/instance=postgres-{{ supported_pg_version }}-{{ ansible_operator_meta.name }}"
when: postgres_label_selector is not defined
- name: Get the postgres pod information
k8s_info:
kind: Pod
namespace: "{{ ansible_operator_meta.namespace }}"
label_selectors:
- "{{ postgres_label_selector }}"
field_selectors:
- status.phase=Running
register: postgres_pod
- name: Wait for Database to initialize if managed DB
k8s_info:
kind: Pod
namespace: '{{ ansible_operator_meta.namespace }}'
label_selectors:
- "{{ postgres_label_selector }}"
field_selectors:
- status.phase=Running
register: postgres_pod
until:
- "postgres_pod['resources'] | length"
- "postgres_pod['resources'][0]['status']['phase'] == 'Running'"
- "postgres_pod['resources'][0]['status']['containerStatuses'][0]['ready'] == true"
delay: 5
retries: 60
when: managed_database
- name: Look up details for this deployment
k8s_info:
api_version: "{{ api_version }}"
kind: "{{ kind }}"
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.namespace }}"
register: this_awx
- name: Migrate data from old Openshift instance
import_tasks: migrate_data.yml
when:
- old_pg_config['resources'] is defined
- old_pg_config['resources'] | length
- this_awx['resources'][0]['status']['migratedFromSecret'] is not defined

View File

@@ -51,6 +51,14 @@
set_fact:
_default_postgres_image: "{{ _postgres_image }}:{{_postgres_image_version }}"
- name: Fail if PostgreSQL secret is specified, but not found
fail:
msg: "PostgreSQL configuration {{ postgres_configuration_secret }} not found in namespace {{ ansible_operator_meta.namespace }}"
when:
- postgres_configuration_secret | length
- _custom_pg_config_resources is defined
- _custom_pg_config_resources['resources'] | length == 0
- name: Set PostgreSQL configuration
set_fact:
_pg_config: '{{ _custom_pg_config_resources["resources"] | default([]) | length | ternary(_custom_pg_config_resources, _default_pg_config_resources) }}'
@@ -106,167 +114,3 @@
- name: Set database as managed
set_fact:
managed_database: "{{ pg_config['resources'][0]['data']['type'] | default('') | b64decode == 'managed' }}"
# It is possible that N-2 postgres pods may still be present in the namespace from previous upgrades.
# So we have to take that into account and preferentially set the most recent one.
- name: Get the old postgres pod (N-1)
k8s_info:
kind: Pod
namespace: "{{ ansible_operator_meta.namespace }}"
field_selectors:
- status.phase=Running
register: _running_pods
- block:
- name: Filter pods by name
set_fact:
filtered_old_postgres_pods: "{{ _running_pods.resources |
selectattr('metadata.name', 'match', ansible_operator_meta.name + '-postgres.*-0') |
rejectattr('metadata.name', 'search', '-' + supported_pg_version | string + '-0') |
list }}"
# Sort pods by name in reverse order (most recent PG version first) and set
- name: Set info for previous postgres pod
set_fact:
sorted_old_postgres_pods: "{{ filtered_old_postgres_pods |
sort(attribute='metadata.name') |
reverse | list }}"
when: filtered_old_postgres_pods | length
- name: Set info for previous postgres pod
set_fact:
old_postgres_pod: "{{ sorted_old_postgres_pods | first }}"
when: filtered_old_postgres_pods | length
when: _running_pods.resources | length
- name: Look up details for this deployment
k8s_info:
api_version: "{{ api_version }}"
kind: "{{ kind }}"
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.namespace }}"
register: this_awx
# If this deployment has been upgraded before or if upgrade has already been started, set this var
- name: Set previous PG version var
set_fact:
_previous_upgraded_pg_version: "{{ this_awx['resources'][0]['status']['upgradedPostgresVersion'] | default(false) }}"
when:
- this_awx['resources'][0] is defined
- "'upgradedPostgresVersion' in this_awx['resources'][0]['status']"
- name: Check if postgres pod is running an older version
block:
- name: Get old PostgreSQL version
k8s_exec:
namespace: "{{ ansible_operator_meta.namespace }}"
pod: "{{ old_postgres_pod['metadata']['name'] }}"
command: |
bash -c """
if [ -f "{{ _postgres_data_path }}/PG_VERSION" ]; then
cat "{{ _postgres_data_path }}/PG_VERSION"
elif [ -f '/var/lib/postgresql/data/pgdata/PG_VERSION' ]; then
cat '/var/lib/postgresql/data/pgdata/PG_VERSION'
fi
"""
register: _old_pg_version
- debug:
msg: "--- Upgrading from {{ old_postgres_pod['metadata']['name'] | default('NONE')}} Pod ---"
- name: Upgrade data dir from old Postgres to {{ supported_pg_version }} if applicable
include_tasks: upgrade_postgres.yml
when:
- (_old_pg_version.stdout | default(0) | int ) < supported_pg_version
when:
- managed_database
- (_previous_upgraded_pg_version | default(false)) | ternary(_previous_upgraded_pg_version | int < supported_pg_version, true)
- old_postgres_pod | length # If empty, then old pg pod has been removed and we can assume the upgrade is complete
- block:
- name: Create Database if no database is specified
k8s:
apply: true
definition: "{{ lookup('template', 'statefulsets/postgres.yaml.j2') }}"
register: create_statefulset_result
- name: Scale down Deployment for migration
include_tasks: scale_down_deployment.yml
when: create_statefulset_result.changed
rescue:
- name: Scale down Deployment for migration
include_tasks: scale_down_deployment.yml
- name: Scale down PostgreSQL statefulset for migration
kubernetes.core.k8s_scale:
api_version: apps/v1
kind: StatefulSet
name: "{{ ansible_operator_meta.name }}-postgres-{{ supported_pg_version }}"
namespace: "{{ ansible_operator_meta.namespace }}"
replicas: 0
wait: yes
- name: Remove PostgreSQL statefulset for upgrade
k8s:
state: absent
api_version: apps/v1
kind: StatefulSet
name: "{{ ansible_operator_meta.name }}-postgres-{{ supported_pg_version }}"
namespace: "{{ ansible_operator_meta.namespace }}"
wait: yes
when: create_statefulset_result.error == 422
- name: Recreate PostgreSQL statefulset with updated values
k8s:
apply: true
definition: "{{ lookup('template', 'statefulsets/postgres.yaml.j2') }}"
when: managed_database
- name: Set Default label selector for custom resource generated postgres
set_fact:
postgres_label_selector: "app.kubernetes.io/instance=postgres-{{ supported_pg_version }}-{{ ansible_operator_meta.name }}"
when: postgres_label_selector is not defined
- name: Get the postgres pod information
k8s_info:
kind: Pod
namespace: "{{ ansible_operator_meta.namespace }}"
label_selectors:
- "{{ postgres_label_selector }}"
field_selectors:
- status.phase=Running
register: postgres_pod
- name: Wait for Database to initialize if managed DB
k8s_info:
kind: Pod
namespace: '{{ ansible_operator_meta.namespace }}'
label_selectors:
- "{{ postgres_label_selector }}"
field_selectors:
- status.phase=Running
register: postgres_pod
until:
- "postgres_pod['resources'] | length"
- "postgres_pod['resources'][0]['status']['phase'] == 'Running'"
- "postgres_pod['resources'][0]['status']['containerStatuses'][0]['ready'] == true"
delay: 5
retries: 60
when: managed_database
- name: Look up details for this deployment
k8s_info:
api_version: "{{ api_version }}"
kind: "{{ kind }}"
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.namespace }}"
register: this_awx
- name: Migrate data from old Openshift instance
import_tasks: migrate_data.yml
when:
- old_pg_config['resources'] is defined
- old_pg_config['resources'] | length
- this_awx['resources'][0]['status']['migratedFromSecret'] is not defined

View File

@@ -0,0 +1,34 @@
---
- name: Scale down AWX Deployments
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ item }}"
namespace: "{{ ansible_operator_meta.namespace }}"
spec:
replicas: 0
loop:
- '{{ ansible_operator_meta.name }}-task'
- '{{ ansible_operator_meta.name }}-web'
- name: Get database configuration
include_tasks: database_configuration.yml
- name: Scale down PostgreSQL Statefulset
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "{{ ansible_operator_meta.name }}-postgres-{{ supported_pg_version }}"
namespace: "{{ ansible_operator_meta.namespace }}"
spec:
replicas: 0
when: managed_database
- name: End Playbook
ansible.builtin.meta: end_play

View File

@@ -8,7 +8,7 @@
bash -c "echo 'from django.contrib.auth.models import User;
nsu = User.objects.filter(is_superuser=True, username=\"{{ admin_user }}\").count();
exit(0 if nsu > 0 else 1)'
- | awx-manage shell"
+ | awx-manage shell --no-imports"
ignore_errors: true
register: users_result
changed_when: users_result.return_code > 0

View File

@@ -74,8 +74,8 @@
- name: Include set_images tasks
include_tasks: set_images.yml
- name: Include database configuration tasks
include_tasks: database_configuration.yml
- name: Include Database tasks
include_tasks: database.yml
- name: Load Route TLS certificate
include_tasks: load_route_tls_secret.yml

View File

@@ -1,4 +1,33 @@
---
- name: Idle AWX
include_tasks: idle_deployment.yml
when: idle_deployment | bool
- name: Look up details for this deployment
k8s_info:
api_version: "{{ api_version }}"
kind: "{{ kind }}"
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.namespace }}"
register: this_awx
- name: Set annotations based on this_awx
set_fact:
this_annotations: "{{ this_awx['resources'][0]['metadata']['annotations'] | default({}) }}"
- name: Set client_request_timeout based on annotation
set_fact:
client_request_timeout: "{{ (this_annotations['aap.ansible.io/client-request-timeout'][:-1]) | int }}"
client_request_timeout_overridden: true
when:
- "'aap.ansible.io/client-request-timeout' in this_annotations"
- this_annotations['aap.ansible.io/client-request-timeout'] is match('^\\d+s$')
- name: client_request_timeout has been changed
debug:
msg: "client_request_timeout's default 30s value has been overridden by the annotation 'aap.ansible.io/client-request-timeout' to {{ client_request_timeout }}s"
when: client_request_timeout_overridden | default(false)
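The annotation override above validates the value against `^\d+s$`, strips the trailing `s` with `[:-1]`, and casts to int, falling back to the 30s default otherwise. A minimal Python sketch of that parsing — function and constant names are illustrative:

```python
import re

DEFAULT_TIMEOUT = 30  # seconds; the default cited in the debug message above

def client_request_timeout(annotations: dict) -> int:
    """Honour 'aap.ansible.io/client-request-timeout' only when it matches
    ^\\d+s$; strip the trailing 's' and cast to int, else use the default."""
    value = annotations.get("aap.ansible.io/client-request-timeout", "")
    if re.match(r"^\d+s$", value):
        return int(value[:-1])
    return DEFAULT_TIMEOUT

print(client_request_timeout({"aap.ansible.io/client-request-timeout": "45s"}))  # 45
print(client_request_timeout({"aap.ansible.io/client-request-timeout": "45"}))   # 30 (no trailing 's')
print(client_request_timeout({}))                                                # 30
```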
- name: Check for presence of old awx Deployment
k8s_info:
api_version: apps/v1

View File

@@ -77,7 +77,9 @@
trap 'end_keepalive \"$keepalive_file\" \"$keepalive_pid\"' EXIT SIGINT SIGTERM
echo keepalive_pid: $keepalive_pid
set -e -o pipefail
+ psql -c 'GRANT postgres TO {{ awx_postgres_user }}'
PGPASSWORD=\"$PGPASSWORD_OLD\" {{ pgdump }} | PGPASSWORD=\"$POSTGRES_PASSWORD\" {{ pg_restore }}
+ psql -c 'REVOKE postgres FROM {{ awx_postgres_user }}'
set +e +o pipefail
echo 'Successful'
"

View File

@@ -6,7 +6,7 @@
pod: "{{ awx_web_pod_name }}"
container: "{{ ansible_operator_meta.name }}-web"
command: >-
- bash -c "awx-manage showmigrations | grep -v '[X]' | grep '[ ]' | wc -l"
+ bash -c "awx-manage showmigrations | grep -v '(no migrations)' | grep -v '[X]' | grep '[ ]' | wc -l"
changed_when: false
when: awx_web_pod_name != ''
register: database_check

View File

@@ -224,7 +224,7 @@
_custom_image: "{{ image }}:{{ image_version }}"
when:
- image | default([]) | length
- image_version is defined or image_version != ''
+ image_version is defined and image_version != ''
- name: Set AWX app image URL
set_fact:
@@ -239,7 +239,7 @@
_custom_redis_image: "{{ redis_image }}:{{ redis_image_version }}"
when:
- redis_image | default([]) | length
- redis_image_version is defined or redis_image_version != ''
+ redis_image_version is defined and redis_image_version != ''
- name: Set Redis image URL
set_fact:

View File

@@ -72,7 +72,7 @@
- "app.kubernetes.io/managed-by={{ deployment_type }}-operator"
register: old_postgres_svc
- - name: Set full resolvable host name for postgres pod
+ - name: Set resolvable_db_host
set_fact:
resolvable_db_host: "{{ old_postgres_svc['resources'][0]['metadata']['name'] }}.{{ ansible_operator_meta.namespace }}.svc" # yamllint disable-line rule:line-length
no_log: "{{ no_log }}"

View File

@@ -109,13 +109,25 @@ data:
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
- client_max_body_size 5M;
+ client_max_body_size {{ nginx_client_max_body_size }}M;
map $http_x_trusted_proxy $trusted_proxy_present {
default "trusted-proxy";
"" "-";
}
map $http_x_dab_jw_token $dab_jwt_present {
default "dab-jwt";
"" "-";
}
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
+ '"$http_user_agent" "$http_x_forwarded_for" '
+ '$trusted_proxy_present $dab_jwt_present';
access_log /dev/stdout main;
error_log /dev/stderr warn;
map $http_upgrade $connection_upgrade {
default upgrade;
@@ -187,7 +199,7 @@ data:
allow 127.0.0.1;
deny all;
}
location {{ (ingress_path + '/static').replace('//', '/') }} {
alias /var/lib/awx/public/static/;
}
@@ -229,7 +241,7 @@ data:
location {{ ingress_path }} {
# Add trailing / if missing
rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;
- uwsgi_read_timeout 125s;
+ uwsgi_read_timeout {{ nginx_read_timeout }}s;
uwsgi_pass uwsgi;
include /etc/nginx/uwsgi_params;
include /etc/nginx/conf.d/*.conf;
@@ -243,6 +255,23 @@ data:
add_header Cache-Control "no-cache, no-store, must-revalidate";
add_header Expires "0";
add_header Pragma "no-cache";
+ # Return 503 Service Unavailable with JSON response if uWSGI fails to respond
+ error_page 504 =503 /json_503;
+ error_page 502 =503 /json_503; # Optional, in case uWSGI is completely down
}
+ location = /json_503 {
+ # Custom JSON response for 503 Service Unavailable
+ internal;
+ add_header Content-Type application/json;
+ # Check if X-Request-ID is set and include it in the response
+ if ($http_x_request_id) {
+ return 503 '{"status": "error", "message": "Service Unavailable", "code": 503, "request_id": "$http_x_request_id"}';
+ }
+ # If X-Request-ID is not set, just return the basic JSON response
+ return 503 '{"status": "error", "message": "Service Unavailable", "code": 503}';
+ }
}
}
@@ -251,6 +280,7 @@ data:
unixsocketperm 777
port 0
bind 127.0.0.1
+ timeout 300
receptor_conf: |
---
- log-level: {{ receptor_log_level }}
@@ -304,8 +334,8 @@ data:
max-requests = 1000
buffer-size = 32768
- harakiri = 120
- harakiri-graceful-timeout = 115
+ harakiri = {{ uwsgi_timeout }}
+ harakiri-graceful-timeout = {{ uwsgi_timeout_grace_period }}
harakiri-graceful-signal = 6
py-call-osafterfork = true

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: '{{ ansible_operator_meta.name }}-postgres-extra-settings'
namespace: '{{ ansible_operator_meta.namespace }}'
labels:
{{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}
data:
99-overrides.conf: |
{% for pg_setting in postgres_extra_settings %}
{% if pg_setting.value is string %}
{{ pg_setting.setting }} = '{{ pg_setting.value }}'
{% else %}
{{ pg_setting.setting }} = {{ pg_setting.value }}
{% endif %}
{% endfor %}
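The Jinja loop above single-quotes string values (postgresql.conf syntax) and emits numbers bare. A pure-Python stand-in that shows the rendered output for a hypothetical `postgres_extra_settings` list:

```python
def render_overrides(settings):
    """Stand-in for the 99-overrides.conf Jinja loop: string values get
    single quotes, everything else (ints, floats, bools) is emitted bare."""
    lines = []
    for s in settings:
        if isinstance(s["value"], str):
            lines.append("%s = '%s'" % (s["setting"], s["value"]))
        else:
            lines.append("%s = %s" % (s["setting"], s["value"]))
    return "\n".join(lines)

# Hypothetical postgres_extra_settings values
example = [
    {"setting": "max_connections", "value": 500},
    {"setting": "shared_preload_libraries", "value": "pg_stat_statements"},
]
print(render_overrides(example))
```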

View File

@@ -2,7 +2,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ ansible_operator_meta.name }}-redirect-page
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
data:
redirect-page.html: |
<!DOCTYPE html>
@@ -66,11 +66,11 @@ data:
<p class="doc-note">
The API endpoints for this platform service will temporarily remain available at the URL for this service.
Please use the Ansible Automation Platform API endpoints corresponding to this component in the future.
- These can be found at <a href="{{ public_base_url }}/api/{{ deployment_type }}" target="_blank">{{ public_base_url }}/api/{{ deployment_type }}</a>.
+ These can be found at <a href="{{ public_base_url }}/api/{{ deployment_type_shortname }}" target="_blank">{{ public_base_url }}/api/{{ deployment_type_shortname }}/</a>.
</p>
<!-- Include any additional scripts if needed -->
- <script src="static/rest_framework/js/jquery-3.5.1.min.js"></script>
+ <script src="static/rest_framework/js/jquery-3.7.1.min.js"></script>
<script src="static/rest_framework/js/bootstrap.min.js"></script>
</body>
</html>

View File

@@ -84,7 +84,7 @@ spec:
- -c
- |
mkdir -p /etc/pki/ca-trust/extracted/{java,pem,openssl,edk2}
- update-ca-trust extract
+ update-ca-trust extract --output /etc/pki/ca-trust/extracted
volumeMounts:
- name: "ca-trust-extracted"
mountPath: "/etc/pki/ca-trust/extracted"

View File

@@ -93,7 +93,7 @@ spec:
- -c
- |
mkdir -p /etc/pki/ca-trust/extracted/{java,pem,openssl,edk2}
- update-ca-trust extract
+ update-ca-trust extract --output /etc/pki/ca-trust/extracted
volumeMounts:
- name: "ca-trust-extracted"
mountPath: "/etc/pki/ca-trust/extracted"
@@ -202,7 +202,7 @@ spec:
volumeMounts:
{% if public_base_url is defined %}
- name: redirect-page
- mountPath: '/var/lib/awx/venv/awx/lib/python3.11/site-packages/awx/ui/build/index.html'
+ mountPath: '/var/lib/awx/venv/awx/lib/python3.12/site-packages/awx/ui/build/index.html'
subPath: redirect-page.html
{% endif %}
{% if bundle_ca_crt %}

View File

@@ -24,7 +24,7 @@ spec:
- -c
- |
mkdir -p /etc/pki/ca-trust/extracted/{java,pem,openssl,edk2}
- update-ca-trust extract
+ update-ca-trust extract --output /etc/pki/ca-trust/extracted
volumeMounts:
- name: "ca-trust-extracted"
mountPath: "/etc/pki/ca-trust/extracted"

View File

@@ -1,9 +1,13 @@
- AUTH_LDAP_GLOBAL_OPTIONS = {
{% if ldap_cacert_ca_crt %}
+ import ldap
+ AUTH_LDAP_GLOBAL_OPTIONS = {
ldap.OPT_X_TLS_REQUIRE_CERT: True,
ldap.OPT_X_TLS_CACERTFILE: "/etc/openldap/certs/ldap-ca.crt"
- {% endif %}
}
+ {% else %}
+ AUTH_LDAP_GLOBAL_OPTIONS = {}
+ {% endif %}
# Load LDAP BIND password from Kubernetes secret if defined
{% if ldap_password_secret -%}

View File

@@ -34,6 +34,11 @@ spec:
app.kubernetes.io/component: 'database'
app.kubernetes.io/part-of: '{{ ansible_operator_meta.name }}'
app.kubernetes.io/managed-by: '{{ deployment_type }}-operator'
+ annotations:
+ {% if postgres_extra_settings | length > 0 %}
+ checksum-postgres_extra_settings: "{{ lookup('template', 'configmaps/postgres_extra_settings.yaml.j2') | sha1 }}"
+ {% endif %}
+ checksum-secret-postgres_configuration_secret: "{{ lookup('ansible.builtin.vars', 'pg_config', default='')["resources"][0]["data"] | default('') | sha1 }}"
{% if postgres_annotations %}
{{ postgres_annotations | indent(width=8) }}
{% endif %}
@@ -137,6 +142,11 @@ spec:
- name: postgres-{{ supported_pg_version }}
mountPath: '{{ _postgres_data_path | dirname }}'
subPath: '{{ _postgres_data_path | dirname | basename }}'
+ {% if postgres_extra_settings | length > 0 %}
+ - name: pg-overrides
+ mountPath: /opt/app-root/src/postgresql-cfg
+ readOnly: true
+ {% endif %}
{% if postgres_extra_volume_mounts %}
{{ postgres_extra_volume_mounts | indent(width=12, first=True) }}
{% endif %}
@@ -149,9 +159,19 @@ spec:
tolerations:
{{ postgres_tolerations | indent(width=8) }}
{% endif %}
- {% if postgres_extra_volumes %}
+ {% if (postgres_extra_volumes | length + postgres_extra_settings | length) > 0 %}
volumes:
+ {% if postgres_extra_volumes %}
{{ postgres_extra_volumes | indent(width=8, first=False) }}
+ {% endif %}
+ {% if postgres_extra_settings | length > 0 %}
+ - name: pg-overrides
+ configMap:
+ name: '{{ ansible_operator_meta.name }}-postgres-extra-settings'
+ items:
+ - key: 99-overrides.conf
+ path: 99-overrides.conf
+ {% endif %}
{% endif %}
volumeClaimTemplates:
- metadata:

View File

@@ -3,7 +3,7 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ _metrics_utility_pvc_claim }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
ownerReferences: null
labels:
{{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}

View File

@@ -6,6 +6,7 @@ ingress_api_version: 'networking.k8s.io/v1'
ingress_annotations: ''
ingress_class_name: ''
ingress_controller: ''
+ route_annotations: ''
set_self_owneref: true

View File

@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ ansible_operator_meta.name }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
spec:
selector:
matchLabels:

View File

@@ -6,7 +6,7 @@ apiVersion: '{{ ingress_api_version }}'
kind: Ingress
metadata:
name: {{ ansible_operator_meta.name }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
annotations:
{% if ingress_annotations %}
{{ ingress_annotations | indent(width=4) }}
@@ -41,7 +41,7 @@ apiVersion: '{{ ingress_api_version }}'
kind: IngressRouteTCP
metadata:
name: {{ ansible_operator_meta.name }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
annotations:
{% if ingress_annotations %}
{{ ingress_annotations | indent(width=4) }}
@@ -67,8 +67,11 @@ kind: Route
metadata:
annotations:
openshift.io/host.generated: "true"
+ {% if route_annotations %}
+ {{ route_annotations | indent(width=4) }}
+ {% endif %}
name: {{ ansible_operator_meta.name }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
spec:
{% if external_hostname is defined %}
host: {{ external_hostname }}

View File

@@ -3,7 +3,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ ansible_operator_meta.name }}-receptor-config
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
data:
receptor_conf: |
---

View File

@@ -40,5 +40,8 @@ additional_labels: []
# Maintain some of the recommended `app.kubernetes.io/*` labels on the resource (self)
set_self_labels: true
+ # If set to true, the restore process will drop and recreate the database schema before restoring
+ force_drop_db: false
spec_overrides: {}
...

View File

@@ -5,7 +5,7 @@
postgres_configuration_secret: "{{ spec['postgres_configuration_secret'] | default(postgres_configuration_secret) }}"
- name: Check for specified PostgreSQL configuration
- k8s_info:
+ kubernetes.core.k8s_info:
kind: Secret
namespace: '{{ ansible_operator_meta.namespace }}'
name: '{{ postgres_configuration_secret }}'
@@ -29,7 +29,7 @@
- block:
- name: Get the postgres pod information
- k8s_info:
+ kubernetes.core.k8s_info:
kind: Pod
namespace: '{{ ansible_operator_meta.namespace }}'
label_selectors:
@@ -47,7 +47,7 @@
when: awx_postgres_type == 'managed'
- name: Check for presence of AWX Deployment
- k8s_info:
+ kubernetes.core.k8s_info:
api_version: apps/v1
kind: Deployment
name: "{{ deployment_name }}-task"
@@ -55,7 +55,7 @@
register: this_deployment
- name: Scale down Deployment for migration
- k8s_scale:
+ kubernetes.core.k8s_scale:
api_version: apps/v1
kind: Deployment
name: "{{ item }}"
@@ -67,21 +67,40 @@
- "{{ deployment_name }}-web"
when: this_deployment['resources'] | length
- - name: Set full resolvable host name for postgres pod
+ - name: Set resolvable_db_host
set_fact:
resolvable_db_host: '{{ (awx_postgres_type == "managed") | ternary(awx_postgres_host + "." + ansible_operator_meta.namespace + ".svc." + cluster_name, awx_postgres_host) }}' # yamllint disable-line rule:line-length
no_log: "{{ no_log }}"
+ - name: Set pg_isready command
+ ansible.builtin.set_fact:
+ pg_isready: >-
+ pg_isready
+ -h {{ resolvable_db_host }}
+ -p {{ awx_postgres_port }}
+ no_log: "{{ no_log }}"
- name: Set pg_restore command
set_fact:
pg_restore: >-
- pg_restore --clean --if-exists
+ pg_restore {{ force_drop_db | bool | ternary('', '--clean --if-exists') }} --no-owner --no-acl
-U {{ awx_postgres_user }}
-h {{ resolvable_db_host }}
-d {{ awx_postgres_database }}
-p {{ awx_postgres_port }}
no_log: "{{ no_log }}"
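The ternary above means the pg_restore flags depend on `force_drop_db`: when the database is dropped and recreated beforehand, `--clean --if-exists` is redundant and omitted. A small Python sketch of the resulting command line — hostnames and values are illustrative:

```python
def build_pg_restore(force_drop_db, user, host, db, port):
    """Mirror the Jinja ternary: force_drop_db=True means the schema is
    recreated from scratch, so '--clean --if-exists' is left out."""
    clean = "" if force_drop_db else "--clean --if-exists "
    return ("pg_restore %s--no-owner --no-acl -U %s -h %s -d %s -p %s"
            % (clean, user, host, db, port))

# Hypothetical connection parameters
print(build_pg_restore(False, "awx", "db.svc", "awx", 5432))
print(build_pg_restore(True, "awx", "db.svc", "awx", 5432))
```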
+ - name: Grant CREATEDB privilege to database user for force_drop_db
+ kubernetes.core.k8s_exec:
+ namespace: "{{ ansible_operator_meta.namespace }}"
+ pod: "{{ postgres_pod_name }}"
+ container: postgres
+ command: >-
+ psql -c "ALTER USER {{ awx_postgres_user }} CREATEDB;"
+ when:
+ - force_drop_db | bool
+ - awx_postgres_type == 'managed'
- name: Force drop and create database if force_drop_db is true
block:
- name: Set drop db command
@@ -111,8 +130,8 @@
{{ pg_create_db }}
when: force_drop_db
- - name: Restore database dump to the new postgresql container
- k8s_exec:
+ - name: Restore Postgres database
+ kubernetes.core.k8s_exec:
namespace: "{{ backup_pvc_namespace }}"
pod: "{{ ansible_operator_meta.name }}-db-management"
container: "{{ ansible_operator_meta.name }}-db-management"
@@ -126,6 +145,11 @@
exit $rc
}
keepalive_file=\"$(mktemp)\"
+ until {{ pg_isready }} &> /dev/null
+ do
+ echo \"Waiting until Postgres is accepting connections...\"
+ sleep 2
+ done
while [[ -f \"$keepalive_file\" ]]; do
echo 'Migrating data from old database...'
sleep 60
@@ -142,3 +166,14 @@
"
register: data_migration
no_log: "{{ no_log }}"
+ - name: Revoke CREATEDB privilege from database user
+ kubernetes.core.k8s_exec:
+ namespace: "{{ ansible_operator_meta.namespace }}"
+ pod: "{{ postgres_pod_name }}"
+ container: postgres
+ command: >-
+ psql -c "ALTER USER {{ awx_postgres_user }} NOCREATEDB;"
+ when:
+ - force_drop_db | bool
+ - awx_postgres_type == 'managed'

View File

@@ -3,15 +3,15 @@ apiVersion: v1
kind: Event
metadata:
name: restore-error.{{ now }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
involvedObject:
apiVersion: awx.ansible.com/v1beta1
kind: {{ kind }}
name: {{ ansible_operator_meta.name }}
- namespace: {{ ansible_operator_meta.namespace }}
+ namespace: "{{ ansible_operator_meta.namespace }}"
message: {{ error_msg }}
reason: RestoreFailed
type: Warning
- firstTimestamp: {{ now }}
- lastTimestamp: {{ now }}
+ firstTimestamp: "{{ now }}"
+ lastTimestamp: "{{ now }}"
count: 1

View File

@@ -3,7 +3,7 @@ apiVersion: v1
kind: Pod
metadata:
name: {{ ansible_operator_meta.name }}-db-management
- namespace: {{ backup_pvc_namespace }}
+ namespace: "{{ backup_pvc_namespace }}"
labels:
{{ lookup("template", "../common/templates/labels/common.yaml.j2") | indent(width=4) | trim }}
spec:

View File

@@ -14,7 +14,4 @@ broadcast_websocket_secret: '{{ deployment_name }}-broadcast-websocket'
postgres_configuration_secret: '{{ deployment_name }}-postgres-configuration'
supported_pg_version: 15
image_pull_policy: IfNotPresent
- # If set to true, the restore process will delete the existing database and create a new one
- force_drop_db: false
- pg_drop_create: ''

up.sh
View File

@@ -5,6 +5,7 @@
# -- Usage
# NAMESPACE=awx TAG=dev QUAY_USER=developer ./up.sh
+ # NAMESPACE=awx TAG=dev QUAY_USER=developer PULL_SECRET_FILE=my-secret.yml ./up.sh
# -- User Variables
NAMESPACE=${NAMESPACE:-awx}
@@ -12,6 +13,7 @@ QUAY_USER=${QUAY_USER:-developer}
TAG=${TAG:-$(git rev-parse --short HEAD)}
DEV_TAG=${DEV_TAG:-dev}
DEV_TAG_PUSH=${DEV_TAG_PUSH:-true}
+ PULL_SECRET_FILE=${PULL_SECRET_FILE:-hacking/pull-secret.yml}
# -- Check for required variables
# Set the following environment variables
@@ -72,6 +74,10 @@ for file in "${files[@]}"; do
fi
done
+ # Create redhat-operators-pull-secret if pull credentials file exists
+ if [ -f "$PULL_SECRET_FILE" ]; then
+ $KUBE_APPLY $PULL_SECRET_FILE
+ fi
# Delete old operator deployment
kubectl delete deployment awx-operator-controller-manager
@@ -115,12 +121,20 @@ fi
# -- Build & Push Operator Image
echo "Preparing to build $IMG:$TAG ($IMG:$DEV_TAG) with $ENGINE..."
sleep 3
- make docker-build docker-push IMG=$IMG:$TAG
- # Tag and Push DEV_TAG Image when DEV_TAG_PUSH is 'True'
- if $DEV_TAG_PUSH ; then
- $ENGINE tag $IMG:$TAG $IMG:$DEV_TAG
- make docker-push IMG=$IMG:$DEV_TAG
+ # Detect architecture and use multi-arch build for ARM hosts
+ HOST_ARCH=$(uname -m)
+ if [[ "$HOST_ARCH" == "aarch64" || "$HOST_ARCH" == "arm64" ]] && [ "$ENGINE" = "podman" ]; then
+ echo "ARM architecture detected ($HOST_ARCH). Using multi-arch build..."
+ make podman-buildx IMG=$IMG:$TAG ENGINE=$ENGINE
+ else
+ make docker-build docker-push IMG=$IMG:$TAG
+ # Tag and Push DEV_TAG Image when DEV_TAG_PUSH is 'True'
+ if $DEV_TAG_PUSH ; then
+ $ENGINE tag $IMG:$TAG $IMG:$DEV_TAG
+ make docker-push IMG=$IMG:$DEV_TAG
+ fi
+ fi
# -- Deploy Operator

Binary file not shown.