- Update role name for README.md
- Avoid the "this_awx['resources'][0] is undefined" error in database_configuration.yml
- Add an update_status variable to conditionally include update_status.yml (sketched below)
- metrics_utility_enabled exists in the CRD but not as a variable
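A minimal sketch of the gating described above, assuming the new variable is a boolean and update_status.yml sits alongside the role's other task files:

```yaml
# Hypothetical sketch: only include the status-update tasks when enabled
- name: Update the AWX resource status
  ansible.builtin.include_tasks: update_status.yml
  when: update_status | bool
```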
Co-authored-by: Christian Adams <chadams@redhat.com>
With ansible 2.9.27 (operator-sdk v1.27.0), the rejectattr filter
returns a generator, so we need to cast it to a list.
The behavior doesn't occur with a more recent operator-sdk
version such as v1.34.0 (ansible-core 2.15.8), but applying the list
filter on that version works too (even though it isn't needed):
"<generator object select_or_reject at 0x7fbbf0443728>"
This is a similar issue to 80a9e8c.
TASK [Get the new resource pod information after updating resource.] ********************************
FAILED! => {"msg": "The conditional check '_new_pod['resources'] | rejectattr('metadata.deletionTimestamp', 'defined') | length' failed.
The error was: Unexpected templating type error occurred on ({% if _new_pod['resources'] | rejectattr('metadata.deletionTimestamp', 'defined') | length %} True {% else %} False {% endif %}): object of type 'generator' has no len()
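A hedged sketch of the resulting fix, using the task and variable names from the error above (the module, label selectors, and retry values are assumptions):

```yaml
# Hypothetical sketch: cast the rejectattr generator to a list so "length"
# works on ansible 2.9 (and is a harmless no-op on newer ansible-core)
- name: Get the new resource pod information after updating resource.
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: "{{ ansible_operator_meta.namespace }}"
    label_selectors:
      - "app.kubernetes.io/name={{ ansible_operator_meta.name }}"
  register: _new_pod
  until: _new_pod['resources'] | rejectattr('metadata.deletionTimestamp', 'defined') | list | length
  retries: 60
  delay: 5
```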
This also removes the unneeded quotes from the when conditions.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- Add new variables for Red Hat Hybrid Cloud Console shipping
- Simplify the configmap and secret setup
- Make PVC creation conditional on the ship_target type being directory (sketched below)
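A hedged sketch of the conditional PVC creation (the variable and template names are assumptions, not necessarily the role's actual ones):

```yaml
# Hypothetical sketch: only create the metrics PVC when shipping to a directory
- name: Create the metrics-utility PVC
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'metrics_utility_pvc.yaml.j2') }}"
  when: metrics_utility_ship_target | lower == 'directory'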
* In the sclorg PostgreSQL 15 image, the PGDATA directory is hardcoded
* If users were to modify this directory, they would only change the
directory the PVC is mounted to, not the directory PostgreSQL uses.
This would result in loss of data.
* Switch from /var/lib/pgsql/data/pgdata to /var/lib/pgsql/data/userdata (sketched below)
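A hedged sketch of the resulting mount (the container and volume names are assumptions):

```yaml
# Hypothetical sketch: mount the PVC at the parent of the image's hardcoded
# PGDATA (/var/lib/pgsql/data/userdata) so the data always lands on the volume
containers:
  - name: postgres
    volumeMounts:
      - name: postgres-15
        mountPath: /var/lib/pgsql/data
```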
With ansible 2.9.27 (operator-sdk v1.27.0), the reverse filter
returns an iterator, so we need to cast it to a list.
The behavior doesn't occur with a more recent operator-sdk
version such as v1.34.0 (ansible-core 2.15.8), but applying the list
filter on that version works too (even though it isn't needed):
"sorted_old_postgres_pods": "<list_reverseiterator object at 0x7f539eaa5610>"
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
* Change the default value for postgres_data_path from
/var/lib/postgresql/data/pgdata to /var/lib/pgsql/data/pgdata.
PostgreSQL 15 uses a different location for the
postgres data directory.
Fixes an issue where the database was not being written
to the mounted-in volume, so if the postgres
container restarted, data would be lost.
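A minimal sketch of the changed default (the defaults-file location is an assumption):

```yaml
# roles/installer/defaults/main.yml (hypothetical location)
# Old default: /var/lib/postgresql/data/pgdata
postgres_data_path: /var/lib/pgsql/data/pgdata
```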
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
---------
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Leave the old postgres-13 volume alone in case of unforeseen upgrade failure, for restore purposes.
Users can manually delete the old PVC after verifying the upgrade is complete.
* Fix awx_kube_devel
* Sanitize the version name for kube_dev
When in development mode, the awx version may look
like 23.9.1.dev18+gee9eac15dc.d20240311.
The k8s job for the migration can only have
a name with alphanumeric characters, '.', and '-',
so we can just drop off the '+' and everything after it.
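A hedged sketch of the sanitization (the fact name is an assumption):

```yaml
# Hypothetical sketch: keep only the part of the version before '+', which
# contains only characters valid in a Kubernetes job name
- name: Sanitize the version used in the migration job name
  ansible.builtin.set_fact:
    sanitized_awx_version: "{{ awx_version.split('+') | first }}"
```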
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
---------
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Seth Foster <fosterbseth@gmail.com>
* Bind ee_images, control_plane_ee_image and init_container_image to DEFAULT_AWX_VERSION instead of "latest" (sketched below)
* Fix the when condition on the init_container_image_version check
* Use DEFAULT_AWX_VERSION for AWXMeshIngress
* Add back AWX EE latest for backward compatibility
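A hedged sketch of the defaulting (the env lookup is an assumption about how DEFAULT_AWX_VERSION reaches the role; the EE image name is the upstream default):

```yaml
# Hypothetical sketch: pin image tags to the operator's DEFAULT_AWX_VERSION
# rather than "latest"
_default_awx_version: "{{ lookup('env', 'DEFAULT_AWX_VERSION') }}"
init_container_image_version: "{{ _default_awx_version }}"
control_plane_ee_image: "quay.io/ansible/awx-ee:{{ _default_awx_version }}"
```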
---------
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
* Upgrade to postgres:15
* Change the image from postgres to sclorg
* Handle the scenario where the upgrade status is not defined & correct the pg tag
* Rework the upgrade logic to be more resilient across multiple upgrades
---------
Co-authored-by: john-westcott-iv <john-westcott-iv@users.noreply.github.com>
Co-authored-by: Christian M. Adams <chadams@redhat.com>
* Replace the api version for the Deployment kind with apps/v1
* Add the new multiple-ingress spec and deprecate hostname and ingress_tls_secret (sketched below)
* Manage the new ingress_hosts.tls_secret backup separately
* Fix CI molecule lint warnings and errors
* Fix documentation
* Fix the ingress_hosts tls_secret key being optional
* Remove fieldDependency:ingress_type:Ingress for Ingress Hosts
* Fix the scenario when neither hostname nor ingress_hosts is defined
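A hedged sketch of the new spec shape (hostnames and secret names are placeholders):

```yaml
# Hypothetical AWX resource excerpt: multiple ingress hosts, each with an
# optional tls_secret, replacing the deprecated hostname/ingress_tls_secret
spec:
  ingress_type: ingress
  ingress_hosts:
    - hostname: awx.example.com
      tls_secret: awx-example-tls
    - hostname: awx-alt.example.com
```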
---------
Co-authored-by: Guillaume Lefevre <guillaume.lefevre@agoda.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
Co-authored-by: Christian Adams <chadams@redhat.com>
- Rename the scale_down vars to reference the new deployments, since the old one no longer exists
- Rename the postgres.yml scale-down vars as well, since it also references the old ones
Update the logic for determining whether the install.yml task should
run so it respects the auto_upgrade field in the AWX resource.
Conditions and expected behavior (a condition sketch follows the table):
```
auto_upgrade awx awx-web awx-task run install.yml
-------------- ----- --------- ---------- -----------------
T - - - T
F T - - F
F - T T F
F - T F T
F - F T T
F - F F T
```
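A hedged sketch of a condition encoding the table above (the deployment-existence fact names are assumptions; rows marked '-' are treated as don't-care, with the legacy awx deployment assumed absent in the awx-web/awx-task rows):

```yaml
# Hypothetical sketch: run install.yml when auto_upgrade is true, or when no
# complete deployment already exists (neither the legacy awx deployment nor
# both awx-web and awx-task together)
- name: Include the install tasks
  ansible.builtin.include_tasks: install.yml
  when: >
    auto_upgrade | bool
    or not (awx_deployment_exists | bool
            or (awx_web_deployment_exists | bool
                and awx_task_deployment_exists | bool))
```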
* First pass, still WIP; tolerations etc. still needed
* Add tolerations that don't work yet (cause unknown)
* Bug hunting
* Local push, still a WIP
* Affinity still needs testing for to_nice_yaml; tolerations logic is working
* Fixed the task deployment and affinity for both
With the previous approach, not all changes to the associated (mounted)
CMs/Secrets caused the Deployment to be rolled out, while the Deployment
could also be rolled out unnecessarily during e.g. Ingress or Service
changes (which do not require Pod restarts).
The previously existing Pod removal (state: absent) was incomplete, as
other pods continued to exist; it is also no longer needed after this
change, thanks to the added Pod annotations.
The added Deployment Pod annotations now cause the new ReplicaSet
version to be rolled out, effectively causing replacement of the
previously existing Pods in accordance with the deployment `strategy`
(https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#deploymentstrategy-v1-apps,
`RollingUpdate`) whenever there is a change in the associated CMs or
Secrets referenced in annotations. This implementation is quite standard
and widely used for Helm workflows -
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
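A hedged sketch of the annotation pattern (the template names and checksum filter choice are assumptions):

```yaml
# Hypothetical Deployment template excerpt: checksum annotations over the
# rendered ConfigMap/Secret so any content change rolls the ReplicaSet
spec:
  template:
    metadata:
      annotations:
        checksum-configmap: "{{ lookup('template', 'configmap.yaml.j2') | sha1 }}"
        checksum-secret: "{{ lookup('template', 'secret.yaml.j2') | sha1 }}"
```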
Do not consider Pods marked for deletion when calculating tower_pod, to
address the replica scale-down case, where the most recently spawned
Pods are normally the ones taken for removal, as well as the case where
the operator kicks off while some old replicas are still terminating.
Respect `creationTimestamp` to make sure the newest Pod is taken after
the Deployment is applied, since multiple RS Pods (from the old RS and
the new RS) could be running simultaneously while the rollout is
happening.
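A hedged sketch of the selection (the source and result variable names are assumptions):

```yaml
# Hypothetical sketch: ignore terminating Pods, then take the newest one
- name: Select the current tower pod
  ansible.builtin.set_fact:
    tower_pod: >-
      {{ tower_pods['resources']
         | rejectattr('metadata.deletionTimestamp', 'defined')
         | sort(attribute='metadata.creationTimestamp')
         | list
         | last | default({}) }}
```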
Proper waiting is already performed earlier during Deployment {apply: yes, wait: yes} -
e6ac874098/plugins/module_utils/k8s/waiter.py (L27).
Also, not every Deployment change produces new RS/Pods. For example,
changing Deployment labels won't cause a new rollout, but will cause the
`until` loop to be invoked unnecessarily (when replicas=1).
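A hedged sketch of the apply-and-wait step referenced above (the template name and timeout are assumptions):

```yaml
# Hypothetical sketch: kubernetes.core.k8s already blocks until the rollout
# completes when apply and wait are set, so no extra until-loop is required
- name: Apply the AWX Deployment and wait for rollout
  kubernetes.core.k8s:
    apply: yes
    wait: yes
    wait_timeout: 300
    definition: "{{ lookup('template', 'deployment.yaml.j2') }}"
```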