move chmod/chgrp for projects_persistence to a different init container
the default init container is the awx-ee image and is unable to run the chgrp command,
so the step is moved into a separate init container
note this is not needed for OpenShift, so it is conditional on is_k8s
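A minimal sketch of what the dedicated init container might look like, assuming the projects PVC is mounted at /var/lib/awx/projects; the container name, image, volume name, and group/permission values are illustrative, not the operator's exact ones:

```
# Rendered only on plain Kubernetes (is_k8s); OpenShift does not need this step.
{% if is_k8s | bool %}
initContainers:
  - name: init-projects                 # illustrative name
    image: busybox                      # any image that ships chgrp/chmod
    command:
      - sh
      - -c
      - chgrp -R 0 /var/lib/awx/projects && chmod -R g+rwX /var/lib/awx/projects
    volumeMounts:
      - name: projects                  # illustrative volume name
        mountPath: /var/lib/awx/projects
{% endif %}
```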
SUPERVISOR_WEB_CONFIG_PATH is used in the old deployment for the task container to reach into the web container and restart services
this is no longer possible/needed after splitting the deployment
renaming SUPERVISOR_WEB_CONFIG_PATH to SUPERVISOR_CONFIG_PATH
and setting it to the supervisor file for the container
this can still be useful to help run `supervisorctl -c $SUPERVISOR_CONFIG_PATH`
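A hedged sketch of how the renamed variable might be set on each container; the config file path is an assumption, the point being that each container points at its own supervisor config:

```
# Illustrative env entry on the web container; the task container would point
# SUPERVISOR_CONFIG_PATH at its own supervisor config file instead.
env:
  - name: SUPERVISOR_CONFIG_PATH
    value: /etc/supervisord_web.conf   # assumed path, for illustration only
```

Inside the container, `supervisorctl -c $SUPERVISOR_CONFIG_PATH status` (or `restart <program>`) then targets that container's own supervisor instance.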
- rename scale_down vars to reference the new deployments, since the old one no longer exists
- rename the scale-down vars in postgres.yml as well, since it still references the old ones
Added the following volume mounts to the web container:
- receptor-work-signing
- receptor-ca
- work-public-key.pem
Also added these corresponding volumes to the web deployments:
- receptor-ca
- receptor-work-signing
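A hedged sketch of how these additions might look in the web Deployment; the mount paths, secret names, and the assumption that work-public-key.pem is surfaced as a key of the work-signing secret are for illustration only:

```
containers:
  - name: awx-web
    volumeMounts:
      - name: receptor-ca
        mountPath: /etc/receptor/tls/ca      # illustrative path
        readOnly: true
      - name: receptor-work-signing
        mountPath: /etc/receptor/signing     # work-public-key.pem assumed to live here
        readOnly: true
volumes:
  - name: receptor-ca
    secret:
      secretName: awx-receptor-ca            # illustrative secret name
  - name: receptor-work-signing
    secret:
      secretName: awx-receptor-work-signing  # illustrative secret name
```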
update logic for determining whether the install.yml task should be run
to respect the auto_upgrade field in the AWX resource
conditions and expected behavior:
```
auto_upgrade   awx   awx-web   awx-task   run install.yml
------------   ---   -------   --------   ---------------
T              -     -         -          T
F              T     -         -          F
F              -     T         T          F
F              -     T         F          T
F              -     F         T          T
F              -     F         F          T
```
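A hedged Ansible sketch of one reasonable encoding of the table above (reading the rows with "-" for the old awx Deployment as that Deployment being absent); the deployment-existence facts and their names are illustrative, only auto_upgrade comes from the AWX resource:

```
# awx_deployment / awx_web_deployment / awx_task_deployment are assumed to be
# k8s_info results for the old and new Deployments (illustrative names).
- name: Run install.yml when an install or upgrade is required or allowed
  include_tasks: install.yml
  when: >
    (auto_upgrade | bool)
    or (
      awx_deployment.resources | length == 0
      and not (awx_web_deployment.resources | length > 0
               and awx_task_deployment.resources | length > 0)
    )
```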
* first pass, still WIP, need tolerations etc
* add tolerations (not working yet, cause unknown)
* bug hunting
* local push, still a WIP
* affinity still needs testing for to_nice_yaml; tolerations logic is working
* fixed task deployment and affinity for both
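A hedged Jinja2 sketch of how tolerations and affinity from the AWX spec might be rendered into the Deployment pod template with to_nice_yaml; the variable names, the assumption that they arrive as native lists/dicts, and the indentation widths are illustrative:

```
# Fragment of the Deployment pod template (sketch, not the operator's exact template).
spec:
  template:
    spec:
{% if tolerations %}
      tolerations:
        {{ tolerations | to_nice_yaml(indent=2) | indent(width=8) }}
{% endif %}
{% if affinity %}
      affinity:
        {{ affinity | to_nice_yaml(indent=2) | indent(width=8) }}
{% endif %}
```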
We could let the web container terminate as usual: there is no reason
to keep it running, since it does not participate in job control.
Additionally, it stops receiving traffic as soon as termination begins:
> At the same time as the kubelet is starting graceful shutdown, the
> control plane removes that shutting-down Pod from EndpointSlice (and
> Endpoints) objects where these represent a Service with a configured
> selector
@ https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
- previously, there was no way to auto-assign a port by default, which
  at times led to conflicts with other deployments
- nodeport_port param can still be used to specify a port if desired
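A hedged sketch of the Service template behavior, assuming a Jinja2 conditional on nodeport_port; the service name and port numbers are illustrative:

```
apiVersion: v1
kind: Service
metadata:
  name: awx-web-service              # illustrative
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8052               # illustrative ports
{% if nodeport_port is defined %}
      nodePort: {{ nodeport_port }}  # pin a port only when the user asks for one
{% endif %}
# When nodePort is omitted, Kubernetes assigns a free port from the NodePort range.
```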
With the previous approach, not every change to the associated (mounted)
ConfigMaps/Secrets caused the Deployment to be rolled out, while the
Deployment could also be rolled out unnecessarily during e.g. Ingress or
Service changes (which do not require Pod restarts).
The previously existing Pod removal (state: absent) was incomplete, as
other Pods continued to exist; it is also no longer needed with this
change because of the added Pod annotations.
The added Deployment Pod annotations now cause the new ReplicaSet
version to be rolled out, effectively causing replacement of the
previously existing Pods in accordance with the deployment `strategy`
(https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#deploymentstrategy-v1-apps,
`RollingUpdate`) whenever there is a change in the associated CMs or
Secrets referenced in annotations. This implementation is quite standard
and widely used for Helm workflows -
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
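A hedged sketch of the annotation pattern, adapted from the Helm tip above to the operator's Jinja2 templates; the template paths and annotation keys are illustrative:

```
spec:
  template:
    metadata:
      annotations:
        # Any change to the rendered ConfigMap/Secret changes the checksum,
        # which changes the pod template and triggers a RollingUpdate rollout.
        checksum-configmap: "{{ lookup('template', 'configmaps/config.yaml.j2') | sha1 }}"
        checksum-secret: "{{ lookup('template', 'secrets/app_credentials.yaml.j2') | sha1 }}"
```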
Do not consider Pods marked for deletion when calculating tower_pod, to
address the replica scale-down case, where the most recently spawned
Pods are normally the ones selected for removal, as well as the case
where the operator kicks off while some old replicas are still
terminating.
Respect `creationTimestamp` so as to make sure that the newest Pod is
taken after the Deployment is applied, since Pods from multiple
ReplicaSets (the old RS and the new RS) can be running simultaneously
while the rollout is happening.
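A hedged Ansible sketch of how such a selection could be expressed; the k8s_info query, label selector, and variable names are illustrative, not the operator's exact code, and it assumes at least one non-terminating Pod exists:

```
- name: List pods for the task deployment
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: "{{ ansible_operator_meta.namespace }}"
    label_selectors:
      - "app.kubernetes.io/name={{ ansible_operator_meta.name }}-task"  # illustrative selector
  register: tower_pods

- name: Pick the newest pod that is not already terminating
  set_fact:
    tower_pod: >-
      {{ tower_pods.resources
         | rejectattr('metadata.deletionTimestamp', 'defined')
         | sort(attribute='metadata.creationTimestamp')
         | last }}
```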