We can let the web container terminate as usual; there is no reason to
keep it running, since it does not participate in job control.
Additionally, it stops receiving traffic as soon as termination begins:
> At the same time as the kubelet is starting graceful shutdown, the
> control plane removes that shutting-down Pod from EndpointSlice (and
> Endpoints) objects where these represent a Service with a configured
> selector
@ https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
- previously, there was no way to auto-assign a port by default, which
at times led to conflicts with other deployments
- the nodeport_port param can still be used to specify a port if desired
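As a rough illustration (the service name and port numbers here are
assumptions, not the actual template), the rendered Service would only
set `nodePort` when `nodeport_port` is provided, and otherwise leave it
out so Kubernetes auto-assigns one:

```yaml
# Hypothetical rendered Service; names and ports are illustrative only
apiVersion: v1
kind: Service
metadata:
  name: awx-demo-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8052
      # nodePort: 30080  -- rendered only when nodeport_port is set;
      # otherwise Kubernetes picks a free port from its NodePort range
```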
With the previous approach, not every change to an associated (mounted)
CM/Secret caused the Deployment to be rolled out, while the Deployment
could also be rolled out unnecessarily during e.g. Ingress or Service
changes (which do not require Pod restarts).
The previously existing Pod removal (state: absent) was incomplete, as
other Pods continued to exist; it is also no longer needed with this
change, thanks to the added Pod annotations.
The added Deployment Pod annotations now cause the new ReplicaSet
version to be rolled out, effectively causing replacement of the
previously existing Pods in accordance with the deployment `strategy`
(https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#deploymentstrategy-v1-apps,
`RollingUpdate`) whenever there is a change in the associated CMs or
Secrets referenced in annotations. This implementation is quite standard
and widely used for Helm workflows -
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
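A minimal sketch of the pattern (the annotation key, template name and
hashing filter below are assumptions, following the Helm tip linked
above): the Pod template carries a checksum of the mounted CM/Secret
content, so any content change alters the Pod template and triggers a
new ReplicaSet rollout:

```yaml
# Hypothetical Deployment excerpt; the checksum source is illustrative
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Changes whenever the rendered ConfigMap content changes, which
        # makes the Pod template differ and kicks off a rolling update
        checksum-configmap: "{{ lookup('template', 'configmap.yaml.j2') | sha1 }}"
    spec:
      containers:
        - name: web
          image: quay.io/ansible/awx:latest
```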
Do not consider Pods marked for deletion when calculating tower_pod.
This addresses the replicas scale-down case, where the most recently
spawned Pods are normally the ones selected for removal, as well as the
case where the operator kicks off while some old replicas are still
terminating.
Respect `creationTimestamp` to make sure that the newest Pod is taken
after the Deployment is applied, since Pods from both the old and the
new ReplicaSet could be running simultaneously while the rollout is
happening.
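A hedged sketch of what that selection could look like as an Ansible
task (the module, variable and label names are assumptions, not the
operator's exact code):

```yaml
# Hypothetical tasks: pick the newest web Pod that is not terminating
- name: Find candidate web Pods
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: "{{ ansible_operator_meta.namespace }}"
    label_selectors:
      - "app.kubernetes.io/name={{ ansible_operator_meta.name }}-web"
  register: web_pods

- name: Select the newest Pod not marked for deletion
  set_fact:
    tower_pod: >-
      {{ web_pods.resources
         | rejectattr('metadata.deletionTimestamp', 'defined')
         | sort(attribute='metadata.creationTimestamp')
         | last | default({}) }}
```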
Proper waiting is already performed earlier during Deployment{apply: yes, wait: yes} -
e6ac874098/plugins/module_utils/k8s/waiter.py (L27).
Also, not every Deployment change produces new ReplicaSets/Pods. For
example, changing Deployment labels won't cause a new rollout, but would
cause the `until` loop to be invoked unnecessarily (when replicas=1).
There are cases where rolling out a new Deployment may take longer than
the default timeout of 120s.
For instance, when a Deployment has multiple replicas, each replica
starts on a separate node, and the Deployment specifies new images, just
pulling the new images for each replica may exceed the default timeout
of 120s.
Multiplying the default timeout by the number of replicas should
generally provide enough time for all replicas to start.
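For illustration only (the variable names are assumptions; wait and
wait_timeout are standard kubernetes.core.k8s options), the scaled
timeout could be wired up like this:

```yaml
# Hypothetical task: wait time grows with the number of replicas
- name: Apply the web Deployment and wait for it to become ready
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'deployment.yaml.j2') }}"
    wait: yes
    wait_timeout: "{{ 120 * (replicas | int) }}"
```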
* Move label templates into `common` role
So that there is a single source of label management, and labels are
unified across the other roles
* Introduce `additional_labels` (see the usage sketch after this list)
* Fix paths for labels templates
* Return `additional_labels_items` as list
* Add molecule tests
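A rough usage sketch (the label and resource names are placeholders,
and this only illustrates the intent): labels listed under
`additional_labels` are taken from the custom resource's metadata and
propagated to the resources the roles create:

```yaml
# Hypothetical AWX resource; only the labels listed in additional_labels
# would be propagated from metadata.labels to the generated resources
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  labels:
    my-team: platform
    environment: dev
spec:
  additional_labels:
    - my-team
    - environment
```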
- Reconfigure index file generation
- checkout gh-pages branch in promote.yaml
- fix helm-index make target
- add gh-pages folder in .gitignore
Signed-off-by: Miles Wilson <wilson.mil@icloud.com>
Co-authored-by: Hao Liu <haoli@redhat.com>
Co-authored-by: Christian Adams <rooftopcellist@gmail.com>
To get more information during CI debugging, turning off the no_log
statement helps by not hiding the output:
FAILED! => {"censored": "the output has been hidden due to the fact that
'no_log: true' was specified for this result"}
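A hedged example of how such a toggle might look in a task (the
`no_log` variable and the task shown are assumptions, not the exact
change):

```yaml
# Hypothetical task: output is hidden by default, but CI debugging can
# reveal it by setting the no_log variable to false
- name: Apply the admin password secret
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'secret.yaml.j2') }}"
  no_log: "{{ no_log | default(true) }}"
```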
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
I had a hard time identifying how to declare the include statement for a custom certificate bundle within the Kustomize file.
The tricky part for me was to spot the option "disableNameSuffixHash: true", needed to avoid the secret name being renamed with a hash suffix.
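For reference, a minimal kustomization.yaml sketch (the secret and file
names are placeholders):

```yaml
# Hypothetical excerpt: generate the CA bundle Secret without the
# content-hash suffix so it keeps a predictable, fixed name
secretGenerator:
  - name: custom-certs
    type: Opaque
    files:
      - bundle-ca.crt=my-custom-ca.crt
    options:
      disableNameSuffixHash: true
```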
* Add an option to specify affinity rules for the awx pod
In some cases, you may want to use affinity rules instead of a
node selector so you have more flexibility. For example, if you want
to have "soft" rules, i.e. run my pod on this node if possible,
otherwise run it anywhere (see the sketch below).
* Rename `node_affinity` to `affinity`
* Maintain defaults and CSV
* Add fields validation
Co-authored-by: Olivier <oliverf1ca@yahoo.com>
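A hedged example of such a "soft" rule passed through the new field
(the node label value is a placeholder):

```yaml
# Hypothetical AWX spec excerpt: prefer a specific node, but allow
# scheduling elsewhere if that node is unavailable
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - my-preferred-node
```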