With the previous approach, not all changes to the associated (mounted)
ConfigMaps/Secrets caused the Deployment to be rolled out; at the same time,
the Deployment could have been rolled out unnecessarily during e.g. Ingress
or Service changes (which do not require Pod restarts).
The previously existing Pod removal (state: absent) was incomplete, as other
Pods continued to exist; it is also no longer needed with this change thanks
to the added Pod annotations.
The added Deployment Pod annotations now cause a new ReplicaSet
version to be rolled out, effectively replacing the previously existing
Pods in accordance with the deployment `strategy`
(https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#deploymentstrategy-v1-apps,
`RollingUpdate`) whenever there is a change in the associated ConfigMaps or
Secrets referenced in the annotations. This implementation is quite standard
and widely used for Helm workflows -
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
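A minimal sketch of the pattern in the Deployment's Pod template (the template names and annotation keys are illustrative, not the exact ones used by the role):

```yaml
spec:
  template:
    metadata:
      annotations:
        # Hash the rendered ConfigMap/Secret templates; a content change produces a
        # new annotation value, which rolls out a new ReplicaSet per the strategy.
        checksum-configmap: "{{ lookup('template', 'configmap.yaml.j2') | sha1 }}"
        checksum-secret: "{{ lookup('template', 'secret.yaml.j2') | sha1 }}"
```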
* Move label templates into `common` role
So that there is a single source of truth for label management, and labels
are unified across the other roles
* Introduce `additional_labels` (see the sketch after this list)
* Fix paths for labels templates
* Return `additional_labels_items` as list
* Add molecule tests
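A hedged sketch of how `additional_labels` could be used, assuming the listed keys are copied from the AWX resource's own `metadata.labels` onto the managed resources (label keys are illustrative):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  labels:
    my-company/team: devops        # illustrative labels on the AWX resource
    my-company/env: production
spec:
  additional_labels:               # keys from metadata.labels to propagate
    - my-company/team
    - my-company/env
```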
* Add an option to specify affinity rules for the awx pod
In some cases, you may want to use affinity rules instead of a
node selector so you can have more flexibility. For example, if you want
"soft" rules, i.e. "run my pod on this node if possible, otherwise
run it anywhere" (see the example below).
* Rename `node_affinity` to `affinity`
* Maintain defaults and CSV
* Add fields validation
Co-authored-by: Olivier <oliverf1ca@yahoo.com>
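A hedged example of such a "soft" rule via the new `affinity` field, assuming the field is passed through to the Pod spec unchanged (the node label is illustrative):

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype      # illustrative node label
                operator: In
                values:
                  - ssd
```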
Support external execution nodes
- Allow receptor.conf to be editable at runtime
- Create CA cert and key as a k8s secret
- Create work signing RSA keypair as a k8s secret
- Set up volume mounts for containers to have access to the needed
Receptor keys / certs to facilitate generating the install bundle
for a new execution node
- Added firewall rule, work signing, and TLS cert configuration to the default receptor.conf
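A rough sketch of what those receptor.conf additions could look like (paths, names, and port are illustrative, not the exact defaults shipped by the role):

```yaml
- work-signing:
    privatekey: /etc/receptor/signing/work-private-key.pem   # from the work signing Secret
    tokenexpiration: 1m
- tls-server:
    name: mutual-tls
    cert: /etc/receptor/tls/receptor.crt
    key: /etc/receptor/tls/receptor.key
    clientcas: /etc/receptor/tls/ca/receptor-ca.crt          # from the CA Secret
    requireclientcert: true
- tcp-listener:
    port: 27199
    tls: mutual-tls
```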
The volume mount changes in this PR fulfill the following:
- `receptor.conf` needs to be shared between the task container and the ee container
  - **task** container writes the `receptor.conf`
  - **ee** container consumes the `receptor.conf`
- Receptor CA cert/key need to be mounted by both the ee container and the web container
  - **ee** container needs the CA cert
  - **web** container needs the CA key to sign client certs for remote execution nodes
  - **web** container needs the CA cert to generate the install bundle for remote execution nodes
- Receptor work signing private/public keys need to be mounted by the ee, web, and task containers
  - **ee** container needs the private key to sign the work
  - **web** container needs the public key to generate the install bundle for remote execution nodes
  - **task** container needs the private key to sign the work
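A condensed sketch of the resulting mounts (container, volume, and path names are illustrative; the actual names are defined by the role templates):

```yaml
containers:
  - name: task                           # writes receptor.conf, signs work
    volumeMounts:
      - name: receptor-config            # shared volume holding receptor.conf
        mountPath: /etc/receptor
      - name: receptor-work-signing      # Secret with the work signing keypair
        mountPath: /etc/receptor/signing
        readOnly: true
  - name: ee                             # consumes receptor.conf, signs work
    volumeMounts:
      - name: receptor-config
        mountPath: /etc/receptor
      - name: receptor-ca                # Secret with the Receptor CA cert/key
        mountPath: /etc/receptor/tls/ca
        readOnly: true
      - name: receptor-work-signing
        mountPath: /etc/receptor/signing
        readOnly: true
  - name: web                            # signs client certs, builds install bundles
    volumeMounts:
      - name: receptor-ca
        mountPath: /etc/receptor/tls/ca
        readOnly: true
      - name: receptor-work-signing
        mountPath: /etc/receptor/signing
        readOnly: true
```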
Signed-off-by: Hao Liu <haoli@redhat.com>
Co-Authored-By: Seth Foster <fosterbseth@gmail.com>
Co-Authored-By: Shane McDonald <me@shanemcd.com>
* Bump PostgreSQL, Nginx, and Redis versions
* pg12 --> pg13 upgrade path
* Set supported pg version as a variable to remain DRY
* Make deleting the old db data pvc after upgrade configurable
* Use labels to find the postgres pod
* backup/restore: fix postgres label selector value
We need to use the `deployment_name` variable for the postgres instance
name (see the sketch below).
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
* backup/restore: add missing default supported_pg_version variable
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
* restore: update database_host fact with pg suffix
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Co-authored-by: Dimitri Savineau <dsavinea@redhat.com>
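A hedged sketch of the Pod lookup with a selector built from `deployment_name` (the label keys/values are illustrative, not necessarily the exact labels the role applies):

```yaml
- name: Find the postgres pod
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: "{{ backup_pg_namespace }}"               # illustrative variable
    label_selectors:
      - "app.kubernetes.io/component=database"           # illustrative labels
      - "app.kubernetes.io/instance=postgres-{{ deployment_name }}"
  register: postgres_pod
```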
When the `task_resource_requirements` variable has no "limits" key (which
is the default value), the config template generation fails:
----------------------------------
looking for "config.yaml.j2" at "/opt/ansible/roles/installer/templates/config.yaml.j2"
File lookup using /opt/ansible/roles/installer/templates/config.yaml.j2 as file
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: the inline if-expression on line 36 evaluated to false and no else section was defined.
The error appears to be in /opt/ansible/roles/installer/tasks/resources_configuration.yml: line 30, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Apply Resources
^ here
----------------------------------
The current condition doesn't have an else statement, so the template fails
when the "limits" key isn't present.
This commit rewrites the current if/else statement in the Jinja template.
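A minimal sketch of that kind of rewrite (the key names are illustrative; the actual expression lives in the role's config template):

```yaml
# Before: undefined when 'limits' is missing, because the inline 'if' has no 'else'
#   {{ task_resource_requirements['limits'] if 'limits' in task_resource_requirements }}
# After: always defined, falling back to an empty mapping
task_limits: "{{ task_resource_requirements['limits'] | default({}) }}"
```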
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
* Added the capability to set the Redis container resources
* Reduce resource requests so that it can be scheduled on GitHub workflows
Co-authored-by: Cedric Morin <cedric.morin_ext@michelin.com>
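A hedged example of what that could look like in the AWX spec, assuming the new field follows the same shape as the other `*_resource_requirements` options:

```yaml
spec:
  redis_resource_requirements:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
```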
When e.g. multiple authenticated container registries are used, we need to
be able to add multiple imagePullSecrets to the k8s resources.
Co-authored-by: Maximilian Meister <maximilian.meister@pm.me>
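A hedged sketch, assuming the spec accepts a list that is rendered into the Pods' imagePullSecrets (the field and secret names are illustrative):

```yaml
spec:
  image_pull_secrets:
    - registry-one-credentials
    - registry-two-credentials
```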
- This avoids issues with multiple initContainers trying to mount the
postgres pvc at once, as is the case when there are multiple
replicas.
Signed-off-by: Christian M. Adams <chadams@redhat.com>
Otherwise, we get the too-low setting derived from the request, which
will be a rough experience for folks who have been using the operator
and are used to having the entire underlying node capacity.
Users can still set the setting via `extra_settings` to give each pod
an individualized capacity, or set a limit.
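A hedged example of that `extra_settings` escape hatch (the capacity-related setting names and values are illustrative, not a recommendation):

```yaml
spec:
  extra_settings:
    - setting: SYSTEM_TASK_ABS_CPU     # illustrative capacity override
      value: 6
    - setting: SYSTEM_TASK_ABS_MEM
      value: 20
```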