* Update awx.ansible.com_awxs.yaml with rsyslog resource containers
* Update awx-operator.clusterserviceversion.yaml with x-descriptors
* Add default values in main.yml
* Template resource_requirements in web.yaml.j2 and task.yaml.j2
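A minimal sketch of the idea, assuming the new rsyslog variable follows the existing web/task naming in the role defaults and is templated straight into the rsyslog container's `resources` field:

```yaml
# Hypothetical defaults excerpt (values are illustrative)
rsyslog_resource_requirements:
  requests:
    cpu: 100m
    memory: 128Mi
```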
* feat: add HostAliases to web/task containers (fixes #646)
* feat: add HostAliases to web/task containers
* Make host_aliases display in the Operator UI
* Add default value for host_aliases and add to web deployment template
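A minimal sketch of the intended usage, assuming `host_aliases` in the AWX spec maps directly onto the pods' `hostAliases` field:

```yaml
# Hypothetical AWX spec excerpt; each entry is rendered into the hostAliases
# of the web and task pod templates.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  host_aliases:
    - ip: 10.1.2.3
      hostnames:
        - registry.internal.example.com
```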
Co-authored-by: zhangpeng.zong <zhangpeng.zong@funplus.com>
Co-authored-by: Dimitri Savineau <savineau.dimitri@gmail.com>
- In some use cases, limits must be set for every container in a
cluster. To address this, we will use the task and web resource
requirements for the initContainers where applicable.
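A minimal Jinja2 sketch of the idea, reusing the task requirements for the init container (the image variable name is a placeholder):

```yaml
# task.yaml.j2 (hypothetical excerpt): the init container inherits the task
# container's requests/limits instead of running without any limits.
initContainers:
  - name: init
    image: '{{ init_container_image }}'
    resources: {{ task_resource_requirements }}
```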
This was fixed in 6cae8df but the task/web split rebase didn't apply this
to the web deployment.
This prevents the operator from being deployed when FIPS is enabled.
{"msg": "An unhandled exception occurred while running the lookup plugin
'template'. Error was a <class 'ValueError'>, original message:
[digital envelope routines: EVP_DigestInit_ex] disabled for FIPS"}
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Move chmod/chgrp for projects_persistence to a different init container.
The default init container is awx-ee, which is unable to run the chgrp command,
so this step is moved into a separate init container.
Note this is not needed for OpenShift, so it is conditional on is_k8s.
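A rough sketch of the shape of the change, assuming a dedicated init container guarded by `is_k8s`; the image, group, and mount names are placeholders:

```yaml
# task.yaml.j2 (hypothetical excerpt)
{% if is_k8s %}
  - name: init-projects
    image: '{{ image }}'
    command:
      - sh
      - -c
      - chmod 775 /var/lib/awx/projects && chgrp 1000 /var/lib/awx/projects
    volumeMounts:
      - name: '{{ ansible_operator_meta.name }}-projects'
        mountPath: /var/lib/awx/projects
{% endif %}
```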
In the old deployment, SUPERVISOR_WEB_CONFIG_PATH was used by the task container to reach into the web container and restart services.
This is no longer possible (or needed) after splitting the deployment.
Rename SUPERVISOR_WEB_CONFIG_PATH to SUPERVISOR_CONFIG_PATH
and set it to the supervisor config file for the container.
This is still useful for running `supervisorctl -c $SUPERVISOR_CONFIG_PATH`.
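A minimal sketch of the per-container env var (paths are illustrative); the task container gets the same variable pointing at its own supervisor config:

```yaml
# web.yaml.j2 (hypothetical excerpt)
env:
  - name: SUPERVISOR_CONFIG_PATH
    value: /etc/supervisord_web.conf
```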
Added the following volume mounts to the web container:
- receptor-work-signing
- receptor-ca
- work-public-key.pem
Also added these corresponding volumes to the web deployments:
- receptor-ca
- receptor-work-signing
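Roughly the shape of these additions; the secret names, mount paths, and subPath are assumptions:

```yaml
# web container mounts (hypothetical excerpt)
volumeMounts:
  - name: receptor-ca
    mountPath: /etc/receptor/tls/ca
    readOnly: true
  - name: receptor-work-signing
    mountPath: /etc/receptor/work_public_key.pem
    subPath: work-public-key.pem
    readOnly: true
---
# backing volumes on the web deployment (hypothetical excerpt)
volumes:
  - name: receptor-ca
    secret:
      secretName: '{{ ansible_operator_meta.name }}-receptor-ca'
  - name: receptor-work-signing
    secret:
      secretName: '{{ ansible_operator_meta.name }}-receptor-work-signing'
```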
* first pass, still WIP, need tolerations etc
* add tolerations that don't work yet (cause unknown)
* bug hunting
* local push, still a WIP
* affinity still needs testing for to_nice_yaml; tolerations logic is working
* fixed task deployment and affinity for both
We can let the web container terminate as usual, as there is no
reason to keep it running: it does not participate in job control.
Additionally, it stops receiving traffic as soon as termination begins:
> At the same time as the kubelet is starting graceful shutdown, the
> control plane removes that shutting-down Pod from EndpointSlice (and
> Endpoints) objects where these represent a Service with a configured
> selector
@ https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
With the previous approach, not every change to an associated (mounted)
ConfigMap/Secret caused the Deployment to be rolled out, while the Deployment
could also be rolled out unnecessarily during e.g. Ingress or Service
changes (which do not require Pod restarts).
The previously existing Pod removal (state: absent) was incomplete, as
other Pods continued to exist; it is also no longer needed with this
change, thanks to the added Pod annotations.
The added Deployment Pod annotations now cause the new ReplicaSet
version to be rolled out, effectively causing replacement of the
previously existing Pods in accordance with the deployment `strategy`
(https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#deploymentstrategy-v1-apps,
`RollingUpdate`) whenever there is a change in the associated CMs or
Secrets referenced in annotations. This implementation is quite standard
and widely used for Helm workflows -
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
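A minimal sketch of the same pattern in this repo's Ansible/Jinja2 terms; the annotation keys, template paths, and hash filter are illustrative, not the exact implementation:

```yaml
# deployment template (hypothetical excerpt): hashing the rendered ConfigMap /
# Secret templates into pod annotations creates a new ReplicaSet whenever
# their contents change, and only then.
spec:
  template:
    metadata:
      annotations:
        checksum-configmap: "{{ lookup('template', 'configmap.yaml.j2') | sha1 }}"
        checksum-secret-settings: "{{ lookup('template', 'settings_secret.yaml.j2') | sha1 }}"
```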
* Move label templates into `common` role
So that there is a single source of label management, and labels are
unified across the other roles.
* Introduce `additional_labels`
* Fix paths for labels templates
* Return `additional_labels_items` as list
* Add molecule tests
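A sketch of the intended usage, assuming `additional_labels` lists which labels from the AWX resource itself should be propagated to the child resources:

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  labels:
    example.com/team: platform
    example.com/env: prod
spec:
  additional_labels:
    - example.com/team
    - example.com/env
```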
* Add an option to specify affinity rules for the awx pod
In some cases, you may want to use affinity rules instead of a
node selector so you have more flexibility, for example to express
"soft" rules, i.e. run the pod on this node if possible, otherwise
run it anywhere (see the sketch below).
* Rename `node_affinity` to `affinity`
* Maintain defaults and CSV
* Add fields validation
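A sketch of such a "soft" rule expressed through the new `affinity` field; the node label is a placeholder and the block is passed through to the pod spec:

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
```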
Co-authored-by: Olivier <oliverf1ca@yahoo.com>
Support external execution nodes
- Allow receptor.conf to be editable at runtime
- Create CA cert and key as a k8s secret
- Create work signing RSA keypair as a k8s secret
- Set up volume mounts so containers have access to the Receptor keys/certs
  needed to generate the install bundle for a new execution node
- Added firewall rule, work signing, and TLS cert configuration to the default
  receptor.conf (see the sketch below)
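A rough sketch of what the new receptor.conf sections could look like (paths, names, and port are assumptions; the firewall rule is omitted):

```yaml
# Hypothetical receptor.conf excerpt
- work-signing:
    privatekey: /etc/receptor/work_private_key.pem
    tokenexpiration: 1m
- work-verification:
    publickey: /etc/receptor/work_public_key.pem
- tls-server:
    name: mutual-tls
    cert: /etc/receptor/tls/receptor.crt
    key: /etc/receptor/tls/receptor.key
    clientcas: /etc/receptor/tls/ca/receptor-ca.crt
- tcp-listener:
    port: 27199
    tls: mutual-tls
```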
The volume mount changes in this PR fulfill the following:
- `receptor.conf` needs to be shared between the task container and the ee container
  - the **task** container writes `receptor.conf`
  - the **ee** container consumes `receptor.conf`
- the receptor CA cert/key need to be mounted by both the ee container and the web container
  - the **ee** container needs the CA cert
  - the **web** container needs the CA key to sign client certs for remote execution nodes
  - the **web** container needs the CA cert to generate the install bundle for a remote execution node
- the receptor work private/public keys need to be mounted by the ee, web, and task containers
  - the **ee** container needs the private key to sign the work
  - the **web** container needs the public key to generate the install bundle for a remote execution node
  - the **task** container needs the private key to sign the work
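A minimal sketch of the task/ee sharing described above, assuming a writable shared volume for `receptor.conf` that the task container mounts read-write and the ee container read-only (names and paths are illustrative):

```yaml
# pod-level volume shared by the task and ee containers (hypothetical)
volumes:
  - name: receptor-config
    emptyDir: {}
---
# task container: writes/updates receptor.conf at runtime
volumeMounts:
  - name: receptor-config
    mountPath: /etc/receptor
---
# ee container: only consumes receptor.conf
volumeMounts:
  - name: receptor-config
    mountPath: /etc/receptor
    readOnly: true
```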
Signed-off-by: Hao Liu <haoli@redhat.com>
Co-Authored-By: Seth Foster <fosterbseth@gmail.com>
Co-Authored-By: Shane McDonald <me@shanemcd.com>
Signed-off-by: Hao Liu <haoli@redhat.com>
Co-authored-by: Shane McDonald <me@shanemcd.com>
Co-authored-by: Seth Foster <fosterbseth@gmail.com>