Commit Graph

255 Commits

Author SHA1 Message Date
Yuriy Novostavskiy
0f6b4c873a Limit compatibility to Helm >=v3.0.0,<4.0.0 (#1039)
SUMMARY
Helm v4 is a major version with backward-incompatible changes, including to the flags and output of the Helm CLI and to the SDK. This version is currently not supported in kubernetes.core. This PR is related to #1038 and is a short-term solution that marks the compatibility range explicitly.
ISSUE TYPE

Bugfix Pull Request
Docs Pull Request

COMPONENT NAME

helm
helm_template
helm_info
helm_repository
helm_pull
helm_registry_auth
helm_plugin
helm_plugin_info

ADDITIONAL INFORMATION
Added a `validate_helm_version()` method to AnsibleHelmModule that enforces the version constraint >=3.0.0,<4.0.0.
Fails fast with a clear error message: "Helm version must be >=3.0.0,<4.0.0, current version is {version}"
Some modules (e.g. helm_registry_auth) are technically compatible with Helm v4, but the validation was added to all helm modules.
Partially co-authored by GitHub Copilot with the Claude Sonnet 4 model.
Addresses issue #1038
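A minimal, hedged sketch of such a gate (the merged implementation may differ); module.get_helm_version() is assumed here, while validate_helm_version() and the error text come from the PR:

from ansible.module_utils.compat.version import LooseVersion

HELM_MIN_VERSION = "3.0.0"
HELM_MAX_VERSION = "4.0.0"  # exclusive upper bound

def validate_helm_version(module):
    # module.get_helm_version() is assumed to return a version string such as "3.14.2"
    version = module.get_helm_version()
    if not (LooseVersion(HELM_MIN_VERSION) <= LooseVersion(version) < LooseVersion(HELM_MAX_VERSION)):
        module.fail_json(
            msg="Helm version must be >=3.0.0,<4.0.0, current version is {0}".format(version)
        )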

Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
Reviewed-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
(cherry picked from commit 13791ec7bf)

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
2026-01-26 11:03:26 -08:00
patchback[bot]
acc168e61f Fix K8S_AUTH_VERIFY_SSL environment value handling in kubectl connection plugin (#1049) (#1068)
This is a backport of PR #1049 as merged into main (12abc9b).
SUMMARY
Fixed a bug where setting K8S_AUTH_VERIFY_SSL=true (or any string value) caused the value to be treated as a separate kubectl command argument instead of being properly converted to a boolean.
The option key name is validate_certs, which does NOT end with "verify_ssl", so the original condition key.endswith("verify_ssl") at line 327 failed. This caused the code to fall through to the else block, which added the value as separate arguments: ["--insecure-skip-tls-verify", "true"], making "true" appear as a kubectl command.
Fixes #1021
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
kubernetes.core.kubectl
ADDITIONAL INFORMATION
Changes Made

Changed condition from key.endswith("verify_ssl") to key == "validate_certs"
Added import of boolean function from ansible.module_utils.parsing.convert_bool
Added proper boolean conversion using boolean(self.get_option(key), strict=False)
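A hedged, self-contained sketch of the fixed branch; cmd, key, and the hardcoded "true" stand in for the plugin's option loop and self.get_option(key) — only the key == "validate_certs" comparison and the boolean() conversion come from the PR:

from ansible.module_utils.parsing.convert_bool import boolean

cmd = ["/usr/bin/kubectl"]     # illustrative command prefix
key = "validate_certs"
value = "true"                 # stands in for self.get_option(key)
if key == "validate_certs":    # the fixed condition (was key.endswith("verify_ssl"))
    # boolean() maps "true"/"yes"/"1" and friends to True; strict=False
    # returns False for unrecognized strings instead of raising.
    skip_verify = not boolean(value, strict=False)
    cmd.append("--insecure-skip-tls-verify={0}".format(str(skip_verify).lower()))
print(cmd)  # ['/usr/bin/kubectl', '--insecure-skip-tls-verify=false']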

Partially used LLM (GitHub Copilot with Claude Sonnet 4).
Before Fix
K8S_AUTH_VERIFY_SSL=true
Command: ['/usr/bin/kubectl', '--insecure-skip-tls-verify', 'true', 'exec', ...]

                                                            ^^^^^ treated as kubectl command (BUG!)

After Fix
K8S_AUTH_VERIFY_SSL=true
Command: ['/usr/bin/kubectl', '--insecure-skip-tls-verify=false', 'exec', ...]
                                                           ^^^^^ properly converted (FIXED!)

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2026-01-23 20:24:33 +00:00
patchback[bot]
0ff5c952b8 address sanity issues (#1056) (#1065)
This is a backport of PR #1056 as merged into main (bd1cacc).
SUMMARY


helm/helm_info - Deprecate some parameters and add new ones to resolve sanity issues.
k8s - the return block documentation is not aligned with what the module returns.


ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

helm, helm_info, k8s
Fixes: #1046

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2026-01-22 13:25:20 +00:00
patchback[bot]
e00d4fc9d7 Extend k8s action group (#992) (#1025)
This is a backport of PR #992 as merged into main (798f549).
SUMMARY


Add all k8s_* modules to the action group in order to easily set the kubeconfig parameter.
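A hedged sketch of what this enables, assuming the group is addressed as group/kubernetes.core.k8s in module_defaults; the kubeconfig path and task are illustrative:

- hosts: localhost
  gather_facts: false
  module_defaults:
    group/kubernetes.core.k8s:
      kubeconfig: ~/.kube/config   # set once for every module in the group
  tasks:
    - name: List pods without repeating kubeconfig
      kubernetes.core.k8s_info:
        kind: Pod
        namespace: default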
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

meta
ADDITIONAL INFORMATION

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-10-14 20:46:30 +00:00
patchback[bot]
623b8b2716 Selectively redact sensitive kubeconfig data from logs (#1014) (#1023)
This is a backport of PR #1014 as merged into main (4fa3648).
SUMMARY

Resolves #782

ISSUE TYPE


Bugfix Pull Request

ADDITIONAL INFORMATION


The proper redaction of kubeconfig data can be seen by running this example playbook with verbosity of -vvv against the code in this PR.
Prior to these changes, all info was redacted (as shown in the example below):
ok: [local] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "binary_path": null,
            "ca_cert": null,
            "context": null,
            "get_all_values": false,
            "host": null,
            "kubeconfig": {
                "apiVersion": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "clusters": [
                    {
                        "cluster": {
                            "insecure-skip-tls-verify": true,
                            "server": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
                    {
                        "cluster": {
                            "certificate-authority-data": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "server": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
                    {
                        "cluster": {
                            "certificate-authority": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "extensions": [
                                {
                                    "extension": {
                                        "last-update": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                                        "provider": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                                        "version": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                                    },
                                    "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                                }
                            ],
                            "server": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    }
                ],
                "contexts": [
                    {
                        "context": {
                            "cluster": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
                    {
                        "context": {
                            "cluster": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
[output shortened]

With the changes in this PR, only sensitive data is redacted:
ok: [local] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "binary_path": null,
            "ca_cert": null,
            "context": null,
            "get_all_values": false,
            "host": null,
            "kubeconfig": {
                "apiVersion": "v1",
                "clusters": [
                    {
                        "cluster": {
                            "insecure-skip-tls-verify": true,
                            "server": "<server address>"
                        },
                        "name": "exercise"
                    },
                    {
                        "cluster": {
                            "certificate-authority-data": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "server": "<server address>"
                        },
                        "name": "kind-drain-test"
                    },
                    {
                        "cluster": {
                            "certificate-authority": "<path to .crt>",
                            "extensions": [
                                {
                                    "extension": {
                                        "last-update": "Tue, 07 Oct 2025 11:25:54 EDT",
                                        "provider": "minikube.sigs.k8s.io",
                                        "version": "v1.35.0"
                                    },
                                    "name": "cluster_info"
                                }
                            ],
                            "server": "<server address>"
                        },
                        "name": "minikube"
                    }
                ],
                "contexts": [
                    {
                        "context": {
                            "cluster": "exercise-pod",
                            "user": "bianca"
                        },
                        "name": "exercise"
                    },
                    {
                        "context": {
                            "cluster": "kind-drain-test",
                            "user": "kind-drain-test"
                        },
                        "name": "kind-drain-test"
                    },
[output shortened]

Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-10-14 19:16:18 +00:00
Mandar Kulkarni
b16bb15d34 prepare release 5.4.1 (#1009)
SUMMARY


ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
2025-10-07 13:07:26 +00:00
patchback[bot]
cb4b792038 Added support for copying files to init Containers. (#971) (#1002)
This is a backport of PR #971 as merged into main (027700c).
SUMMARY
Was going through the list of issues and found #958, which seemed a quick fix.
What I fixed with this PR:

Added support for copying files to init containers.
Fixed the format of the error message when an exec fails for a pod (the order was wrong).
Added a check that the container you are trying to copy to has started.

ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
copy.py module
ADDITIONAL INFORMATION
Some testing.
Verify that the pod does not exist:
kubectl -n default get pod/yorick
Output:
Error from server (NotFound): pods "yorick" not found

Run the playbook to create the file, deploy the pod, wait for the init container to be ready, copy the created file to the init container, cat the copied file (using kubernetes.core.k8s_exec) now inside the init container, and finally try to copy the created file to the not-yet-started regular container, which fails and shows the new error message:
cat << EOF | ansible-playbook /dev/stdin
- hosts: localhost
  gather_facts: False
  tasks:

  - ansible.builtin.copy:
      content: |
        Hi there
      dest: /tmp/yorick.txt

  - name: Deploy pod with initContainer with an unlimited while loop
    kubernetes.core.k8s:
      kubeconfig: "~/.kube/config"
      definition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: "yorick"
          namespace: "default"
        spec:
          initContainers:
            - name: "yorick-init"
              image: busybox:latest
              command: ["/bin/sh"]
              args:
                - "-c"
                - |
                  echo "Init container started, waiting for file..."
                  # Wait for the file to be copied
                  while :;do
                    echo "Waiting for file"
                    sleep 5
                  done
                  echo "File received! Init container completing..."
          containers:
            - name: "yorick-container"
              image: busybox:latest
              command: ["/bin/sh"]
              args:
                - "-c"
                - |
                  # Keep container running for testing
                  sleep 300

  - kubernetes.core.k8s_info:
      kubeconfig: "~/.kube/config"
      api_version: v1
      kind: Pod
      name: "yorick"
      namespace: "default"
    register: pod_status
    until: >-
      pod_status.resources|length > 0
      and 'initContainerStatuses' in pod_status.resources.0.status
      and pod_status.resources.0.status.initContainerStatuses|length > 0
      and pod_status.resources.0.status.initContainerStatuses.0.started|bool

  - name: Copy /tmp/yorick.txt to the yorick-init init container
    kubernetes.core.k8s_cp:
      kubeconfig: "~/.kube/config"
      namespace: default
      pod: yorick
      remote_path: /tmp/yorick.txt
      local_path: /tmp/yorick.txt
      container: yorick-init

  - name: Execute a command
    kubernetes.core.k8s_exec:
      kubeconfig: "~/.kube/config"
      namespace: default
      pod: yorick
      container: yorick-init
      command: cat /tmp/yorick.txt
    register: exec_out

  - ansible.builtin.debug:
      var: exec_out.stdout

  - name: Try to copy /tmp/yorick.txt to the yorick-container container
    kubernetes.core.k8s_cp:
      kubeconfig: "~/.kube/config"
      namespace: default
      pod: yorick
      remote_path: /tmp/yorick.txt
      local_path: /tmp/yorick.txt
      container: yorick-container
EOF
Output:
PLAY [localhost] ********************************************************************************************************************************************************************

TASK [ansible.builtin.copy] *********************************************************************************************************************************************************
Thursday 31 July 2025  02:01:21 +0200 (0:00:00.016)       0:00:00.016 *********
ok: [localhost]

TASK [Deploy pod with initContainer with an unlimited while loop] *******************************************************************************************************************
Thursday 31 July 2025  02:01:21 +0200 (0:00:00.788)       0:00:00.804 *********
changed: [localhost]

TASK [kubernetes.core.k8s_info] *****************************************************************************************************************************************************
Thursday 31 July 2025  02:01:25 +0200 (0:00:03.963)       0:00:04.768 *********
FAILED - RETRYING: [localhost]: kubernetes.core.k8s_info (3 retries left).
ok: [localhost]

TASK [Copy /tmp/yorick.txt to the yorick-init init container] ***********************************************************************************************************************
Thursday 31 July 2025  02:01:32 +0200 (0:00:06.598)       0:00:11.366 *********
changed: [localhost]

TASK [Execute a command] ************************************************************************************************************************************************************
Thursday 31 July 2025  02:01:39 +0200 (0:00:07.017)       0:00:18.383 *********
changed: [localhost]

TASK [ansible.builtin.debug] ********************************************************************************************************************************************************
Thursday 31 July 2025  02:01:40 +0200 (0:00:00.644)       0:00:19.028 *********
ok: [localhost] => {
    "exec_out.stdout": "Hi there\n"
}

TASK [Try to copy /tmp/yorick.txt to the yorick-container container] ****************************************************************************************************************
Thursday 31 July 2025  02:01:40 +0200 (0:00:00.021)       0:00:19.050 *********
fatal: [localhost]: FAILED! => {
    "changed": false
}

MSG:

Pod container yorick-container is not started

PLAY RECAP **************************************************************************************************************************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Playbook run took 0 days, 0 hours, 0 minutes, 21 seconds

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-09-25 15:10:52 +00:00
patchback[bot]
6215ac763b fix(k8s,service): Hide fields first before creating diffs (#915) (#962)
This is a backport of PR #915 as merged into main (6a0635a).
SUMMARY

By hiding fields before creating a diff, hidden fields will not show up in the resulting diffs and therefore will not trigger the changed condition.
The issue can only be reproduced when a mutating webhook changes the object while the kubernetes.core.k8s module is working with it.

kubevirt/kubevirt.core#145
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kubernetes.core.module_utils.k8s.service
ADDITIONAL INFORMATION


Run kubernetes.core.k8s and create an object with hidden fields. Afterwards, run kubernetes.core.k8s again and let a webhook mutate the object the module is working with. The module should return changed: no.
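A runnable toy illustrating only the order of operations; this hide_fields() drops top-level keys, whereas the real implementation in module_utils.k8s.service resolves nested field paths:

import copy

def hide_fields(obj, hidden_fields):
    # Toy version: drop top-level keys named in hidden_fields.
    cleaned = copy.deepcopy(obj)
    for field in hidden_fields:
        cleaned.pop(field, None)
    return cleaned

def build_diff(existing, desired, hidden_fields):
    # Hide fields on BOTH sides before diffing, so webhook-mutated hidden
    # fields can neither leak into the diff nor flip the changed condition.
    existing = hide_fields(existing, hidden_fields)
    desired = hide_fields(desired, hidden_fields)
    return {
        key: (existing.get(key), desired.get(key))
        for key in set(existing) | set(desired)
        if existing.get(key) != desired.get(key)
    }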

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-09-24 20:36:59 +00:00
Bianca Henderson
d5599f1964 Minor updates to changelog to match stable-6 and main (#991)
This will make all stable-5 changelog content match main and stable-6 so there will no longer be formatting issues when doing release work in the future (basically a manual backport of #989).

Reviewed-by: Alina Buzachis
2025-09-22 19:58:15 +00:00
patchback[bot]
90134ec3e0 [CI Fix] Remove ansible.module_utils.six imports (#998) (#999)
This is a backport of PR #998 as merged into main (448d3fe).
SUMMARY
This PR is essentially attempting Option B from issue #996 (Option A is implemented here); this code update accounts for the recent merge of "sanity: warn on ansible.module_utils.six imports" (#85651).

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Mandar Kulkarni <mandar242@gmail.com>
2025-09-22 16:54:06 +00:00
Bianca Henderson
dcbe52e722 Prep kubernetes.core 5.4.0 release (#970)
SUMMARY

Prep kubernetes.core 5.4.0 release

COMPONENT NAME
Multiple

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Alina Buzachis
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-08-12 16:56:58 +00:00
Bianca Henderson
4b2dc4f974 Reapply "Remove kubeconfig value from module invocation log (#826)" (#899) (#965) (#980)
This reverts commit eb0aeeb from stable-5 (i.e., reapplies the changes from #965); this is a temporary fix for #782 as it will re-introduce #870.

Reviewed-by: Alina Buzachis
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-08-11 16:48:07 +00:00
patchback[bot]
eb0aeeb318 Revert "Remove kubeconfig value from module invocation log (#826)" (#899) (#965)
This is a backport of PR #899 as merged into main (1705ced).
This reverts commit 6efabd3.
SUMMARY

Fixes #870
A better solution is necessary to address #782. The current code makes getting manifests practically unusable. We need to revert this commit until a better solution is found.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kubeconfig

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-22 21:28:48 +00:00
patchback[bot]
f3d3696093 Fix the integration test for helm_registry_auth with helm >= 3.18.0 and clarify idempotency. (#946) (#961)
This is a backport of PR #946 as merged into main (642eb93).
SUMMARY
Fix the integration test for helm_registry_auth with helm >= 3.18.0 and clarify idempotency.
Fixes #944
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
helm_registry_auth
ADDITIONAL INFORMATION
Caused by changes in Helm starting with 3.18.0.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-15 16:01:23 +00:00
patchback[bot]
6560fb1c53 Fix unit tests (#939) (#940)
This is a backport of PR #939 as merged into main (34fd40d).
Some unit tests are broken with ansible-core 2.19; this PR aims to fix them.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-11 02:23:06 +00:00
patchback[bot]
20084d119e Push changes from 3.3.1 into main branch (#893) (#894)
This is a backport of PR #893 as merged into main (0e7229c).
Release 3.3.1 is out; push changes to main branch

Reviewed-by: Bikouo Aubin
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-05-16 18:31:00 +00:00
Bikouo Aubin
c47343d7b6 Prepare release 5.3.0 (#929)
Release 5.3.0

Update galaxy.yml and README.md
Update release files using antsibull-changelog

Reviewed-by: Alina Buzachis
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Yuriy Novostavskiy
2025-05-16 08:16:23 +00:00
patchback[bot]
60b53b9dc9 Add helm insecure skip tls verify (#901) (#925)
This is a backport of PR #901 as merged into main (914a16e).
SUMMARY
Added the option insecure_skip_tls_verify to the following helm modules:

helm_repository
helm
Unified the option with an alias in helm_pull

For helm, added the option to the helm diff call, as it got fixed upstream.
Upstream Issue: databus23/helm-diff#503
Fixed with: helm/helm#12856
Fixes #694
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

kubernetes.core.helm
kubernetes.core.helm_repository
kubernetes.core.helm_pull

ADDITIONAL INFORMATION
Basically, the option was added to the module parameters and docs, and is injected into the helm and helm diff binary calls if set. Defaults to false.
Example
---
- name: Test helm modules
  tasks:
    - name: Test helm repository insecure
      kubernetes.core.helm_repository:
        name: insecure
        repo_url: "<helm-repo-with-self-signed-tls>"
        state: present
        insecure_skip_tls_verify: true
    - name: Test helm pull insecure
      kubernetes.core.helm_pull:
        chart_ref: "oci://<helm-repo-with-self-signed-tls>/ptroject"
        destination: /tmp
        insecure_skip_tls_verify: true
    - name: Test helm insecure
      kubernetes.core.helm:
        name: insecure
        chart_ref: "oci://<helm-repo-with-self-signed-tls>/project"
        namespace: helm-insecure-test
        state: present
        insecure_skip_tls_verify: true
Note
Might need an alias for helm_template, as its option is called insecure_registry; in the Helm manual and docs it would be --insecure-skip-tls-verify as well, though.
Not included, as it was recently merged with #805.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-05-06 23:32:56 +00:00
patchback[bot]
d531368d64 Update ansible-lint version to 25.1.2 (#919) (#921)
This is a backport of PR #919 as merged into main (b594d35).
Update ansible-lint version to 25.1.2

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-29 18:03:42 +00:00
patchback[bot]
c3bf5cc47f add reset_then_reuse_values support to helm module (#802) (#917)
This is a backport of PR #802 as merged into main (00699ac).
SUMMARY
Starting with version 3.14.0, Helm supports --reset-then-reuse-values, as discussed on the original PR. This greatly improves on --reuse-values, as it avoids template errors when new features are added to an upgraded chart.
Closes #803
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
helm
ADDITIONAL INFORMATION
This PR is greatly 'inspired' by #575, and because I wasn't sure how to provide additional tests for it, I copied those built previously for --reuse-values (as it is an improvement on that feature).
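A hedged usage sketch; the release and chart names are illustrative:

- name: Upgrade a release, resetting then reusing values
  kubernetes.core.helm:
    name: my-release                   # illustrative release name
    chart_ref: my-repo/my-chart        # illustrative chart
    release_namespace: default
    reset_then_reuse_values: true      # maps to helm upgrade --reset-then-reuse-values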

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-29 17:54:15 +00:00
patchback[bot]
349534b85b Rebase PR #898 (#905) (#913)
This is a backport of PR #905 as merged into main (d329e7e).
This PR is a rebase of #898 for CI to pass
Thanks @efussi for your collaboration.
Closes #892

Reviewed-by: Bikouo Aubin
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-25 15:59:21 +00:00
patchback[bot]
d2dcb9e55f Run integration tests using ansible-core 2.19 (#888) (#895)
* fix integration test ``k8s_full`` running with ansible-core 2.19

* Fix templating issues

* fix test on current ansible version

* fix tests cases

* Fix additional tests

* fix the templating mechanism

* consider using variable_[start/end]_string while parsing template

* Remove support for omit into template option

* Remove unnecessary unit tests

(cherry picked from commit 2cb5d6c316)

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2025-04-25 16:49:10 +02:00
Mike Graves
0eff03dd19 Prep 5.2.0 release (#891)
SUMMARY

Prep 5.2.0 release

ISSUE TYPE

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bikouo Aubin
Reviewed-by: Alina Buzachis
2025-03-31 17:20:25 +00:00
patchback[bot]
81fb8662da waiter.py Add ClusterOperator Test (#879) (#882)
This is a backport of PR #879 as merged into main (7cdf0d0).
SUMMARY
Fixes #869
During an OpenShift installation, one of the checks to confirm that the cluster is ready to proceed with configuration is to ensure that the Cluster Operators are in an Available: True, Degraded: False, Progressing: False state. While you can currently use the k8s_info module to get a JSON response, the resulting JSON needs to be iterated over several times to get the appropriate status.
This PR adds functionality to waiter.py which loops over all resource instances of the cluster operators. If any of them is not ready, the waiter returns False and the task fails. If the task returns, you can assume that all the cluster operators are healthy.


ISSUE TYPE


Feature Pull Request

COMPONENT NAME

waiter.py
ADDITIONAL INFORMATION



A simple playbook will trigger waiter.py to watch the ClusterOperator objects:

---
- name: get operators
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get cluster operators
      kubernetes.core.k8s_info:
        api_version: v1
        kind: ClusterOperator
        kubeconfig: "/home/ocp/one/auth/kubeconfig"
        wait: true
        wait_timeout: 30
      register: cluster_operators


This will produce the simple response if everything is functioning properly:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

If the timeout is reached:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds"}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

UNSOLVED: How to know which Operators are failing

Reviewed-by: Bikouo Aubin
2025-03-26 15:50:36 +00:00
patchback[bot]
f74ee14d71 Extend hidden_fields to allow more complicated field definitions (#872) (#887)
This is a backport of PR #872 as merged into main (9ec6912).
SUMMARY
This allows us to ignore e.g. the last-applied-configuration annotation by specifying
metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
hidden_fields
This replaces #643 as I no longer have permissions to push to branches in this repo
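A hedged task sketch using the extended syntax; the manifest variable is illustrative:

- name: Apply a manifest while ignoring the last-applied-configuration annotation
  kubernetes.core.k8s:
    state: present
    definition: "{{ my_manifest }}"    # illustrative manifest variable
    hidden_fields:
      - metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]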

Reviewed-by: Bikouo Aubin
2025-03-25 13:36:12 +00:00
Yuriy Novostavskiy
c93a7e2459 prepare release 5.1.0 (#865)
SUMMARY
This release comes with a new module, helm_registry_auth; improvements to the error messages in the k8s_drain module; a new parameter, insecure_registry, for the helm_template module; and several bug fixes.
ISSUE TYPE

New release pull request

Changelog
Minor Changes

Bump version of ansible-lint to minimum 24.7.0 (#765).
Parameter insecure_registry added to helm_template as equivalent of insecure-skip-tls-verify (#805).
connection/kubectl.py - Added an example of using the kubectl connection plugin to the documentation (#741).
k8s_drain - Improve error message for pod disruption budget when draining a node (#797).

Bugfixes

helm - Helm version checks did not support RC versions. They now accept any version tags. (#745).
helm_pull - Apply no_log=True to pass_credentials to silence a false positive warning (#796).
k8s_drain - Fix k8s_drain does not wait for single pod (#769).
k8s_drain - Fix k8s_drain runs into a timeout when evicting a pod which is part of a stateful set  (#792).
kubeconfig option should not appear in module invocation log (#782).
kustomize - kustomize plugin fails with deprecation warnings (#639).
waiter - Fix waiting for daemonset when desired number of pods is 0. (#756).

New Modules

helm_registry_auth - Helm registry authentication module

ADDITIONAL INFORMATION
Collection kubernetes.core version 5.1.0 is compatible with ansible-core>=2.15.0

Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-01-20 15:23:54 +00:00
patchback[bot]
05aea7727d helm_pull: Silence false no_log warning (#796) (#858)
This is a backport of PR #796 as merged into main (ecc64ca).
SUMMARY
Apply no_log=True to pass_credentials to silence a false positive warning.
Fixes an issue similar to #423
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
changelog/fragements/796-false-positive-helmull.yaml
plugins/modules/helm_pull.py
2025-01-17 16:21:23 +00:00
patchback[bot]
fcd47ca995 Remove kubeconfig value from module invocation log (#826) (#840)
(cherry picked from commit 6efabd3418)

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-12-17 18:32:59 +01:00
patchback[bot]
c37dc5b566 Parameter insecure_registry added to helm_template (#805) (#835)
* Parameter insecure_registry added to helm_template as equivalent of insecure-skip-tls-verify

(cherry picked from commit 6609abdd5a)

Co-authored-by: Yuriy Novostavskiy <yuriy@novostavskiy.kiev.ua>
2024-12-17 17:47:35 +01:00
patchback[bot]
5d038db848 Update README.md with removing outdated communication channels (#790) (#825)
This is a backport of PR #790 as merged into main (b8e9873).
SUMMARY
As part of consolidating Ansible discussion platforms and communication channels, it was decided to use the Ansible forum as the main place for questions and discussion.
Reference: https://forum.ansible.com/t/proposal-consolidating-ansible-discussion-platforms/6812
As part of this change, the IRC channel references were removed by PRs #778 and #774.
However, the README.md file wasn't fully cleaned of the outdated information.
The #ansible-kubernetes channel on libera.chat IRC isn't used by maintainers and contributors anymore.
The wiki page at https://github.com/ansible/community/ was deprecated a long time ago.
ISSUE TYPE

Docs Pull Request

COMPONENT NAME
README.md
2024-12-11 20:35:22 +00:00
Yuriy Novostavskiy
dac1448b9c README: Add Communication section with Forum information (#774) (#823)
SUMMARY


ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-12-11 18:54:13 +00:00
patchback[bot]
4bdff5d672 Make k8s_drain work when only one pod is present (#770) (#820)
This is a backport of PR #770 as merged into main (4c305e7).
SUMMARY
Fixes #769 .
k8s_drain was not checking whether a pod had been deleted when there was only one pod on the node to be drained.
The list of pods, pods, was being "popped" before the first iteration of the while loop:
        pod = pods.pop()
        while (_elapsed_time() < wait_timeout or wait_timeout == 0) and pods:
When pods contains only one element, the while loop is skipped.
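One way to restructure the loop, as a hedged sketch (the merged fix may differ); elapsed and is_deleted stand in for _elapsed_time and the read-pod check:

import time

def wait_for_deletion(pods, wait_timeout, wait_sleep, elapsed, is_deleted):
    # Keep polling the current pod instead of popping before the loop,
    # which emptied a one-element list and skipped the loop entirely.
    pod = pods.pop()
    while (elapsed() < wait_timeout or wait_timeout == 0) and pod:
        if is_deleted(pod):
            pod = pods.pop() if pods else None
        else:
            time.sleep(wait_sleep)
    return pod is None  # True only if every pod was confirmed deleted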


ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

k8s_drain
2024-12-11 16:23:41 +00:00
patchback[bot]
19a71c82ba Update Readme to match the template (#767) (#819)
This is a backport of PR #767 as merged into main (fdb8af7).
SUMMARY


Refer: https://issues.redhat.com/browse/ACA-1749
This PR updates the README doc to match the template
ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION
2024-12-11 16:17:45 +00:00
patchback[bot]
c73f3e3f75 Bump the ansible-lint version to 24.7.0 (#765) (#818)
This is a backport of PR #765 as merged into main (a89f19b).
SUMMARY

Bump the ansible-lint version to 24.7.0

ISSUE TYPE

COMPONENT NAME

ADDITIONAL INFORMATION
2024-12-11 15:58:13 +00:00
patchback[bot]
e98605eb16 Improve error message for pod disruption budget when draining a node (#798) (#816)
This is a backport of PR #798 as merged into main (52f2cb5).
SUMMARY
Closes #797 .
The error message "Too Many Requests" is confusing and is changed to a more meaningful message:
TASK [Drain node] *************************************************************************
Monday 25 November 2024  09:20:28 +0100 (0:00:00.014)       0:00:00.014 ******* 
fatal: [host -> localhost]: FAILED! => {"changed": false, "msg": "Failed to delete pod kube-public/draintest-6b84677b99-9jf7m due to: Cannot evict pod as it would violate the pod's disruption budget."}


The new task output makes it possible to handle a pod disruption budget with retries/until logic in a more controlled way:
---
- hosts: "{{ target }}"
  serial: 1
  gather_facts: false
  tasks:
    - name: Drain node
      kubernetes.core.k8s_drain:
        kubeconfig: "{{ kubeconfig_path }}"
        name: "{{ inventory_hostname }}"
        delete_options:
          ignore_daemonsets: true
          delete_emptydir_data: true
          wait_timeout: 100
          disable_eviction: false
          wait_sleep: 1
      delegate_to: localhost
      retries: 10
      delay: 5
      until: drain_result is success or 'disruption budget' not in drain_result.msg
      register: drain_result

ISSUE TYPE


Feature Pull Request

COMPONENT NAME
k8s_drain
2024-12-11 15:14:49 +00:00
patchback[bot]
2098dfea5e Fix k8s_drain runs into timeout with pods from stateful sets. (#793) (#808)
This is a backport of PR #793 as merged into main (fca0dc0).
SUMMARY
Fixes #792 .
The function wait_for_pod_deletion in k8s_drain never checks on which node a pod is actually running:
            try:
                response = self._api_instance.read_namespaced_pod(
                    namespace=pod[0], name=pod[1]
                )
                if not response:
                    pod = None
                time.sleep(wait_sleep)
This means that if a pod is successfully evicted and restarted with the same name on a new node, k8s_drain does not notice and thinks that the original pod is still running. This is the case for pods which are part of a stateful set.
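One hedged way to detect this (the merged fix may differ) is to compare the pod's UID, since a recreated stateful-set pod keeps its name but gets a new UID:

from kubernetes.client.rest import ApiException

def original_pod_gone(api, namespace, name, original_uid):
    # api is a kubernetes.client.CoreV1Api instance.
    try:
        response = api.read_namespaced_pod(namespace=namespace, name=name)
    except ApiException as exc:
        if exc.status == 404:
            return True   # pod deleted outright
        raise
    # Same name but different UID means the original pod is gone
    # and a replacement is already running (possibly on another node).
    return response.metadata.uid != original_uid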

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME
k8s_drain
2024-12-11 14:12:35 +00:00
patchback[bot]
67868442f3 [ci] fix github actions post 2.18 (#789) (#812)
This is a backport of PR #789 as merged into main (cd68631).
This PR includes a trivial fix for the GitHub Actions issue #788, is related to switching the milestone and devel branches of ansible/ansible to version 2.19, and prepares the repo to include tests with Python 3.13 once ansible-network/github_actions/pull/162 is merged.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
GitHub actions/test

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-12-10 17:13:53 +00:00
patchback[bot]
5eefa9c308 fix: kustomize plugin fails with deprecation warnings (#728) (#764)
This is a backport of PR #728 as merged into main (5bc53db).
SUMMARY

Error judgments are now based on the exit codes of command execution, where 0 represents success and non-zero represents failure.
The run_command function is optimized to return a tuple, like the run_command method of AnsibleModule.

Fixes #639
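A hedged sketch of the reworked helper, mirroring the (rc, stdout, stderr) shape of AnsibleModule.run_command:

import subprocess

def run_command(command):
    # Return (rc, stdout, stderr) so callers can branch on the exit code
    # (0 = success, non-zero = failure) instead of scraping stderr.
    proc = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE
    )
    out, err = proc.communicate()
    return proc.returncode, out, err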
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kustomize lookup plugin
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-07-15 13:56:34 +00:00
patchback[bot]
4ed9105797 Fix waiting for daemonset when desired number of pods is 0 (#756) (#762)
This is a backport of PR #756 as merged into main (b07fbd6).
Fixes #755
SUMMARY
Because we don't have any node with non_exisiting_label (see the code below), the desired number of Pods will be 0. Kubernetes won't create the .status.updatedNumberScheduled field (at least on version v1.27) because we are still not going to create any Pods. So if .status.updatedNumberScheduled doesn't exist, we should assume that the number is 0 (see the sketch after the reproduction code).
Code to reproduce:
- name: Create daemonset
  kubernetes.core.k8s:
    state: present
    wait: true
    definition:
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: my-daemonset
        namespace: default
      spec:
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            containers:
              - name: my-container
                image: nginx
            nodeSelector:
              non_exisiting_label: 1
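On the waiter side, a hedged sketch of the readiness check with the missing field defaulting to 0; the helper name and the exact predicate are illustrative:

def daemonset_ready(daemonset):
    spec_generation = daemonset.get("metadata", {}).get("generation", 0)
    status = daemonset.get("status") or {}
    desired = status.get("desiredNumberScheduled", 0)
    # Absent when no Pods are scheduled (e.g. no node matches the selector):
    updated = status.get("updatedNumberScheduled", 0)
    return (
        status.get("observedGeneration", 0) >= spec_generation
        and updated == desired
        and status.get("numberReady", 0) == desired
    )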
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
kubernetes.core.plugins.module_utils.k8s.waiter
ADDITIONAL INFORMATION



TASK [Create daemonset] **********************************************************************************************************************************
changed: [controlplane] => {"changed": true, "duration": 5, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "DaemonSet", "metadata": {"annotations": {"deprecated.daemonset.template.generation": "1"}, "creationTimestamp": "2024-06-28T08:23:41Z", "generation": 1, "managedFields": [{"apiVersion": "apps/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:metadata": {"f:annotations": {".": {}, "f:deprecated.daemonset.template.generation": {}}}, "f:spec": {"f:revisionHistoryLimit": {}, "f:selector": {}, "f:template": {"f:metadata": {"f:labels": {".": {}, "f:app": {}}}, "f:spec": {"f:containers": {"k:{\"name\":\"my-container\"}": {".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": {}, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}}}, "f:dnsPolicy": {}, "f:nodeSelector": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:terminationGracePeriodSeconds": {}}}, "f:updateStrategy": {"f:rollingUpdate": {".": {}, "f:maxSurge": {}, "f:maxUnavailable": {}}, "f:type": {}}}}, "manager": "OpenAPI-Generator", "operation": "Update", "time": "2024-06-28T08:23:41Z"}, {"apiVersion": "apps/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:status": {"f:observedGeneration": {}}}, "manager": "kube-controller-manager", "operation": "Update", "subresource": "status", "time": "2024-06-28T08:23:41Z"}], "name": "my-daemonset", "namespace": "default", "resourceVersion": "1088421", "uid": "faafdbf7-4388-4cec-88d5-84657966312d"}, "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "my-app"}}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "my-app"}}, "spec": {"containers": [{"image": "nginx", "imagePullPolicy": "Always", "name": "my-container", "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "nodeSelector": {"non_exisiting_label": "1"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}, "updateStrategy": {"rollingUpdate": {"maxSurge": 0, "maxUnavailable": 1}, "type": "RollingUpdate"}}, "status": {"currentNumberScheduled": 0, "desiredNumberScheduled": 0, "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1}}}

~$ kubectl get ds
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
my-daemonset   0         0         0       0            0           non_exisiting_label=1   30s

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-07-10 14:32:11 +00:00
patchback[bot]
5761205513 helm: Accept release candidate versions for compatibility checks (#745) (#754)
This is a backport of PR #745 as merged into main (6a04f42).
SUMMARY

If the helm CLI version includes -rc.1, for example, the version check fails due to an incomplete regex.
The error can be triggered if you use helm v3.15.0-rc.1, for example, and apply a helm chart with wait: true.
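An illustrative pattern only (the collection's actual regex may differ), extended to accept semver pre-release tags such as -rc.1:

import re

HELM_VERSION_RE = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-[0-9A-Za-z.-]+)?$")

for tag in ("v3.15.0", "v3.15.0-rc.1"):
    assert HELM_VERSION_RE.match(tag), tag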
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME
helm
helm_pull
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-06-18 13:05:55 +00:00
Bikouo Aubin
7b0190f8d5 Prepare release 5.0.0 (#733) 2024-06-10 15:39:42 +02:00
patchback[bot]
c47e691101 Doc: add example of using kubectl connection plugin (#741) (#744)
[PR #741/fb80d973 backport][stable-5] Doc: add example of using kubectl connection plugin

This is a backport of PR #741 as merged into main (fb80d97).
SUMMARY
Currently the documentation for the collection doesn't include any examples of using the kubernetes.core.kubectl connection plugin, and it's hard to start using that plugin.
ISSUE TYPE

Docs Pull Request

COMPONENT NAME
kubernetes.core.kubectl connection plugin
ADDITIONAL INFORMATION
This PR was inspired by #288 and is based on feedback on that PR and my own experience. Thanks to @tpo for his attempt and to @geerlingguy for his Ansible for DevOps book.
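A minimal sketch of the kind of example added (the merged docs may differ); the pod and namespace values are illustrative:

- hosts: localhost
  gather_facts: false
  vars:
    ansible_connection: kubernetes.core.kubectl
    ansible_kubectl_pod: my-pod            # illustrative pod name
    ansible_kubectl_namespace: default
  tasks:
    - name: Run a command inside the pod
      ansible.builtin.raw: cat /etc/os-release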

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-06-06 14:34:01 +00:00
patchback[bot]
8ae6469696 Defer removal of inventory/k8s to 6.0.0 (#734) (#740)
Defer removal of inventory/k8s to 6.0.0

SUMMARY
Defer removal of inventory plugin k8s to release 6.0.0.

ISSUE TYPE

Feature Pull Request

Reviewed-by: Alina Buzachis
Reviewed-by: Mike Graves <mgraves@redhat.com>
(cherry picked from commit 0c5233a650)

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-05-31 10:09:39 +02:00
patchback[bot]
1174fee5c9 Remove support for ansible-core<2.15 (#737) (#739)
Drop support for ansible-core<2.15

SUMMARY

Remove support for ansible-core<2.15

ISSUE TYPE

Feature Pull Request

Reviewed-by: Mike Graves <mgraves@redhat.com>
(cherry picked from commit 8363a4debf)

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-05-31 10:06:48 +02:00
Bikouo Aubin
f22ffcab18 Prepare release 4.0.0 (#727)
* Prepare release 4.0.0

* update documentation
2024-05-28 11:37:32 +02:00
Bikouo Aubin
072a08091b Remove deprecated function from module_utils/common.py (#726)
Remove deprecated function from module_utils/common.py

SUMMARY

Remove deprecated functions and class from module_utils/common.py in order to prepare release 4.0.0

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

module_utils/common.py

Reviewed-by: Alina Buzachis
2024-05-24 05:29:46 +00:00
Alina Buzachis
cbadbe32f9 Defer removal of k8s inventory plugin to version 5.0. (#723)
Defer removal of k8s inventory plugin to version 5.0.

SUMMARY

Defer removal of k8s inventory plugin to version 5.0.

ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

inventory/k8s.py
ADDITIONAL INFORMATION

Reviewed-by: Bikouo Aubin
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-05-22 10:13:50 +00:00
Alina Buzachis
966fa7e906 k8s - remove support for merge_type=json (#722)
k8s - remove support for merge_type=json

SUMMARY

Support for merge_type=json has been removed in version 4.0.0. Please use kubernetes.core.k8s_json_patch instead.
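A hedged migration sketch using the dedicated module; the target object is illustrative:

- name: Add a label with an RFC 6902 JSON patch
  kubernetes.core.k8s_json_patch:
    kind: Pod
    namespace: default        # illustrative target
    name: mypod
    patch:
      - op: add
        path: /metadata/labels/app
        value: demo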

ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

k8s.py
ADDITIONAL INFORMATION

Reviewed-by: Bikouo Aubin
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-05-22 10:13:47 +00:00
Mike Graves
485eae3b10 Release 3.1.0 (#719) (#720)
Sync stable-3 to main branch (#719)

Release 3.1.0
SUMMARY
Release prep for 3.1.0
ISSUE TYPE
Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Alina Buzachis
Reviewed-by: Helen Bailey <hebailey@redhat.com>
(cherry picked from commit ef829b8)

Reviewed-by: Alina Buzachis
2024-05-16 18:43:34 +00:00
Dennis Ochocki
ac943e9890 fixed typo in filename of 'k8s_json_patch'-action (#652)
fixed typo in filename of 'k8s_json_patch'-action 

SUMMARY

The filename/symlink of the action for the 'k8s_json_patch' module was wrong. Renamed the file from 'ks8_json_patch.py' to 'k8s_json_patch.py'.

ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

k8s_json_patch
ADDITIONAL INFORMATION


Because of the wrong filename, things like unvaulting kubeconfig files did not work.

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-05-14 15:48:01 +00:00