Commit Graph

146 Commits

Author SHA1 Message Date
Yuriy Novostavskiy
3e32c12c40 Replace passing warnings to exit_json with AnsibleModule.warn for a few modules (#1033)
SUMMARY
Using exit_json or fail_json for warnings is deprecated in ansible-core>=2.19.0 and will be removed in ansible-core>=2.23.0
Tested with ansible-core 2.19.3 as the latest released version at the time of the start of this PR and with 2.16.0 as the lowest version supported by kubernetes.core 6.x
Resolves: #1031
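A minimal sketch of the replacement pattern inside a module's main(), assuming a module that previously passed a warnings list to exit_json:

from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict())
    # Before (deprecated in ansible-core >= 2.19.0):
    #   module.exit_json(changed=True, warnings=["node already cordoned"])
    # After: report each warning through the dedicated API, then exit normally.
    module.warn("node already cordoned")  # hypothetical warning text
    module.exit_json(changed=True)

if __name__ == "__main__":
    main()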
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
k8s_drain
k8s_rollback
k8s_scale
ADDITIONAL INFORMATION
The initial version of this PR covered only the k8s_drain module; the following commits extended it to k8s_rollback and k8s_scale.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
2026-01-26 19:52:15 +00:00
Yuriy Novostavskiy
13791ec7bf Limit compatibility to Helm >=v3.0.0,<4.0.0 (#1039)
SUMMARY
Helm v4 is a major version with backward-incompatible changes, including to the flags and output of the Helm CLI and to the SDK. This version is currently not supported in kubernetes.core. This PR is related to #1038 and is a short-term solution that marks the compatibility constraint explicitly.
ISSUE TYPE

Bugfix Pull Request
Docs Pull Request

COMPONENT NAME

helm
helm_template
helm_info
helm_repository
helm_pull
helm_registry_auth
helm_plugin
helm_plugin_info

ADDITIONAL INFORMATION
Added a `validate_helm_version()` method to AnsibleHelmModule that enforces the version constraint >=3.0.0,<4.0.0.
Fails fast with a clear error message: "Helm version must be >=3.0.0,<4.0.0, current version is {version}"
Some modules (e.g. helm_registry_auth) are technically compatible with Helm v4, but validation was added to all helm modules.
Partially coauthored by GitHub Copilot with the Claude Sonnet 4 model.
Addresses issue #1038
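A minimal sketch of such a check (not the collection's exact implementation), assuming the Helm CLI version string is already available:

from ansible.module_utils.compat.version import LooseVersion

def validate_helm_version(version):
    # Fail fast outside the supported >=3.0.0,<4.0.0 range.
    if not (LooseVersion("3.0.0") <= LooseVersion(version) < LooseVersion("4.0.0")):
        raise ValueError(
            "Helm version must be >=3.0.0,<4.0.0, current version is %s" % version
        )

validate_helm_version("3.14.2")  # passes
# validate_helm_version("4.0.0")  # would raise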

Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
Reviewed-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2026-01-26 18:39:07 +00:00
Yuriy Novostavskiy
452fb3d7cb Replace deprecated ansible.module_utils._text imports (#1053)
SUMMARY
Importing from ansible.module_utils._text is deprecated in ansible-core 2.20 and removed in 2.24. All imports of to_bytes, to_native, and to_text now use ansible.module_utils.common.text.converters.
Before:
from ansible.module_utils._text import to_bytes, to_native, to_text

After:
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text

ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
plugins/module_utils/common.py
plugins/action/k8s_info.py
plugins/connection/kubectl.py
plugins/module_utils/{copy.py, k8s/runner.py}
plugins/modules/{k8s_cp.py, k8s_drain.py, k8s_exec.py, k8s_json_patch.py, k8s_scale.py, k8s_taint.py}
ADDITIONAL INFORMATION
It's not an actual bugfix, but rather lifecycle management to ensure compatibility with future Ansible versions.
Tested with ansible-core 2.20 to ensure no deprecation warnings are raised and with ansible-core 2.16 to ensure backward compatibility.
Partially coauthored by GitHub Copilot with the Claude Code 4.5 model.
Addresses issue #1052.

Reviewed-by: Bikouo Aubin
Reviewed-by: Alina Buzachis
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2026-01-24 01:28:15 +00:00
Bianca Henderson
4fa36487ab Selectively redact sensitive kubeconfig data from logs (#1014)
SUMMARY

Resolves #782

ISSUE TYPE


Bugfix Pull Request

ADDITIONAL INFORMATION


The proper redaction of kubeconfig data can be seen by running this example playbook with verbosity of -vvv against the code in this PR.
Prior to these changes, all info was redacted (as shown in the example below):
ok: [local] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "binary_path": null,
            "ca_cert": null,
            "context": null,
            "get_all_values": false,
            "host": null,
            "kubeconfig": {
                "apiVersion": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "clusters": [
                    {
                        "cluster": {
                            "insecure-skip-tls-verify": true,
                            "server": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
                    {
                        "cluster": {
                            "certificate-authority-data": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "server": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
                    {
                        "cluster": {
                            "certificate-authority": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "extensions": [
                                {
                                    "extension": {
                                        "last-update": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                                        "provider": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                                        "version": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                                    },
                                    "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                                }
                            ],
                            "server": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    }
                ],
                "contexts": [
                    {
                        "context": {
                            "cluster": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
                    {
                        "context": {
                            "cluster": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                        },
                        "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
                    },
[output shortened]

With the changes in this PR, only sensitive data is redacted:
ok: [local] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "binary_path": null,
            "ca_cert": null,
            "context": null,
            "get_all_values": false,
            "host": null,
            "kubeconfig": {
                "apiVersion": "v1",
                "clusters": [
                    {
                        "cluster": {
                            "insecure-skip-tls-verify": true,
                            "server": "<server address>"
                        },
                        "name": "exercise"
                    },
                    {
                        "cluster": {
                            "certificate-authority-data": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                            "server": "<server address>"
                        },
                        "name": "kind-drain-test"
                    },
                    {
                        "cluster": {
                            "certificate-authority": "<path to .crt>",
                            "extensions": [
                                {
                                    "extension": {
                                        "last-update": "Tue, 07 Oct 2025 11:25:54 EDT",
                                        "provider": "minikube.sigs.k8s.io",
                                        "version": "v1.35.0"
                                    },
                                    "name": "cluster_info"
                                }
                            ],
                            "server": "<server address>"
                        },
                        "name": "minikube"
                    }
                ],
                "contexts": [
                    {
                        "context": {
                            "cluster": "exercise-pod",
                            "user": "bianca"
                        },
                        "name": "exercise"
                    },
                    {
                        "context": {
                            "cluster": "kind-drain-test",
                            "user": "kind-drain-test"
                        },
                        "name": "kind-drain-test"
                    },
[output shortened]
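One plausible shape for the selective redaction (a sketch with a hypothetical helper, not necessarily this PR's exact code) is a recursive walk that masks only known-sensitive kubeconfig keys:

SENSITIVE_KEYS = {
    "certificate-authority-data",
    "client-certificate-data",
    "client-key-data",
    "token",
    "password",
}

def redact_kubeconfig(node):
    # Mask values of sensitive keys; keep all other structure intact.
    if isinstance(node, dict):
        return {
            k: "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
            if k in SENSITIVE_KEYS
            else redact_kubeconfig(v)
            for k, v in node.items()
        }
    if isinstance(node, list):
        return [redact_kubeconfig(item) for item in node]
    return node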

Reviewed-by: Bikouo Aubin
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
Reviewed-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
Reviewed-by: Alina Buzachis
2025-10-13 15:01:22 +00:00
Yorick Gruijthuijzen
027700c3f4 Added support for copying files to init Containers. (#971)
SUMMARY
Was going through the list of issues and found #958, which seemed like a quick fix.
What I fixed with this PR:

Added support for copying files to init containers.
Fixed the error message formatting when an exec fails for a pod (the argument order was wrong).
Added a check that the container you are trying to copy to has started (see the sketch below).
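A sketch of the started check, assuming the pod object was read with the Kubernetes Python client:

def container_started(pod, container_name):
    # Check both regular and init container statuses; `started` may be
    # None before the kubelet reports it, so coerce to bool.
    statuses = (pod.status.container_statuses or []) + (
        pod.status.init_container_statuses or []
    )
    for status in statuses:
        if status.name == container_name:
            return bool(status.started)
    return False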

ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
copy.py module
ADDITIONAL INFORMATION
Some testing.
Verify that the pod does not exist:
kubectl -n default get pod/yorick
Output:
Error from server (NotFound): pods "yorick" not found

Run the playbook to create the file, create the pod, wait for the init container to be ready, copy the created file to the init container, cat the copied file (using kubernetes.core.k8s_exec) now inside the init container, and finally try to copy the created file to the (not started) container, which fails, demonstrating the new error message:
cat << EOF | ansible-playbook /dev/stdin
- hosts: localhost
  gather_facts: False
  tasks:

  - ansible.builtin.copy:
      content: |
        Hi there
      dest: /tmp/yorick.txt

  - name: Deploy pod with initContainer with an unlimited while loop
    kubernetes.core.k8s:
      kubeconfig: "~/.kube/config"
      definition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: "yorick"
          namespace: "default"
        spec:
          initContainers:
            - name: "yorick-init"
              image: busybox:latest
              command: ["/bin/sh"]
              args:
                - "-c"
                - |
                  echo "Init container started, waiting for file..."
                  # Wait for the file to be copied
                  while :;do
                    echo "Waiting for file"
                    sleep 5
                  done
                  echo "File received! Init container completing..."
          containers:
            - name: "yorick-container"
              image: busybox:latest
              command: ["/bin/sh"]
              args:
                - "-c"
                - |
                  # Keep container running for testing
                  sleep 300

  - kubernetes.core.k8s_info:
      kubeconfig: "~/.kube/config"
      api_version: v1
      kind: Pod
      name: "yorick"
      namespace: "default"
    register: pod_status
    until: >-
      pod_status.resources|length > 0
      and 'initContainerStatuses' in pod_status.resources.0.status
      and pod_status.resources.0.status.initContainerStatuses|length > 0
      and pod_status.resources.0.status.initContainerStatuses.0.started|bool

  - name: Copy /tmp/yorick.txt to the yorick-init init container
    kubernetes.core.k8s_cp:
      kubeconfig: "~/.kube/config"
      namespace: default
      pod: yorick
      remote_path: /tmp/yorick.txt
      local_path: /tmp/yorick.txt
      container: yorick-init

  - name: Execute a command
    kubernetes.core.k8s_exec:
      kubeconfig: "~/.kube/config"
      namespace: default
      pod: yorick
      container: yorick-init
      command: cat /tmp/yorick.txt
    register: exec_out

  - ansible.builtin.debug:
      var: exec_out.stdout

  - name: Try to copy /tmp/yorick.txt to the yorick-container container
    kubernetes.core.k8s_cp:
      kubeconfig: "~/.kube/config"
      namespace: default
      pod: yorick
      remote_path: /tmp/yorick.txt
      local_path: /tmp/yorick.txt
      container: yorick-container
EOF
Output:
PLAY [localhost] ********************************************************************************************************************************************************************

TASK [ansible.builtin.copy] *********************************************************************************************************************************************************
Thursday 31 July 2025  02:01:21 +0200 (0:00:00.016)       0:00:00.016 *********
ok: [localhost]

TASK [Deploy pod with initContainer with an unlimited while loop] *******************************************************************************************************************
Thursday 31 July 2025  02:01:21 +0200 (0:00:00.788)       0:00:00.804 *********
changed: [localhost]

TASK [kubernetes.core.k8s_info] *****************************************************************************************************************************************************
Thursday 31 July 2025  02:01:25 +0200 (0:00:03.963)       0:00:04.768 *********
FAILED - RETRYING: [localhost]: kubernetes.core.k8s_info (3 retries left).
ok: [localhost]

TASK [Copy /tmp/yorick.txt to the yorick-init init container] ***********************************************************************************************************************
Thursday 31 July 2025  02:01:32 +0200 (0:00:06.598)       0:00:11.366 *********
changed: [localhost]

TASK [Execute a command] ************************************************************************************************************************************************************
Thursday 31 July 2025  02:01:39 +0200 (0:00:07.017)       0:00:18.383 *********
changed: [localhost]

TASK [ansible.builtin.debug] ********************************************************************************************************************************************************
Thursday 31 July 2025  02:01:40 +0200 (0:00:00.644)       0:00:19.028 *********
ok: [localhost] => {
    "exec_out.stdout": "Hi there\n"
}

TASK [Try to copy /tmp/yorick.txt to the yorick-container container] ****************************************************************************************************************
Thursday 31 July 2025  02:01:40 +0200 (0:00:00.021)       0:00:19.050 *********
fatal: [localhost]: FAILED! => {
    "changed": false
}

MSG:

Pod container yorick-container is not started

PLAY RECAP **************************************************************************************************************************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Playbook run took 0 days, 0 hours, 0 minutes, 21 seconds

Reviewed-by: spatterlight
Reviewed-by: Yorick Gruijthuijzen <yorick-1989@hotmail.com>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Bikouo Aubin
2025-09-24 20:36:56 +00:00
Bianca Henderson
448d3fe156 [CI Fix] Remove ansible.module_utils.six imports (#998)
SUMMARY
This PR is essentially attempting Option B from issue #996 (Option A is implemented here); this code update accounts for the recent merge of "sanity: warn on ansible.module_utils.six imports" (#85651).
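A typical replacement looks like this; the six shims are unnecessary on the Python 3-only ansible-core versions the collection supports:

# Before (flagged by the new sanity test):
#   from ansible.module_utils.six import string_types
#   isinstance(value, string_types)

# After (plain Python 3):
value = "example"
print(isinstance(value, str))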

Reviewed-by: Alina Buzachis
Reviewed-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
2025-09-22 16:08:18 +00:00
Bianca Henderson
5148ee5f74 Reapply "Remove kubeconfig value from module invocation log (#826)" (#899) (#978)
This reverts commit 1705ced (i.e., reapplies the changes from #826); this is a temporary fix for #782 as it will re-introduce #870, which will need to be re-opened.

Reviewed-by: Alina Buzachis
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-08-11 16:46:40 +00:00
James Mighion
1705ced1b5 Revert "Remove kubeconfig value from module invocation log (#826)" (#899)
This reverts commit 6efabd3.
SUMMARY

Fixes #870
A better solution is necessary to address #782. The current code makes getting manifests practically unusable. We need to revert this commit until a better solution is found.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kubeconfig

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-22 16:49:34 +00:00
Felix Matouschek
6a0635a2bb fix(k8s,service): Hide fields first before creating diffs (#915)
SUMMARY

By hiding fields before creating a diff, hidden fields will not appear in the resulting diffs and therefore will not trigger the changed condition.
The issue can only be reproduced when a mutating webhook changes the object while the kubernetes.core.k8s module is working with it.

kubevirt/kubevirt.core#145
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kubernetes.core.module_utils.k8s.service
ADDITIONAL INFORMATION


Run kubernetes.core.k8s and create an object with hidden fields. Then run kubernetes.core.k8s again and let a webhook mutate the object that the module is working with. The module should return with changed: no.
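A toy sketch of the reordering, using a hypothetical hide_fields helper that only hides top-level keys:

def hide_fields(obj, hidden):
    # Hypothetical minimal version: drop top-level keys listed in `hidden`.
    return {k: v for k, v in obj.items() if k not in hidden}

existing = {"spec": {"replicas": 1}, "status": {"observedGeneration": 2}}
desired = {"spec": {"replicas": 1}, "status": {"observedGeneration": 3}}

# Hide first, then diff, so a webhook mutation inside a hidden field
# can no longer flip the changed condition.
changed = hide_fields(existing, {"status"}) != hide_fields(desired, {"status"})
print(changed)  # False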

Reviewed-by: Bikouo Aubin
Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-07-15 16:10:26 +00:00
Bikouo Aubin
d329e7ee42 Rebase PR #898 (#905)
This PR is a rebase of #898 so that CI can pass.
Thanks @efussi for your collaboration.
Closes #892

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-25 05:31:03 +00:00
Will Thames
9ec6912325 Extend hidden_fields to allow more complicated field definitions (#872)
SUMMARY
This allows us to ignore e.g. the last-applied-configuration annotation by specifying
metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
hidden_fields
This replaces #643 as I no longer have permissions to push to branches in this repo

Reviewed-by: Bikouo Aubin
Reviewed-by: Helen Bailey <hebailey@redhat.com>
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
Reviewed-by: Alina Buzachis
2025-03-20 10:35:51 +00:00
Steve Ovens
7cdf0d03f5 waiter.py Add ClusterOperator Test (#879)
SUMMARY
Fixes #869
During an OpenShift installation, one of the checks that the cluster is ready to proceed with configuration is ensuring that the Cluster Operators are in an Available: True, Degraded: False, Progressing: False state. While you can currently use the k8s_info module to get a JSON response, the resulting JSON needs to be iterated over several times to extract the relevant status.
This PR adds functionality to waiter.py which loops over all resource instances of the cluster operators. If any of them is not ready, the waiter returns False and the task fails. If the task succeeds, you can assume that all the cluster operators are healthy.
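A sketch of the per-operator readiness test, assuming each resource is a ClusterOperator dict as returned by the API:

def cluster_operator_ready(resource):
    # Map condition type -> status, e.g. {"Available": "True", ...}.
    conditions = {
        c["type"]: c["status"]
        for c in resource.get("status", {}).get("conditions", [])
    }
    return (
        conditions.get("Available") == "True"
        and conditions.get("Degraded") == "False"
        and conditions.get("Progressing") == "False"
    )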


ISSUE TYPE


Feature Pull Request

COMPONENT NAME

waiter.py
ADDITIONAL INFORMATION



A simple playbook will trigger the waiter.py to watch the ClusterOperator object

---
- name: get operators
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get cluster operators
      kubernetes.core.k8s_info:
        api_version: v1
        kind: ClusterOperator
        kubeconfig: "/home/ocp/one/auth/kubeconfig"
        wait: true
        wait_timeout: 30
      register: cluster_operators


This will produce the simple response if everything is functioning properly:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

If the timeout is reached:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds"}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

UNSOLVED: How to know which Operators are failing

Reviewed-by: Mandar Kulkarni <mandar242@gmail.com>
Reviewed-by: Bikouo Aubin
2025-02-26 17:53:12 +00:00
Bikouo Aubin
6efabd3418 Remove kubeconfig value from module invocation log (#826) 2024-12-17 17:50:22 +01:00
Yuriy Novostavskiy
aee847431a helm_registry_auth module to authenticate to an OCI registry (#800)
* new module helm_registry_auth

* Initial integration tests

* final update copyright and integration test before pr

* update link to pr in changelog fragment

* reformat plugins/module_utils/helm.py with black

to fix linters in actions

* attempt to fix unit test

unit test was missing initially

* fix https://pycqa.github.io/isort/ linter

* next attempt to fix unit test

* remove unused and unsupported helm_args_common

* remove unused imports and fix other linters errors

* another fix for unit test

* fix issue introduced by commit ff02893a12a31f9c44b5c48f9a8bf85057295961

* add binary_path to arg_spec

* return helm_cmd in the output of check mode

remove changelog fragment

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* remove changed from module return

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* remove redundant code

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* Update plugins/modules/helm_registry_auth.py

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* consider support of logout when user is not logged in

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* consider support helm < 3.0.0

* Revert "consider support helm < 3.0.0"

This reverts commit f20004d196.

* reintroduce support of helm version less than 3.8.0

reference: https://helm.sh/docs/topics/registries/#enabling-oci-support-prior-to-v380

* revert reintroducing support of helm < 3.8.0

reason: didn't find a quick way to deal with tests

* update documentation with the recent module updates

* Update plugins/modules/helm_registry_auth.py

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* add test of logout idempotency

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* fix linters

* fix indentations in the integration tests

* create tests/integration/targets/helm_registry_auth/aliases

* fix integration test (typo)

* fix integration tests (test wrong cred)

* add stderr when module fail

* another attempt to fix integration test

* fix assertion in integration test so it is not affected by #830

---------

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-12-17 15:39:42 +01:00
Ottavia Balducci
fca0dc0485 Fix k8s_drain runs into timeout with pods from stateful sets. (#793)
SUMMARY
Fixes #792 .
The function wait_for_pod_deletion in k8s_drain never checks on which node a pod is actually running:
            try:
                response = self._api_instance.read_namespaced_pod(
                    namespace=pod[0], name=pod[1]
                )
                if not response:
                    pod = None
                time.sleep(wait_sleep)
This means that if a pod is successfully evicted and restarted with the same name on a new node, k8s_drain does not notice and thinks that the original pod is still running. This is the case for pods that are part of a StatefulSet.
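One plausible fix (a sketch, not necessarily this PR's exact code) is to also compare the pod's UID, which changes when a StatefulSet recreates a same-named pod; original_uid below stands for the UID recorded at eviction time:

from kubernetes.client.rest import ApiException

def pod_gone(api, namespace, name, original_uid):
    # The pod is gone if the read 404s, or if a pod with the same name
    # exists but carries a different UID (i.e., it was recreated).
    try:
        response = api.read_namespaced_pod(namespace=namespace, name=name)
    except ApiException as exc:
        if exc.status == 404:
            return True
        raise
    return response.metadata.uid != original_uid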

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME
k8s_drain

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-12-10 15:35:07 +00:00
Artur Załęski
b07fbd6271 Fix waiting for daemonset when desired number of pods is 0 (#756)
Fixes #755
SUMMARY
Because we don't have any node with non_exisiting_label (see the code below), the desired number of Pods will be 0. Kubernetes won't create the .status.updatedNumberScheduled field (at least on version v1.27) because no Pods are going to be created, so if .status.updatedNumberScheduled doesn't exist we should assume that the number is 0.
Code to reproduce:
- name: Create daemonset
  kubernetes.core.k8s:
    state: present
    wait: true
    definition:
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: my-daemonset
        namespace: default
      spec:
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            containers:
              - name: my-container
                image: nginx
            nodeSelector:
              non_exisiting_label: 1
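The fix itself can be as small as defaulting the missing field when the waiter reads the DaemonSet status, sketched here with the status the reproducer above produces:

# updatedNumberScheduled is absent when no Pods will ever be scheduled.
status = {"desiredNumberScheduled": 0, "numberReady": 0}

# Treat a missing updatedNumberScheduled as 0 instead of failing the check.
updated = status.get("updatedNumberScheduled", 0)
desired = status.get("desiredNumberScheduled", 0)
print(updated == desired)  # True -> the wait can complete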
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
kubernetes.core.plugins.module_utils.k8s.waiter
ADDITIONAL INFORMATION



TASK [Create daemonset] **********************************************************************************************************************************
changed: [controlplane] => {"changed": true, "duration": 5, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "DaemonSet", "metadata": {"annotations": {"deprecated.daemonset.template.generation": "1"}, "creationTimestamp": "2024-06-28T08:23:41Z", "generation": 1, "managedFields": [{"apiVersion": "apps/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:metadata": {"f:annotations": {".": {}, "f:deprecated.daemonset.template.generation": {}}}, "f:spec": {"f:revisionHistoryLimit": {}, "f:selector": {}, "f:template": {"f:metadata": {"f:labels": {".": {}, "f:app": {}}}, "f:spec": {"f:containers": {"k:{\"name\":\"my-container\"}": {".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": {}, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}}}, "f:dnsPolicy": {}, "f:nodeSelector": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:terminationGracePeriodSeconds": {}}}, "f:updateStrategy": {"f:rollingUpdate": {".": {}, "f:maxSurge": {}, "f:maxUnavailable": {}}, "f:type": {}}}}, "manager": "OpenAPI-Generator", "operation": "Update", "time": "2024-06-28T08:23:41Z"}, {"apiVersion": "apps/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:status": {"f:observedGeneration": {}}}, "manager": "kube-controller-manager", "operation": "Update", "subresource": "status", "time": "2024-06-28T08:23:41Z"}], "name": "my-daemonset", "namespace": "default", "resourceVersion": "1088421", "uid": "faafdbf7-4388-4cec-88d5-84657966312d"}, "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "my-app"}}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "my-app"}}, "spec": {"containers": [{"image": "nginx", "imagePullPolicy": "Always", "name": "my-container", "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "nodeSelector": {"non_exisiting_label": "1"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}, "updateStrategy": {"rollingUpdate": {"maxSurge": 0, "maxUnavailable": 1}, "type": "RollingUpdate"}}, "status": {"currentNumberScheduled": 0, "desiredNumberScheduled": 0, "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1}}}

~$ kubectl get ds
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
my-daemonset   0         0         0       0            0           non_exisiting_label=1   30s

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-07-10 13:58:37 +00:00
Eric G
6a04f42d0b helm: Accept release candidate versions for compatibility checks (#745)
SUMMARY

If the helm CLI version includes -rc.1, for example, the version check fails due to an incomplete regex.
The error can be triggered by using helm v3.15.0-rc.1, for example, and applying a helm chart with wait: true.
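A sketch of a version pattern that tolerates pre-release suffixes (the collection's actual regex may differ):

import re

# Accept an optional leading "v" and an optional pre-release part like "-rc.1".
VERSION_RE = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-[0-9A-Za-z.\-]+)?$")

print(VERSION_RE.match("v3.15.0-rc.1").groups())  # ('3', '15', '0')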
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME
helm
helm_pull
ADDITIONAL INFORMATION

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Eric G.
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-06-17 18:58:42 +00:00
Bikouo Aubin
072a08091b Remove deprecated function from module_utils/common.py (#726)
Remove deprecated function from module_utils/common.py

SUMMARY

Remove deprecated functions and class from module_utils/common.py in order to prepare release 4.0.0

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

module_utils/common.py

Reviewed-by: Alina Buzachis
2024-05-24 05:29:46 +00:00
Alina Buzachis
966fa7e906 k8s - remove support for merge_type=json (#722)
k8s - remove support for merge_type=json

SUMMARY

Support for merge_type=json has been removed in version 4.0.0. Please use kubernetes.core.k8s_json_patch instead.

ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

k8s.py
ADDITIONAL INFORMATION

Reviewed-by: Bikouo Aubin
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-05-22 10:13:47 +00:00
Felix Matouschek
600c10dffb k8s: Display warnings to users (#701)
k8s: Display warnings to users

SUMMARY
This changes K8sService and the k8s module so warnings returned by the K8S API are displayed to the user.
Fixes kubevirt/kubevirt.core#30
Fixes kubevirt/kubevirt.core#31
ISSUE TYPE


Feature Pull Request

COMPONENT NAME


k8s module
K8sService

ADDITIONAL INFORMATION



Before:
TASK [Create VM] **********************************************************************************************************************************************
ok: [localhost]

After:
TASK [Create VM] **********************************************************************************************************************************************
[WARNING]: unknown field "spec.template.spec.disk"
[WARNING]: unknown field "spec.template.spec.domain.bogus"
ok: [localhost]

Reviewed-by: Adam Miller <admiller@redhat.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Felix Matouschek <felix@matouschek.org>
2024-05-06 13:35:52 +00:00
Wout Van De Wiel
9f7c865c9c helm - expand kubeconfig path with user's home dir (#654)
helm - expand kubeconfig path with user's home dir

SUMMARY

Currently the helm module fails when providing the default kubeconfig path explicitly, while the same path is fine for the k8s module.
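The fix boils down to expanding the user's home directory before the path reaches the Helm CLI environment, sketched below:

import os

# "~" is only meaningful to a shell; expand it explicitly before exporting
# the path to the helm subprocess environment.
kubeconfig = os.path.expanduser("~/.kube/config")
print(kubeconfig)  # e.g. /home/vagrant/.kube/config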

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

helm
ADDITIONAL INFORMATION



- name: Deploy kubelet-csr-approver
  delegate_to: client
  run_once: true
  kubernetes.core.helm:
    update_repo_cache: true
    kubeconfig: "~/.kube/config"
    state: present
    name: kubelet-csr-approver
    namespace: kubelet-csr-approver
    create_namespace: true
    chart_ref: kubelet-csr-approver/kubelet-csr-approver
    chart_version: 1.0.5
    values: "{{ lookup('template', 'values.yaml.j2') | from_yaml }}"
    atomic: true

Before change:
TASK [kubernetes/kubelet_csr_approver : Deploy kubelet-csr-approver] ***
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: FileNotFoundError: [Errno 2] No such file or directory: '~/.kube/config'
fatal: [node-1 -> client(192.168.121.56)]: FAILED! => {"changed": false, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n  File \"/home/vagrant/.ansible/tmp/ansible-tmp-1697293347.7135417-118207-9805169252135/AnsiballZ_helm.py\", line 107, in <module>\r\n    _ansiballz_main()\r\n  File \"/home/vagrant/.ansible/tmp/ansible-tmp-1697293347.7135417-118207-9805169252135/AnsiballZ_helm.py\", line 99, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/home/vagrant/.ansible/tmp/ansible-tmp-1697293347.7135417-118207-9805169252135/AnsiballZ_helm.py\", line 47, in invoke_module\r\n    runpy.run_module(mod_name='ansible_collections.kubernetes.core.plugins.modules.helm', init_globals=dict(_module_fqn='ansible_collections.kubernetes.core.plugins.modules.helm', _modlib_path=modlib_path),\r\n  File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\r\n    return _run_module_code(code, init_globals, run_name, mod_spec)\r\n  File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\r\n    _run_code(code, mod_globals, init_globals,\r\n  File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n    exec(code, run_globals)\r\n  File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py\", line 924, in <module>\r\n  File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py\", line 737, in main\r\n  File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py\", line 435, in run_repo_update\r\n  File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py\", line 169, in run_helm_command\r\n  File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py\", line 162, in env_update\r\n  File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py\", line 120, in _prepare_helm_environment\r\nFileNotFoundError: [Errno 2] No such file or directory: '~/.kube/config'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

After change:
TASK [kubernetes/kubelet_csr_approver : Deploy kubelet-csr-approver] ***
changed: [node-1 -> client(192.168.121.56)]

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
2024-03-13 13:16:38 +00:00
bastienbosser
1955989278 fix(Collection's util resource discovery fails when complex subresources present #659) (#676)
* fix(Collection's util resource discovery fails when complex subresources present #659)

* fix(add changelog fragment)

* update node image

* Create discovery.yml

* Update main.yml

---------

Co-authored-by: Bastien Bosser <bastien.bosser@eviden.com>
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-02-29 14:38:45 +01:00
GomathiselviS
fe9c12326d Update main branch post 3.0.0 release (#663)
Update main branch post 3.0.0 release

SUMMARY


ISSUE TYPE


Docs Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Bikouo Aubin
2023-11-21 17:23:25 +00:00
GomathiselviS
1670e35cd8 Update python kubernetes library to 24.2.0 , helm/kind-action to 1.8.0 2023-11-15 15:43:15 -05:00
GomathiselviS
b066a2dda3 Cleanup GitHub workflows (#655)
* Cleanup gha

* test by removing matrix excludes

* Rename sanity tests

* trigger integration tests

* Fix ansible-lint workflow

* Fix concurrency

* Add ansible-lint config

* Add ansible-lint config

* Fix integration and lint issues

* integration wf

* fix yamllint issues

* fix yamllint issues

* update readme and add ignore-2.16.txt

* fix ansible-doc

* Add version

* Use /dev/random to generate random data

The GHA environment has difficulty generating entropy. Trying to read
from /dev/urandom just blocks forever. We don't care if the random data
is cryptographically secure; it's just garbage data for the test. Read
from /dev/random, instead. This is only used during the k8s_copy test
target.

This also removes the custom test module that was being used to generate
the files. It's not worth maintaining this for two task that can be
replaced with some simple command/shell tasks.

* Fix sanity errors

* test github_action fix

* Address review comments

* Remove default types

* review comments

* isort fixes

* remove tags

* Add setuptools to venv

* Test gh changes

* update changelog

* update ignore-2.16

* Fix indentation in inventory plugin example

* Update .github/workflows/integration-tests.yaml

* Update integration-tests.yaml

---------

Co-authored-by: Mike Graves <mgraves@redhat.com>
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2023-11-10 16:33:40 +01:00
Will Thames
9e9962bc6c Provide a mechanism to hide fields from output (#629)
Provide a mechanism to hide fields from output

SUMMARY
The k8s and k8s_info modules can be a little noisy in verbose mode, and most of that is due to managedFields.
If we can provide a mechanism to hide managedFields, the output is a lot more useful.
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
k8s, k8s_info
ADDITIONAL INFORMATION
Before
ANSIBLE_COLLECTIONS_PATH=../../.. ansible -m k8s_info -a 'kind=ConfigMap name=hide-fields-cm namespace=hide-fields' localhost 
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
    "api_found": true,
    "changed": false,
    "resources": [
        {
            "apiVersion": "v1",
            "data": {
                "another": "value",
                "hello": "world"
            },
            "kind": "ConfigMap",
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"another\":\"value\",\"hello\":\"world\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"hide-fields-cm\",\"namespace\":\"hide-fields\"}}\n"
                },
                "creationTimestamp": "2023-06-13T01:47:47Z",
                "managedFields": [
                    {
                        "apiVersion": "v1",
                        "fieldsType": "FieldsV1",
                        "fieldsV1": {
                            "f:data": {
                                ".": {},
                                "f:another": {},
                                "f:hello": {}
                            },
                            "f:metadata": {
                                "f:annotations": {
                                    ".": {},
                                    "f:kubectl.kubernetes.io/last-applied-configuration": {}
                                }
                            }
                        },
                        "manager": "kubectl-client-side-apply",
                        "operation": "Update",
                        "time": "2023-06-13T01:47:47Z"
                    }
                ],
                "name": "hide-fields-cm",
                "namespace": "hide-fields",
                "resourceVersion": "2557394",
                "uid": "f233da63-6374-4079-9825-3562c0ed123c"
            }
        }
    ]
}

After
ANSIBLE_COLLECTIONS_PATH=../../.. ansible -m k8s_info -a 'kind=ConfigMap name=hide-fields-cm namespace=hide-fields hidden_fields=metadata.managedFields' localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
    "api_found": true,
    "changed": false,
    "resources": [
        {
            "apiVersion": "v1",
            "data": {
                "another": "value",
                "hello": "world"
            },
            "kind": "ConfigMap",
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"another\":\"value\",\"hello\":\"world\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"hide-fields-cm\",\"namespace\":\"hide-fields\"}}\n"
                },
                "creationTimestamp": "2023-06-13T01:47:47Z",
                "name": "hide-fields-cm",
                "namespace": "hide-fields",
                "resourceVersion": "2557394",
                "uid": "f233da63-6374-4079-9825-3562c0ed123c"
            }
        }
    ]
}

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Will Thames
2023-06-21 07:57:53 +00:00
Bikouo Aubin
151ed8245f make name optional to delete all resources for the specified resource type (#517)
make name optional to delete all resources for the specified resource type

SUMMARY

closes #504
k8s module should allow deleting all namespace resources for the specified resource type.

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

k8s
ADDITIONAL INFORMATION


Delete all Pods from namespace test

- k8s:
    namespace: test
    kind: Pod
    api_version: v1
    delete_all: true
    state: absent

Reviewed-by: Gonéri Le Bouder <goneri@lebouder.net>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
2023-03-23 15:43:22 +00:00
Mandar Kulkarni
deaf8ee4f3 k8s_scale - handle scaling StatefulSets with 'updateStrategy=OnDelete' (#579)
k8s_scale - handle scaling StatefulSets with 'updateStrategy=OnDelete'

SUMMARY

Likely Fixes #503

Handle scaling StatefulSets with 'updateStrategy=OnDelete'
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

k8s_scale
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2023-02-06 20:04:41 +00:00
Bikouo Aubin
8ed4d4b6ed k8s_info - fix issue with kubernetes-client caching when api-server was available (#571)
k8s_info - fix issue with kubernetes-client caching when api-server was available

SUMMARY
closes #508
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

k8s_info
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
2023-01-24 10:43:30 +00:00
Bikouo Aubin
af7c24cba7 helm - add support for --set options when running helm install (#546)
helm - add support for --set options when running helm install

SUMMARY

helm supports setting the options --set, --set-string, --set-file and --set-json when running helm install
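A sketch of the flag mapping, assuming each entry carries a value string and a type selector (the module's real option shape may differ):

FLAGS = {
    "raw": "--set",
    "string": "--set-string",
    "file": "--set-file",
    "json": "--set-json",
}

def set_value_args(entries):
    # Translate entries like {"value": "image.tag=1.2.3", "value_type": "raw"}
    # into helm CLI arguments.
    args = []
    for entry in entries:
        args += [FLAGS[entry.get("value_type", "raw")], entry["value"]]
    return args

print(set_value_args([{"value": "image.tag=1.2.3", "value_type": "string"}]))
# ['--set-string', 'image.tag=1.2.3']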

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

helm
ADDITIONAL INFORMATION

Reviewed-by: Alina Buzachis <None>
Reviewed-by: Bikouo Aubin <None>
Reviewed-by: Mike Graves <mgraves@redhat.com>
2023-01-23 16:19:42 +00:00
Bikouo Aubin
804b9ab57c Helm - Fix issue with alternative kubeconfig (#563)
Helm - Fix issue with alternative kubeconfig

SUMMARY

closes #538

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

helm modules

Reviewed-by: Mike Graves <mgraves@redhat.com>
2023-01-12 09:46:42 +00:00
Bikouo Aubin
26cd550bc0 fix multiple issues with dry_run logic (#561)
fix multiple issues with dry_run logic

SUMMARY

Fix multiple issues with dry_run logic

The parameter value passed to the client is now set to dry_run=All instead of dry_run=True.
Added a conditional check on the Kubernetes release before setting dry_run.
Added an integration test that ensures server-side dry run is used during check mode.


ISSUE TYPE


Bugfix Pull Request

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Jill R <None>
2023-01-11 07:57:39 +00:00
Bikouo Aubin
42ee210ecf k8s_cp - fix issue when directory contains space in its name (#552)
k8s_cp - fix issue when directory contains space in its name

Depends-On: #549
SUMMARY

There is a remaining issue not addressed by #512 when copying a directory from a Pod to the local filesystem: if the directory contains a space in its name, the directory was not copied.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

k8s_cp
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2022-12-15 18:17:21 +00:00
Bikouo Aubin
c073eea5b3 k8s - fix issue with server side apply (#549)
k8s - fix issue with server side apply

SUMMARY

Fix #548 and #547

ISSUE TYPE


Bugfix Pull Request

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2022-12-15 18:17:17 +00:00
Bikouo Aubin
979b492233 k8s_cp: add support for check_mode, fix doc issue, remove dependency with 'find' when state=from_pod (#512)
k8s_cp: add support for check_mode, fix doc issue, remove dependency with 'find' when state=from_pod

Depends-On: ansible/ansible-zuul-jobs#1635
Depends-On: ansible/ansible-zuul-jobs#1636
Depends-On: #518
Depends-On: #520
SUMMARY

add support for check_mode, closes #380
fix doc issue, closes #485
Remove dependency with 'find' executable when state=from_pod, closes #486

ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request

Reviewed-by: Gonéri Le Bouder <goneri@lebouder.net>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2022-12-09 16:00:14 +00:00
Fors1
2a3862b67a Added possibility to get all values by helm_info module (#531)
Added possibility to get all values by helm_info module

SUMMARY
The get_all_values parameter has been added, which is passed to the get_values function. It defaults to False and is not required.
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
helm_info
ADDITIONAL INFORMATION
Unfortunately, the helm_info module lacked the ability to get all the values of a helm release, including the default ones, which restricts upgrade and config migration capabilities. The get_all_values parameter has been added; if set, it adds the -a flag to the helm get values call. The parameter is not required and defaults to False, so backward compatibility is preserved.
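A sketch of the resulting command-line difference (assuming the module shells out to the helm binary):

def get_values_cmd(release, get_all_values=False):
    # helm's -a/--all flag includes computed and default values in the output.
    cmd = "helm get values %s" % release
    if get_all_values:
        cmd += " -a"
    return cmd

print(get_values_cmd("my-release", get_all_values=True))  # helm get values my-release -a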

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2022-10-19 15:41:37 +00:00
Bikouo Aubin
2092d921cd helm - new module to perform helm pull (#410)
helm - new module to perform helm pull

Depends-On: ansible/ansible-zuul-jobs#1586
SUMMARY

#355
new module to manage chart downloading via helm pull

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

helm_pull

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2022-10-12 13:34:19 +00:00
Mike Graves
29c75fa1c6 Update for Ansible 2.15 sanity tests (#515)
Update for Ansible 2.15 sanity tests

Depends-On: ansible/ansible-zuul-jobs#1639
SUMMARY

Update for Ansible 2.15 sanity tests

Fixes #519
ISSUE TYPE

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Bikouo Aubin <None>
2022-10-06 09:50:26 +00:00
Bikouo Aubin
a3a5f3cf4b helm - add support for in-memory kubeconfig (#497)
helm - add support for in-memory kubeconfig

SUMMARY

closes #492
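The Helm CLI only accepts a kubeconfig as a file path, so an in-memory dict has to be written out first; a sketch of the idea (not necessarily the module's exact code):

import os
import tempfile

import yaml  # PyYAML

kubeconfig = {"apiVersion": "v1", "kind": "Config", "clusters": [], "contexts": [], "users": []}

# Persist the in-memory kubeconfig so it can be passed via KUBECONFIG.
fd, path = tempfile.mkstemp(suffix=".yaml")
with os.fdopen(fd, "w") as f:
    yaml.safe_dump(kubeconfig, f)
os.environ["KUBECONFIG"] = path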

ISSUE TYPE


Feature Pull Request

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin <None>
2022-09-12 09:13:19 +00:00
Bikouo Aubin
5ff3566f30 handle aliases for lookup and inventory plugins for authentication options (#500)
Honor aliases for lookup and inventory plugins

rebase and extend PR #71
ISSUE TYPE


Bugfix Pull Request

Reviewed-by: Mike Graves <mgraves@redhat.com>
2022-08-23 07:58:08 +00:00
Bikouo Aubin
7d0f0449ae Support resource definition using manifest URL (#478)
Support resource definition using manifest URL

SUMMARY

Closes #451

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

k8s
k8s_scale
k8s_service

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Abhijeet Kasurde <None>
Reviewed-by: Bikouo Aubin <None>
2022-07-04 12:49:53 +00:00
Mike Graves
4f1623fe9c Add deprecation notice 2022-06-17 10:39:52 -04:00
Mike Graves
beb53652db Ensure CoreExceptions are handled gracefully (#476)
Ensure CoreExceptions are handled gracefully

SUMMARY

CoreExceptions, when raised, should have a reasonably helpful and
actionable message associated with them. This adds a final check in
module execution to gracefully fail from these exceptions. A new
fail_from_exception method is added both to simplify exiting the module,
and to ensure that any chained exceptions are available when using -vvv.
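A sketch of what such a guard can look like (the actual fail_from_exception signature in the collection may differ):

import traceback

def fail_from_exception(module, exc):
    # module is an AnsibleModule; keep the chained traceback in the result
    # so -vvv can display it alongside the failure message.
    module.fail_json(msg=str(exc), exception=traceback.format_exc())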

ISSUE TYPE

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Alina Buzachis <None>
Reviewed-by: Joseph Torcasso <None>
2022-06-15 13:26:24 +00:00
Mike Graves
92785f58da Port changes from main to refactored branch (#472)
Port changes from main to refactored branch

Depends-on: ansible/ansible-zuul-jobs#1563
SUMMARY

This PR contains several commits that complete the rebase of the 2.x-refactor branch onto main. Most of the changes here had to be manually backported after rebasing as the original changes were to code that will be deprecated. In addition, rather than trying to manually sort out conflicts and changes to the sanity ignores, I rewrote the refresh_ignore_files script to fully automate the management of ignore files. Previously, these files were both manually edited and auto-generated. This should no longer be the case, and these files should never be manually edited going forward.
For the purposes of reviewing and history, I kept all changes in separate commits tied to the original commit being backported.

ISSUE TYPE

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Jill R <None>
2022-06-09 15:20:48 +00:00
Mike Graves
25644ac192 Move diff and wait to perform_action (#375)
This primarily moves the diff and wait logic from the various service
methods to perform_action to eliminate code duplication. I also moved
the diff_objects function out of the service object and moved most of
the find_resource logic to a new resource client method. We ended up
with several modules creating a service object just to use one of these
methods, so it seemed to make sense to make these more accessible.
2022-05-26 08:56:56 -04:00
Alina Buzachis
3bf147580f Migrate k8s (#311)
* Use refactored module_utils

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix runner

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix runner

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Update runner.py

* black runner

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix units

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix ResourceTimeout

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Attempt to fix 'Create custom resource'

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Update svc.find_resource(..., fail=True)

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Attempt to fix integration tests

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix apiVersion for Job

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix crd

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add exception = None

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix apiVersion for definition

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix assert

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix returned results

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Update runner to return results accordingly

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Fix assert

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Add validate-missing

Signed-off-by: Alina Buzachis <abuzachis@redhat.com>

* Update client.py

* Fix failures

* Fix black formatting

Co-authored-by: Mike Graves <mgraves@redhat.com>
2022-05-26 08:15:45 -04:00
Mike Graves
08a3d951d0 Move module dependency functions outside of module (#342)
Move module dependency functions outside of module

SUMMARY

This moves the has_at_least and requires functions that had been on the
module to top level functions. The functions on the module now call
these with a few added bits of functionality.
Moving these functions to the top level and removing their requirement
on having a module makes them usable in situations where we may not yet
have a module, such as during client creation.

ISSUE TYPE

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Alina Buzachis <None>
Reviewed-by: None <None>
2022-05-24 14:28:30 -04:00
Mike Graves
8171c994df [backport/2.2] Migrate k8s_cp module to new refactored code (#329)
Co-authored-by: Alina Buzachis <abuzachis@redhat.com>
2022-05-24 14:28:27 -04:00
Alina Buzachis
d68da5bbdd k8s runner (#309)
k8s runner

SUMMARY

k8s runner
Requires: #307

ISSUE TYPE

New Module Pull Request

Reviewed-by: Abhijeet Kasurde <None>
Reviewed-by: Alina Buzachis <None>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: None <None>
2022-05-24 11:57:54 -04:00
Alina Buzachis
e2f54d3431 K8sService class (#307)
K8sService class

SUMMARY

This refactors the perform_action() logic from common.py into a separate K8sService class.
TODO:

 Unit tests.

ISSUE TYPE

New Module Pull Request

COMPONENT NAME

service.py

Reviewed-by: Abhijeet Kasurde <None>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis <None>
Reviewed-by: None <None>
2022-05-24 11:57:01 -04:00