Compare commits


61 Commits

Author SHA1 Message Date
Alina Buzachis
fa3d94f793 Prep kubernetes.core 6.1.0 (#977)
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-08-12 18:25:31 +00:00
patchback[bot]
9ec27cf37c CI fix for 976 (#982) (#985)
This is a backport of PR #982 as merged into main (a861079).
SUMMARY
Exclude plugins/connection/kubectl.py from ansible-lint, as this file contains only simplified examples that may not conform to linter rules.
resolves #976
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
CI
ADDITIONAL INFORMATION

Reviewed-by: Alina Buzachis
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-08-12 15:59:34 +00:00
Bianca Henderson
10cb241256 Reapply "Remove kubeconfig value from module invocation log (#826)" (#899) (#966) (#979)
This reverts commit 1d962fb from stable-6 (i.e., reapplies the changes from #966); this is a temporary fix for #782 as it will re-introduce #870.

Reviewed-by: Alina Buzachis
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-08-11 16:48:04 +00:00
patchback[bot]
92e1f581fe fix(k8s,service): Hide fields first before creating diffs (#915) (#963)
This is a backport of PR #915 as merged into main (6a0635a).
SUMMARY

By hiding fields before creating a diff, hidden fields will not be shown in the resulting diffs and therefore will not trigger the changed condition.
The issue can only be reproduced when a mutating webhook changes the object while the kubernetes.core.k8s module is working with it.

kubevirt/kubevirt.core#145
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kubernetes.core.module_utils.k8s.service
ADDITIONAL INFORMATION


Run kubernetes.core.k8s and create an object with hidden fields. Afterwards, run kubernetes.core.k8s again and let a webhook mutate the object the module is working with. The module should return changed: no.
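The reproduction steps can be sketched as follows (the ConfigMap definition and the hidden field path are illustrative, not taken from the PR):

---
- name: Create an object with hidden fields
  kubernetes.core.k8s:
    state: present
    hidden_fields:
      - metadata.managedFields
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: default
      data:
        key: value

- name: Apply again while a mutating webhook changes the object
  kubernetes.core.k8s:
    state: present
    hidden_fields:
      - metadata.managedFields
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: default
      data:
        key: value
  register: second_run

With the fix, second_run.changed should be false even if the webhook mutated only hidden fields.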

Reviewed-by: Alina Buzachis
2025-08-04 18:03:21 +00:00
patchback[bot]
7f69aff0d6 k8s_json_patch: support the hidden_fields param (#964) (#972)
This is a backport of PR #964 as merged into main (c48778d).
SUMMARY
Add support for hidden_fields on k8s_json_patch

ISSUE TYPE

Feature Pull Request

COMPONENT NAME
k8s_json_patch
ADDITIONAL INFORMATION
Works exactly the same as k8s
Haven't pushed the docs yet because of the many changes; will do it in a separate commit if the tests pass.
This is my 1st commit here, sorry if I forget some things.
Thanks!
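A usage sketch, assuming the parameter mirrors the k8s module's hidden_fields (the resource names here are illustrative):

---
- name: Patch a deployment while hiding noisy fields from diffs
  kubernetes.core.k8s_json_patch:
    kind: Deployment
    namespace: default
    name: example-deployment
    hidden_fields:
      - metadata.managedFields
    patch:
      - op: replace
        path: /spec/replicas
        value: 2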

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-08-01 18:24:39 +00:00
patchback[bot]
8c65ac066d Add support for take-ownership Helm flag (#957) (#969)
This is a backport of PR #957 as merged into main (cf3c3a9).
SUMMARY
Add support for take-ownership Helm flag added in Helm 3.17.0
ISSUE TYPE

Feature Pull Request

COMPONENT NAME

kubernetes.core.helm
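A usage sketch; the module parameter name take_ownership is an assumption derived from the Helm flag name, and the chart reference is illustrative:

---
- name: Install a chart, taking ownership of existing resources (Helm >= 3.17.0)
  kubernetes.core.helm:
    name: example-release
    chart_ref: oci://registry.example.com/charts/example
    release_namespace: default
    state: present
    take_ownership: true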

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-28 15:48:39 +00:00
patchback[bot]
1d962fb932 Revert "Remove kubeconfig value from module invocation log (#826)" (#899) (#966)
This is a backport of PR #899 as merged into main (1705ced).
This reverts commit 6efabd3.
SUMMARY

Fixes #870
A better solution is necessary to address #782. The current code makes getting manifests practically unusable. We need to revert this commit until a better solution is found.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kubeconfig

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-22 21:29:21 +00:00
patchback[bot]
27ce23aa72 Fix integration test with ansibe-core 2.20 (#951) (#960)
This is a backport of PR #951 as merged into main (f568c9d).
SUMMARY
Now that ansible-core 2.19.0rc1 has been released, ansible-core’s devel branch has been bumped from 2.19.0.dev0 to 2.20.0.dev0. This potentially requires collection CIs to be updated which rely on devel using tests/sanity/ignore-2.19.txt, for example. Also it’s now time to add stable-2.19 to CI if you relied on devel to cover 2.19 so far. Note that milestone has also been updated to 2.20.0dev0.
During testing, I noticed that the test tasks/test_helm_not_installed.yml failed due to the new error message with ansible-core 2.20; see the linked comments for details.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
test/CI (tasks/test_helm_not_installed.yml)
ADDITIONAL INFORMATION
to be cherry-picked to the stable-6 and stable-5

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-15 15:34:33 +00:00
patchback[bot]
993652b581 Add more functionality coverage to k8s_rollback integration test (#950) (#956)
This is a backport of PR #950 as merged into main (94e4235).
SUMMARY

Resolves #344

This revision adds the following test coverage:

Label Selectors: Tests rollback using label selectors to target specific deployments.
No Rollout History: Tests the warning scenario when attempting to rollback a deployment with only one revision.
Unsupported Resource Types: Tests error handling when trying to rollback unsupported resources like Services.
Non-existent Resources: Tests behavior when attempting to rollback resources that don't exist.
Multiple Resource Rollback: Tests bulk rollback operations using label selectors on multiple deployments.
Return Value Validation: Comprehensive validation of the rollback_info structure and content.
Field Selectors: Tests rollback using field selectors to target specific resources.
Check Mode Validation: Additional validation of check mode behavior and return values.

COMPONENT NAME

tests/integration/targets/k8s_rollback/tasks/main.yml

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-14 14:36:29 +00:00
patchback[bot]
9bd9d22db3 Fix the integration test for helm_registry_auth with helm >= 3.18.0 and clarify idempotency. (#946) (#953)
This is a backport of PR #946 as merged into main (642eb93).
SUMMARY
Fix the integration test for helm_registry_auth with helm >= 3.18.0 and clarify idempotency.
Fixes #944
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
helm_registry_auth
ADDITIONAL INFORMATION
Caused by the changes in helm starting from 3.18.0

Reviewed-by: Bikouo Aubin
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-07-08 15:56:48 +00:00
patchback[bot]
0593426918 Add plain_http parameter to helm, helm_pull and helm_template (#934) (#949)
This is a backport of PR #934 as merged into main (775959c).
SUMMARY

This change introduces the plain_http parameter to modules that can interact with OCI registries. This is needed in cases where the OCI registry does not use SSL encryption, forcing Helm to send HTTP requests instead of HTTPS.

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

helm, helm_pull and helm_template
ADDITIONAL INFORMATION


This is the output when trying to use an OCI registry that is not configured to use SSL certs.

fatal: [localhost]: FAILED! => {"changed": false, "command": "/usr/local/bin/helm show chart 'oci://<http-registry>/charts/foo'", "msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: Get \"https://<http-registry>/v2/charts/foo/tags/list\": http: server gave HTTP response to HTTPS client\n", "stderr": "Error: Get \"https://<http-registry>/v2/charts/foo/tags/list\": http: server gave HTTP response to HTTPS client\n", "stderr_lines": ["Error: Get \"https://<http-registry>/v2/charts/foo/tags/list\": http: server gave HTTP response to HTTPS client"], "stdout": "", "stdout_lines": []}
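With the new parameter, a call against such a registry could look like this (a sketch reusing the placeholder registry from the error output above):

---
- name: Pull a chart from an OCI registry served over plain HTTP
  kubernetes.core.helm_pull:
    chart_ref: "oci://<http-registry>/charts/foo"
    destination: /tmp
    plain_http: true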

Reviewed-by: Bikouo Aubin
2025-06-12 10:53:42 +00:00
Bianca Henderson
8fa5b201a4 Prep release 6.0.0 (#933)
SUMMARY

Prep kubernetes.core 6.0.0
Prerequisite: Release of community.okd/redhat.openshift 4.0.2 needs to happen first

ISSUE TYPE


Feature Pull Request

COMPONENT NAME
Multiple

Reviewed-by: Bikouo Aubin
2025-06-03 16:56:33 +00:00
Bikouo Aubin
94c1f57f36 Push 5.x.x changes into main branch (#932)
Release 5.3.0 is out, update the main branch to reflect these changes.

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-05-16 16:06:45 +00:00
Bianca Henderson
d0b97319a5 Update README to remove information about backports. (#930)
Per this comment, I am removing information about backports that was added in #926; per the Cloud Content Handbook page on backports, we will only be backporting to the two latest versions, and since mentioning specific branches and versions in this collection's README in this manner will add to future maintenance/upkeep burden, I opted to remove this line entirely.
I will be creating a separate PR to manually backport the new README information to stable-5.

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
2025-05-15 13:45:09 +00:00
Bianca Henderson
38d5c81051 Add information in README stating stable-4 is no longer supported (#926)
SUMMARY

Resolves ACA-2383.

ISSUE TYPE


Docs Pull Request

COMPONENT NAME

README.md
ADDITIONAL INFORMATION
Also added information about backporting only bugfixes to stable-3 and made some minor capitalization edits.

Reviewed-by: Bikouo Aubin
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
2025-05-14 17:13:52 +00:00
Noah Lehmann
914a16ec5c Add helm insecure skip tls verify (#901)
SUMMARY
Added the option insecure_skip_tls_verify to the following helm modules:

helm_repository
helm
Unified the option with alias in helm_pull

For helm, added the option to the helm diff call, as it got fixed upstream.
Upstream Issue: databus23/helm-diff#503
Fixed with: helm/helm#12856
Fixes #694
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

kubernetes.core.helm
kubernetes.core.helm_repository
kubernetes.core.helm_pull

ADDITIONAL INFORMATION
Basically, the option was added to the module's parameter set and docs, and is injected into the helm and helm-diff binary calls if set. Defaults to False.
Example
---
- name: Test helm modules
  tasks:
    - name: Test helm repository insecure
      kubernetes.core.helm_repository:
        name: insecure
        repo_url: "<helm-repo-with-self-signed-tls>"
        state: present
        insecure_skip_tls_verify: true
    - name: Test helm pull insecure
      kubernetes.core.helm_pull:
        chart_ref: "oci://<helm-repo-with-self-signed-tls>/project"
        destination: /tmp
        insecure_skip_tls_verify: true
    - name: Test helm insecure
      kubernetes.core.helm:
        name: insecure
        chart_ref: "oci://<helm-repo-with-self-signed-tls>/project"
        namespace: helm-insecure-test
        state: present
        insecure_skip_tls_verify: true
Note
helm_template might need an alias, as its option is called insecure_registry, though in the Helm manual and docs it corresponds to --insecure-skip-tls-verify as well.
Not included here, as it was recently merged with #805.

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Noah Lehmann
Reviewed-by: Bikouo Aubin
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-05-02 16:24:26 +00:00
Yuriy Novostavskiy
cb2070c93f Initial update to 6.0.0-dev0: remove support of ansible-core<2.16.0 and k8s inventory plugin (#867)
SUMMARY
This is an initial PR to prepare the main branch for version 6.0.0 (6.0.0-dev0 in galaxy.yml) and includes the following breaking changes:

removed support of ansible-core<2.16.0, as 2.15 reached EOL in Nov 2024;
removed the k8s inventory plugin that was deprecated in release 3.0.0.

ISSUE TYPE

Feature Pull Request

COMPONENT NAME

Documentation
galaxy.yml
inventory/k8s.py

ADDITIONAL INFORMATION
The initial version of this PR doesn't remove tests/sanity/ignore-2.14.txt and tests/sanity/ignore-2.15.txt, and the CI part will require removing version 2.15 from the matrix in https://github.com/ansible-network/github_actions, so we have an external dependency here.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-04-29 18:23:11 +00:00
Bianca Henderson
b594d35931 Update ansible-lint version to 25.1.2 (#919)
* Update ansible-lint version to 25.1.2

* Add changelog file
2025-04-29 11:54:09 -04:00
b0z02003
00699ac3e5 add reset_then_reuse_values support to helm module (#802)
SUMMARY
Starting with version 3.14.0, Helm supports --reset-then-reuse-values, as discussed on the original PR. This greatly improves on --reuse-values, as it avoids template errors when new features are added to an upgraded chart.
Closes #803
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
helm
ADDITIONAL INFORMATION
This PR is greatly 'inspired' by #575; because I wasn't sure how to provide additional tests for it, I copied those built previously for --reuse-values (as it is an improvement on that feature).
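A usage sketch; the parameter name reset_then_reuse_values follows the PR title, and the release/chart names are illustrative:

---
- name: Upgrade a release, resetting then reusing values (Helm >= 3.14.0)
  kubernetes.core.helm:
    name: example-release
    chart_ref: example-repo/example-chart
    release_namespace: default
    reset_then_reuse_values: true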

Reviewed-by: Bikouo Aubin
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: b0z02003
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-28 15:11:58 +00:00
Bikouo Aubin
d329e7ee42 Rebase PR #898 (#905)
This PR is a rebase of #898 for CI to pass
Thanks @efussi for your collaboration.
Closes #892

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-25 05:31:03 +00:00
Yuriy Novostavskiy
d4fc22c74e Bugfix: fix unit-source for pre-release of ansible-core 2.20 (devel and milestone branch) (#903)
SUMMARY
CI fix for #904
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
tests/unit
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-04-24 14:56:24 +00:00
Mike Graves
b648f45e90 Prep 5.2.0 release (#891) (#896)
SUMMARY
Prep 5.2.0 release
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bikouo Aubin
Reviewed-by: Alina Buzachis
(cherry picked from commit 0eff03d)

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
2025-04-02 13:39:41 +00:00
Bikouo Aubin
2cb5d6c316 Run integration tests using ansible-core 2.19 (#888)
* fix integration test ``k8s_full`` running with ansible-core 2.19

* Fix templating issues

* fix test on current ansible version

* fix tests cases

* Fix additional tests

* fix the templating mechanism

* consider using variable_[start/end]_string while parsing template

* Remove support for omit into template option

* Remove unnecessary unit tests
2025-04-01 11:15:30 +02:00
Bikouo Aubin
0e7229cf8d Push changes from 3.3.1 into main branch (#893)
Release 3.3.1 is out; push changes to main branch

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2025-03-31 09:06:26 +00:00
Will Thames
9ec6912325 Extend hidden_fields to allow more complicated field definitions (#872)
SUMMARY
This allows us to ignore e.g. the last-applied-configuration annotation by specifying
metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
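For example (a sketch; the resource kind and namespace are illustrative):

---
- name: Query deployments without the noisy annotation in results or diffs
  kubernetes.core.k8s_info:
    kind: Deployment
    namespace: default
    hidden_fields:
      - metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]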
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
hidden_fields
This replaces #643 as I no longer have permissions to push to branches in this repo

Reviewed-by: Bikouo Aubin
Reviewed-by: Helen Bailey <hebailey@redhat.com>
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
Reviewed-by: Alina Buzachis
2025-03-20 10:35:51 +00:00
Steve Ovens
7cdf0d03f5 waiter.py Add ClusterOperator Test (#879)
SUMMARY
Fixes #869
During an OpenShift installation, one of the checks that the cluster is ready to proceed with configuration is to ensure that the Cluster Operators are in an Available: True, Degraded: False, Progressing: False state. While you can currently use the k8s_info module to get a JSON response, the resulting JSON needs to be iterated over several times to get the appropriate status.
This PR adds functionality to waiter.py which loops over all resource instances of the cluster operators. If any of them is not ready, the waiter returns False and the task fails. If the task returns successfully, you can assume that all the cluster operators are healthy.


ISSUE TYPE


Feature Pull Request

COMPONENT NAME

waiter.py
ADDITIONAL INFORMATION



A simple playbook will trigger the waiter.py to watch the ClusterOperator object

---
- name: get operators
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get cluster operators
      kubernetes.core.k8s_info:
        api_version: v1
        kind: ClusterOperator
        kubeconfig: "/home/ocp/one/auth/kubeconfig"
        wait: true
        wait_timeout: 30
      register: cluster_operators


This will produce the simple response if everything is functioning properly:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

If the timeout is reached:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds"}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

UNSOLVED: How to know which Operators are failing

Reviewed-by: Mandar Kulkarni <mandar242@gmail.com>
Reviewed-by: Bikouo Aubin
2025-02-26 17:53:12 +00:00
Yuriy Novostavskiy
91df2f10bc Fix linters in CI (#873)
SUMMARY
It seems that recent updates in linters broke CI. Closes #874
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
CI
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Yuriy Novostavskiy
2025-02-06 15:16:55 +00:00
Yuriy Novostavskiy
1943dfc3d9 Post release 5.1.0 update (#866)
SUMMARY
This is a post-5.1.0 documentation update of the main branch that includes a cherry-pick of the changelog and a version bump to 5.2.0-dev0.
ISSUE TYPE

Docs Pull Request

COMPONENT NAME

changelog
galaxy.yml

ADDITIONAL INFORMATION
The reason for this version bump is to make the installed version identifiable when the collection is installed with ansible-galaxy collection install git+https://github.com/ansible-collections/kubernetes.core.git, and to avoid confusing main (which may contain PRs not included in any released version) with the released version 5.1.0.

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Yuriy Novostavskiy
2025-01-21 15:53:54 +00:00
Yuriy Novostavskiy
eb731cd3a5 Remove deprecated .github/stale.yml to address #837 (#838)
SUMMARY
I noticed that even though the config for probot/stale is present in the repo, old issues and PRs weren't being marked as stale or closed by the bot. I investigated and found that this bot was added to community.kubernetes in ansible-collections/community.kubernetes#53 but was never moved to kubernetes.core and never worked here.
Moreover, this bot is completely deprecated and down, ref: probot/stale#430
So the config should be removed.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
.github/stale.yml
ADDITIONAL INFORMATION
Closes #837
Trivial change that does not require a changelog

Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-01-17 16:26:28 +00:00
Irum Malik
ecc64cace1 helm_pull: Silence false no_log warning (#796)
SUMMARY
Apply no_log=True to pass_credentials to silence a false-positive warning.
Fixes similar issue to: #423
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
changelog/fragements/796-false-positive-helmull.yaml
plugins/modules/helm_pull.py

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Irum Malik
2025-01-17 15:52:58 +00:00
Yuriy Novostavskiy
bc0de24cba trivial doc: replace 2.5.0 with 3.0.0 (#831)
SUMMARY
Some parameters were added to main at a time when the latest version was 2.4.0, with version_added: 2.5.0; however, the next version after 2.4.0 was 3.0.0.
So, with this trivial docs PR (which most probably doesn't require a changelog fragment or inclusion in the changelog), I am replacing version_added: 2.5.0 with version_added: 3.0.0 for:

reuse_values in kubernetes.core.helm module
reset_values in kubernetes.core.helm module
delete_all in  kubernetes.core.k8s module
hidden_fields  in  kubernetes.core.k8s module
hidden_fields   in  kubernetes.core.k8s_info module

All of them are introduced in kubernetes.core 3.0.0
ISSUE TYPE

Docs Pull Request

COMPONENT NAME

helm
k8s
k8s_info


ADDITIONAL INFORMATION
PR to be backported to stable-3 and stable-5

Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-01-17 15:43:51 +00:00
Mike Graves
9f60b151ba Clean up test namespace (#852)
SUMMARY

The helm_set_values test target did not clean up its namespace, which led to unstable tests in the k8s_drain target.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Alina Buzachis
Reviewed-by: Yuriy Novostavskiy
2025-01-17 14:53:21 +00:00
Yuriy Novostavskiy
159a63af97 fix linters in github actions (#848)
Fix bug #846
Within this commit, ansible-lint was updated to 24.12.2 and its config moved to the .config folder.
2025-01-16 14:45:27 -05:00
Bikouo Aubin
6efabd3418 Remove kubeconfig value from module invocation log (#826) 2024-12-17 17:50:22 +01:00
Yuriy Novostavskiy
aee847431a helm_registry_auth module to authenticate in OCI registry (#800)
* new module helm_registry_auth

* Initial integration tests

* final update copyright and integration test before pr

* update link to pr in changelog fragment

* reformat plugins/module_utils/helm.py with black

to fix linters in actions

* attempt to fix unit test

unit test was missing initially

* fix https://pycqa.github.io/isort/ linter

* next attempt to fix unit-test

* remove unused and unsupported helm_args_common

* remove unused imports and fix other linters errors

* another fix for unit test

* fix issue introduced by commit ff02893a12a31f9c44b5c48f9a8bf85057295961

* add binary_path to arg_spec

* return helm_cmd in the output of check mode

remove changelog fragment

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* description suggestion from reviewer/maintainer

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* remove changed from module return

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* remove redundant code

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* Update plugins/modules/helm_registry_auth.py

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* consider support of logout when user is not logged in

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* consider support helm < 3.0.0

* Revert "consider support helm < 3.0.0"

This reverts commit f20004d196.

* reintroduce support of helm version less than 3.8.0

reference: https://helm.sh/docs/topics/registries/#enabling-oci-support-prior-to-v380

* revert reintroducing support of helm < 3.8.0

reason: didn't find a quick way to deal with tests

* update documentation with the recent module updates

* Update plugins/modules/helm_registry_auth.py

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* add test of logout idempotency

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>

* fix linters

* fix indentations in the integration tests

* create tests/integration/targets/helm_registry_auth/aliases

* fix integration test (typo)

* fix integration tests (test wrong cred)

* add stderr when module fail

* another attempt to fix integration test

* fix assertion in integration test to be not affected by the #830

---------

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-12-17 15:39:42 +01:00
Yuriy Novostavskiy
6609abdd5a Parameter insecure_registry added to helm_template (#805)
* Parameter insecure_registry added to helm_template as equivalent of insecure-skip-tls-verify
2024-12-17 11:59:14 +01:00
Pierre Ozoux
219c747a24 fix: typo (#804)
* fix: typo

replaces https://github.com/ansible-collections/kubernetes.core/pull/799

* doc: add changelog fragment

* Delete changelogs/fragments/804-drain-typo.yaml

---------

Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
2024-12-17 11:58:33 +01:00
Bikouo Aubin
7559b65946 Fix helm integration tests (#830)
SUMMARY
Fix charts ref on integration tests targets
ISSUE TYPE


Bugfix Pull Request

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Alina Buzachis
2024-12-17 10:18:17 +00:00
Mike Graves
c8a33c7180 Fix helm tests (#827)
SUMMARY

Some of the charts we've used for testing are no longer available at the old helm repository urls, as they've been moved to oci registries. This updates those charts.
In the longer term, we should find a better way to handle these kinds of test fixtures, probably by switching to local charts as much as possible.

ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Helen Bailey <hebailey@redhat.com>
Reviewed-by: Yuriy Novostavskiy
2024-12-13 21:50:37 +00:00
Ottavia Balducci
52f2cb5587 Improve error message for pod disruption budget when draining a node (#798)
SUMMARY
Closes #797 .
The error message "Too Many Requests" is confusing and is changed to a more meaningful message:
TASK [Drain node] *************************************************************************
Montag 25 November 2024  09:20:28 +0100 (0:00:00.014)       0:00:00.014 ******* 
fatal: [host -> localhost]: FAILED! => {"changed": false, "msg": "Failed to delete pod kube-public/draintest-6b84677b99-9jf7m due to: Cannot evict pod as it would violate the pod's disruption budget."}


The new task output allows dealing with a pod disruption budget via the retries/until logic in a more controlled way:
---
- hosts: "{{ target }}"
  serial: 1
  gather_facts: false
  tasks:
    - name: Drain node
      kubernetes.core.k8s_drain:
        kubeconfig: "{{ kubeconfig_path }}"
        name: "{{ inventory_hostname }}"
        delete_options:
          ignore_daemonsets: true
          delete_emptydir_data: true
          wait_timeout: 100
          disable_eviction: false
          wait_sleep: 1
      delegate_to: localhost
      retries: 10
      delay: 5
      until: drain_result is success or 'disruption budget' not in drain_result.msg
      register: drain_result

ISSUE TYPE


Feature Pull Request

COMPONENT NAME
k8s_drain

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-12-11 14:45:47 +00:00
Mike Graves
513ff66fcf Remove kubevirt integration test workflow (#806)
SUMMARY

This removes the kubevirt integration tests. We don't maintain that collection or have any permissions on that repo, so there's no reason for these tests to be here.

ISSUE TYPE


Bugfix Pull Request


COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Bikouo Aubin
Reviewed-by: Helen Bailey <hebailey@redhat.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-12-10 16:18:14 +00:00
Ottavia Balducci
fca0dc0485 Fix k8s_drain runs into timeout with pods from stateful sets. (#793)
SUMMARY
Fixes #792 .
The function wait_for_pod_deletion in k8s_drain never checks on which node a pod is actually running:
            try:
                response = self._api_instance.read_namespaced_pod(
                    namespace=pod[0], name=pod[1]
                )
                if not response:
                    pod = None
                time.sleep(wait_sleep)
This means that if a pod is successfully evicted and restarted with the same name on a new node, k8s_drain does not notice and thinks that the original pod is still running. This is the case for pods which are part of a stateful set.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME
k8s_drain

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-12-10 15:35:07 +00:00
Yuriy Novostavskiy
cd686316e9 [ci] fix github actions post 2.18 (#789)
This PR includes a trivial fix for the GitHub Actions issue #788, related to switching the milestone and devel branches of ansible/ansible to version 2.19, and prepares the repo to include tests with Python 3.13 once ansible-network/github_actions/pull/162 is merged.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
GitHub actions/test

Reviewed-by: Andrew Klychkov <aklychko@redhat.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-11-04 17:12:45 +00:00
Yuriy Novostavskiy
b8e9873f64 Update README.md with removing outdated communication channels (#790)
Summary:
  As part of consolidating Ansible discussion platforms and communication
  channels, it was decided to use the Ansible Forum as the main place for
  questions and discussion.

  Reference: https://forum.ansible.com/t/proposal-consolidating-ansible-discussion-platforms/6812

  As part of this change, the IRC channel was removed by the PRs #778 and #774.

  However, the README.md file wasn't fully cleaned up from the outdated information.

  The `#ansible-kubernetes` channel on [libera.chat](https://libera.chat/) IRC isn't
  used by maintainers and contributors anymore.

  The Wiki page on the https://github.com/ansible/community/ was deprecated a long time ago
2024-11-04 14:31:08 +01:00
Ottavia Balducci
4c305e73f0 Make k8s_drain work when only one pod is present (#770)
SUMMARY
Fixes #769 .
k8s_drain was not checking if a pod has been deleted when there was only one pod on the node to be drained.
The list of pods, pods, was being "popped" before the first iteration of the while loop:
        pod = pods.pop()
        while (_elapsed_time() < wait_timeout or wait_timeout == 0) and pods:
When pods contains only one element, the while loop is skipped.


ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

k8s_drain

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-11-01 14:22:27 +00:00
Andrew Klychkov
c8a9326306 CONTRIBUTING.md remove IRC (#778) 2024-09-05 09:07:17 +02:00
Andrew Klychkov
445d367059 README: Add Communication section with Forum information (#774) 2024-08-19 10:57:24 +02:00
GomathiselviS
fdb8af7ca9 Update Readme to match the template (#767)
SUMMARY


Refer: https://issues.redhat.com/browse/ACA-1749
This PR updates the README doc to match the template
ISSUE TYPE


Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request

COMPONENT NAME

ADDITIONAL INFORMATION

Reviewed-by: Alina Buzachis
2024-07-31 13:37:02 +00:00
Mandar Kulkarni
a89f19b4e5 Bump the ansible-lint version to 24.7.0 (#765)
* add minimum version for  ansible-lint to 24.7.0

* added changelog fragment

* add newline at eof
2024-07-26 13:48:56 -04:00
QCU
5bc53dba7c fix: kustomize plugin fails with deprecation warnings (#728)
SUMMARY

Error judgments are based on the exit codes of command execution, where 0 represents success and non-zero represents failure.
Optimize the `run_command` function to return a tuple, like the `run_command` method of `AnsibleModule`.

Fixes #639
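The tuple-returning pattern can be sketched as below. This is a minimal stand-in, not the plugin's actual code (the real plugin wraps the `kustomize` / `kubectl kustomize` binaries):

```python
import subprocess

def run_command(command):
    """Run a command and return (rc, stdout, stderr), mirroring
    AnsibleModule.run_command. Callers judge success by the exit code
    (0 = success) rather than by the mere presence of stderr output,
    which breaks when the tool only prints deprecation warnings there."""
    proc = subprocess.run(command, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

rc, out, err = run_command(["echo", "rendered manifests"])
if rc != 0:
    raise RuntimeError(f"command failed ({rc}): {err}")
print(out.strip())
```

Judging by `rc` lets a tool emit warnings on stderr without being misreported as a failure.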
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME

kustomize lookup plugin
ADDITIONAL INFORMATION

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: QCU
2024-07-15 13:29:23 +00:00
Artur Załęski
b07fbd6271 Fix waiting for daemonset when desired number of pods is 0 (#756)
Fixes #755
SUMMARY
Because we don't have any node with `non_exisiting_label` (see code below), the desired number of Pods will be 0. Kubernetes won't create the `.status.updatedNumberScheduled` field (at least on version v1.27), because no Pods are going to be created. So if `.status.updatedNumberScheduled` doesn't exist, we should assume that the number is 0.
Code to reproduce:
- name: Create daemonset
  kubernetes.core.k8s:
    state: present
    wait: true
    definition:
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: my-daemonset
        namespace: default
      spec:
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            containers:
              - name: my-container
                image: nginx
            nodeSelector:
              non_exisiting_label: 1
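The fix amounts to defaulting the missing status field to zero. A minimal sketch, with a hypothetical predicate name (the dict keys match the DaemonSet status API, but the function itself is illustrative, not the waiter's actual code):

```python
def daemonset_ready(status):
    """Return True when a DaemonSet rollout is complete. `status` is the
    dict from .status; updatedNumberScheduled may be absent when the
    desired number of pods is 0, so default it to 0 instead of failing."""
    desired = status.get("desiredNumberScheduled", 0)
    updated = status.get("updatedNumberScheduled", 0)  # absent => 0
    ready = status.get("numberReady", 0)
    return updated == desired and ready >= desired

# Status as reported for a daemonset whose nodeSelector matches no nodes:
status = {"currentNumberScheduled": 0, "desiredNumberScheduled": 0,
          "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1}
print(daemonset_ready(status))  # True: the wait succeeds immediately
```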
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
kubernetes.core.plugins.module_utils.k8s.waiter
ADDITIONAL INFORMATION



TASK [Create daemonset] **********************************************************************************************************************************
changed: [controlplane] => {"changed": true, "duration": 5, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "DaemonSet", "metadata": {"annotations": {"deprecated.daemonset.template.generation": "1"}, "creationTimestamp": "2024-06-28T08:23:41Z", "generation": 1, "managedFields": [{"apiVersion": "apps/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:metadata": {"f:annotations": {".": {}, "f:deprecated.daemonset.template.generation": {}}}, "f:spec": {"f:revisionHistoryLimit": {}, "f:selector": {}, "f:template": {"f:metadata": {"f:labels": {".": {}, "f:app": {}}}, "f:spec": {"f:containers": {"k:{\"name\":\"my-container\"}": {".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": {}, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}}}, "f:dnsPolicy": {}, "f:nodeSelector": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:terminationGracePeriodSeconds": {}}}, "f:updateStrategy": {"f:rollingUpdate": {".": {}, "f:maxSurge": {}, "f:maxUnavailable": {}}, "f:type": {}}}}, "manager": "OpenAPI-Generator", "operation": "Update", "time": "2024-06-28T08:23:41Z"}, {"apiVersion": "apps/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:status": {"f:observedGeneration": {}}}, "manager": "kube-controller-manager", "operation": "Update", "subresource": "status", "time": "2024-06-28T08:23:41Z"}], "name": "my-daemonset", "namespace": "default", "resourceVersion": "1088421", "uid": "faafdbf7-4388-4cec-88d5-84657966312d"}, "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "my-app"}}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "my-app"}}, "spec": {"containers": [{"image": "nginx", "imagePullPolicy": "Always", "name": "my-container", "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "nodeSelector": {"non_exisiting_label": "1"}, "restartPolicy": "Always", "schedulerName": 
"default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}, "updateStrategy": {"rollingUpdate": {"maxSurge": 0, "maxUnavailable": 1}, "type": "RollingUpdate"}}, "status": {"currentNumberScheduled": 0, "desiredNumberScheduled": 0, "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1}}}

~$ kubectl get ds
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
my-daemonset   0         0         0       0            0           non_exisiting_label=1   30s

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-07-10 13:58:37 +00:00
Mike Graves
44a2fc392a Merge pull request #757 from gravesm/gha-python-version
Remove ansible install step from kubevirt GHA
2024-07-09 11:52:07 -04:00
Mike Graves
6265a3e7ce Remove ansible install step from kubevirt GHA
Ansible 2.17 is already included in the ubuntu-latest runner image, so
there's no need for a separate install step. It was broken in any case
because the python version being used was too low for ansible 2.18.
2024-07-09 09:51:36 -04:00
Yuriy Novostavskiy
0afd257dd0 fix shields.io badges in README.md (#749)
SUMMARY
This PR fixes the shields.io badges in README.md. It's just a cosmetic bugfix.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
README.md
ADDITIONAL INFORMATION
Current README.md:

This PR:

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Yuriy Novostavskiy
2024-06-18 13:55:25 +00:00
Yuriy Novostavskiy
d192157ed8 update changelog with release 3.2.0 (#750)
SUMMARY
Minor/cosmetic documentation change adding release 3.2.0 to the changelog on master, as the release is from the stable-3 branch.
ISSUE TYPE

Docs Pull Request

COMPONENT NAME
CHANGELOG.md
ADDITIONAL INFORMATION
Most probably this PR should be backported to the stable-5 branch after the merge to main, with a skip-changelog tag.

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-06-17 18:58:44 +00:00
Eric G
6a04f42d0b helm: Accept release candidate versions for compatibility checks (#745)
SUMMARY

If the helm CLI version includes `-rc.1`, for example, the version check fails due to an incomplete regex.
The error can be triggered if you use helm v3.15.0-rc.1, for example, and apply a helm chart with `wait: true`.
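The kind of regex change involved can be sketched as follows; the exact pattern used by the modules may differ, this one merely illustrates accepting a semver pre-release suffix:

```python
import re

# A fully-anchored X.Y.Z pattern rejects pre-release versions such as
# v3.15.0-rc.1; permitting an optional "-<prerelease>" suffix (as in
# semver) lets the compatibility check succeed for release candidates.
VERSION_RE = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-[0-9A-Za-z.-]+)?$")

def parse_helm_version(version):
    """Extract (major, minor, patch) from a helm version string."""
    match = VERSION_RE.match(version)
    if match is None:
        raise ValueError(f"unrecognized helm version: {version!r}")
    return tuple(int(part) for part in match.group(1, 2, 3))

print(parse_helm_version("v3.15.0-rc.1"))  # -> (3, 15, 0)
```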
ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME
helm
helm_pull
ADDITIONAL INFORMATION

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Eric G.
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-06-17 18:58:42 +00:00
Bikouo Aubin
5064d722c3 Update changelog after release 5.0.0 (#747)
Push change from stable-5 after release 5.0.0

Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Yuriy Novostavskiy
2024-06-13 10:02:28 +00:00
Yuriy Novostavskiy
fb80d973c4 Doc: add example of using kubectl connection plugin (#741)
Doc: add example of using kubectl connection plugin

SUMMARY
Currently the collection documentation doesn't include any examples of using the kubernetes.core.kubectl connection plugin, which makes it hard to start using that plugin.
ISSUE TYPE

Docs Pull Request

COMPONENT NAME
kubernetes.core.kubectl connection plugin
ADDITIONAL INFORMATION
This PR was inspired by #288 and based on feedback on that PR and my own experience. Thanks to @tpo for his attempt and to @geerlingguy for his Ansible for DevOps book.

Reviewed-by: Bikouo Aubin
Reviewed-by: Sandra McCann <samccann@redhat.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: purdzan
2024-06-06 13:48:15 +00:00
Bikouo Aubin
8363a4debf Remove support for ansible-core<2.15 (#737)
Drop support for ansible-core<2.15

SUMMARY

Remove support for ansible-core<2.15

ISSUE TYPE


Feature Pull Request

Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-05-31 07:41:07 +00:00
Bikouo Aubin
0c5233a650 Defer removal of inventory/k8s to 6.0.0 (#734)
Defer removal of inventory/k8s to 6.0.0

SUMMARY
Defer removal of inventory plugin k8s to release 6.0.0.

ISSUE TYPE


Feature Pull Request

Reviewed-by: Alina Buzachis
Reviewed-by: Mike Graves <mgraves@redhat.com>
2024-05-31 07:41:04 +00:00
Bikouo Aubin
c0666a5137 kubevirt.core collection cross testing (#731)
* Initial

* update python version

* update python version

* checkout local version of collection

* add integration job

* indent

* Set workflow as non blocking
2024-05-30 15:34:29 +02:00
100 changed files with 1131 additions and 3282 deletions

View File

@@ -26,7 +26,6 @@ jobs:
   with:
     path: ${{ env.source_dir }}
     fetch-depth: "0"
-    ref: ${{ github.event.pull_request.head.sha }}
 - name: list changes for pull request
   id: splitter
@@ -55,14 +54,13 @@ jobs:
 strategy:
   fail-fast: false
   matrix:
-    # Ref must match a branch/tag on github.com/ansible/ansible (e.g. stable-2.18, not 2.18).
-    ansible-version: ["stable-2.18", "milestone"]
-    enable-turbo-mode: [true, false]
-    exclude:
-      - ansible-version: "milestone"
-        enable-turbo-mode: true
+    ansible-version:
+      - milestone
     python-version:
       - "3.12"
+    enable-turbo-mode:
+      - true
+      - false
     workflow-id: ${{ fromJson(needs.splitter.outputs.test_jobs) }}
 name: "integration-py${{ matrix.python-version }}-${{ matrix.ansible-version }}-${{ matrix.workflow-id }}-enable_turbo=${{ matrix.enable-turbo-mode }}"
 steps:
@@ -108,7 +106,6 @@ jobs:
     source_path: ${{ env.source }}
 - name: checkout ansible-collections/cloud.common
-  if: ${{ matrix.enable-turbo-mode == true }}
   uses: ansible-network/github_actions/.github/actions/checkout_dependency@main
   with:
     repository: ansible-collections/cloud.common
@@ -130,7 +127,6 @@ jobs:
     ref: main
 - name: install cloud.common collection
-  if: ${{ matrix.enable-turbo-mode == true }}
   uses: ansible-network/github_actions/.github/actions/build_install_collection@main
   with:
     install_python_dependencies: true

View File

@@ -1,28 +0,0 @@
---
name: label new prs
on:
pull_request_target:
types:
- opened
- reopened
- converted_to_draft
- ready_for_review
jobs:
add_label:
if: github.actor != 'patchback[bot]'
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- name: Add 'needs_triage' label if the pr is not a draft
uses: actions-ecosystem/action-add-labels@v1
if: github.event.pull_request.draft == false
with:
labels: needs_triage
- name: Remove 'needs_triage' label if the pr is a draft
uses: actions-ecosystem/action-remove-labels@v1
if: github.event.pull_request.draft == true
with:
labels: needs_triage

.gitignore vendored
View File

@@ -21,6 +21,3 @@ tests/integration/*-chart-*.tgz
 # ansible-test generated file
 tests/integration/inventory
 tests/integration/*-*.yml
-# VS Code settings
-.vscode/

View File

@@ -4,68 +4,38 @@ Kubernetes Collection Release Notes
 .. contents:: Topics
-v5.4.2
+v6.1.0
 ======
 Release Summary
 ---------------
-This release includes bugfixes such as replacing the passing of ``warnings`` to ``exit_json`` with ``AnsibleModule.warn`` as well as a security update for selectively redacting sensitive information from kubeconfig.
+This release includes a fix for kubeconfig output, added ``plain_http`` and ``take_ownership`` parameters for helm modules, support for ``hidden_fields`` in ``k8s_json_patch``, documented lack of idempotency support in ``helm_registry_auth`` with ``helm ≥ 3.18.0``, and improved ``k8s_rollback`` test coverage.
 Minor Changes
 -------------
-- helm - add ``release_values`` key to ``status`` return value that can be accessed using Jinja2 dot notation (https://github.com/ansible-collections/kubernetes.core/pull/1056).
-- helm_info - add ``release_values`` key to ``status`` return value that can be accessed using Jinja2 dot notation (https://github.com/ansible-collections/kubernetes.core/pull/1056).
-Deprecated Features
--------------------
-- helm - the ``status.values`` return value has been deprecated and will be removed in a release after 2027-01-08. Use ``status.release_values`` instead (https://github.com/ansible-collections/kubernetes.core/pull/1056).
-- helm_info - the ``status.values`` return value has been deprecated and will be removed in a release after 2027-01-08. Use ``status.release_values`` instead (https://github.com/ansible-collections/kubernetes.core/pull/1056).
-Security Fixes
---------------
-- Selectively redact sensitive info from kubeconfig instead of applying blanket ``no_log=True`` (https://github.com/ansible-collections/kubernetes.core/pull/1014).
+- Module helm_registry_auth do not support idempotency with `helm >= 3.18.0` (https://github.com/ansible-collections/kubernetes.core/pull/946)
+- Module k8s_json_patch - Add support for `hidden_fields` (https://github.com/ansible-collections/kubernetes.core/pull/964).
+- helm - Parameter plain_http added for working with insecure OCI registries (https://github.com/ansible-collections/kubernetes.core/pull/934).
+- helm - Parameter take_ownership added (https://github.com/ansible-collections/kubernetes.core/pull/957).
+- helm_pull - Parameter plain_http added for working with insecure OCI registries (https://github.com/ansible-collections/kubernetes.core/pull/934).
+- helm_template - Parameter plain_http added for working with insecure OCI registries (https://github.com/ansible-collections/kubernetes.core/pull/934).
 Bugfixes
 --------
-- Add idempotency for ``helm_pull`` module (https://github.com/ansible-collections/kubernetes.core/pull/1055).
-- Fixed a bug where setting ``K8S_AUTH_VERIFY_SSL=true`` (or any string value) caused the value to be treated as a separate ``kubectl`` command argument (https://github.com/ansible-collections/kubernetes.core/pull/1049).
-- Limit supported versions of Helm to <4.0.0 (https://github.com/ansible-collections/kubernetes.core/pull/1039).
-- Replace passing ``warnings`` to ``exit_json`` with ``AnsibleModule.warn`` in the ``k8s_drain``, ``k8s_rollback.py`` and ``k8s_scale.py`` modules as it deprecated in ``ansible-core>=2.19.0`` and will be removed in ``ansible-core>=2.23.0`` (https://github.com/ansible-collections/kubernetes.core/pull/1033).
-- k8s - Fix return block from the module documentation (https://github.com/ansible-collections/kubernetes.core/pull/1056).
-- meta - Add ``k8s_cluster_info``, ``k8s_json_patch`` and ``k8s_rollback`` to k8s action group (https://github.com/ansible-collections/kubernetes.core/pull/992).
-v5.4.1
-======
-Release Summary
----------------
-This release includes bugfixes for k8s service field handling, k8s_cp init containers support, and removes deprecated ansible.module_utils.six imports.
-Bugfixes
---------
-- Remove ``ansible.module_utils.six`` imports to avoid warnings (https://github.com/ansible-collections/kubernetes.core/pull/998).
-- Update the `k8s_cp` module to also work for init containers (https://github.com/ansible-collections/kubernetes.core/pull/971).
 - module_utils/k8s/service - hide fields first before creating diffs (https://github.com/ansible-collections/kubernetes.core/pull/915).
-v5.4.0
+v6.0.0
 ======
-Release Summary
----------------
-This release updates the ``helm_registry_auth`` module to match the behavior of ``helm >= 3.18.0`` which reports a successful logout regardless of the current state (i.e., no idempotency).
-Minor Changes
--------------
-- Module ``helm_registry_auth`` does not support idempotency with ``helm >= 3.18.0`` (https://github.com/ansible-collections/kubernetes.core/pull/946)
+Breaking Changes / Porting Guide
+--------------------------------
+- Remove deprecated ``k8s`` invetory plugin (https://github.com/ansible-collections/kubernetes.core/pull/867).
+- Remove support for ``ansible-core<2.16`` (https://github.com/ansible-collections/kubernetes.core/pull/867).
 v5.3.0
 ======
@@ -73,20 +43,20 @@ v5.3.0
 Release Summary
 ---------------
-This release includes minor changes, bug fixes and also bumps ``ansible-lint`` version to ``25.1.2``.
+This release includes minor changes, bug fixes and also bumps ansible-lint version to ``25.1.2``.
 Minor Changes
 -------------
-- Bump version of ``ansible-lint`` to 25.1.2 (https://github.com/ansible-collections/kubernetes.core/pull/919).
+- Bump version of ansible-lint to 25.1.2 (https://github.com/ansible-collections/kubernetes.core/pull/919).
 - action/k8s_info - update templating mechanism with changes from ``ansible-core 2.19`` (https://github.com/ansible-collections/kubernetes.core/pull/888).
-- helm - add ``reset_then_reuse_values`` support to helm module (https://github.com/ansible-collections/kubernetes.core/issues/803).
+- helm - add reset_then_reuse_values support to helm module (https://github.com/ansible-collections/kubernetes.core/issues/803).
 - helm - add support for ``insecure_skip_tls_verify`` option to helm and helm_repository(https://github.com/ansible-collections/kubernetes.core/issues/694).
 Bugfixes
 --------
-- module_utils/k8s/service - fix issue when trying to delete resource using ``delete_options`` and ``check_mode=true`` (https://github.com/ansible-collections/kubernetes.core/issues/892).
+- module_utils/k8s/service - fix issue when trying to delete resource using `delete_options` and `check_mode=true` (https://github.com/ansible-collections/kubernetes.core/issues/892).
 v5.2.0
 ======
@@ -114,7 +84,7 @@ This release came with new module ``helm_registry_auth``, improvements to the er
 Minor Changes
 -------------
-- Bump version of ``ansible-lint`` to minimum 24.7.0 (https://github.com/ansible-collections/kubernetes.core/pull/765).
+- Bump version of ansible-lint to minimum 24.7.0 (https://github.com/ansible-collections/kubernetes.core/pull/765).
 - Parameter insecure_registry added to helm_template as equivalent of insecure-skip-tls-verify (https://github.com/ansible-collections/kubernetes.core/pull/805).
 - k8s_drain - Improve error message for pod disruption budget when draining a node (https://github.com/ansible-collections/kubernetes.core/issues/797).

View File

@@ -1,5 +1,5 @@
 # Also needs to be updated in galaxy.yml
-VERSION = 5.4.2
+VERSION = 6.1.0
 TEST_ARGS ?= ""
 PYTHON_VERSION ?= `python -c 'import platform; print(".".join(platform.python_version_tuple()[0:2]))'`

View File

@@ -21,21 +21,15 @@ For more information about communication, see the [Ansible communication guide](
 ## Requirements
 <!--start requires_ansible-->
-## Ansible Version Compatibility
-This collection has been tested against following Ansible versions: **>=2.15.0**.
-For collections that support Ansible 2.9, please ensure you update your `network_os` to use the
-fully qualified collection name (for example, `cisco.ios.ios`).
+## Ansible version compatibility
+This collection has been tested against the following Ansible versions: **>=2.16.0**.
 Plugins and modules within a collection may be tested with only specific Ansible versions.
 A collection may contain metadata that identifies these versions.
 PEP440 is the schema used to describe the versions of Ansible.
 <!--end requires_ansible-->
-### Helm Version Compatibility
-Helm modules in this collection are compatible with Helm v3.x and are not yet compatible with Helm v4. Individual modules and their parameters may support a more specific range of Helm versions.
 ### Python Support
 * Collection supports 3.9+
@@ -51,22 +45,17 @@ This collection supports Kubernetes versions >= 1.24.
 Click on the name of a plugin or module to view that content's documentation:
 <!--start collection content-->
-### Connection Plugins
+### Connection plugins
 Name | Description
 --- | ---
 [kubernetes.core.kubectl](https://github.com/ansible-collections/kubernetes.core/blob/main/docs/kubernetes.core.kubectl_connection.rst)|Execute tasks in pods running on Kubernetes.
-### K8s filter Plugins
+### K8s filter plugins
 Name | Description
 --- | ---
 kubernetes.core.k8s_config_resource_name|Generate resource name for the given resource of type ConfigMap, Secret
-### Inventory Plugins
-Name | Description
---- | ---
-[kubernetes.core.k8s](https://github.com/ansible-collections/kubernetes.core/blob/main/docs/kubernetes.core.k8s_inventory.rst)|Kubernetes (K8s) inventory source
-### Lookup Plugins
+### Lookup plugins
 Name | Description
 --- | ---
 [kubernetes.core.k8s](https://github.com/ansible-collections/kubernetes.core/blob/main/docs/kubernetes.core.k8s_lookup.rst)|Query the K8s API
@@ -110,7 +99,7 @@ You can also include it in a `requirements.yml` file and install it via `ansible
 ---
 collections:
   - name: kubernetes.core
-    version: 5.4.2
+    version: 6.1.0
 ```
### Installing the Kubernetes Python Library ### Installing the Kubernetes Python Library
@@ -189,6 +178,7 @@ For documentation on how to use individual modules and other content included in
 ## Ansible Turbo Mode Tech Preview
 The ``kubernetes.core`` collection supports Ansible Turbo mode as a tech preview via the ``cloud.common`` collection. By default, this feature is disabled. To enable Turbo mode for modules, set the environment variable `ENABLE_TURBO_MODE=1` on the managed node. For example:
 ```yaml
@@ -227,7 +217,7 @@ You can run the collection's test suites with the commands:
 ### Testing with `molecule`
-There are also integration tests in the `molecule` directory which are meant to be run against a local Kubernetes cluster, e.g. using [KinD](https://kind.sigs.k8s.io) or [Minikube](https://minikube.sigs.k8s.io). To set up a local cluster using KinD and run Molecule:
+There are also integration tests in the `molecule` directory which are meant to be run against a local Kubernetes cluster, e.g. using [KinD](https://kind.sigs.k8s.io) or [Minikube](https://minikube.sigs.k8s.io). To setup a local cluster using KinD and run Molecule:
 kind create cluster
 make test-molecule
@@ -266,7 +256,7 @@ For more information about communication, refer to the [Ansible Communication gu
 For the latest supported versions, refer to the release notes below.
 If you encounter issues or have questions, you can submit a support request through the following channels:
-- GitHub Issues: Report bugs, request features, or ask questions by opening an issue in the [GitHub repository](https://github.com/ansible-collections/kubernetes.core/).
+- GitHub Issues: Report bugs, request features, or ask questions by opening an issue in the [GitHub repository]((https://github.com/ansible-collections/kubernetes.core/).
## Release Notes ## Release Notes
@@ -278,8 +268,9 @@ We follow the [Ansible Code of Conduct](https://docs.ansible.com/ansible/devel/c
 If you encounter abusive behavior, please refer to the [policy violations](https://docs.ansible.com/ansible/devel/community/code_of_conduct.html#policy-violations) section of the Code for information on how to raise a complaint.
 ## License
 GNU General Public License v3.0 or later
-See LICENSE to see the full text.
+See LICENCE to see the full text.

View File

@@ -977,7 +977,7 @@ releases:
 - kustomize - kustomize plugin fails with deprecation warnings (https://github.com/ansible-collections/kubernetes.core/issues/639).
 - waiter - Fix waiting for daemonset when desired number of pods is 0. (https://github.com/ansible-collections/kubernetes.core/pull/756).
 minor_changes:
-- Bump version of ``ansible-lint`` to minimum 24.7.0 (https://github.com/ansible-collections/kubernetes.core/pull/765).
+- Bump version of ansible-lint to minimum 24.7.0 (https://github.com/ansible-collections/kubernetes.core/pull/765).
 - Parameter insecure_registry added to helm_template as equivalent of insecure-skip-tls-verify (https://github.com/ansible-collections/kubernetes.core/pull/805).
 - k8s_drain - Improve error message for pod disruption budget when draining
@@ -1029,13 +1029,13 @@ releases:
 - module_utils/k8s/service - fix issue when trying to delete resource using
   `delete_options` and `check_mode=true` (https://github.com/ansible-collections/kubernetes.core/issues/892).
 minor_changes:
-- kubernetes.core - Bump version of ``ansible-lint`` to ``25.1.2`` (https://github.com/ansible-collections/kubernetes.core/pull/919).
+- Bump version of ansible-lint to 25.1.2 (https://github.com/ansible-collections/kubernetes.core/pull/919).
 - action/k8s_info - update templating mechanism with changes from ``ansible-core
   2.19`` (https://github.com/ansible-collections/kubernetes.core/pull/888).
-- helm - add ``reset_then_reuse_values`` support to helm module (https://github.com/ansible-collections/kubernetes.core/issues/803).
+- helm - add reset_then_reuse_values support to helm module (https://github.com/ansible-collections/kubernetes.core/issues/803).
-- helm - add support for ``insecure_skip_tls_verify`` option to helm and ``helm_repository`` (https://github.com/ansible-collections/kubernetes.core/issues/694).
+- helm - add support for ``insecure_skip_tls_verify`` option to helm and helm_repository(https://github.com/ansible-collections/kubernetes.core/issues/694).
-release_summary: This release includes minor changes, bug fixes and also bumps ``ansible-lint`` version to ``25.1.2``.
+release_summary: This release includes minor changes, bug fixes and also bumps ansible-lint version to ``25.1.2``.
 fragments:
 - 20250324-k8s_info-templating.yaml
 - 5.3.0.yml
@@ -1044,75 +1044,40 @@ releases:
- 898-k8s-dont-delete-in-check-mode.yaml - 898-k8s-dont-delete-in-check-mode.yaml
- 919-update-ansible-lint-version.yaml - 919-update-ansible-lint-version.yaml
release_date: '2025-05-16' release_date: '2025-05-16'
5.4.0: 6.0.0:
changes: changes:
breaking_changes:
- Remove deprecated ``k8s`` invetory plugin (https://github.com/ansible-collections/kubernetes.core/pull/867).
- Remove support for ``ansible-core<2.16`` (https://github.com/ansible-collections/kubernetes.core/pull/867).
fragments:
- 20250121-breaking-changes-6.0.0.yml
release_date: '2025-05-19'
6.1.0:
changes:
bugfixes:
- module_utils/k8s/service - hide fields first before creating diffs (https://github.com/ansible-collections/kubernetes.core/pull/915).
minor_changes:
- Module ``helm_registry_auth`` does not support idempotency with ``helm >= 3.18.0``
  (https://github.com/ansible-collections/kubernetes.core/pull/946).
release_summary: This release updates the ``helm_registry_auth`` module to match
  the behavior of ``helm >= 3.18.0`` which reports a successful logout regardless
  of the current state (i.e., no idempotency).
fragments:
- 20250411-kubeconfig-no_log-revert.yaml
- 20250503-fix-unit-tests.yml
- 20250605-fix-helm_registry_auth-integration_test.yaml
- 5.4.0.yml
release_date: '2025-08-12'
6.1.0:
changes:
minor_changes:
- Module ``helm_registry_auth`` does not support idempotency with ``helm >= 3.18.0``
  (https://github.com/ansible-collections/kubernetes.core/pull/946).
- helm - Parameter ``plain_http`` added for working with insecure OCI registries
  (https://github.com/ansible-collections/kubernetes.core/pull/934).
- helm - Parameter ``take_ownership`` added (https://github.com/ansible-collections/kubernetes.core/pull/957).
- helm_pull - Parameter ``plain_http`` added for working with insecure OCI registries
  (https://github.com/ansible-collections/kubernetes.core/pull/934).
- helm_template - Parameter ``plain_http`` added for working with insecure OCI registries
  (https://github.com/ansible-collections/kubernetes.core/pull/934).
- k8s_json_patch - Add support for ``hidden_fields`` (https://github.com/ansible-collections/kubernetes.core/pull/964).
release_summary: This release includes a fix for kubeconfig output, added ``plain_http``
  and ``take_ownership`` parameters for helm modules, support for ``hidden_fields``
  in ``k8s_json_patch``, documented lack of idempotency support in ``helm_registry_auth``
  with ``helm >= 3.18.0``, and improved ``k8s_rollback`` test coverage.
fragments:
- 20250411-kubeconfig-no_log-revert.yaml
- 20250522-add-plain-http-for-oci-registries.yaml
- 20250605-fix-helm_registry_auth-integration_test.yaml
- 20250704-k8s-rollback-integration-test-coverage.yaml
- 20250720-k8s-patch-add-hidden-fields.yaml
- 20250911-add-support-helm-take-ownership.yaml
- release_summary.yml
release_date: '2025-08-12'
5.4.1:
changes:
bugfixes:
- Remove ``ansible.module_utils.six`` imports to avoid warnings (https://github.com/ansible-collections/kubernetes.core/pull/998).
- Update the ``k8s_cp`` module to also work for init containers (https://github.com/ansible-collections/kubernetes.core/pull/971).
- module_utils/k8s/service - hide fields first before creating diffs (https://github.com/ansible-collections/kubernetes.core/pull/915).
release_summary: This release includes bugfixes for k8s service field handling,
  k8s_cp init containers support, and removes deprecated ansible.module_utils.six
  imports.
fragments:
- 20250428-k8s-service-hide-fields-first.yaml
- 20250731-fix-k8s_cp-initcontainers.yaml
- 20250922-remove-ansible-six-imports.yaml
- 5.4.1.yml
release_date: '2025-10-07'
5.4.2:
changes:
bugfixes:
- Add idempotency for ``helm_pull`` module (https://github.com/ansible-collections/kubernetes.core/pull/1055).
- Fixed a bug where setting ``K8S_AUTH_VERIFY_SSL=true`` (or any string value)
caused the value to be treated as a separate ``kubectl`` command argument
(https://github.com/ansible-collections/kubernetes.core/pull/1049).
- Limit supported versions of Helm to <4.0.0 (https://github.com/ansible-collections/kubernetes.core/pull/1039).
- Replace passing ``warnings`` to ``exit_json`` with ``AnsibleModule.warn``
  in the ``k8s_drain``, ``k8s_rollback`` and ``k8s_scale`` modules, as it is
  deprecated in ``ansible-core>=2.19.0`` and will be removed in ``ansible-core>=2.23.0``
  (https://github.com/ansible-collections/kubernetes.core/pull/1033).
- k8s - Fix return block from the module documentation (https://github.com/ansible-collections/kubernetes.core/pull/1056).
- meta - Add ``k8s_cluster_info``, ``k8s_json_patch`` and ``k8s_rollback`` to
k8s action group (https://github.com/ansible-collections/kubernetes.core/pull/992).
deprecated_features:
- helm - the ``status.values`` return value has been deprecated and will be
removed in a release after 2027-01-08. Use ``status.release_values`` instead
(https://github.com/ansible-collections/kubernetes.core/pull/1056).
- helm_info - the ``status.values`` return value has been deprecated and will
be removed in a release after 2027-01-08. Use ``status.release_values`` instead
(https://github.com/ansible-collections/kubernetes.core/pull/1056).
minor_changes:
- helm - added ``release_values`` key to ``status`` return value that can be
accessed using Jinja2 dot notation (https://github.com/ansible-collections/kubernetes.core/pull/1056).
- helm_info - added ``release_values`` key to ``status`` return value that can
be accessed using Jinja2 dot notation (https://github.com/ansible-collections/kubernetes.core/pull/1056).
release_summary: This release includes various bugfixes such as replacing the
passing of ``warnings`` to ``exit_json`` with ``AnsibleModule.warn`` as well
as security updates for selectively redacting sensitive information from kubeconfig.
security_fixes:
- Selectively redact sensitive info from kubeconfig instead of applying blanket
``no_log=True`` (https://github.com/ansible-collections/kubernetes.core/pull/1014).
fragments:
- 1033-warnings-deprecations.yaml
- 20251002-fix-k8s-actiongroup.yaml
- 20251007-selective-kubeconfig-redaction.yaml
- 20251115-limit-versions-of-helm.yaml
- 20251220-fix-K8S_AUTH_VERIFY_SSL-in-kubectl-connecton-plugion.yaml
- 20260107-add-idempodency-for-helm-pull.yaml
- 20260108-fix-sanity-failures.yml
- 5-4-2.yaml
release_date: '2026-02-03'

View File

@@ -1,147 +0,0 @@
.. _ansible_turbo_mode:
******************
Ansible Turbo mode
******************
The following document provides an overview of Ansible Turbo mode in the ``kubernetes.core`` collection.
.. contents::
:local:
:depth: 1
Synopsis
--------
- A brief introduction to Ansible Turbo mode in the ``kubernetes.core`` collection.
- Ansible Turbo mode is an optional performance optimization. It can be enabled by installing the cloud.common collection and setting the ``ENABLE_TURBO_MODE`` environment variable.
Requirements
------------
The following requirement is needed on the host that executes this module.
- The ``cloud.common`` collection (https://github.com/ansible-collections/cloud.common)
You will also need to set the environment variable ``ENABLE_TURBO_MODE=1`` on the managed host. This can be done in the same ways you would usually do so, for example::
---
- hosts: remote
environment:
ENABLE_TURBO_MODE: 1
tasks:
...
Installation
------------
You can install the ``cloud.common`` collection using the following command::
# ansible-galaxy collection install cloud.common
Current situation without Ansible Turbo mode
============================================
The traditional execution flow of an Ansible module includes the following steps:
- Upload of a ZIP archive with the module and its dependencies
- Execution of the module
- Ansible collects the results once the script is finished
These steps happen for each task of a playbook, and on every host.
Most of the time, the execution of a module is fast enough for
the user. However, sometimes a module requires a significant amount of time
just to initialize itself. This is a common situation with API-based modules.
A classic initialization involves the following steps:
- Load a Python library to access the remote resource (via SDK)
- Open a client
- Load a bunch of Python modules
- Request a new TCP connection
- Create a session
- Authenticate the client
All these steps are time-consuming, and the same operations run again and again.
For instance, here:
- ``import openstack``: takes 0.569s
- ``client = openstack.connect()``: takes 0.065s
- ``client.authorize()``: takes 1.360s
These numbers are from a test run against the VexxHost public cloud.
In this case, it's a 2s-ish overhead per task. If the playbook
comes with 10 tasks, the execution time cannot go below 20s.
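As a quick sanity check, the arithmetic above works out as follows (the timing values are the ones quoted in this document):

```python
# Per-task initialization overhead, using the timings quoted above.
import_time = 0.569     # ``import openstack``
connect_time = 0.065    # ``client = openstack.connect()``
authorize_time = 1.360  # ``client.authorize()``

overhead = import_time + connect_time + authorize_time
tasks = 10

print(f"per-task overhead: {overhead:.3f}s")                  # roughly 2s
print(f"floor for a {tasks}-task playbook: {overhead * tasks:.2f}s")
```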
How Ansible Turbo Module improves the situation
===============================================
``AnsibleTurboModule`` is actually a class that inherits from
the standard ``AnsibleModule`` class that your modules probably
already use.
The big difference is that when a module starts, it also spawns
a little Python daemon. If a daemon already exists, it will just
reuse it.
All the module logic is run inside this Python daemon. This means:
- Python modules are only loaded once
- Ansible modules can reuse an existing authenticated session.
The background service
======================
The daemon kills itself after 15s, and communication is done
through a Unix socket.
It runs in one single process and uses ``asyncio`` internally.
Consequently you can use the ``async`` keyword in your Ansible module.
This will be handy if you interact with a lot of remote systems
at the same time.
Security impact
===============
``ansible_module.turbo`` opens a Unix socket to interact with the background service.
We use this service to open the connection toward the different target systems.
This is similar to what SSH does with its sockets.
Keep in mind that:
- All the modules can access the same cache. Isolation at the collection level is planned (https://github.com/ansible-collections/cloud.common/pull/17)
- A task can load a different version of a library and impact the next tasks.
- If the same user runs two ``ansible-playbook`` commands at the same time, they will have access to the same cache.
When a module stores a session in a cache, it's a good idea to use a hash of the authentication information to identify the session.
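A minimal sketch of that idea (the helper below is illustrative and not part of ``cloud.common``):

```python
import hashlib
import json


def session_cache_key(auth_info):
    """Derive a stable, non-reversible cache key from authentication info.

    Serializing with sorted keys makes logically identical auth dicts map
    to the same digest, while the hash keeps raw credentials out of the key.
    """
    canonical = json.dumps(auth_info, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


# Two sessions with the same credentials share a cache entry...
key_a = session_cache_key({"host": "https://k8s.example.com", "api_key": "secret"})
key_b = session_cache_key({"api_key": "secret", "host": "https://k8s.example.com"})
# ...while different credentials get a different one.
key_c = session_cache_key({"host": "https://k8s.example.com", "api_key": "other"})
```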
Error management
================
``ansible_module.turbo`` uses exceptions to communicate a result back to the module.
- ``EmbeddedModuleFailure`` is raised when ``fail_json()`` is called.
- ``EmbeddedModuleSuccess`` is raised in case of success and returns the result to the origin module process.
These exceptions are defined in ``ansible_collections.cloud.common.plugins.module_utils.turbo.exceptions``.
You can raise ``EmbeddedModuleFailure`` exception yourself, for instance from a module in ``module_utils``.
.. note:: Be careful with the ``except Exception:`` blocks.
Not only are they bad practice, but they may also interfere with this
mechanism.
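A short illustration of the problem (``EmbeddedModuleSuccess`` here is a local stand-in for the real exception from ``cloud.common``, defined only to make the example self-contained):

```python
class EmbeddedModuleSuccess(Exception):
    """Stand-in for the cloud.common control-flow exception."""

    def __init__(self, result):
        super().__init__("module succeeded")
        self.result = result


def run_module_logic():
    # On success, the turbo machinery raises this exception to hand the
    # result back to the original module process.
    raise EmbeddedModuleSuccess({"changed": False})


def bad_wrapper():
    try:
        run_module_logic()
    except Exception:
        # Swallows EmbeddedModuleSuccess too -- the caller never sees the result.
        return None


def good_wrapper():
    try:
        run_module_logic()
    except EmbeddedModuleSuccess:
        raise  # let the control-flow exception propagate
    except Exception as exc:
        return {"failed": True, "msg": str(exc)}
```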
Troubleshooting
===============
You may want to manually start the server. This can be done with the following command:
.. code-block:: shell
PYTHONPATH=$HOME/.ansible/collections python -m ansible_collections.cloud.common.plugins.module_utils.turbo.server --socket-path $HOME/.ansible/tmp/turbo_mode.kubernetes.core.socket
You can use the ``--help`` argument to get a list of the optional parameters.

View File

@@ -17,7 +17,7 @@ Requirements
To use the modules, you'll need the following: To use the modules, you'll need the following:
- Ansible 2.9.17 or latest installed - Ansible 2.16.0 or latest installed
- `Kubernetes Python client <https://pypi.org/project/kubernetes/>`_ installed on the host that will execute the modules. - `Kubernetes Python client <https://pypi.org/project/kubernetes/>`_ installed on the host that will execute the modules.

View File

@@ -1,88 +0,0 @@
.. _ansible_collections.kubernetes.core.docsite.k8s_ansible_inventory:
*****************************************
Using Kubernetes dynamic inventory plugin
*****************************************
.. contents::
:local:
Kubernetes dynamic inventory plugin
===================================
The best way to interact with your Pods is to use the Kubernetes dynamic inventory plugin, which queries the Kubernetes APIs using the ``kubectl`` command line tool available on the controller node and tells Ansible which Pods can be managed.
Requirements
------------
To use the Kubernetes dynamic inventory plugin, you must install the `Kubernetes Python client <https://github.com/kubernetes-client/python>`_ and `kubectl <https://github.com/kubernetes/kubectl>`_ on your control node (the host running Ansible).
.. code-block:: bash
$ pip install kubernetes
Please refer to the official Kubernetes documentation for `installing kubectl <https://kubernetes.io/docs/tasks/tools/install-kubectl/>`_ on your operating system.
To use this Kubernetes dynamic inventory plugin, you need to enable it first by specifying the following in the ``ansible.cfg`` file:
.. code-block:: ini
[inventory]
enable_plugins = kubernetes.core.k8s
Then, create a file that ends in ``.k8s.yml`` or ``.k8s.yaml`` in your working directory.
The ``kubernetes.core.k8s`` inventory plugin takes in the same authentication information as any other Kubernetes modules.
Here's an example of a valid inventory file:
.. code-block:: yaml
plugin: kubernetes.core.k8s
Executing ``ansible-inventory --list -i <filename>.k8s.yml`` will create a list of Pods that are ready to be configured using Ansible.
You can also provide a namespace to gather information about Pods from just that namespace. For example, to gather information about Pods under the ``test`` namespace, specify the ``namespaces`` parameter:
.. code-block:: yaml
plugin: kubernetes.core.k8s
connections:
- namespaces:
- test
Using vaulted configuration files
=================================
Since the inventory configuration file contains Kubernetes-related sensitive information in plain text, which is a security risk, you may want to
encrypt the entire inventory configuration file.
You can encrypt a valid inventory configuration file as follows:
.. code-block:: bash
$ ansible-vault encrypt <filename>.k8s.yml
New Vault password:
Confirm New Vault password:
Encryption successful
$ echo "MySuperSecretPassw0rd!" > /path/to/vault_password_file
And you can use this vaulted inventory configuration file using:
.. code-block:: bash
$ ansible-inventory -i <filename>.k8s.yml --list --vault-password-file=/path/to/vault_password_file
.. seealso::
`Kubernetes Python client - Issue Tracker <https://github.com/kubernetes-client/python/issues>`_
The issue tracker for Kubernetes Python client
`Kubectl installation <https://kubernetes.io/docs/tasks/tools/install-kubectl/>`_
Installation guide for installing Kubectl
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_vault`
Using Vault in playbooks

View File

@@ -13,6 +13,5 @@ To get started, please select one of the following topics.
:maxdepth: 1 :maxdepth: 1
kubernetes_scenarios/k8s_intro kubernetes_scenarios/k8s_intro
kubernetes_scenarios/k8s_inventory
kubernetes_scenarios/k8s_scenarios kubernetes_scenarios/k8s_scenarios

View File

@@ -330,6 +330,27 @@ Parameters
<div style="font-size: small; color: darkgreen"><br/>aliases: kubeconfig_path</div> <div style="font-size: small; color: darkgreen"><br/>aliases: kubeconfig_path</div>
</td> </td>
</tr> </tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>plain_http</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.1.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li><div style="color: blue"><b>no</b>&nbsp;&larr;</div></li>
<li>yes</li>
</ul>
</td>
<td>
<div>Use HTTP instead of HTTPS when working with OCI registries</div>
<div>Requires Helm &gt;= 3.13.0</div>
</td>
</tr>
<tr> <tr>
<td colspan="2"> <td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div> <div class="ansibleOptionAnchor" id="parameter-"></div>
@@ -601,6 +622,27 @@ Parameters
<div>Skip custom resource definitions when installing or upgrading.</div> <div>Skip custom resource definitions when installing or upgrading.</div>
</td> </td>
</tr> </tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>take_ownership</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.1.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li><div style="color: blue"><b>no</b>&nbsp;&larr;</div></li>
<li>yes</li>
</ul>
</td>
<td>
<div>When upgrading, Helm will ignore the check for helm annotations and take ownership of the existing resources</div>
<div>This feature requires helm &gt;= 3.17.0</div>
</td>
</tr>
<tr> <tr>
<td colspan="2"> <td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div> <div class="ansibleOptionAnchor" id="parameter-"></div>
@@ -810,6 +852,12 @@ Examples
chart_ref: "https://github.com/grafana/helm-charts/releases/download/grafana-5.6.0/grafana-5.6.0.tgz" chart_ref: "https://github.com/grafana/helm-charts/releases/download/grafana-5.6.0/grafana-5.6.0.tgz"
release_namespace: monitoring release_namespace: monitoring
- name: Deploy Bitnami's MongoDB latest chart from OCI registry
kubernetes.core.helm:
name: test
chart_ref: "oci://registry-1.docker.io/bitnamicharts/mongodb"
release_namespace: database
# Using complex Values # Using complex Values
- name: Deploy new-relic client chart - name: Deploy new-relic client chart
kubernetes.core.helm: kubernetes.core.helm:

View File

@@ -27,7 +27,7 @@ Requirements
------------ ------------
The below requirements are needed on the host that executes this module. The below requirements are needed on the host that executes this module.
- helm >= 3.0, <4.0.0 (https://github.com/helm/helm/releases) - helm >= 3.0 (https://github.com/helm/helm/releases)
Parameters Parameters
@@ -174,28 +174,6 @@ Parameters
<div>location to write the chart.</div> <div>location to write the chart.</div>
</td> </td>
</tr> </tr>
<tr>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>force</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.3.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li><div style="color: blue"><b>no</b>&nbsp;&larr;</div></li>
<li>yes</li>
</ul>
</td>
<td>
<div>Force download of the chart even if it already exists in the destination directory.</div>
<div>By default, the module will skip downloading if the chart with the same version already exists for idempotency.</div>
<div>When used with O(untar_chart=true), will remove any existing chart directory before extracting.</div>
</td>
</tr>
<tr> <tr>
<td colspan="1"> <td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div> <div class="ansibleOptionAnchor" id="parameter-"></div>
@@ -215,6 +193,27 @@ Parameters
<div>Pass credentials to all domains.</div> <div>Pass credentials to all domains.</div>
</td> </td>
</tr> </tr>
<tr>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>plain_http</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.1.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li><div style="color: blue"><b>no</b>&nbsp;&larr;</div></li>
<li>yes</li>
</ul>
</td>
<td>
<div>Use HTTP instead of HTTPS when working with OCI registries</div>
<div>Requires Helm &gt;= 3.13.0</div>
</td>
</tr>
<tr> <tr>
<td colspan="1"> <td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div> <div class="ansibleOptionAnchor" id="parameter-"></div>
@@ -398,28 +397,11 @@ Examples
username: myuser username: myuser
password: mypassword123 password: mypassword123
- name: Download Chart (force re-download even if exists)
kubernetes.core.helm_pull:
chart_ref: redis
repo_url: https://charts.bitnami.com/bitnami
chart_version: '17.0.0'
destination: /path/to/chart
force: yes
- name: Download and untar chart (force re-extraction even if directory exists)
kubernetes.core.helm_pull:
chart_ref: redis
repo_url: https://charts.bitnami.com/bitnami
chart_version: '17.0.0'
destination: /path/to/chart
untar_chart: yes
force: yes
Return Values Return Values
------------- -------------
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module: Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html .. raw:: html
@@ -446,23 +428,6 @@ Common return values are documented `here <https://docs.ansible.com/projects/ans
<div style="font-size: smaller; color: blue; word-wrap: break-word; word-break: break-all;">helm pull --repo test ...</div> <div style="font-size: smaller; color: blue; word-wrap: break-word; word-break: break-all;">helm pull --repo test ...</div>
</td> </td>
</tr> </tr>
<tr>
<td colspan="1">
<div class="ansibleOptionAnchor" id="return-"></div>
<b>msg</b>
<a class="ansibleOptionLink" href="#return-" title="Permalink to this return value"></a>
<div style="font-size: small">
<span style="color: purple">string</span>
</div>
</td>
<td>when chart already exists</td>
<td>
<div>A message indicating the result of the operation.</div>
<br/>
<div style="font-size: smaller"><b>Sample:</b></div>
<div style="font-size: smaller; color: blue; word-wrap: break-word; word-break: break-all;">Chart redis version 17.0.0 already exists in destination directory</div>
</td>
</tr>
<tr> <tr>
<td colspan="1"> <td colspan="1">
<div class="ansibleOptionAnchor" id="return-"></div> <div class="ansibleOptionAnchor" id="return-"></div>

View File

@@ -25,7 +25,7 @@ Requirements
------------ ------------
The below requirements are needed on the host that executes this module. The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases) >= 3.8.0, <4.0.0 - helm (https://github.com/helm/helm/releases) => 3.8.0
Parameters Parameters
@@ -215,7 +215,7 @@ Examples
Return Values Return Values
------------- -------------
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module: Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html .. raw:: html

View File

@@ -194,6 +194,27 @@ Parameters
<div>If the directory already exists, it will be overwritten.</div> <div>If the directory already exists, it will be overwritten.</div>
</td> </td>
</tr> </tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>plain_http</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.1.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li><div style="color: blue"><b>no</b>&nbsp;&larr;</div></li>
<li>yes</li>
</ul>
</td>
<td>
<div>Use HTTP instead of HTTPS when working with OCI registries</div>
<div>Requires Helm &gt;= 3.13.0</div>
</td>
</tr>
<tr> <tr>
<td colspan="2"> <td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div> <div class="ansibleOptionAnchor" id="parameter-"></div>

View File

@@ -512,7 +512,6 @@ Notes
.. note:: .. note::
- the tar binary is required on the container when copying from local filesystem to pod. - the tar binary is required on the container when copying from local filesystem to pod.
- the (init) container has to be started before you copy files or directories to it.
- To avoid SSL certificate validation errors when ``validate_certs`` is *True*, the full certificate chain for the API server must be provided via ``ca_cert`` or in the kubeconfig file. - To avoid SSL certificate validation errors when ``validate_certs`` is *True*, the full certificate chain for the API server must be provided via ``ca_cert`` or in the kubeconfig file.

View File

@@ -701,21 +701,6 @@ Examples
wait_sleep: 10 wait_sleep: 10
wait_timeout: 360 wait_timeout: 360
- name: Wait for OpenShift bootstrap to complete
kubernetes.core.k8s_info:
api_version: v1
kind: ConfigMap
name: bootstrap
namespace: kube-system
register: ocp_bootstrap_status
until: >
ocp_bootstrap_status.resources is defined and
(ocp_bootstrap_status.resources | length > 0) and
(ocp_bootstrap_status.resources[0].data.status is defined) and
(ocp_bootstrap_status.resources[0].data.status == 'complete')
retries: 60
delay: 15
Return Values Return Values

View File

@@ -1,372 +0,0 @@
.. _kubernetes.core.k8s_inventory:
*******************
kubernetes.core.k8s
*******************
**Kubernetes (K8s) inventory source**
.. contents::
:local:
:depth: 1
DEPRECATED
----------
:Removed in collection release after
:Why: As discussed in https://github.com/ansible-collections/kubernetes.core/issues/31, we decided to
remove the k8s inventory plugin in release 6.0.0.
:Alternative: Use :ref:`kubernetes.core.k8s_info <kubernetes.core.k8s_info_module>` and :ref:`ansible.builtin.add_host <ansible.builtin.add_host_module>` instead.
Synopsis
--------
- Fetch containers and services for one or more clusters.
- Groups by cluster name, namespace, namespace_services, namespace_pods, and labels.
- Uses the kubectl connection plugin to access the Kubernetes cluster.
- Uses k8s.(yml|yaml) YAML configuration file to set parameter values.
Requirements
------------
The below requirements are needed on the local Ansible controller node that executes this inventory.
- python >= 3.9
- kubernetes >= 24.2.0
- PyYAML >= 3.11
Parameters
----------
.. raw:: html
<table border=0 cellpadding=0 class="documentation-table">
<tr>
<th colspan="2">Parameter</th>
<th>Choices/<font color="blue">Defaults</font></th>
<th>Configuration</th>
<th width="100%">Comments</th>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>connections</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Optional list of cluster connection settings. If no connections are provided, the default <em>~/.kube/config</em> and active context will be used, and objects will be returned for all namespaces the active user is authorized to access.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>api_key</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Token used to authenticate with the API. Can also be specified via K8S_AUTH_API_KEY environment variable.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>ca_cert</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Path to a CA certificate used to authenticate with the API. Can also be specified via K8S_AUTH_SSL_CA_CERT environment variable.</div>
<div style="font-size: small; color: darkgreen"><br/>aliases: ssl_ca_cert</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>client_cert</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Path to a certificate used to authenticate with the API. Can also be specified via K8S_AUTH_CERT_FILE environment variable.</div>
<div style="font-size: small; color: darkgreen"><br/>aliases: cert_file</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>client_key</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Path to a key file used to authenticate with the API. Can also be specified via K8S_AUTH_KEY_FILE environment variable.</div>
<div style="font-size: small; color: darkgreen"><br/>aliases: key_file</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>context</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>The name of a context found in the config file. Can also be specified via K8S_AUTH_CONTEXT environment variable.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>host</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Provide a URL for accessing the API. Can also be specified via K8S_AUTH_HOST environment variable.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>kubeconfig</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Path to an existing Kubernetes config file. If not provided, and no other connection options are provided, the Kubernetes client will attempt to load the default configuration file from <em>~/.kube/config</em>. Can also be specified via K8S_AUTH_KUBECONFIG environment variable.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>name</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Optional name to assign to the cluster. If not provided, a name is constructed from the server and port.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>namespaces</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>List of namespaces. If not specified, will fetch all containers for all namespaces user is authorized to access.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>password</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Provide a password for authenticating with the API. Can also be specified via K8S_AUTH_PASSWORD environment variable.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>username</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
</div>
</td>
<td>
</td>
<td>
</td>
<td>
<div>Provide a username for authenticating with the API. Can also be specified via K8S_AUTH_USERNAME environment variable.</div>
</td>
</tr>
<tr>
<td class="elbow-placeholder"></td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>validate_certs</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li>no</li>
<li>yes</li>
</ul>
</td>
<td>
</td>
<td>
<div>Whether or not to verify the API server&#x27;s SSL certificates. Can also be specified via K8S_AUTH_VERIFY_SSL environment variable.</div>
<div style="font-size: small; color: darkgreen"><br/>aliases: verify_ssl</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>plugin</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">-</span>
/ <span style="color: red">required</span>
</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li>kubernetes.core.k8s</li>
<li>k8s</li>
<li>community.kubernetes.k8s</li>
</ul>
</td>
<td>
</td>
<td>
<div>token that ensures this is a source file for the &#x27;k8s&#x27; plugin.</div>
</td>
</tr>
</table>
<br/>
Examples
--------
.. code-block:: yaml
# File must be named k8s.yaml or k8s.yml
- name: Authenticate with token, and return all pods and services for all namespaces
plugin: kubernetes.core.k8s
connections:
- host: https://192.168.64.4:8443
api_key: xxxxxxxxxxxxxxxx
validate_certs: false
- name: Use default config (~/.kube/config) file and active context, and return objects for a specific namespace
plugin: kubernetes.core.k8s
connections:
- namespaces:
- testing
- name: Use a custom config file, and a specific context.
plugin: kubernetes.core.k8s
connections:
- kubeconfig: /path/to/config
context: 'awx/192-168-64-4:8443/developer'
Status
------
- This inventory plugin will be removed in version 6.0.0. *[deprecated]*
- For more information see `DEPRECATED`_.
Authors
~~~~~~~
- Chris Houseknecht (@chouseknecht)
- Fabian von Feilitzsch (@fabianvf)
.. hint::
Configuration entries for each entry type have a low to high priority order. For example, a variable that is lower in the list will override a variable that is higher up.

View File

@@ -140,6 +140,25 @@ Parameters
 <div>The name of a context found in the config file. Can also be specified via K8S_AUTH_CONTEXT environment variable.</div>
 </td>
 </tr>
+<tr>
+<td colspan="2">
+<div class="ansibleOptionAnchor" id="parameter-"></div>
+<b>hidden_fields</b>
+<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
+<div style="font-size: small">
+<span style="color: purple">list</span>
+/ <span style="color: purple">elements=string</span>
+</div>
+<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.1.0</div>
+</td>
+<td>
+<b>Default:</b><br/><div style="color: blue">[]</div>
+</td>
+<td>
+<div>List of fields to hide from the diff output.</div>
+<div>This is useful for fields that are not relevant to the patch operation, such as `metadata.managedFields`.</div>
+</td>
+</tr>
 <tr>
 <td colspan="2">
 <div class="ansibleOptionAnchor" id="parameter-"></div>
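The backported #915 fix behind this new option changes the order of operations: fields listed in `hidden_fields` are stripped from both the existing and the desired object *before* the diff is computed, so churn in a hidden field (e.g. one rewritten by a mutating webhook, such as `metadata.managedFields`) can no longer flip the `changed` result. A minimal standalone sketch of that idea — `hide_fields` and `diff_objects` here are illustrative helpers, not the collection's actual API:

```python
import copy


def hide_fields(obj: dict, hidden_fields: list) -> dict:
    """Return a copy of obj with each dotted path in hidden_fields removed."""
    result = copy.deepcopy(obj)
    for field in hidden_fields:
        parent = result
        parts = field.split(".")
        for part in parts[:-1]:
            if not isinstance(parent, dict) or part not in parent:
                break
            parent = parent[part]
        else:
            if isinstance(parent, dict):
                parent.pop(parts[-1], None)
    return result


def diff_objects(existing: dict, desired: dict, hidden_fields: list):
    """Hide fields *first*, then diff: hidden churn cannot trigger 'changed'."""
    before = hide_fields(existing, hidden_fields)
    after = hide_fields(desired, hidden_fields)
    return before == after, {"before": before, "after": after}


# Two objects that differ only under the hidden metadata.managedFields path:
existing = {"metadata": {"name": "demo", "managedFields": [{"manager": "webhook"}]}}
desired = {"metadata": {"name": "demo", "managedFields": []}}
match, diffs = diff_objects(existing, desired, ["metadata.managedFields"])
print(match)  # True: the hidden field no longer produces a diff
```

Hiding before diffing also keeps the hidden values out of the `diff` dictionary returned to the user, which is the second half of what the fix guarantees.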

View File

@@ -25,7 +25,7 @@ tags:
   - openshift
   - okd
   - cluster
-version: 5.4.2
+version: 6.1.0
 build_ignore:
   - .DS_Store
   - "*.tar.gz"

View File

@@ -1,5 +1,5 @@
 ---
-requires_ansible: '>=2.15.0'
+requires_ansible: '>=2.16.0'
 action_groups:
   helm:
@@ -8,27 +8,23 @@ action_groups:
     - helm_repository
   k8s:
     - k8s
-    - k8s_cluster_info
-    - k8s_cp
-    - k8s_drain
     - k8s_exec
     - k8s_info
-    - k8s_json_patch
     - k8s_log
-    - k8s_rollback
     - k8s_scale
     - k8s_service
+    - k8s_cp
+    - k8s_drain
 plugin_routing:
   inventory:
     openshift:
       redirect: community.okd.openshift
     k8s:
-      deprecation:
+      tombstone:
         removal_version: 6.0.0
         warning_text: >-
-          The k8s inventory plugin has been deprecated and
-          will be removed in release 6.0.0.
+          The k8s inventory plugin was slated for deprecation in 3.3.0 and has been removed in release 6.0.0. Use kubernetes.core.k8s_info and ansible.builtin.add_host instead.
   modules:
     k8s_auth:
       redirect: community.okd.k8s_auth

View File

@@ -22,6 +22,7 @@ from ansible.errors import (
 )
 from ansible.module_utils._text import to_bytes, to_native, to_text
 from ansible.module_utils.parsing.convert_bool import boolean
+from ansible.module_utils.six import iteritems, string_types
 from ansible.plugins.action import ActionBase
 try:
@@ -99,7 +100,7 @@ class ActionModule(ActionBase):
             "trim_blocks": True,
             "lstrip_blocks": False,
         }
-        if isinstance(template, str):
+        if isinstance(template, string_types):
             # treat this as raw_params
             template_param["path"] = template
         elif isinstance(template, dict):
@@ -119,7 +120,7 @@ class ActionModule(ActionBase):
         ):
             if s_type in template_args:
                 value = ensure_type(template_args[s_type], "string")
-                if value is not None and not isinstance(value, str):
+                if value is not None and not isinstance(value, string_types):
                     raise AnsibleActionFail(
                         "%s is expected to be a string, but got %s instead"
                         % (s_type, type(value))
@@ -195,7 +196,7 @@ class ActionModule(ActionBase):
         )
         template_params = []
-        if isinstance(template, str) or isinstance(template, dict):
+        if isinstance(template, string_types) or isinstance(template, dict):
             template_params.append(self.get_template_args(template))
         elif isinstance(template, list):
             for element in template:
@@ -245,7 +246,7 @@ class ActionModule(ActionBase):
             # add ansible 'template' vars
             temp_vars = copy.deepcopy(task_vars)
             overrides = {}
-            for key, value in template_item.items():
+            for key, value in iteritems(template_item):
                 if hasattr(self._templar.environment, key):
                     if value is not None:
                         overrides[key] = value
@@ -302,7 +303,7 @@ class ActionModule(ActionBase):
         )
     def get_kubeconfig(self, kubeconfig, remote_transport, new_module_args):
-        if isinstance(kubeconfig, str):
+        if isinstance(kubeconfig, string_types):
            # find the kubeconfig in the expected search path
            if not remote_transport:
                # kubeconfig is local
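The hunks above swap plain `isinstance(x, str)` checks for `six.string_types` so a single code path handles both Python 2 and Python 3 (`string_types` is `(str,)` on Python 3 and `(basestring,)` on Python 2). A small runnable sketch of the branching pattern used by `get_template_args` — `template_path` is an illustrative name, and the local fallback lets the example run even without `six` installed:

```python
# Fallback mirrors six.string_types on Python 3 so the example is self-contained.
try:
    from six import string_types  # the real compatibility helper, if available
except ImportError:
    string_types = (str,)


def template_path(template):
    """Branch like the action plugin: bare string -> raw path, dict -> options."""
    template_param = {}
    if isinstance(template, string_types):
        # treat this as raw_params
        template_param["path"] = template
    elif isinstance(template, dict):
        template_param.update(template)
    else:
        raise TypeError("template must be a string or dict")
    return template_param


print(template_path("deploy.yaml.j2"))  # {'path': 'deploy.yaml.j2'}
```

On a Python-3-only codebase the two checks are equivalent; the `six` form only matters while Python 2 controllers must still be supported.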

View File

@@ -265,7 +265,6 @@ import tempfile
 from ansible.errors import AnsibleError, AnsibleFileNotFound
 from ansible.module_utils._text import to_bytes
-from ansible.module_utils.parsing.convert_bool import boolean
 from ansible.module_utils.six.moves import shlex_quote
 from ansible.parsing.yaml.loader import AnsibleLoader
 from ansible.plugins.connection import BUFSIZE, ConnectionBase
@@ -325,12 +324,9 @@ class Connection(ConnectionBase):
         # Build command options based on doc string
         doc_yaml = AnsibleLoader(self.documentation).get_single_data()
         for key in doc_yaml.get("options"):
-            if key == "validate_certs" and self.get_option(key) != "":
-                # Translate validate_certs to --insecure-skip-tls-verify flag
-                # validate_certs=True means verify certs (don't skip verification)
-                # validate_certs=False means don't verify certs (skip verification)
-                validate_certs_value = boolean(self.get_option(key), strict=False)
-                skip_verify_ssl = not validate_certs_value
+            if key.endswith("verify_ssl") and self.get_option(key) != "":
+                # Translate verify_ssl to skip_verify_ssl, and output as string
+                skip_verify_ssl = not self.get_option(key)
             local_cmd.append(
                 "{0}={1}".format(
                     self.connection_options[key], str(skip_verify_ssl).lower()
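Both sides of this hunk implement the same mapping: a truthy `validate_certs`/`verify_ssl` value means certificate verification stays on, so the value handed to kubectl's skip-verification flag is its negation, rendered as a lowercase string. A standalone sketch of that translation — the truthy/falsy normalization below is a simplified stand-in for `ansible.module_utils.parsing.convert_bool.boolean`, not the real helper:

```python
def translate_verify_option(value) -> str:
    """Map a validate_certs-style value onto the string kubectl expects for
    --insecure-skip-tls-verify: verification on -> 'false', off -> 'true'."""
    truthy = {"yes", "on", "1", "true", True, 1}
    falsy = {"no", "off", "0", "false", False, 0}
    normalized = value.strip().lower() if isinstance(value, str) else value
    if normalized in truthy:
        validate = True
    elif normalized in falsy:
        validate = False
    else:
        # non-strict fallback, mirroring boolean(..., strict=False)
        validate = bool(normalized)
    # validate_certs=True means do NOT skip verification
    return str(not validate).lower()


print(translate_verify_option(True))  # "false"
print(translate_verify_option("no"))  # "true"
```

The inversion is the easy part to get backwards, which is why the old side of the diff spells it out in three comments.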

View File

@@ -1,476 +0,0 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
name: k8s
author:
- Chris Houseknecht (@chouseknecht)
- Fabian von Feilitzsch (@fabianvf)
short_description: Kubernetes (K8s) inventory source
description:
- Fetch containers and services for one or more clusters.
- Groups by cluster name, namespace, namespace_services, namespace_pods, and labels.
- Uses the kubectl connection plugin to access the Kubernetes cluster.
- Uses k8s.(yml|yaml) YAML configuration file to set parameter values.
deprecated:
removed_in: 6.0.0
why: |
As discussed in U(https://github.com/ansible-collections/kubernetes.core/issues/31), we decided to
remove the k8s inventory plugin in release 6.0.0.
alternative: "Use M(kubernetes.core.k8s_info) and M(ansible.builtin.add_host) instead."
options:
plugin:
description: token that ensures this is a source file for the 'k8s' plugin.
required: True
choices: ['kubernetes.core.k8s', 'k8s', 'community.kubernetes.k8s']
connections:
description:
- Optional list of cluster connection settings. If no connections are provided, the default
I(~/.kube/config) and active context will be used, and objects will be returned for all namespaces
the active user is authorized to access.
suboptions:
name:
description:
- Optional name to assign to the cluster. If not provided, a name is constructed from the server
and port.
kubeconfig:
description:
- Path to an existing Kubernetes config file. If not provided, and no other connection
options are provided, the Kubernetes client will attempt to load the default
configuration file from I(~/.kube/config). Can also be specified via K8S_AUTH_KUBECONFIG
environment variable.
context:
description:
- The name of a context found in the config file. Can also be specified via K8S_AUTH_CONTEXT environment
variable.
host:
description:
- Provide a URL for accessing the API. Can also be specified via K8S_AUTH_HOST environment variable.
api_key:
description:
- Token used to authenticate with the API. Can also be specified via K8S_AUTH_API_KEY environment
variable.
username:
description:
- Provide a username for authenticating with the API. Can also be specified via K8S_AUTH_USERNAME
environment variable.
password:
description:
- Provide a password for authenticating with the API. Can also be specified via K8S_AUTH_PASSWORD
environment variable.
client_cert:
description:
- Path to a certificate used to authenticate with the API. Can also be specified via K8S_AUTH_CERT_FILE
environment variable.
aliases: [ cert_file ]
client_key:
description:
- Path to a key file used to authenticate with the API. Can also be specified via K8S_AUTH_KEY_FILE
environment variable.
aliases: [ key_file ]
ca_cert:
description:
- Path to a CA certificate used to authenticate with the API. Can also be specified via
K8S_AUTH_SSL_CA_CERT environment variable.
aliases: [ ssl_ca_cert ]
validate_certs:
description:
- "Whether or not to verify the API server's SSL certificates. Can also be specified via
K8S_AUTH_VERIFY_SSL environment variable."
type: bool
aliases: [ verify_ssl ]
namespaces:
description:
- List of namespaces. If not specified, will fetch all containers for all namespaces user is authorized
to access.
requirements:
- "python >= 3.9"
- "kubernetes >= 24.2.0"
- "PyYAML >= 3.11"
"""
EXAMPLES = r"""
# File must be named k8s.yaml or k8s.yml
- name: Authenticate with token, and return all pods and services for all namespaces
plugin: kubernetes.core.k8s
connections:
- host: https://192.168.64.4:8443
api_key: xxxxxxxxxxxxxxxx
validate_certs: false
- name: Use default config (~/.kube/config) file and active context, and return objects for a specific namespace
plugin: kubernetes.core.k8s
connections:
- namespaces:
- testing
- name: Use a custom config file, and a specific context.
plugin: kubernetes.core.k8s
connections:
- kubeconfig: /path/to/config
context: 'awx/192-168-64-4:8443/developer'
"""
import json
from ansible.errors import AnsibleError
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable, Constructable
try:
from kubernetes.dynamic.exceptions import DynamicApiError
HAS_K8S_MODULE_HELPER = True
k8s_import_exception = None
except ImportError as e:
HAS_K8S_MODULE_HELPER = False
k8s_import_exception = e
from ansible_collections.kubernetes.core.plugins.module_utils.k8s.client import (
get_api_client,
)
def format_dynamic_api_exc(exc):
if exc.body:
if exc.headers and exc.headers.get("Content-Type") == "application/json":
message = json.loads(exc.body).get("message")
if message:
return message
return exc.body
else:
return "%s Reason: %s" % (exc.status, exc.reason)
class K8sInventoryException(Exception):
pass
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = "kubernetes.core.k8s"
connection_plugin = "kubernetes.core.kubectl"
transport = "kubectl"
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.display.deprecated(
"The 'k8s' inventory plugin has been deprecated and will be removed in release 6.0.0",
version="6.0.0",
collection_name="kubernetes.core",
)
cache_key = self._get_cache_prefix(path)
config_data = self._read_config_data(path)
self.setup(config_data, cache, cache_key)
def setup(self, config_data, cache, cache_key):
connections = config_data.get("connections")
if not HAS_K8S_MODULE_HELPER:
raise K8sInventoryException(
"This module requires the Kubernetes Python client. Try `pip install kubernetes`. Detail: {0}".format(
k8s_import_exception
)
)
source_data = None
if cache and cache_key in self._cache:
try:
source_data = self._cache[cache_key]
except KeyError:
pass
if not source_data:
self.fetch_objects(connections)
def fetch_objects(self, connections):
if connections:
if not isinstance(connections, list):
raise K8sInventoryException("Expecting connections to be a list.")
for connection in connections:
if not isinstance(connection, dict):
raise K8sInventoryException(
"Expecting connection to be a dictionary."
)
client = get_api_client(**connection)
name = connection.get(
"name", self.get_default_host_name(client.configuration.host)
)
if connection.get("namespaces"):
namespaces = connection["namespaces"]
else:
namespaces = self.get_available_namespaces(client)
for namespace in namespaces:
self.get_pods_for_namespace(client, name, namespace)
self.get_services_for_namespace(client, name, namespace)
else:
client = get_api_client()
name = self.get_default_host_name(client.configuration.host)
namespaces = self.get_available_namespaces(client)
for namespace in namespaces:
self.get_pods_for_namespace(client, name, namespace)
self.get_services_for_namespace(client, name, namespace)
@staticmethod
def get_default_host_name(host):
return (
host.replace("https://", "")
.replace("http://", "")
.replace(".", "-")
.replace(":", "_")
)
def get_available_namespaces(self, client):
v1_namespace = client.resources.get(api_version="v1", kind="Namespace")
try:
obj = v1_namespace.get()
except DynamicApiError as exc:
self.display.debug(exc)
raise K8sInventoryException(
"Error fetching Namespace list: %s" % format_dynamic_api_exc(exc)
)
return [namespace.metadata.name for namespace in obj.items]
def get_pods_for_namespace(self, client, name, namespace):
v1_pod = client.resources.get(api_version="v1", kind="Pod")
try:
obj = v1_pod.get(namespace=namespace)
except DynamicApiError as exc:
self.display.debug(exc)
raise K8sInventoryException(
"Error fetching Pod list: %s" % format_dynamic_api_exc(exc)
)
namespace_group = "namespace_{0}".format(namespace)
namespace_pods_group = "{0}_pods".format(namespace_group)
self.inventory.add_group(name)
self.inventory.add_group(namespace_group)
self.inventory.add_child(name, namespace_group)
self.inventory.add_group(namespace_pods_group)
self.inventory.add_child(namespace_group, namespace_pods_group)
for pod in obj.items:
pod_name = pod.metadata.name
pod_groups = []
pod_annotations = (
{} if not pod.metadata.annotations else dict(pod.metadata.annotations)
)
if pod.metadata.labels:
# create a group for each label_value
for key, value in pod.metadata.labels:
group_name = "label_{0}_{1}".format(key, value)
if group_name not in pod_groups:
pod_groups.append(group_name)
self.inventory.add_group(group_name)
pod_labels = dict(pod.metadata.labels)
else:
pod_labels = {}
if not pod.status.containerStatuses:
continue
for container in pod.status.containerStatuses:
# add each pod_container to the namespace group, and to each label_value group
container_name = "{0}_{1}".format(pod.metadata.name, container.name)
self.inventory.add_host(container_name)
self.inventory.add_child(namespace_pods_group, container_name)
if pod_groups:
for group in pod_groups:
self.inventory.add_child(group, container_name)
# Add hostvars
self.inventory.set_variable(container_name, "object_type", "pod")
self.inventory.set_variable(container_name, "labels", pod_labels)
self.inventory.set_variable(
container_name, "annotations", pod_annotations
)
self.inventory.set_variable(
container_name, "cluster_name", pod.metadata.clusterName
)
self.inventory.set_variable(
container_name, "pod_node_name", pod.spec.nodeName
)
self.inventory.set_variable(container_name, "pod_name", pod.spec.name)
self.inventory.set_variable(
container_name, "pod_host_ip", pod.status.hostIP
)
self.inventory.set_variable(
container_name, "pod_phase", pod.status.phase
)
self.inventory.set_variable(container_name, "pod_ip", pod.status.podIP)
self.inventory.set_variable(
container_name, "pod_self_link", pod.metadata.selfLink
)
self.inventory.set_variable(
container_name, "pod_resource_version", pod.metadata.resourceVersion
)
self.inventory.set_variable(container_name, "pod_uid", pod.metadata.uid)
self.inventory.set_variable(
container_name, "container_name", container.image
)
self.inventory.set_variable(
container_name, "container_image", container.image
)
if container.state.running:
self.inventory.set_variable(
container_name, "container_state", "Running"
)
if container.state.terminated:
self.inventory.set_variable(
container_name, "container_state", "Terminated"
)
if container.state.waiting:
self.inventory.set_variable(
container_name, "container_state", "Waiting"
)
self.inventory.set_variable(
container_name, "container_ready", container.ready
)
self.inventory.set_variable(
container_name, "ansible_remote_tmp", "/tmp/"
)
self.inventory.set_variable(
container_name, "ansible_connection", self.connection_plugin
)
self.inventory.set_variable(
container_name, "ansible_{0}_pod".format(self.transport), pod_name
)
self.inventory.set_variable(
container_name,
"ansible_{0}_container".format(self.transport),
container.name,
)
self.inventory.set_variable(
container_name,
"ansible_{0}_namespace".format(self.transport),
namespace,
)
def get_services_for_namespace(self, client, name, namespace):
v1_service = client.resources.get(api_version="v1", kind="Service")
try:
obj = v1_service.get(namespace=namespace)
except DynamicApiError as exc:
self.display.debug(exc)
raise K8sInventoryException(
"Error fetching Service list: %s" % format_dynamic_api_exc(exc)
)
namespace_group = "namespace_{0}".format(namespace)
namespace_services_group = "{0}_services".format(namespace_group)
self.inventory.add_group(name)
self.inventory.add_group(namespace_group)
self.inventory.add_child(name, namespace_group)
self.inventory.add_group(namespace_services_group)
self.inventory.add_child(namespace_group, namespace_services_group)
for service in obj.items:
service_name = service.metadata.name
service_labels = (
{} if not service.metadata.labels else dict(service.metadata.labels)
)
service_annotations = (
{}
if not service.metadata.annotations
else dict(service.metadata.annotations)
)
self.inventory.add_host(service_name)
if service.metadata.labels:
# create a group for each label_value
for key, value in service.metadata.labels:
group_name = "label_{0}_{1}".format(key, value)
self.inventory.add_group(group_name)
self.inventory.add_child(group_name, service_name)
try:
self.inventory.add_child(namespace_services_group, service_name)
except AnsibleError:
raise
ports = [
{
"name": port.name,
"port": port.port,
"protocol": port.protocol,
"targetPort": port.targetPort,
"nodePort": port.nodePort,
}
for port in service.spec.ports or []
]
# add hostvars
self.inventory.set_variable(service_name, "object_type", "service")
self.inventory.set_variable(service_name, "labels", service_labels)
self.inventory.set_variable(
service_name, "annotations", service_annotations
)
self.inventory.set_variable(
service_name, "cluster_name", service.metadata.clusterName
)
self.inventory.set_variable(service_name, "ports", ports)
self.inventory.set_variable(service_name, "type", service.spec.type)
self.inventory.set_variable(
service_name, "self_link", service.metadata.selfLink
)
self.inventory.set_variable(
service_name, "resource_version", service.metadata.resourceVersion
)
self.inventory.set_variable(service_name, "uid", service.metadata.uid)
if service.spec.externalTrafficPolicy:
self.inventory.set_variable(
service_name,
"external_traffic_policy",
service.spec.externalTrafficPolicy,
)
if service.spec.externalIPs:
self.inventory.set_variable(
service_name, "external_ips", service.spec.externalIPs
)
if service.spec.externalName:
self.inventory.set_variable(
service_name, "external_name", service.spec.externalName
)
if service.spec.healthCheckNodePort:
self.inventory.set_variable(
service_name,
"health_check_node_port",
service.spec.healthCheckNodePort,
)
if service.spec.loadBalancerIP:
self.inventory.set_variable(
service_name, "load_balancer_ip", service.spec.loadBalancerIP
)
if service.spec.selector:
self.inventory.set_variable(
service_name, "selector", dict(service.spec.selector)
)
if (
hasattr(service.status.loadBalancer, "ingress")
and service.status.loadBalancer.ingress
):
load_balancer = [
{"hostname": ingress.hostname, "ip": ingress.ip}
for ingress in service.status.loadBalancer.ingress
]
self.inventory.set_variable(
service_name, "load_balancer", load_balancer
)
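One self-contained piece of the removed plugin worth illustrating is how it derived a default inventory group name from the API server URL when a connection had no explicit `name`: strip the scheme, then replace characters that are invalid in group names. The sketch below reproduces that transformation from the deleted source for illustration:

```python
def get_default_host_name(host: str) -> str:
    """Build an inventory-safe cluster name from the API server URL, as the
    removed k8s inventory plugin did: drop the scheme, '.' -> '-', ':' -> '_'."""
    return (
        host.replace("https://", "")
        .replace("http://", "")
        .replace(".", "-")
        .replace(":", "_")
    )


print(get_default_host_name("https://192.168.64.4:8443"))  # 192-168-64-4_8443
```

This matches the cluster group names the example configuration above would have produced, which is useful to know when migrating to `k8s_info` plus `add_host`.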

View File

@@ -86,48 +86,14 @@ DOCUMENTATION = """
     description:
       - Provide a username for authenticating with the API. Can also be specified via K8S_AUTH_USERNAME environment
         variable.
-  no_proxy:
-    description:
-      - The comma separated list of hosts/domains/IP/CIDR that shouldn't go through proxy.
-        Can also be specified via K8S_AUTH_NO_PROXY environment variable.
-      - Please note that this module does not pick up typical proxy settings from the environment (e.g. NO_PROXY).
-      - This feature requires kubernetes>=19.15.0.
-        When kubernetes library is less than 19.15.0, it fails even if no_proxy is set correctly.
-    type: str
   password:
     description:
       - Provide a password for authenticating with the API. Can also be specified via K8S_AUTH_PASSWORD environment
         variable.
-  proxy:
-    description:
-      - The URL of an HTTP proxy to use for the connection. Can also be specified via K8S_AUTH_PROXY environment variable.
-      - Please note that this module does not pick up typical proxy settings from the environment (e.g. HTTP_PROXY).
-    type: str
-  proxy_headers:
-    description:
-      - The Header used for the HTTP proxy.
-      - Documentation can be found here
-        U(https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html?highlight=proxy_headers#urllib3.util.make_headers).
-    type: dict
-    suboptions:
-      proxy_basic_auth:
-        type: str
-        description:
-          - Colon-separated username:password for proxy basic authentication header.
-          - Can also be specified via K8S_AUTH_PROXY_HEADERS_PROXY_BASIC_AUTH environment.
-      basic_auth:
-        type: str
-        description:
-          - Colon-separated username:password for basic authentication header.
-          - Can also be specified via K8S_AUTH_PROXY_HEADERS_BASIC_AUTH environment.
-      user_agent:
-        type: str
-        description:
-          - String representing the user-agent you want, such as foo/1.0.
-          - Can also be specified via K8S_AUTH_PROXY_HEADERS_USER_AGENT environment.
   client_cert:
     description:
-      - Path to a certificate used to authenticate with the API. Can also be specified via K8S_AUTH_CERT_FILE environment
+      - Path to a certificate used to authenticate with the API. Can also be specified via K8S_AUTH_CERT_FILE
+        environment
         variable.
     aliases: [ cert_file ]
   client_key:

View File

@@ -1,64 +1,16 @@
 from __future__ import absolute_import, division, print_function
+from ansible.module_utils.six import string_types
 __metaclass__ = type
-import warnings
 def list_dict_str(value):
-    if isinstance(value, (list, dict, str)):
+    if isinstance(value, (list, dict, string_types)):
         return value
     raise TypeError
-def extract_sensitive_values_from_kubeconfig(kubeconfig_data):
-    """
-    Extract only sensitive string values from kubeconfig data for no_log_values.
-    :arg kubeconfig_data: Dictionary containing kubeconfig data
-    :returns: Set of sensitive string values to be added to no_log_values
-    """
-    values = set()
-    sensitive_fields = {
-        "token",
-        "password",
-        "secret",
-        "client-key-data",
-        "client-certificate-data",
-        "certificate-authority-data",
-        "api_key",
-        "access-token",
-        "refresh-token",
-    }
-    # Check API version and warn if not v1
-    if isinstance(kubeconfig_data, dict):
-        api_version = kubeconfig_data.get("apiVersion", "v1")
-        if api_version != "v1":
-            warnings.warn(
-                f"Kubeconfig API version '{api_version}' is not 'v1'. "
-                f"Sensitive field redaction is only guaranteed for API version 'v1'. "
-                f"Some sensitive data may not be properly redacted from the logs.",
-                UserWarning,
-            )
-    def _extract_recursive(data, current_path=""):
-        if isinstance(data, dict):
-            for key, value in data.items():
-                path = f"{current_path}.{key}" if current_path else key
-                if key in sensitive_fields:
-                    if isinstance(value, str):
-                        values.add(value)
-                else:
-                    _extract_recursive(value, path)
-        elif isinstance(data, list):
-            for i, item in enumerate(data):
-                _extract_recursive(item, f"{current_path}[{i}]")
-    _extract_recursive(kubeconfig_data)
-    return values
 AUTH_PROXY_HEADERS_SPEC = dict(
     proxy_basic_auth=dict(type="str", no_log=True),
     basic_auth=dict(type="str", no_log=True),
@@ -66,7 +18,7 @@ AUTH_PROXY_HEADERS_SPEC = dict(
 )
 AUTH_ARG_SPEC = {
-    "kubeconfig": {"type": "raw"},
+    "kubeconfig": {"type": "raw", "no_log": True},
     "context": {},
     "host": {},
     "api_key": {"no_log": True},
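This diff trades the field-by-field redaction helper for a blunt `"no_log": True` on the whole `kubeconfig` parameter: the removed function walked the kubeconfig recursively and collected only values stored under known sensitive keys. A runnable condensation of that walk — the field list is shortened here, and unlike the original this version also recurses into non-string values under sensitive keys:

```python
SENSITIVE_FIELDS = {"token", "password", "client-key-data", "api_key"}


def extract_sensitive_values(data):
    """Collect string values stored under known sensitive keys, at any depth."""
    found = set()
    if isinstance(data, dict):
        for key, value in data.items():
            if key in SENSITIVE_FIELDS and isinstance(value, str):
                found.add(value)
            else:
                found |= extract_sensitive_values(value)
    elif isinstance(data, list):
        for item in data:
            found |= extract_sensitive_values(item)
    return found


kubeconfig = {
    "apiVersion": "v1",
    "users": [{"name": "dev", "user": {"token": "s3cret", "username": "dev"}}],
}
print(extract_sensitive_values(kubeconfig))  # {'s3cret'}
```

The trade-off is the one the commit messages describe: selective extraction keeps non-secret kubeconfig values visible in logs (#870) but can miss secrets in unusual layouts, while `no_log` on the whole parameter hides everything (#782).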

View File

@@ -96,7 +96,7 @@ class K8SCopy(metaclass=ABCMeta):
             return error, stdout, stderr
         except Exception as e:
             self.module.fail_json(
-                msg="Error while running/parsing from pod {0}/{1} command='{2}' : {3}".format(
+                msg="Error while running/parsing from pod {1}/{2} command='{0}' : {3}".format(
                     self.namespace, self.name, cmd, to_native(e)
                 )
             )
@@ -278,15 +278,11 @@ class K8SCopyFromPod(K8SCopy):
     def run(self):
         self.files_to_copy = self.list_remote_files()
         if self.files_to_copy == []:
-            # Using warn method instead of passing warnings to exit_json as it is
-            # deprecated in ansible-core>=2.19.0
-            self._module.warn(
-                "No file found from directory '{0}' into remote Pod.".format(
-                    self.remote_path
-                )
-            )
             self.module.exit_json(
                 changed=False,
+                warning="No file found from directory '{0}' into remote Pod.".format(
+                    self.remote_path
+                ),
             )
         self.copy()
@@ -439,21 +435,11 @@ def check_pod(svc):
     try:
         result = svc.client.get(resource, name=name, namespace=namespace)
-        containers = dict(
-            {
-                c["name"]: c
-                for cl in ["initContainerStatuses", "containerStatuses"]
-                for c in result.to_dict()["status"].get(cl, [])
-            }
-        )
-        if container and container not in containers.keys():
+        containers = [
+            c["name"] for c in result.to_dict()["status"]["containerStatuses"]
+        ]
+        if container and container not in containers:
             module.fail_json(msg="Pod has no container {0}".format(container))
-        if (
-            container
-            and container in containers
-            and not bool(containers[container].get("started", False))
-        ):
-            module.fail_json(msg="Pod container {0} is not started".format(container))
-        return containers.keys()
+        return containers
     except Exception as exc:
         _fail(exc)
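After this revert, `check_pod()` only lists names from `containerStatuses` and fails when the requested container is absent; it no longer looks at `initContainerStatuses` or checks the `started` flag. A standalone sketch of the post-revert shape — `check_container` is an illustrative name, and it raises `ValueError` where the module would call `fail_json` (using `.get` with a default where the module indexes the key directly):

```python
def check_container(pod_status: dict, container: str) -> list:
    """List container names from a pod status mapping and fail when a
    requested container is absent."""
    containers = [c["name"] for c in pod_status.get("containerStatuses", [])]
    if container and container not in containers:
        raise ValueError("Pod has no container {0}".format(container))
    return containers


status = {"containerStatuses": [{"name": "app"}, {"name": "sidecar"}]}
print(check_container(status, "app"))  # ['app', 'sidecar']
```

The removed code on the left of the diff shows what a stricter check looks like: it also covers init containers and refuses to copy into a container that has not started.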

View File

@@ -15,9 +15,7 @@ import tempfile
 import traceback
 from ansible.module_utils.basic import AnsibleModule, missing_required_lib
-from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
-    extract_sensitive_values_from_kubeconfig,
-)
+from ansible.module_utils.six import string_types
 from ansible_collections.kubernetes.core.plugins.module_utils.version import (
     LooseVersion,
 )
@@ -115,19 +113,12 @@ class AnsibleHelmModule(object):
         kubeconfig_content = None
         kubeconfig = self.params.get("kubeconfig")
         if kubeconfig:
-            if isinstance(kubeconfig, str):
+            if isinstance(kubeconfig, string_types):
                 with open(os.path.expanduser(kubeconfig)) as fd:
                     kubeconfig_content = yaml.safe_load(fd)
             elif isinstance(kubeconfig, dict):
                 kubeconfig_content = kubeconfig
-        # Redact sensitive fields from kubeconfig for logging purposes
-        if kubeconfig_content:
-            # Add original sensitive values to no_log_values to prevent them from appearing in logs
-            self._module.no_log_values.update(
-                extract_sensitive_values_from_kubeconfig(kubeconfig_content)
-            )
         if self.params.get("ca_cert"):
             ca_cert = self.params.get("ca_cert")
             if LooseVersion(self.get_helm_version()) < LooseVersion("3.5.0"):
@@ -202,24 +193,6 @@ class AnsibleHelmModule(object):
             return m.group(1)
         return None
-    def validate_helm_version(self):
-        """
-        Validate that Helm version is >=3.0.0 and <4.0.0.
-        Helm 4 is not yet supported.
-        """
-        helm_version = self.get_helm_version()
-        if helm_version is None:
-            self.fail_json(msg="Unable to determine Helm version")
-        if (LooseVersion(helm_version) < LooseVersion("3.0.0")) or (
-            LooseVersion(helm_version) >= LooseVersion("4.0.0")
-        ):
-            self.fail_json(
-                msg="Helm version must be >=3.0.0,<4.0.0, current version is {0}".format(
-                    helm_version
-                )
-            )
     def get_values(self, release_name, get_all=False):
         """
         Get Values from deployed release
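The removed `validate_helm_version` is a plain half-open range check on the parsed version: accept `>=3.0.0,<4.0.0`, fail otherwise. A self-contained sketch of the same gate — `_vtuple` is a minimal stand-in for `LooseVersion` that assumes plain `X.Y.Z` strings (no pre-release suffixes), and it raises `RuntimeError` where the module would call `fail_json`:

```python
def _vtuple(version: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple (minimal LooseVersion stand-in)."""
    return tuple(int(p) for p in version.split(".")[:3])


def validate_helm_version(helm_version: str) -> None:
    """Gate like the removed helper: only Helm >=3.0.0,<4.0.0 is accepted."""
    if not (_vtuple("3.0.0") <= _vtuple(helm_version) < _vtuple("4.0.0")):
        raise RuntimeError(
            "Helm version must be >=3.0.0,<4.0.0, current version is {0}".format(
                helm_version
            )
        )


validate_helm_version("3.14.2")  # passes silently; "2.17.0" or "4.0.0" would raise
```

Tuple comparison gives the half-open range for free: `(3, 0, 0) <= v < (4, 0, 0)` rejects both Helm 2 and the not-yet-supported Helm 4 in one expression.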

View File

@@ -16,6 +16,7 @@ HELM_AUTH_ARG_SPEC = dict(
         type="raw",
         aliases=["kubeconfig_path"],
         fallback=(env_fallback, ["K8S_AUTH_KUBECONFIG"]),
+        no_log=True,
     ),
     host=dict(type="str", fallback=(env_fallback, ["K8S_AUTH_HOST"])),
     ca_cert=dict(

View File

@@ -5,6 +5,7 @@ import hashlib
 import os
 from typing import Any, Dict, List, Optional
+from ansible.module_utils.six import iteritems, string_types
 from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
     AUTH_ARG_MAP,
     AUTH_ARG_SPEC,
@@ -114,7 +115,7 @@ def _load_config(auth: Dict) -> None:
         "persist_config": auth.get("persist_config"),
     }
     if kubeconfig:
-        if isinstance(kubeconfig, str):
+        if isinstance(kubeconfig, string_types):
             kubernetes.config.load_kube_config(config_file=kubeconfig, **optional_arg)
         elif isinstance(kubeconfig, dict):
             kubernetes.config.load_kube_config_from_dict(
@@ -162,7 +163,7 @@ def _create_configuration(auth: Dict):
     except AttributeError:
         configuration = kubernetes.client.Configuration()
-    for key, value in auth.items():
+    for key, value in iteritems(auth):
         if key in AUTH_ARG_MAP.keys() and value is not None:
             if key == "api_key":
                 setattr(

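The hunks above swap `str` and `dict.items()` for six's `string_types` and `iteritems`. A minimal stand-in (assumption: illustrative only; the real code imports these helpers from `ansible.module_utils.six`) shows why the rewrites are behavior-preserving on Python 3:

```python
# Stand-ins for the six helpers used in the diff above. On Python 3,
# string_types is simply (str,) and iteritems(d) is just d.items(), so the
# isinstance() and loop rewrites keep the same behavior while remaining
# Python 2 compatible.
string_types = (str,)


def iteritems(d):
    return iter(d.items())


# Mirrors the _create_configuration loop: keep only non-None auth values.
auth = {"host": "https://k8s.example.com", "api_key": None}
present = {k: v for k, v in iteritems(auth) if v is not None}
print(isinstance("kubeconfig-as-a-string", string_types))  # True
```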
@@ -3,9 +3,6 @@ from typing import Optional
 from ansible.module_utils.basic import AnsibleModule, missing_required_lib
 from ansible.module_utils.common.text.converters import to_text

-from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
-    extract_sensitive_values_from_kubeconfig,
-)
 from ansible_collections.kubernetes.core.plugins.module_utils.version import (
     LooseVersion,
 )
@@ -36,15 +33,6 @@ class AnsibleK8SModule:
         self._module = self.settings["module_class"](**kwargs)

-        # Apply kubeconfig redaction for logging purposes
-        if hasattr(self._module, "params") and hasattr(self._module, "no_log_values"):
-            kubeconfig = self._module.params.get("kubeconfig")
-            if kubeconfig and isinstance(kubeconfig, dict):
-                # Add sensitive values to no_log_values to prevent them from appearing in logs
-                self._module.no_log_values.update(
-                    extract_sensitive_values_from_kubeconfig(kubeconfig)
-                )
-
         if self.settings["check_k8s"]:
             self.requires("kubernetes")
             self.has_at_least("kubernetes", "24.2.0", warn=True)
@@ -4,6 +4,7 @@
 import os
 from typing import Dict, Iterable, List, Optional, Union, cast

+from ansible.module_utils.six import string_types
 from ansible.module_utils.urls import Request

 try:
@@ -77,11 +78,11 @@ def create_definitions(params: Dict) -> List[ResourceDefinition]:
 def from_yaml(definition: Union[str, List, Dict]) -> Iterable[Dict]:
     """Load resource definitions from a yaml definition."""
     definitions: List[Dict] = []
-    if isinstance(definition, str):
+    if isinstance(definition, string_types):
         definitions += yaml.safe_load_all(definition)
     elif isinstance(definition, list):
         for item in definition:
-            if isinstance(item, str):
+            if isinstance(item, string_types):
                 definitions += yaml.safe_load_all(item)
             else:
                 definitions.append(item)
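The `from_yaml` hunk above handles three input shapes: a multi-document YAML string, a list whose items may themselves be multi-document strings, and a plain dict. A condensed sketch of that dispatch (assumption: uses `str` rather than `string_types` to stay self-contained; requires PyYAML, as the real code does):

```python
# Sketch of the multi-document handling in from_yaml(): a single string may
# contain several YAML documents separated by "---", and each list item may
# itself be a multi-document string or an already-parsed dict.
import yaml


def from_yaml(definition):
    definitions = []
    if isinstance(definition, str):
        # safe_load_all yields one dict per "---"-separated document
        definitions += yaml.safe_load_all(definition)
    elif isinstance(definition, list):
        for item in definition:
            if isinstance(item, str):
                definitions += yaml.safe_load_all(item)
            else:
                definitions.append(item)
    else:
        definitions.append(definition)
    return definitions


docs = from_yaml("kind: ConfigMap\n---\nkind: Secret\n")
print([d["kind"] for d in docs])  # ['ConfigMap', 'Secret']
```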
@@ -237,6 +237,20 @@ options:
     default: False
     aliases: [ skip_tls_certs_check ]
     version_added: 5.3.0
+  plain_http:
+    description:
+      - Use HTTP instead of HTTPS when working with OCI registries
+      - Requires Helm >= 3.13.0
+    type: bool
+    default: False
+    version_added: 6.1.0
+  take_ownership:
+    description:
+      - When upgrading, Helm will ignore the check for helm annotations and take ownership of the existing resources
+      - This feature requires helm >= 3.17.0
+    type: bool
+    default: False
+    version_added: 6.1.0
 extends_documentation_fragment:
   - kubernetes.core.helm_common_options
 """
@@ -319,6 +333,12 @@ EXAMPLES = r"""
     chart_ref: "https://github.com/grafana/helm-charts/releases/download/grafana-5.6.0/grafana-5.6.0.tgz"
     release_namespace: monitoring

+- name: Deploy Bitnami's MongoDB latest chart from OCI registry
+  kubernetes.core.helm:
+    name: test
+    chart_ref: "oci://registry-1.docker.io/bitnamicharts/mongodb"
+    release_namespace: database
+
 # Using complex Values
 - name: Deploy new-relic client chart
   kubernetes.core.helm:
@@ -392,18 +412,9 @@ status:
     returned: always
     description: The Date of last update
   values:
-    type: dict
+    type: str
     returned: always
-    description:
-      - Dict of Values used to deploy.
-      - This return value has been deprecated and will be removed in a release after
-        2027-01-08. Use RV(status.release_values) instead.
-  release_values:
-    type: dict
-    returned: always
-    description:
-      - Dict of Values used to deploy.
-    version_added: 6.3.0
+    description: Dict of Values used to deploy
   stdout:
     type: str
     description: Full `helm` command stdout, in case you want to display it or examine the event log
@@ -483,8 +494,7 @@ def get_release_status(module, release_name, all_status=False):
     if release is None:  # not install
         return None

-    release["release_values"] = module.get_values(release_name)
-    release["values"] = release["release_values"]
+    release["values"] = module.get_values(release_name)

     return release
@@ -505,7 +515,9 @@ def run_dep_update(module, chart_ref):
     rc, out, err = module.run_helm_command(dep_update)

-def fetch_chart_info(module, command, chart_ref, insecure_skip_tls_verify=False):
+def fetch_chart_info(
+    module, command, chart_ref, insecure_skip_tls_verify=False, plain_http=False
+):
     """
     Get chart info
     """
@@ -514,6 +526,17 @@ def fetch_chart_info(module, command, chart_ref, insecure_skip_tls_verify=False)
     if insecure_skip_tls_verify:
         inspect_command += " --insecure-skip-tls-verify"

+    if plain_http:
+        helm_version = module.get_helm_version()
+        if LooseVersion(helm_version) < LooseVersion("3.13.0"):
+            module.fail_json(
+                msg="plain_http requires helm >= 3.13.0, current version is {0}".format(
+                    helm_version
+                )
+            )
+        else:
+            inspect_command += " --plain-http"
+
     rc, out, err = module.run_helm_command(inspect_command)

     return yaml.safe_load(out)
@@ -543,6 +566,8 @@ def deploy(
     reset_values=True,
     reset_then_reuse_values=False,
     insecure_skip_tls_verify=False,
+    plain_http=False,
+    take_ownership=False,
 ):
     """
     Install/upgrade/rollback release chart
@@ -556,6 +581,8 @@ def deploy(
         deploy_command = command + " upgrade -i"  # install/upgrade
         if reset_values:
             deploy_command += " --reset-values"
+        if take_ownership:
+            deploy_command += " --take-ownership"

     if reuse_values is not None:
         deploy_command += " --reuse-values=" + str(reuse_values)
@@ -605,6 +632,9 @@ def deploy(
         else:
             deploy_command += " --insecure-skip-tls-verify"

+    if plain_http:
+        deploy_command += " --plain-http"
+
     if values_files:
         for value_file in values_files:
             deploy_command += " --values=" + value_file
@@ -700,6 +730,7 @@ def helmdiff_check(
     reset_values=True,
     reset_then_reuse_values=False,
     insecure_skip_tls_verify=False,
+    plain_http=False,
 ):
     """
     Use helm diff to determine if a release would change by upgrading a chart.
@@ -755,6 +786,17 @@ def helmdiff_check(
     if insecure_skip_tls_verify:
         cmd += " --insecure-skip-tls-verify"

+    if plain_http:
+        helm_version = module.get_helm_version()
+        if LooseVersion(helm_version) < LooseVersion("3.13.0"):
+            module.fail_json(
+                msg="plain_http requires helm >= 3.13.0, current version is {0}".format(
+                    helm_version
+                )
+            )
+        else:
+            cmd += " --plain-http"
+
     rc, out, err = module.run_helm_command(cmd)
     return (len(out.strip()) > 0, out.strip())
@@ -818,6 +860,8 @@ def argument_spec():
             insecure_skip_tls_verify=dict(
                 type="bool", default=False, aliases=["skip_tls_certs_check"]
             ),
+            plain_http=dict(type="bool", default=False),
+            take_ownership=dict(type="bool", default=False),
         )
     )
     return arg_spec
@@ -842,9 +886,6 @@ def main():
     if not IMP_YAML:
         module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     changed = False
     chart_ref = module.params.get("chart_ref")
@@ -875,6 +916,8 @@ def main():
     reset_values = module.params.get("reset_values")
     reset_then_reuse_values = module.params.get("reset_then_reuse_values")
     insecure_skip_tls_verify = module.params.get("insecure_skip_tls_verify")
+    plain_http = module.params.get("plain_http")
+    take_ownership = module.params.get("take_ownership")

     if update_repo_cache:
         run_repo_update(module)
@@ -884,6 +927,24 @@ def main():
     release_status = get_release_status(module, release_name, all_status=all_status)

     helm_cmd = module.get_helm_binary()
+
+    if plain_http:
+        helm_version = module.get_helm_version()
+        if LooseVersion(helm_version) < LooseVersion("3.13.0"):
+            module.fail_json(
+                msg="plain_http requires helm >= 3.13.0, current version is {0}".format(
+                    helm_version
+                )
+            )
+
+    if take_ownership:
+        helm_version = module.get_helm_version()
+        if LooseVersion(helm_version) < LooseVersion("3.17.0"):
+            module.fail_json(
+                msg="take_ownership requires helm >= 3.17.0, current version is {0}".format(
+                    helm_version
+                )
+            )
+
     opt_result = {}
     if release_state == "absent" and release_status is not None:
         # skip release statuses 'uninstalled' and 'uninstalling'
@@ -913,7 +974,7 @@ def main():
         # Fetch chart info to have real version and real name for chart_ref from archive, folder or url
         chart_info = fetch_chart_info(
-            module, helm_cmd, chart_ref, insecure_skip_tls_verify
+            module, helm_cmd, chart_ref, insecure_skip_tls_verify, plain_http
         )

         if dependency_update:
@@ -975,6 +1036,7 @@ def main():
                 reset_values=reset_values,
                 reset_then_reuse_values=reset_then_reuse_values,
                 insecure_skip_tls_verify=insecure_skip_tls_verify,
+                plain_http=plain_http,
             )
             changed = True
@@ -1002,6 +1064,7 @@ def main():
                 reset_values=reset_values,
                 reset_then_reuse_values=reset_then_reuse_values,
                 insecure_skip_tls_verify=insecure_skip_tls_verify,
+                plain_http=plain_http,
             )
             if would_change and module._diff:
                 opt_result["diff"] = {"prepared": prepared}
@@ -1039,19 +1102,16 @@ def main():
                 reset_values=reset_values,
                 reset_then_reuse_values=reset_then_reuse_values,
                 insecure_skip_tls_verify=insecure_skip_tls_verify,
+                plain_http=plain_http,
+                take_ownership=take_ownership,
             )
             changed = True

     if module.check_mode:
-        check_status = {
-            "values": {"current": {}, "declared": {}},
-            "release_values": {"current": {}, "declared": {}},
-        }
+        check_status = {"values": {"current": {}, "declared": {}}}
         if release_status:
-            check_status["values"]["current"] = release_status["release_values"]
+            check_status["values"]["current"] = release_status["values"]
             check_status["values"]["declared"] = release_status
-            check_status["release_values"]["current"] = release_status["release_values"]
-            check_status["release_values"]["declared"] = release_status

     module.exit_json(
         changed=changed,
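The `plain_http` and `take_ownership` checks in the hunks above all follow one pattern: compare the detected helm version against a per-flag minimum before appending the CLI flag, and fail early otherwise. A minimal sketch of that gate (assumption: `parse_version` and `gate_flag` are illustrative names; the real code uses the collection's vendored `LooseVersion` and `module.fail_json`):

```python
# Version-gating pattern for optional helm CLI flags, as in the diff above.
def parse_version(v):
    # Toy parser for plain "X.Y.Z" strings; LooseVersion handles more forms.
    return tuple(int(p) for p in v.split(".")[:3])


def gate_flag(cmd, flag, helm_version, minimum, option_name):
    """Append `flag` to `cmd` only if helm is new enough, else raise."""
    if parse_version(helm_version) < parse_version(minimum):
        raise ValueError(
            "{0} requires helm >= {1}, current version is {2}".format(
                option_name, minimum, helm_version
            )
        )
    return cmd + " " + flag


cmd = gate_flag("helm upgrade -i", "--plain-http", "3.14.2", "3.13.0", "plain_http")
print(cmd)  # helm upgrade -i --plain-http
```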
@@ -115,18 +115,9 @@ status:
     returned: always
     description: The Date of last update
   values:
-    type: dict
+    type: str
     returned: always
-    description:
-      - Dict of Values used to deploy
-      - This return value has been deprecated and will be removed in a release after
-        2027-01-08. Use RV(status.release_values) instead.
-  release_values:
-    type: dict
-    returned: always
-    description:
-      - Dict of Values used to deploy.
-    version_added: 6.3.0
+    description: Dict of Values used to deploy
   hooks:
     type: list
     elements: dict
@@ -211,8 +202,7 @@ def get_release_status(module, release_name, release_state, get_all_values=False
     if release is None:  # not install
         return None

-    release["release_values"] = module.get_values(release_name, get_all_values)
-    release["values"] = release["release_values"]
+    release["values"] = module.get_values(release_name, get_all_values)
     release["manifest"] = module.get_manifest(release_name)
     release["notes"] = module.get_notes(release_name)
     release["hooks"] = module.get_hooks(release_name)
@@ -245,9 +235,6 @@ def main():
     if not IMP_YAML:
         module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     release_name = module.params.get("release_name")
     release_state = module.params.get("release_state")
     get_all_values = module.params.get("get_all_values")
@@ -161,9 +161,6 @@ def main():
         mutually_exclusive=mutually_exclusive(),
     )

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     state = module.params.get("state")
     helm_cmd_common = module.get_helm_binary() + " plugin"
@@ -98,9 +98,6 @@ def main():
         supports_check_mode=True,
     )

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     plugin_name = module.params.get("plugin_name")
     plugin_list = []
@@ -21,7 +21,7 @@ description:
   - There are options for unpacking the chart after download.

 requirements:
-  - "helm >= 3.0, <4.0.0 (https://github.com/helm/helm/releases)"
+  - "helm >= 3.0 (https://github.com/helm/helm/releases)"

 options:
   chart_ref:
@@ -89,14 +89,6 @@ options:
       - if set to true, will untar the chart after downloading it.
     type: bool
     default: False
-  force:
-    description:
-      - Force download of the chart even if it already exists in the destination directory.
-      - By default, the module will skip downloading if the chart with the same version already exists for idempotency.
-      - When used with O(untar_chart=true), will remove any existing chart directory before extracting.
-    type: bool
-    default: False
-    version_added: 6.3.0
   destination:
     description:
       - location to write the chart.
@@ -122,6 +114,13 @@ options:
       - The path of a helm binary to use.
     required: false
     type: path
+  plain_http:
+    description:
+      - Use HTTP instead of HTTPS when working with OCI registries
+      - Requires Helm >= 3.13.0
+    type: bool
+    default: False
+    version_added: 6.1.0
 """

 EXAMPLES = r"""
@@ -153,23 +152,6 @@ EXAMPLES = r"""
     destination: /path/to/chart
     username: myuser
     password: mypassword123
-
-- name: Download Chart (force re-download even if exists)
-  kubernetes.core.helm_pull:
-    chart_ref: redis
-    repo_url: https://charts.bitnami.com/bitnami
-    chart_version: '17.0.0'
-    destination: /path/to/chart
-    force: yes
-
-- name: Download and untar chart (force re-extraction even if directory exists)
-  kubernetes.core.helm_pull:
-    chart_ref: redis
-    repo_url: https://charts.bitnami.com/bitnami
-    chart_version: '17.0.0'
-    destination: /path/to/chart
-    untar_chart: yes
-    force: yes
 """

 RETURN = r"""
@@ -188,11 +170,6 @@ command:
   description: Full `helm pull` command built by this module, in case you want to re-run the command outside the module or debug a problem.
   returned: always
   sample: helm pull --repo test ...
-msg:
-  type: str
-  description: A message indicating the result of the operation.
-  returned: when chart already exists
-  sample: Chart redis version 17.0.0 already exists in destination directory
 rc:
   type: int
   description: Helm pull command return code
@@ -200,18 +177,6 @@ rc:
   sample: 1
 """

-import os
-import shutil
-import tarfile
-import uuid
-
-try:
-    import yaml
-
-    HAS_YAML = True
-except ImportError:
-    HAS_YAML = False
-
 from ansible_collections.kubernetes.core.plugins.module_utils.helm import (
     AnsibleHelmModule,
 )
@@ -220,115 +185,6 @@ from ansible_collections.kubernetes.core.plugins.module_utils.version import (
 )

-def extract_chart_name(chart_ref):
-    """
-    Extract chart name from chart reference.
-
-    Args:
-        chart_ref (str): Chart reference (name, URL, or OCI reference)
-
-    Returns:
-        str: Extracted chart name
-    """
-    chart_name = chart_ref.split("/")[-1]
-    # Remove any query parameters or fragments from URL-based refs
-    if "?" in chart_name:
-        chart_name = chart_name.split("?")[0]
-    if "#" in chart_name:
-        chart_name = chart_name.split("#")[0]
-    # Remove .tgz extension if present
-    if chart_name.endswith(".tgz"):
-        chart_name = chart_name[:-4]
-    return chart_name
-
-
-def chart_exists(destination, chart_ref, chart_version, untar_chart):
-    """
-    Check if the chart already exists in the destination directory.
-
-    For untarred charts: check if directory exists with Chart.yaml matching version
-    For tarred charts: check if .tgz file exists and contains matching version
-
-    Args:
-        destination (str): Destination directory path
-        chart_ref (str): Chart reference (name or URL)
-        chart_version (str): Chart version to check for
-        untar_chart (bool): Whether to check for untarred or tarred chart
-
-    Returns:
-        bool: True if chart with matching version exists, False otherwise
-    """
-    # YAML is required for version checking
-    if not HAS_YAML:
-        return False
-
-    # Without version, we can't reliably check
-    if not chart_version:
-        return False
-
-    # Extract chart name from chart_ref using shared helper
-    chart_name = extract_chart_name(chart_ref)
-
-    if untar_chart:
-        # Check for extracted directory
-        chart_dir = os.path.join(destination, chart_name)
-        chart_yaml_path = os.path.join(chart_dir, "Chart.yaml")
-        if os.path.isdir(chart_dir) and os.path.isfile(chart_yaml_path):
-            try:
-                with open(chart_yaml_path, "r", encoding="utf-8") as chart_file:
-                    chart_metadata = yaml.safe_load(chart_file)
-                # Ensure chart_metadata is a dict and has a version that matches
-                if (
-                    chart_metadata
-                    and isinstance(chart_metadata, dict)
-                    and chart_metadata.get("version") == chart_version
-                    and chart_metadata.get("name") == chart_name
-                ):
-                    return True
-            except (yaml.YAMLError, IOError, OSError, TypeError):
-                # If we can't read or parse the file, treat as non-existent
-                pass
-    else:
-        # Check for .tgz file
-        chart_file = os.path.join(destination, f"{chart_name}-{chart_version}.tgz")
-        if os.path.isfile(chart_file):
-            try:
-                # Verify it's a valid tarball with matching version
-                with tarfile.open(chart_file, "r:gz") as tar:
-                    # Try to extract Chart.yaml to verify version
-                    # Look for Chart.yaml at the expected path: <chart-name>/Chart.yaml
-                    expected_chart_yaml = f"{chart_name}/Chart.yaml"
-                    try:
-                        member = tar.getmember(expected_chart_yaml)
-                        chart_yaml_file = tar.extractfile(member)
-                        if chart_yaml_file:
-                            try:
-                                chart_metadata = yaml.safe_load(chart_yaml_file)
-                                # Ensure chart_metadata is a dict and has a version that matches
-                                if (
-                                    chart_metadata
-                                    and isinstance(chart_metadata, dict)
-                                    and chart_metadata.get("version") == chart_version
-                                    and chart_metadata.get("name") == chart_name
-                                ):
-                                    return True
-                            except (yaml.YAMLError, TypeError):
-                                # If we can't parse the YAML, treat as non-existent
-                                pass
-                            finally:
-                                chart_yaml_file.close()
-                    except KeyError:
-                        # Chart.yaml not found at expected path
-                        pass
-            except (tarfile.TarError, yaml.YAMLError, IOError, OSError, TypeError):
-                # If we can't read or parse the tarball, treat as non-existent
-                pass
-
-    return False
-
-
 def main():
     argspec = dict(
         chart_ref=dict(type="str", required=True),
@@ -347,12 +203,12 @@ def main():
         ),
         chart_devel=dict(type="bool"),
         untar_chart=dict(type="bool", default=False),
-        force=dict(type="bool", default=False),
         destination=dict(type="path", required=True),
         chart_ca_cert=dict(type="path"),
         chart_ssl_cert_file=dict(type="path"),
         chart_ssl_key_file=dict(type="path"),
         binary_path=dict(type="path"),
+        plain_http=dict(type="bool", default=False),
     )

     module = AnsibleHelmModule(
         argument_spec=argspec,
@@ -364,16 +220,20 @@ def main():
         mutually_exclusive=[("chart_version", "chart_devel")],
     )

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     helm_version = module.get_helm_version()
+    if LooseVersion(helm_version) < LooseVersion("3.0.0"):
+        module.fail_json(
+            msg="This module requires helm >= 3.0.0, current version is {0}".format(
+                helm_version
+            )
+        )
+
     helm_pull_opt_versionning = dict(
         skip_tls_certs_check="3.3.0",
         chart_ca_cert="3.1.0",
         chart_ssl_cert_file="3.1.0",
         chart_ssl_key_file="3.1.0",
+        plain_http="3.13.0",
     )

     def test_version_requirement(opt):
@@ -413,6 +273,7 @@ def main():
         skip_tls_certs_check=dict(key="insecure-skip-tls-verify"),
         chart_devel=dict(key="devel"),
         untar_chart=dict(key="untar"),
+        plain_http=dict(key="plain-http"),
     )

     for k, v in helm_flag_args.items():
@@ -425,72 +286,8 @@ def main():
         module.params.get("chart_ref"),
         " ".join(helm_pull_opts),
     )

-    # Check if chart already exists (idempotency)
-    if module.params.get("chart_version") and not module.params.get("force"):
-        chart_exists_locally = chart_exists(
-            module.params.get("destination"),
-            module.params.get("chart_ref"),
-            module.params.get("chart_version"),
-            module.params.get("untar_chart"),
-        )
-        if chart_exists_locally:
-            module.exit_json(
-                failed=False,
-                changed=False,
-                msg="Chart {0} version {1} already exists in destination directory".format(
-                    module.params.get("chart_ref"), module.params.get("chart_version")
-                ),
-                command="",
-                stdout="",
-                stderr="",
-                rc=0,
-            )
-
-    # When both untar_chart and force are enabled, we need to remove the existing chart directory
-    # BEFORE running helm pull to prevent helm's "directory already exists" error.
-    # We do this by:
-    # 1. Renaming the existing directory to a temporary name (if it exists)
-    # 2. Running helm pull
-    # 3. On success: remove the temporary directory
-    # 4. On failure: restore the temporary directory and report the error
-    chart_dir_renamed = False
-    chart_dir = None
-    chart_dir_backup = None
-    if module.params.get("untar_chart") and module.params.get("force"):
-        chart_name = extract_chart_name(module.params.get("chart_ref"))
-        chart_dir = os.path.join(module.params.get("destination"), chart_name)
-        # Check if directory exists and contains a Chart.yaml (to be safe)
-        if os.path.isdir(chart_dir):
-            chart_yaml_path = os.path.join(chart_dir, "Chart.yaml")
-            # Only rename if it looks like a Helm chart directory (have Chart.yaml)
-            if os.path.isfile(chart_yaml_path):
-                if not module.check_mode:
-                    # Rename to temporary backup name using uuid for uniqueness
-                    backup_suffix = uuid.uuid4().hex[:8]
-                    chart_dir_backup = os.path.join(
-                        module.params.get("destination"),
-                        f".{chart_name}_backup_{backup_suffix}",
-                    )
-                    os.rename(chart_dir, chart_dir_backup)
-                    chart_dir_renamed = True
-
     if not module.check_mode:
         rc, out, err = module.run_helm_command(helm_cmd_common, fails_on_error=False)
-
-        # Handle cleanup/restore based on helm command result
-        if chart_dir_renamed:
-            if rc == 0:
-                # Success: remove the backup directory
-                if os.path.isdir(chart_dir_backup):
-                    shutil.rmtree(chart_dir_backup)
-            else:
-                # Failure: restore the backup directory
-                if os.path.isdir(chart_dir_backup) and not os.path.exists(chart_dir):
-                    os.rename(chart_dir_backup, chart_dir)
     else:
         rc, out, err = (0, "", "")
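The `chart_exists` idempotency helper removed above checks a downloaded `.tgz` by reading `<chart-name>/Chart.yaml` out of the tarball and comparing its name and version. A condensed, self-contained sketch of that check (assumption: `tgz_matches` and `_parse_simple_yaml` are illustrative names, and the toy parser only handles flat `key: value` lines, whereas the real helper uses PyYAML's `yaml.safe_load`):

```python
import tarfile


def _parse_simple_yaml(text):
    # Toy parser: flat "key: value" pairs only; the real module uses PyYAML.
    meta = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith((" ", "#")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip("'\"")
    return meta


def tgz_matches(chart_path, chart_name, chart_version):
    """Return True if the tarball's <chart-name>/Chart.yaml matches the
    expected chart name and version; any read/parse failure means False."""
    try:
        with tarfile.open(chart_path, "r:gz") as tar:
            # extractfile raises KeyError when the member path is absent
            member = tar.extractfile("{0}/Chart.yaml".format(chart_name))
            if member is None:
                return False
            meta = _parse_simple_yaml(member.read().decode("utf-8"))
    except (tarfile.TarError, KeyError, OSError):
        return False
    return meta.get("name") == chart_name and meta.get("version") == chart_version
```

With the `force` option gone in this compare, the module always re-runs `helm pull`; this check is what previously let it exit early with `changed: false`.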
@@ -20,7 +20,7 @@ author:
   - Yuriy Novostavskiy (@yurnov)

 requirements:
-  - "helm (https://github.com/helm/helm/releases) >= 3.8.0, <4.0.0"
+  - "helm (https://github.com/helm/helm/releases) >= 3.8.0"

 description:
   - Helm registry authentication module allows you to login C(helm registry login) and logout C(helm registry logout) from a Helm registry.
@@ -194,9 +194,6 @@ def main():
         supports_check_mode=True,
     )

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     changed = False
     host = module.params.get("host")
@@ -295,9 +295,6 @@ def main():
     if not IMP_YAML:
         module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     changed = False
     repo_name = module.params.get("repo_name")
@@ -147,6 +147,13 @@ options:
       - json
       - file
     version_added: 2.4.0
+  plain_http:
+    description:
+      - Use HTTP instead of HTTPS when working with OCI registries
+      - Requires Helm >= 3.13.0
+    type: bool
+    default: False
+    version_added: 6.1.0
 """

 EXAMPLES = r"""
@@ -218,6 +225,9 @@ from ansible.module_utils.basic import missing_required_lib
 from ansible_collections.kubernetes.core.plugins.module_utils.helm import (
     AnsibleHelmModule,
 )
+from ansible_collections.kubernetes.core.plugins.module_utils.version import (
+    LooseVersion,
+)

 def template(
@@ -236,6 +246,7 @@ def template(
     values_files=None,
     include_crds=False,
     set_values=None,
+    plain_http=False,
 ):
     cmd += " template "
@@ -262,6 +273,9 @@ def template(
     if insecure_registry:
         cmd += " --insecure-skip-tls-verify"

+    if plain_http:
+        cmd += " --plain-http"
+
     if show_only:
         for template in show_only:
             cmd += " -s " + template
@@ -307,6 +321,7 @@ def main():
             values_files=dict(type="list", default=[], elements="str"),
             update_repo_cache=dict(type="bool", default=False),
             set_values=dict(type="list", elements="dict"),
+            plain_http=dict(type="bool", default=False),
         ),
         supports_check_mode=True,
     )
@@ -327,15 +342,22 @@ def main():
     values_files = module.params.get("values_files")
     update_repo_cache = module.params.get("update_repo_cache")
     set_values = module.params.get("set_values")
+    plain_http = module.params.get("plain_http")

     if not IMP_YAML:
         module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)

-    # Validate Helm version >=3.0.0,<4.0.0
-    module.validate_helm_version()
-
     helm_cmd = module.get_helm_binary()

+    if plain_http:
+        helm_version = module.get_helm_version()
+        if LooseVersion(helm_version) < LooseVersion("3.13.0"):
+            module.fail_json(
+                msg="plain_http requires helm >= 3.13.0, current version is {0}".format(
+                    helm_version
+                )
+            )
+
     if update_repo_cache:
         update_cmd = helm_cmd + " repo update"
         module.run_helm_command(update_cmd)
@@ -360,6 +382,7 @@ def main():
         values_files=values_files,
         include_crds=include_crds,
         set_values=set_values_args,
+        plain_http=plain_http,
     )

     if not check_mode:
@@ -383,24 +383,28 @@ result:
     contains:
       api_version:
         description: The versioned schema of this representation of an object.
-        returned: when O(resource_definition) or O(src) contains a single object.
+        returned: success
         type: str
       kind:
         description: Represents the REST resource this object represents.
-        returned: when O(resource_definition) or O(src) contains a single object.
+        returned: success
         type: str
       metadata:
         description: Standard object metadata. Includes name, namespace, annotations, labels, etc.
-        returned: when O(resource_definition) or O(src) contains a single object.
+        returned: success
         type: complex
       spec:
         description: Specific attributes of the object. Will vary based on the I(api_version) and I(kind).
-        returned: when O(resource_definition) or O(src) contains a single object.
+        returned: success
         type: complex
       status:
         description: Current status details for the object.
-        returned: when O(resource_definition) or O(src) contains a single object.
+        returned: success
         type: complex
+      items:
+        description: Returned only when multiple yaml documents are passed to src or resource_definition
+        returned: when resource_definition or src contains list of objects
+        type: list
       duration:
         description: elapsed time of task in seconds
         returned: when C(wait) is true
@@ -410,46 +414,6 @@ result:
       error:
         description: error while trying to create/delete the object.
         returned: error
         type: complex
-  results:
-    description: An array of created, patched, or otherwise present objects.
-    returned: when O(resource_definition) or O(src) contains a list of objects.
-    type: complex
-    contains:
-      api_version:
-        description: The versioned schema of this representation of an object.
-        returned: when O(resource_definition) or O(src) contains a single object.
-        type: str
-      kind:
-        description: Represents the REST resource this object represents.
-        returned: when O(resource_definition) or O(src) contains a single object.
-        type: str
-      metadata:
-        description: Standard object metadata. Includes name, namespace, annotations, labels, etc.
-        returned: when O(resource_definition) or O(src) contains a single object.
-        type: complex
-      spec:
-        description: Specific attributes of the object. Will vary based on the I(api_version) and I(kind).
-        returned: when O(resource_definition) or O(src) contains a single object.
-        type: complex
-      status:
-        description: Current status details for the object.
-        returned: when O(resource_definition) or O(src) contains a single object.
-        type: complex
-      duration:
-        description: elapsed time of task in seconds
-        returned: when C(wait) is true
-        type: int
-        sample: 48
-      error:
-        description: error while trying to create/delete the object.
-        returned: error
-        type: complex
-      method:
-        description:
-          - The method used to deploy the resource.
-        returned: success
-        type: str
-        sample: create
 """
 
 import copy

View File

@@ -79,7 +79,6 @@ options:
 notes:
   - the tar binary is required on the container when copying from local filesystem to pod.
-  - the (init) container has to be started before you copy files or directories to it.
 """
 
 EXAMPLES = r"""

View File

@@ -441,8 +441,7 @@ class K8sDrainAnsible(object):
             warnings.append(warn)
         result.append("{0} Pod(s) deleted from node.".format(number_pod))
         if warnings:
-            for warning in warnings:
-                self._module.warn(warning)
+            return dict(result=" ".join(result), warnings=warnings)
         return dict(result=" ".join(result))
 
     def patch_node(self, unschedulable):

View File

@@ -120,21 +120,6 @@ EXAMPLES = r"""
     namespace: default
     wait_sleep: 10
     wait_timeout: 360
-
-- name: Wait for OpenShift bootstrap to complete
-  kubernetes.core.k8s_info:
-    api_version: v1
-    kind: ConfigMap
-    name: bootstrap
-    namespace: kube-system
-  register: ocp_bootstrap_status
-  until: >
-    ocp_bootstrap_status.resources is defined and
-    (ocp_bootstrap_status.resources | length > 0) and
-    (ocp_bootstrap_status.resources[0].data.status is defined) and
-    (ocp_bootstrap_status.resources[0].data.status == 'complete')
-  retries: 60
-  delay: 15
 """
 
 RETURN = r"""

View File

@@ -33,6 +33,14 @@ options:
     aliases:
       - api
       - version
+  hidden_fields:
+    description:
+      - List of fields to hide from the diff output.
+      - This is useful for fields that are not relevant to the patch operation, such as `metadata.managedFields`.
+    type: list
+    elements: str
+    default: []
+    version_added: 6.1.0
   kind:
     description:
       - Use to specify an object model.
@@ -147,6 +155,7 @@ from ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions imp
 )
 from ansible_collections.kubernetes.core.plugins.module_utils.k8s.service import (
     diff_objects,
+    hide_fields,
 )
 from ansible_collections.kubernetes.core.plugins.module_utils.k8s.waiter import (
     get_waiter,
@@ -174,6 +183,7 @@ JSON_PATCH_ARGS = {
     "namespace": {"type": "str"},
     "name": {"type": "str", "required": True},
     "patch": {"type": "list", "required": True, "elements": "dict"},
+    "hidden_fields": {"type": "list", "elements": "str", "default": []},
 }
@@ -203,6 +213,7 @@ def execute_module(module, client):
     namespace = module.params.get("namespace")
     patch = module.params.get("patch")
+    hidden_fields = module.params.get("hidden_fields")
     wait = module.params.get("wait")
     wait_sleep = module.params.get("wait_sleep")
     wait_timeout = module.params.get("wait_timeout")
@@ -260,13 +271,13 @@ def execute_module(module, client):
         module.fail_json(msg=msg, error=to_native(exc), status="", reason="")
     success = True
-    result = {"result": obj}
+    result = {"result": hide_fields(obj, hidden_fields)}
     if wait and not module.check_mode:
         waiter = get_waiter(client, resource, condition=wait_condition)
         success, result["result"], result["duration"] = waiter.wait(
             wait_timeout, wait_sleep, name, namespace
         )
-    match, diffs = diff_objects(existing.to_dict(), obj)
+    match, diffs = diff_objects(existing.to_dict(), obj, hidden_fields)
     result["changed"] = not match
     if module._diff:
         result["diff"] = diffs
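The hunk above hides fields before the diff is computed, so hidden fields can no longer trip the `changed` condition. For plain dotted paths like `metadata.managedFields`, a hide_fields-style helper boils down to the sketch below (a simplified illustration; the collection's real implementation also handles list indices and more complex path syntax):

```python
import copy


def hide_fields(obj, hidden_fields):
    """Return a copy of obj with the given dotted-path fields removed.

    Only plain dotted paths (e.g. 'metadata.managedFields') are handled here.
    The original object is left untouched.
    """
    result = copy.deepcopy(obj)
    for path in hidden_fields or []:
        parts = path.split(".")
        parent = result
        # Walk down to the parent of the field to hide.
        for key in parts[:-1]:
            if not isinstance(parent, dict) or key not in parent:
                parent = None
                break
            parent = parent[key]
        if isinstance(parent, dict):
            parent.pop(parts[-1], None)
    return result
```

Diffing two objects that differ only in a hidden field then compares equal, which is exactly why a webhook-mutated `metadata.managedFields` no longer reports `changed: yes`.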

View File

@@ -168,9 +168,7 @@ def perform_action(svc, resource):
             module.params["kind"],
             resource["metadata"]["name"],
         )
-        if warn:
-            module.warn(warn)
-        result = {"changed": False}
+        result = {"changed": False, "warnings": [warn]}
         return result
 
     if module.params["kind"] == "Deployment":

View File

@@ -243,12 +243,10 @@ def execute_module(client, module):
         module.fail_json(msg=error, **return_attributes)
 
     def _continue_or_exit(warn):
-        if warn:
-            module.warn(warn)
         if multiple_scale:
-            return_attributes["results"].append({"changed": False})
+            return_attributes["results"].append({"warning": warn, "changed": False})
         else:
-            module.exit_json(**return_attributes)
+            module.exit_json(warning=warn, **return_attributes)
 
     for existing in existing_items:
         if kind.lower() == "job":

View File

@@ -29,3 +29,4 @@ test_namespace:
   - "helm-chart-with-space-into-name"
   - "helm-reset-then-reuse-values"
   - "helm-insecure"
+  - "helm-test-take-ownership"

View File

@@ -7,4 +7,3 @@
   - "v3.15.4"
   - "v3.16.0"
   - "v3.17.0"
-  - "v4.0.0"

View File

@@ -13,47 +13,42 @@
   include_role:
     name: install_helm
 
-- name: Main helm tests with Helm v3
-  when: helm_version != "v4.0.0"
-  block:
-    - name: "Ensure we honor the environment variables"
-      include_tasks: test_read_envvars.yml
-      when: helm_version != "v4.0.0"
-
-    - name: Deploy charts
-      include_tasks: "tests_chart/{{ test_chart_type }}.yml"
-      loop_control:
-        loop_var: test_chart_type
-      with_items:
-        - from_local_path
-        - from_repository
-        - from_url
-
-    - name: test helm upgrade with reuse_values
-      include_tasks: test_helm_reuse_values.yml
-
-    - name: test helm upgrade with reset_then_reuse_values
-      include_tasks: test_helm_reset_then_reuse_values.yml
-
-    - name: test helm dependency update
-      include_tasks: test_up_dep.yml
-
-    - name: Test helm uninstall
-      include_tasks: test_helm_uninstall.yml
-
-    - name: Test helm install with chart name containing space
-      include_tasks: test_helm_with_space_into_chart_name.yml
-
-    # https://github.com/ansible-collections/community.kubernetes/issues/296
-    - name: Test Skip CRDS feature in helm chart install
-      include_tasks: test_crds.yml
-
-    - name: Test insecure registry flag feature
-      include_tasks: test_helm_insecure.yml
-
-- name: Test helm version
-  include_tasks: test_helm_version.yml
+- name: "Ensure we honor the environment variables"
+  include_tasks: test_read_envvars.yml
+
+- name: Deploy charts
+  include_tasks: "tests_chart/{{ test_chart_type }}.yml"
+  loop_control:
+    loop_var: test_chart_type
+  with_items:
+    - from_local_path
+    - from_repository
+    - from_url
+
+- name: test helm upgrade with reuse_values
+  include_tasks: test_helm_reuse_values.yml
+
+- name: test helm upgrade with reset_then_reuse_values
+  include_tasks: test_helm_reset_then_reuse_values.yml
+
+- name: test helm dependency update
+  include_tasks: test_up_dep.yml
+
+- name: Test helm uninstall
+  include_tasks: test_helm_uninstall.yml
+
+- name: Test helm install with chart name containing space
+  include_tasks: test_helm_with_space_into_chart_name.yml
+
+# https://github.com/ansible-collections/community.kubernetes/issues/296
+- name: Test Skip CRDS feature in helm chart install
+  include_tasks: test_crds.yml
+
+- name: Test insecure registry flag feature
+  include_tasks: test_helm_insecure.yml
+
+- name: Test take ownership flag feature
+  include_tasks: test_helm_take_ownership.yml
 
 - name: Clean helm install
   file:

View File

@@ -36,7 +36,7 @@
   that:
     - install is changed
     - '"--reset-then-reuse-values" not in install.command'
-    - release_value["status"]["release_values"] == chart_release_values
+    - release_value["status"]["values"] == chart_release_values
 
 - name: Upgrade chart using reset_then_reuse_values=true
   helm:
@@ -64,7 +64,7 @@
     - '"--reset-then-reuse-values" in upgrade.command'
     - '"--reuse-values " not in upgrade.command'
     - '"--reset-values" not in upgrade.command'
-    - release_value["status"]["release_values"] == chart_release_values | combine(chart_reset_then_reuse_values, recursive=true)
+    - release_value["status"]["values"] == chart_release_values | combine(chart_reset_then_reuse_values, recursive=true)
 
 always:
   - name: Remove helm namespace
View File

@@ -36,7 +36,7 @@
   that:
     - install is changed
     - '"--reuse-values=True" not in install.command'
-    - release_value["status"]["release_values"] == chart_release_values
+    - release_value["status"]["values"] == chart_release_values
 
 - name: Upgrade chart using reuse_values=true
   helm:
@@ -62,7 +62,7 @@
     - upgrade is changed
     - '"--reuse-values=True" in upgrade.command'
     - '"--reset-values" not in upgrade.command'
-    - release_value["status"]["release_values"] == chart_release_values | combine(chart_reuse_values, recursive=true)
+    - release_value["status"]["values"] == chart_release_values | combine(chart_reuse_values, recursive=true)
 
 always:
   - name: Remove helm namespace

View File

@@ -0,0 +1,81 @@
---
- name: Test helm take ownership
vars:
helm_namespace: "{{ test_namespace[13] }}"
block:
- name: Initial chart installation (no flag set)
helm:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_test_oci }}"
release_name: test-take-ownership
release_namespace: "{{ helm_namespace }}"
create_namespace: true
register: install
- name: Validate that take-ownership flag is not set
assert:
that:
- install is changed
- '"--take-ownership" not in install.command'
- name: Upgrade chart (take-ownership flag set)
helm:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_test_oci }}"
release_name: test-take-ownership
release_namespace: "{{ helm_namespace }}"
take_ownership: true
values:
commonLabels:
take-ownership: "set"
register: upgrade
ignore_errors: true
- name: Validate that take-ownership flag IS set if helm version is >= 3.17.0
assert:
that:
- upgrade is changed
- '"--take-ownership" in upgrade.command'
when: '"v3.17.0" <= helm_version'
- name: Validate that feature fails for helm < 3.17.0
assert:
that:
- upgrade is failed
- '"take_ownership requires helm >= 3.17.0" in upgrade.msg'
when: 'helm_version < "v3.17.0"'
- name: Upgrade chart (take-ownership flag not set)
helm:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_test_oci }}"
release_name: test-take-ownership
release_namespace: "{{ helm_namespace }}"
values:
commonLabels:
take-ownership: "not-set"
register: upgrade
ignore_errors: true
- name: Validate that take-ownership flag is NOT set if helm version is >= 3.17.0
assert:
that:
- upgrade is changed
- '"--take-ownership" not in upgrade.command'
when: '"v3.17.0" <= helm_version'
- name: Validate that upgrade succeeds without the flag for helm < 3.17.0
assert:
that:
- upgrade is changed
- upgrade.msg is not defined
when: 'helm_version < "v3.17.0"'
always:
- name: Remove helm namespace
k8s:
api_version: v1
kind: Namespace
name: "{{ helm_namespace }}"
state: absent
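The test flow above exercises a version-gated flag: `--take-ownership` is only appended for helm >= 3.17.0, and older binaries fail with the message the second assert checks for. The gating can be sketched as follows (hypothetical helper names; the real module reports the error via `fail_json` rather than raising):

```python
def parse_helm_version(version):
    """'v3.17.0' -> (3, 17, 0) for tuple comparison."""
    return tuple(int(part) for part in version.lstrip("v").split("."))


def build_upgrade_flags(helm_version, take_ownership=False):
    """Append --take-ownership only when supported, otherwise reject the request."""
    flags = []
    if take_ownership:
        if parse_helm_version(helm_version) < (3, 17, 0):
            raise ValueError(
                "take_ownership requires helm >= 3.17.0, current version is {0}".format(
                    helm_version
                )
            )
        flags.append("--take-ownership")
    return flags
```

When the option is left unset, no version check runs at all, which is why the final upgrade in the test succeeds even on older helm binaries.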

View File

@@ -1,47 +0,0 @@
---
- name: Test helm reuse_values
vars:
helm_namespace: "{{ test_namespace[12] }}"
chart_release_values:
replica:
replicaCount: 3
master:
count: 1
kind: Deployment
chart_reuse_values:
replica:
replicaCount: 1
master:
count: 3
block:
- name: Initial chart installation
helm:
binary_path: "{{ helm_binary }}"
chart_ref: oci://registry-1.docker.io/bitnamicharts/redis
release_name: test-redis
release_namespace: "{{ helm_namespace }}"
create_namespace: true
release_values: "{{ chart_release_values }}"
register: install
ignore_errors: true
when: helm_version == "v4.0.0"
- name: Debug install result
debug:
var: install
when: helm_version == "v4.0.0"
- name: Ensure helm installation was failed for v4.0.0
assert:
that:
- install is failed
- "'Helm version must be >=3.0.0,<4.0.0' in install.msg"
when: helm_version == "v4.0.0"
always:
- name: Remove helm namespace
k8s:
api_version: v1
kind: Namespace
name: "{{ helm_namespace }}"
state: absent

View File

@@ -57,7 +57,7 @@
   that:
     - install_check_mode is changed
     - install_check_mode.status is defined
-    - install_check_mode.status.release_values is defined
+    - install_check_mode.status.values is defined
 
 - name: "Install {{ chart_test }} from {{ source }}"
   helm:
@@ -131,7 +131,7 @@
     - install is changed
     - install.status.status | lower == 'deployed'
     - install.status.chart == chart_test+"-"+chart_test_version
-    - "install.status['release_values'].revisionHistoryLimit == 0"
+    - "install.status['values'].revisionHistoryLimit == 0"
 
 - name: Check idempotency after adding vars
   helm:
@@ -149,7 +149,7 @@
     - install is not changed
     - install.status.status | lower == 'deployed'
     - install.status.chart == chart_test+"-"+chart_test_version
-    - "install.status['release_values'].revisionHistoryLimit == 0"
+    - "install.status['values'].revisionHistoryLimit == 0"
 
 - name: "Remove Vars to {{ chart_test }} from {{ source }}"
   helm:
@@ -166,7 +166,7 @@
     - install is changed
     - install.status.status | lower == 'deployed'
     - install.status.chart == chart_test+"-"+chart_test_version
-    - install.status['release_values'] == {}
+    - install.status['values'] == {}
 
 - name: Check idempotency after removing vars
   helm:
@@ -183,7 +183,7 @@
     - install is not changed
     - install.status.status | lower == 'deployed'
     - install.status.chart == chart_test+"-"+chart_test_version
-    - install.status['release_values'] == {}
+    - install.status['values'] == {}
 
 - name: "Upgrade {{ chart_test }} from {{ source }}"
   helm:
@@ -317,7 +317,7 @@
     - install is changed
     - install.status.status | lower == 'deployed'
     - install.status.chart == chart_test+"-"+chart_test_version
-    - "install.status['release_values'].revisionHistoryLimit == 0"
+    - "install.status['values'].revisionHistoryLimit == 0"
 
 - name: "Install {{ chart_test }} from {{ source }} with values_files (again)"
   helm:
@@ -402,7 +402,7 @@
   namespace: "{{ helm_namespace }}"
   create_namespace: true
   context: does-not-exist
-  ignore_errors: true
+  ignore_errors: yes
   register: result
 
 - name: Assert that release fails with non-existent context

View File

@@ -0,0 +1,3 @@
helm_template
helm_pull
helm

View File

@@ -0,0 +1,3 @@
[all]
helm-3.12.3 helm_version=v3.12.3 test_namespace=helm-plain-http-v3-12-3 tests_should_failed=true
helm-3.18.2 helm_version=v3.18.2 test_namespace=helm-plain-http-v3-18-2 tests_should_failed=false

View File

@@ -0,0 +1,14 @@
- name: Run test for helm plain http option
hosts: all
gather_facts: true
vars:
ansible_connection: local
ansible_python_interpreter: "{{ ansible_playbook_python }}"
chart_test_oci: "oci://registry-1.docker.io/bitnamicharts/redis"
roles:
- setup_namespace
tasks:
- ansible.builtin.include_tasks: tasks/test.yaml

View File

@@ -0,0 +1,99 @@
---
- name: Run test for helm
block:
- name: Create temporary directory to install chart In
ansible.builtin.tempfile:
state: directory
suffix: .helm
register: install_path
- name: Install required helm version
ansible.builtin.include_role:
name: install_helm
vars:
helm_install_path: "{{ install_path.path }}"
- name: Set helm binary path
ansible.builtin.set_fact:
helm_binary: "{{ install_path.path }}/{{ ansible_system | lower }}-amd64/helm"
# helm
- name: Run helm with plain_http
kubernetes.core.helm:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_test_oci }}"
release_name: test-secure
release_namespace: "{{ test_namespace }}"
create_namespace: true
plain_http: true
register: install_chart
ignore_errors: true
- name: Ensure module failed as expected
ansible.builtin.assert:
that:
- install_chart is failed
- '"plain_http requires helm >= 3.13.0" in install_chart.msg'
when: tests_should_failed | bool
- name: Ensure the result command contains the expected option
ansible.builtin.assert:
that:
- install_chart is not failed
- '"--plain-http" in install_chart.command'
when: not (tests_should_failed | bool)
# helm_pull
- name: Trying to download helm chart with option plain_http
kubernetes.core.helm_pull:
chart_ref: "{{ chart_test_oci }}"
destination: "{{ playbook_dir }}"
binary_path: "{{ helm_binary }}"
plain_http: true
register: pull_chart
ignore_errors: true
- name: Ensure module failed as expected
ansible.builtin.assert:
that:
- pull_chart is failed
- '"plain_http requires helm >= 3.13.0" in pull_chart.msg'
when: tests_should_failed | bool
- name: Ensure the result command contains the expected option
ansible.builtin.assert:
that:
- pull_chart is not failed
- '"--plain-http" in pull_chart.command'
when: not (tests_should_failed | bool)
# helm_template
- name: Test helm render template
kubernetes.core.helm_template:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_test_oci }}"
output_dir: "{{ playbook_dir }}"
plain_http: true
register: template
ignore_errors: true
- name: Ensure module failed as expected
ansible.builtin.assert:
that:
- template is failed
- '"plain_http requires helm >= 3.13.0" in template.msg'
when: tests_should_failed | bool
- name: Ensure the result command contains the expected option
ansible.builtin.assert:
that:
- template is not failed
- '"--plain-http" in template.command'
when: not (tests_should_failed | bool)
always:
- name: Delete temporary file
ansible.builtin.file:
path: "{{ install_path.path }}"
state: absent
ignore_errors: true

View File

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=../
ansible-playbook playbooks/play.yaml -i inventory.ini "$@"

View File

@@ -47,7 +47,7 @@
   assert:
     that:
       - _result is failed
-      - _result.msg == "Helm version must be >=3.0.0,<4.0.0, current version is 2.3.0"
+      - _result.msg == "This module requires helm >= 3.0.0, current version is 2.3.0"
   vars:
     helm_path: "{{ temp_dir }}/2.3.0/linux-amd64/helm"
@@ -221,101 +221,6 @@
       - _chart.stat.exists
       - _chart.stat.isdir
-
-  # Test idempotency with tarred chart
-  - name: Download chart with version (first time)
-    helm_pull:
-      binary_path: "{{ helm_path }}"
-      chart_ref: "oci://registry-1.docker.io/bitnamicharts/redis"
-      destination: "{{ destination }}"
-      chart_version: "24.1.0"
-    register: _result_first
-
-  - name: Download chart with version (second time - should be idempotent)
-    helm_pull:
-      binary_path: "{{ helm_path }}"
-      chart_ref: "oci://registry-1.docker.io/bitnamicharts/redis"
-      destination: "{{ destination }}"
-      chart_version: "24.1.0"
-    register: _result_second
-
-  - name: Validate idempotency for tarred chart
-    assert:
-      that:
-        - _result_first is changed
-        - _result_second is not changed
-
-  # Test force parameter with tarred chart
-  - name: Download chart with force=true (should always download)
-    helm_pull:
-      binary_path: "{{ helm_path }}"
-      chart_ref: "oci://registry-1.docker.io/bitnamicharts/redis"
-      destination: "{{ destination }}"
-      chart_version: "24.1.0"
-      force: true
-    register: _result_force
-
-  - name: Validate force parameter causes download
-    assert:
-      that:
-        - _result_force is changed
-
-  # Test idempotency with untarred chart in the separate folder
-  - name: Create separate directory for untar test under {{ temp_dir }}
-    ansible.builtin.file:
-      path: "{{ destination }}/untar_test"
-      state: directory
-      mode: '0755'
-
-  - name: Download and untar chart (first time)
-    helm_pull:
-      binary_path: "{{ helm_path }}"
-      chart_ref: "oci://registry-1.docker.io/bitnamicharts/redis"
-      destination: "{{ destination }}/untar_test"
-      chart_version: "24.0.0"
-      untar_chart: true
-    register: _result_untar_first
-
-  - name: Download and untar chart (second time - should be idempotent)
-    helm_pull:
-      binary_path: "{{ helm_path }}"
-      chart_ref: "oci://registry-1.docker.io/bitnamicharts/redis"
-      destination: "{{ destination }}/untar_test"
-      chart_version: "24.0.0"
-      untar_chart: true
-    register: _result_untar_second
-
-  - name: Validate idempotency for untarred chart
-    assert:
-      that:
-        - _result_untar_first is changed
-        - _result_untar_second is not changed
-
-  - name: Download and untar chart with force=true (should remove existing directory and re-extract)
-    helm_pull:
-      binary_path: "{{ helm_path }}"
-      chart_ref: "oci://registry-1.docker.io/bitnamicharts/redis"
-      destination: "{{ destination }}/untar_test"
-      chart_version: "24.0.0"
-      untar_chart: true
-      force: true
-    register: _result_untar_force
-
-  - name: Validate first force extraction works
-    assert:
-      that:
-        - _result_untar_force is changed
-
-  - name: Verify chart directory still exists after force re-extraction
-    stat:
-      path: "{{ destination }}/untar_test/redis"
-    register: _chart_after_force
-
-  - name: Validate chart directory exists
-    assert:
-      that:
-        - _chart_after_force.stat.exists
-        - _chart_after_force.stat.isdir
   vars:
     helm_path: "{{ temp_dir }}/3.8.0/linux-amd64/helm"

View File

@@ -20,10 +20,10 @@
 - name: Assert that release was created with user-defined variables
   assert:
     that:
-      - '"phase" in user_values.status["release_values"]'
-      - '"versioned" in user_values.status["release_values"]'
-      - user_values.status["release_values"]["phase"] == "integration"
-      - user_values.status["release_values"]["versioned"] is false
+      - '"phase" in user_values.status["values"]'
+      - '"versioned" in user_values.status["values"]'
+      - user_values.status["values"]["phase"] == "integration"
+      - user_values.status["values"]["versioned"] is false
 
 # install chart using set_values and release_values
 - name: Install helm binary (> 3.10.0) requires to use set-json
@@ -55,10 +55,10 @@
 - name: Assert that release was created with user-defined variables
   assert:
     that:
-      - values.status["release_values"].replicaCount == 3
-      - values.status["release_values"].master.image.registry == "docker.io"
-      - values.status["release_values"].master.image.repository == "bitnami/apache"
-      - values.status["release_values"].master.image.tag == "2.4.54-debian-11-r74"
+      - values.status["values"].replicaCount == 3
+      - values.status["values"].master.image.registry == "docker.io"
+      - values.status["values"].master.image.repository == "bitnami/apache"
+      - values.status["values"].master.image.tag == "2.4.54-debian-11-r74"
 
 # install chart using set_values and values_files
 - name: create temporary file to save values in
@@ -96,8 +96,8 @@
 - name: Assert that release was created with user-defined variables
   assert:
     that:
-      - values.status["release_values"].mode == "distributed"
-      - values.status["release_values"].disableWebUI is true
+      - values.status["values"].mode == "distributed"
+      - values.status["values"].disableWebUI is true
 
 always:
   - name: Delete temporary file

View File

@@ -1,3 +0,0 @@
context/target
time=42
k8s

View File

@@ -1,46 +0,0 @@
---
- name: Create inventory files
hosts: localhost
gather_facts: false
collections:
- kubernetes.core
roles:
- role: setup_kubeconfig
kubeconfig_operation: 'save'
tasks:
- name: Create inventory files
copy:
content: "{{ item.content }}"
dest: "{{ item.path }}"
vars:
hostname: "{{ lookup('file', user_credentials_dir + '/host_data.txt') }}"
test_cert_file: "{{ user_credentials_dir | realpath + '/cert_file_data.txt' }}"
test_key_file: "{{ user_credentials_dir | realpath + '/key_file_data.txt' }}"
test_ca_cert: "{{ user_credentials_dir | realpath + '/ssl_ca_cert_data.txt' }}"
with_items:
- path: "test_inventory_aliases_with_ssl_k8s.yml"
content: |
---
plugin: kubernetes.core.k8s
connections:
- namespaces:
- inventory
host: "{{ hostname }}"
cert_file: "{{ test_cert_file }}"
key_file: "{{ test_key_file }}"
verify_ssl: true
ssl_ca_cert: "{{ test_ca_cert }}"
- path: "test_inventory_aliases_no_ssl_k8s.yml"
content: |
---
plugin: kubernetes.core.k8s
connections:
- namespaces:
- inventory
host: "{{ hostname }}"
cert_file: "{{ test_cert_file }}"
key_file: "{{ test_key_file }}"
verify_ssl: false

View File

@@ -1,30 +0,0 @@
---
- name: Delete inventory namespace
hosts: localhost
connection: local
gather_facts: true
roles:
- role: setup_kubeconfig
kubeconfig_operation: 'revert'
tasks:
- name: Delete temporary files
file:
state: absent
path: "{{ user_credentials_dir ~ '/' ~ item }}"
ignore_errors: true
with_items:
- test_inventory_aliases_with_ssl_k8s.yml
- test_inventory_aliases_no_ssl_k8s.yml
- ssl_ca_cert_data.txt
- key_file_data.txt
- cert_file_data.txt
- host_data.txt
- name: Remove inventory namespace
k8s:
api_version: v1
kind: Namespace
name: inventory
state: absent

View File

@@ -1,90 +0,0 @@
---
- name: Converge
hosts: localhost
connection: local
collections:
- kubernetes.core
vars_files:
- vars/main.yml
tasks:
- name: Delete existing namespace
k8s:
api_version: v1
kind: Namespace
name: inventory
wait: yes
state: absent
- name: Ensure namespace exists
k8s:
api_version: v1
kind: Namespace
name: inventory
- name: Add a deployment
k8s:
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: inventory
namespace: inventory
spec:
replicas: 1
selector:
matchLabels:
app: "{{ k8s_pod_name }}"
template: "{{ k8s_pod_template }}"
wait: yes
wait_timeout: 400
vars:
k8s_pod_name: inventory
k8s_pod_image: python
k8s_pod_command:
- python
- '-m'
- http.server
k8s_pod_env:
- name: TEST
value: test
- meta: refresh_inventory
- name: Verify inventory and connection plugins
hosts: namespace_inventory_pods
gather_facts: no
vars:
file_content: |
Hello world
tasks:
- name: End play if host not running (TODO should we not add these to the inventory?)
meta: end_host
when: pod_phase != "Running"
- debug: var=hostvars
- setup:
- debug: var=ansible_facts
- name: Assert the TEST environment variable was retrieved
assert:
that: ansible_facts.env.TEST == 'test'
- name: Copy a file into the host
copy:
content: '{{ file_content }}'
dest: /tmp/test_file
- name: Retrieve the file from the host
slurp:
src: /tmp/test_file
register: slurped_file
- name: Assert the file content matches expectations
assert:
that: (slurped_file.content|b64decode) == file_content

View File

@@ -1,2 +0,0 @@
---
plugin: kubernetes.core.k8s

View File

@@ -1,38 +0,0 @@
---
k8s_pod_metadata:
  labels:
    app: "{{ k8s_pod_name }}"
k8s_pod_spec:
  serviceAccount: "{{ k8s_pod_service_account }}"
  containers:
    - image: "{{ k8s_pod_image }}"
      imagePullPolicy: Always
      name: "{{ k8s_pod_name }}"
      command: "{{ k8s_pod_command }}"
      readinessProbe:
        initialDelaySeconds: 15
        exec:
          command:
            - /bin/true
      resources: "{{ k8s_pod_resources }}"
      ports: "{{ k8s_pod_ports }}"
      env: "{{ k8s_pod_env }}"
k8s_pod_service_account: default
k8s_pod_resources:
  limits:
    cpu: "100m"
    memory: "100Mi"
k8s_pod_command: []
k8s_pod_ports: []
k8s_pod_env: []
k8s_pod_template:
  metadata: "{{ k8s_pod_metadata }}"
  spec: "{{ k8s_pod_spec }}"


@@ -1,30 +0,0 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH="../"
USER_CREDENTIALS_DIR=$(pwd)
ansible-playbook playbooks/delete_resources.yml -e "user_credentials_dir=${USER_CREDENTIALS_DIR}" "$@"
{
export ANSIBLE_CALLBACKS_ENABLED=profile_tasks
export ANSIBLE_INVENTORY_ENABLED=kubernetes.core.k8s,yaml
export ANSIBLE_PYTHON_INTERPRETER=auto_silent
ansible-playbook playbooks/play.yml -i playbooks/test.inventory_k8s.yml "$@" &&
ansible-playbook playbooks/create_resources.yml -e "user_credentials_dir=${USER_CREDENTIALS_DIR}" "$@" &&
ansible-inventory -i playbooks/test_inventory_aliases_with_ssl_k8s.yml --list "$@" &&
ansible-inventory -i playbooks/test_inventory_aliases_no_ssl_k8s.yml --list "$@" &&
unset ANSIBLE_INVENTORY_ENABLED &&
ansible-playbook playbooks/delete_resources.yml -e "user_credentials_dir=${USER_CREDENTIALS_DIR}" "$@"
} || {
ansible-playbook playbooks/delete_resources.yml -e "user_credentials_dir=${USER_CREDENTIALS_DIR}" "$@"
exit 1
}


@@ -1,4 +1,42 @@
 ---
+k8s_pod_metadata:
+  labels:
+    app: "{{ k8s_pod_name }}"
+k8s_pod_spec:
+  serviceAccount: "{{ k8s_pod_service_account }}"
+  containers:
+    - image: "{{ k8s_pod_image }}"
+      imagePullPolicy: Always
+      name: "{{ k8s_pod_name }}"
+      command: "{{ k8s_pod_command }}"
+      readinessProbe:
+        initialDelaySeconds: 15
+        exec:
+          command:
+            - /bin/true
+      resources: "{{ k8s_pod_resources }}"
+      ports: "{{ k8s_pod_ports }}"
+      env: "{{ k8s_pod_env }}"
+k8s_pod_service_account: default
+k8s_pod_resources:
+  limits:
+    cpu: "100m"
+    memory: "100Mi"
+k8s_pod_command: []
+k8s_pod_ports: []
+k8s_pod_env: []
+k8s_pod_template:
+  metadata: "{{ k8s_pod_metadata }}"
+  spec: "{{ k8s_pod_spec }}"
 test_namespace: "apply"
 k8s_wait_timeout: 240


@@ -292,34 +292,36 @@
 - name: Add a deployment
   k8s:
-    namespace: "{{ test_namespace }}"
     definition:
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: apply-deploy
-        labels:
-          app: apply-deploy
+        namespace: "{{ test_namespace }}"
       spec:
         replicas: 1
         selector:
           matchLabels:
-            app: apply-deploy
+            app: "{{ k8s_pod_name }}"
-        template:
-          metadata:
-            labels:
-              app: apply-deploy
-          spec:
-            serviceAccount: apply-deploy
-            containers:
-              - name: nginx
-                image: nginx:latest
-                ports:
-                  - containerPort: 80
+        template: "{{ k8s_pod_template }}"
     wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
     apply: yes
+  vars:
+    k8s_pod_name: apply-deploy
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:v0.10.0-green
+    k8s_pod_service_account: apply-deploy
+    k8s_pod_ports:
+      - containerPort: 8080
+        name: http
+        protocol: TCP
+    k8s_pod_resources:
+      requests:
+        cpu: 100m
+        memory: 100Mi
+      limits:
+        cpu: 100m
+        memory: 100Mi
 - name: Update the earlier deployment in check mode
   k8s:
@@ -333,29 +335,33 @@
         replicas: 1
         selector:
           matchLabels:
-            app: apply-deploy
+            app: "{{ k8s_pod_name }}"
-        template:
-          metadata:
-            labels:
-              app: apply-deploy
-          spec:
-            serviceAccount: apply-deploy
-            containers:
-              - name: nginx-2
-                image: nginx:latest
-                ports:
-                  - containerPort: 80
+        template: "{{ k8s_pod_template }}"
     wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
     apply: yes
   check_mode: yes
+  vars:
+    k8s_pod_name: apply-deploy
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:v0.10.0-purple
+    k8s_pod_service_account: apply-deploy
+    k8s_pod_ports:
+      - containerPort: 8080
+        name: http
+        protocol: TCP
+    k8s_pod_resources:
+      requests:
+        cpu: 50m
+      limits:
+        cpu: 50m
+        memory: 50Mi
   register: update_deploy_check_mode
 - name: Ensure check mode change took
   assert:
     that:
       - update_deploy_check_mode is changed
-      - "update_deploy_check_mode.result.spec.template.spec.containers[0].name == 'nginx-2'"
+      - "update_deploy_check_mode.result.spec.template.spec.containers[0].image == 'gcr.io/kuar-demo/kuard-amd64:v0.10.0-purple'"
 - name: Update the earlier deployment
   k8s:
@@ -369,28 +375,32 @@
         replicas: 1
         selector:
           matchLabels:
-            app: apply-deploy
+            app: "{{ k8s_pod_name }}"
-        template:
-          metadata:
-            labels:
-              app: apply-deploy
-          spec:
-            serviceAccount: apply-deploy
-            containers:
-              - name: nginx-2
-                image: nginx:latest
-                ports:
-                  - containerPort: 80
+        template: "{{ k8s_pod_template }}"
     wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
     apply: yes
+  vars:
+    k8s_pod_name: apply-deploy
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:v0.10.0-purple
+    k8s_pod_service_account: apply-deploy
+    k8s_pod_ports:
+      - containerPort: 8080
+        name: http
+        protocol: TCP
+    k8s_pod_resources:
+      requests:
+        cpu: 50m
+      limits:
+        cpu: 50m
+        memory: 50Mi
   register: update_deploy_for_real
 - name: Ensure change took
   assert:
     that:
       - update_deploy_for_real is changed
-      - "update_deploy_for_real.result.spec.template.spec.containers[0].name == 'nginx-2'"
+      - "update_deploy_for_real.result.spec.template.spec.containers[0].image == 'gcr.io/kuar-demo/kuard-amd64:v0.10.0-purple'"
 - name: Remove the serviceaccount
   k8s:
@@ -414,23 +424,27 @@
         replicas: 1
         selector:
           matchLabels:
-            app: apply-deploy
+            app: "{{ k8s_pod_name }}"
-        template:
-          metadata:
-            labels:
-              app: apply-deploy
-          spec:
-            serviceAccount: apply-deploy
-            containers:
-              - name: nginx-3
-                image: nginx:latest
-                ports:
-                  - containerPort: 80
+        template: "{{ k8s_pod_template }}"
     wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
     apply: yes
-  ignore_errors: true
+  vars:
+    k8s_pod_name: apply-deploy
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:v0.10.0-green
+    k8s_pod_service_account: apply-deploy
+    k8s_pod_ports:
+      - containerPort: 8080
+        name: http
+        protocol: TCP
+    k8s_pod_resources:
+      requests:
+        cpu: 50m
+      limits:
+        cpu: 50m
+        memory: 50Mi
   register: deploy_after_serviceaccount_removal
+  ignore_errors: yes
 - name: Ensure that updating deployment after service account removal failed
   assert:


@@ -14,9 +14,3 @@ pod_with_two_container:
 pod_without_executable_find:
   name: openjdk-pod
-pod_with_initcontainer_and_container:
-  name: pod-copy-2
-  container:
-    - container-20
-    - container-21


@@ -18,23 +18,6 @@
         wait: yes
         template: pods_definition.j2
-    - name: Create Init Pod
-      k8s:
-        namespace: '{{ copy_namespace }}'
-        template: pods_definition_init.j2
-    - kubernetes.core.k8s_info:
-        api_version: v1
-        kind: Pod
-        name: '{{ pod_with_initcontainer_and_container.name }}'
-        namespace: '{{ copy_namespace }}'
-      register: init_pod_status
-      until: >-
-        init_pod_status.resources|length > 0
-        and 'initContainerStatuses' in init_pod_status.resources.0.status
-        and init_pod_status.resources.0.status.initContainerStatuses|length > 0
-        and init_pod_status.resources.0.status.initContainerStatuses.0.started|bool
     - include_tasks: test_copy_errors.yml
     - include_tasks: test_check_mode.yml
     - include_tasks: test_copy_file.yml
@@ -42,7 +25,6 @@
     - include_tasks: test_copy_directory.yml
     - include_tasks: test_copy_large_file.yml
     - include_tasks: test_copy_item_with_space_in_its_name.yml
-    - include_tasks: test_init_container_pod.yml
   always:


@@ -69,51 +69,49 @@
       ignore_errors: true
       register: _result
-    # - name: Validate that 'find' executable is missing from Pod
-    #   assert:
-    #     that:
-    #       - _result is failed
-    #     fail_msg: "Pod contains 'find' executable, therefore we cannot run the next tasks."
-    - name: Copy directory into Pod without 'find' executable
-      block:
-        - name: Copy files into container
-          k8s_cp:
-            namespace: "{{ copy_namespace }}"
-            pod: '{{ pod_without_executable_find.name }}'
-            remote_path: '{{ item.path }}'
-            content: '{{ item.content }}'
-            state: to_pod
-          with_items:
-            - path: /ansible/root.txt
-              content: this file is located at the root directory
-            - path: /ansible/.hidden_root.txt
-              content: this hidden file is located at the root directory
-            - path: /ansible/.sudir/root.txt
-              content: this file is located at the root of the sub directory
-            - path: /ansible/.sudir/.hidden_root.txt
-              content: this hidden file is located at the root of the sub directory
-        - name: Delete existing directory
-          file:
-            path: /tmp/openjdk-files
-            state: absent
-          ignore_errors: true
-        - name: copy directory from Pod into local filesystem (new directory to create)
-          k8s_cp:
-            namespace: '{{ copy_namespace }}'
-            pod: '{{ pod_without_executable_find.name }}'
-            remote_path: /ansible
-            local_path: /tmp/openjdk-files
-            state: from_pod
-        - name: Compare directories
-          kubectl_file_compare:
-            namespace: '{{ copy_namespace }}'
-            pod: '{{ pod_without_executable_find.name }}'
-            remote_path: /ansible
-            local_path: /tmp/openjdk-files
+    - name: Validate that 'find' executable is missing from Pod
+      assert:
+        that:
+          - _result is failed
+        fail_msg: "Pod contains 'find' executable, therefore we cannot run the next tasks."
+    - name: Copy files into container
+      k8s_cp:
+        namespace: "{{ copy_namespace }}"
+        pod: '{{ pod_without_executable_find.name }}'
+        remote_path: '{{ item.path }}'
+        content: '{{ item.content }}'
+        state: to_pod
+      with_items:
+        - path: /ansible/root.txt
+          content: this file is located at the root directory
+        - path: /ansible/.hidden_root.txt
+          content: this hidden file is located at the root directory
+        - path: /ansible/.sudir/root.txt
+          content: this file is located at the root of the sub directory
+        - path: /ansible/.sudir/.hidden_root.txt
+          content: this hidden file is located at the root of the sub directory
+    - name: Delete existing directory
+      file:
+        path: /tmp/openjdk-files
+        state: absent
+      ignore_errors: true
+    - name: copy directory from Pod into local filesystem (new directory to create)
+      k8s_cp:
+        namespace: '{{ copy_namespace }}'
+        pod: '{{ pod_without_executable_find.name }}'
+        remote_path: /ansible
+        local_path: /tmp/openjdk-files
+        state: from_pod
+    - name: Compare directories
+      kubectl_file_compare:
+        namespace: '{{ copy_namespace }}'
+        pod: '{{ pod_without_executable_find.name }}'
+        remote_path: /ansible
+        local_path: /tmp/openjdk-files
   always:
     - name: Remove directories created into remote Pod


@@ -67,21 +67,3 @@
         that:
           - copy_fake_container is failed
           - copy_fake_container.msg == "Pod has no container this_is_a_fake_container"
-    # copy file to not started container in pod should fail
-    - name: copy file to not started container in pod should fail
-      k8s_cp:
-        namespace: '{{ copy_namespace }}'
-        pod: '{{ pod_with_initcontainer_and_container.name }}'
-        remote_path: /tmp
-        local_path: files/simple_file.txt
-        state: to_pod
-        container: '{{ pod_with_initcontainer_and_container.container[1] }}'
-      ignore_errors: true
-      register: copy_not_started_container
-    - name: check that error message is as expected
-      assert:
-        that:
-          - copy_not_started_container is failed
-          - copy_not_started_container.msg == "Pod container {{ pod_with_initcontainer_and_container.container[1] }} is not started"


@@ -10,13 +10,9 @@
     path: "{{ test_directory }}"
     state: directory
-- name: Create a text file with specific content
-  ansible.builtin.copy:
-    dest: "{{ test_directory }}/large_text_file.txt"
-    content: |
-      This is a large text file
-      {{ 'Repeat this line 1000 times\n' * 1000 }}
-    mode: '0644'
+- name: Create a large text file
+  ansible.builtin.shell:
+    cmd: base64 /dev/random | head -c 150M > {{ test_directory }}/large_text_file.txt
 - name: Create a large binary file
   ansible.builtin.command:


@@ -1,25 +0,0 @@
---
- set_fact:
    random_content: "{{ lookup('password', '/dev/null chars=ascii_lowercase,digits,punctuation length=128') }}"
- name: Copy content into init container
  k8s_cp:
    namespace: '{{ copy_namespace }}'
    pod: '{{ pod_with_initcontainer_and_container.name }}'
    remote_path: /file_from_localhost.txt
    content: '{{ random_content }}'
    container: '{{ pod_with_initcontainer_and_container.container[0] }}'
    state: to_pod
- name: Get the content from copied file
  kubernetes.core.k8s_exec:
    namespace: '{{ copy_namespace }}'
    pod: '{{ pod_with_initcontainer_and_container.name }}'
    container: '{{ pod_with_initcontainer_and_container.container[0] }}'
    command: cat /file_from_localhost.txt
  register: exec_out
- name: check that content is found and the same as generated earlier
  assert:
    that:
      - exec_out.stdout == random_content


@@ -6,7 +6,7 @@ metadata:
 spec:
   containers:
     - name: '{{ pod_with_one_container.container }}'
-      image: busybox:latest
+      image: busybox
       command:
         - /bin/sh
         - -c
@@ -19,13 +19,13 @@ metadata:
 spec:
   containers:
     - name: '{{ pod_with_two_container.container[0] }}'
-      image: busybox:latest
+      image: busybox:1.32.0
       command:
         - /bin/sh
         - -c
         - while true;do date;sleep 5; done
     - name: '{{ pod_with_two_container.container[1] }}'
-      image: busybox:latest
+      image: busybox:1.33.0
       command:
         - /bin/sh
         - -c
@@ -37,8 +37,8 @@ metadata:
   name: '{{ pod_without_executable_find.name }}'
 spec:
   containers:
-    - name: openjdk
-      image: openjdk:27-ea
+    - name: openjdk17
+      image: openjdk:17
       command:
         - /bin/sh
         - -c


@@ -1,20 +0,0 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: '{{ pod_with_initcontainer_and_container.name }}'
spec:
  initContainers:
    - name: '{{ pod_with_initcontainer_and_container.container[0] }}'
      image: busybox
      command:
        - /bin/sh
        - -c
        - while true;do date;sleep 5; done
  containers:
    - name: '{{ pod_with_initcontainer_and_container.container[1] }}'
      image: busybox
      command:
        - /bin/sh
        - -c
        - while true;do date;sleep 5; done


@@ -24,7 +24,7 @@ spec:
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: busybox-d
+  name: openjdk-d
   labels:
     context: ansible
 spec:
@@ -38,8 +38,8 @@ spec:
       context: ansible
     spec:
       containers:
-        - name: busybox
-          image: busybox:latest
+        - name: openjdk
+          image: openjdk:17
           command:
             - /bin/sh
            - -c


@@ -17,7 +17,7 @@
     wait_timeout: 400
   vars:
     k8s_pod_name: delete-ds
-    k8s_pod_image: docker.io/nginx:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:1
   register: ds
 - name: Check that daemonset wait worked


@@ -0,0 +1,3 @@
k8s_json_patch
k8s
time=33


@@ -0,0 +1,2 @@
---
test_namespace: "k8s-hide-fields"


@@ -0,0 +1,2 @@
dependencies:
- setup_namespace


@@ -0,0 +1,6 @@
---
- connection: local
  gather_facts: false
  hosts: localhost
  roles:
    - k8s_json_patch_hide_fields


@@ -0,0 +1,5 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_CALLBACKS_ENABLED=profile_tasks
export ANSIBLE_ROLES_PATH=../
ansible-playbook playbook.yaml "$@"


@@ -0,0 +1,91 @@
- vars:
    pod: json-patch
    k8s_wait_timeout: 400
  block:
    - name: Create a simple pod
      kubernetes.core.k8s:
        definition:
          apiVersion: v1
          kind: Pod
          metadata:
            namespace: "{{ test_namespace }}"
            name: "{{ pod }}"
            labels:
              label1: foo
          spec:
            containers:
              - image: busybox:musl
                name: busybox
                command:
                  - sh
                  - -c
                  - while true; do echo $(date); sleep 10; done
        wait: yes
        wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
    - name: Add a label, and hide some fields
      kubernetes.core.k8s_json_patch:
        kind: Pod
        namespace: "{{ test_namespace }}"
        name: "{{ pod }}"
        patch:
          - op: add
            path: /metadata/labels/label2
            value: bar
        hidden_fields:
          - metadata.managedFields
      register: hf1
    - name: Ensure hidden fields are not present
      assert:
        that:
          - "'managedFields' not in hf1.result['metadata']"
    - name: Add a label, without hiding our fields
      kubernetes.core.k8s_json_patch:
        kind: Pod
        namespace: "{{ test_namespace }}"
        name: "{{ pod }}"
        patch:
          - op: add
            path: /metadata/labels/label3
            value: bar
        hidden_fields:
          - something.else
      register: hf2
    - name: Ensure hidden fields are present
      assert:
        that:
          - "'managedFields' in hf2.result['metadata']"
    - name: Patching the same resource with missing hidden fields should have no effect
      kubernetes.core.k8s_json_patch:
        kind: Pod
        namespace: "{{ test_namespace }}"
        name: "{{ pod }}"
        patch:
          - op: add
            path: /metadata/labels/label2
            value: bar
        hidden_fields:
          - does.not.exist
      register: hf2
    - name: Ensure no change with missing hidden fields
      assert:
        that:
          - not hf2.changed
  always:
    - name: Remove namespace
      k8s:
        kind: Namespace
        name: "{{ test_namespace }}"
        state: absent
      ignore_errors: true


@@ -81,7 +81,6 @@
     replicas: 1
     current_replicas: 3
     wait: true
-    wait_timeout: 60
   register: scale
 - name: Read deployment


@@ -33,7 +33,6 @@
       ports:
         - containerPort: 80
 - name: Crash the existing deployment
   k8s:
     state: present
@@ -228,7 +227,7 @@
       hostPath:
         path: /var/lib/docker/containers
   register: crash
-  ignore_errors: true
+  ignore_errors: yes
 - name: Assert that the Daemonset failed
   assert:
@@ -403,16 +402,12 @@
     namespace: "{{ namespace }}"
   register: result
-- name: Debug result
-  debug:
-    var: result
 - name: Assert warning is returned for no rollout history
   assert:
     that:
       - not result.changed
-      - result.warnings is defined
-      - "'No rollout history found' in result.warnings[0]"
+      - result.rollback_info[0].warnings is defined
+      - "'No rollout history found' in result.rollback_info[0].warnings[0]"
 - name: Create a service for unsupported resource test
   k8s:


@@ -1,4 +1,42 @@
 ---
+k8s_pod_metadata:
+  labels:
+    app: "{{ k8s_pod_name }}"
+k8s_pod_spec:
+  serviceAccount: "{{ k8s_pod_service_account }}"
+  containers:
+    - image: "{{ k8s_pod_image }}"
+      imagePullPolicy: Always
+      name: "{{ k8s_pod_name }}"
+      command: "{{ k8s_pod_command }}"
+      readinessProbe:
+        initialDelaySeconds: 15
+        exec:
+          command:
+            - /bin/true
+      resources: "{{ k8s_pod_resources }}"
+      ports: "{{ k8s_pod_ports }}"
+      env: "{{ k8s_pod_env }}"
+k8s_pod_service_account: default
+k8s_pod_resources:
+  limits:
+    cpu: "100m"
+    memory: "100Mi"
+k8s_pod_command: []
+k8s_pod_ports: []
+k8s_pod_env: []
+k8s_pod_template:
+  metadata: "{{ k8s_pod_metadata }}"
+  spec: "{{ k8s_pod_spec }}"
 test_namespace: "scale"
 k8s_wait_timeout: 400


@@ -5,32 +5,28 @@
 - name: Add a deployment
   k8s:
-    namespace: "{{ scale_namespace }}"
     definition:
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: scale-deploy
-        labels:
-          app: scale-deploy
+        namespace: "{{ scale_namespace }}"
       spec:
         replicas: 1
         selector:
           matchLabels:
-            app: scale-deploy
+            app: "{{ k8s_pod_name }}"
-        template:
-          metadata:
-            labels:
-              app: scale-deploy
-          spec:
-            containers:
-              - name: nginx
-                image: nginx:latest
-                ports:
-                  - containerPort: 80
+        template: "{{ k8s_pod_template }}"
-    wait: true
+    wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
-    apply: true
+    apply: yes
+  vars:
+    k8s_pod_name: scale-deploy
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:v0.10.0-green
+    k8s_pod_ports:
+      - containerPort: 8080
+        name: http
+        protocol: TCP
 - name: Get pods in scale-deploy
   k8s_info:
@@ -48,8 +44,7 @@
     name: scale-deploy
     namespace: "{{ scale_namespace }}"
     replicas: 0
-    wait: true
-    wait_timeout: 60
+    wait: yes
   register: scale_down
   check_mode: true
@@ -80,8 +75,7 @@
     name: scale-deploy
     namespace: "{{ scale_namespace }}"
     replicas: 0
-    wait: true
-    wait_timeout: 60
+    wait: yes
   register: scale_down
   check_mode: true
@@ -112,7 +106,7 @@
     name: scale-deploy
     namespace: "{{ scale_namespace }}"
     replicas: 0
-    wait: true
+    wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   register: scale_down
   diff: true
@@ -144,8 +138,7 @@
     name: scale-deploy
     namespace: "{{ scale_namespace }}"
     replicas: 0
-    wait: true
-    wait_timeout: 60
+    wait: yes
   register: scale_down_idempotency
   diff: true
@@ -166,20 +159,18 @@
         replicas: 1
         selector:
           matchLabels:
-            app: scale-deploy
+            app: "{{ k8s_pod_name }}"
-        template:
-          metadata:
-            labels:
-              app: scale-deploy
-          spec:
-            containers:
-              - name: nginx
-                image: nginx:latest
-                ports:
-                  - containerPort: 80
+        template: "{{ k8s_pod_template }}"
-    wait: true
+    wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
-    apply: true
+    apply: yes
+  vars:
+    k8s_pod_name: scale-deploy
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:v0.10.0-green
+    k8s_pod_ports:
+      - containerPort: 8080
+        name: http
+        protocol: TCP
   register: reapply_after_scale
 - name: Get pods in scale-deploy
@@ -208,7 +199,7 @@
     wait: yes
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   register: scale_up
-  diff: false
+  diff: no
 - name: Get pods in scale-deploy
   k8s_info:
@@ -236,9 +227,8 @@
     namespace: "{{ scale_namespace }}"
     replicas: 2
     wait: yes
-    wait_timeout: 60
   register: scale_up_noop
-  diff: false
+  diff: no
 - name: Get pods in scale-deploy
   k8s_info:
@@ -265,7 +255,7 @@
     name: scale-deploy
     namespace: "{{ scale_namespace }}"
     replicas: 1
-    wait: false
+    wait: no
   register: scale_down_no_wait
   diff: true
@@ -312,12 +302,12 @@
     resource_version: 0
     label_selectors:
       - app=nginx
-    wait_timeout: 60
   register: scale_out
 - assert:
     that:
       - not scale_out.changed
+      - scale_out.results | selectattr('warning', 'defined') | list | length == 2
 - name: scale deployment using current replicas (wrong value)
   kubernetes.core.k8s_scale:
@@ -332,6 +322,7 @@
 - assert:
     that:
       - not scale_out.changed
+      - scale_out.results | selectattr('warning', 'defined') | list | length == 2
 - name: scale deployment using current replicas (right value)
   kubernetes.core.k8s_scale:


@@ -40,7 +40,7 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-ds
-    k8s_pod_image: docker.io/busybox:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:1
     k8s_pod_command:
       - sleep
       - "600"
@@ -71,7 +71,7 @@
     wait_timeout: 180
   vars:
     k8s_pod_name: wait-ds
-    k8s_pod_image: docker.io/alpine:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:2
     k8s_pod_command:
       - sleep
       - "600"
@@ -82,7 +82,7 @@
   assert:
     that:
       - update_ds_check_mode is changed
-      - "update_ds_check_mode.result.spec.template.spec.containers[0].image == 'docker.io/alpine:latest'"
+      - "update_ds_check_mode.result.spec.template.spec.containers[0].image == 'gcr.io/kuar-demo/kuard-amd64:2'"
 - name: Update a daemonset
   k8s:
@@ -104,7 +104,7 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-ds
-    k8s_pod_image: docker.io/busybox:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:3
     k8s_pod_command:
       - sleep
       - "600"
@@ -125,7 +125,7 @@
   assert:
     that:
      - ds.result.status.currentNumberScheduled == ds.result.status.desiredNumberScheduled
-     - updated_ds_pods.resources[0].spec.containers[0].image == 'docker.io/busybox:latest'
+     - updated_ds_pods.resources[0].spec.containers[0].image.endswith(":3")
 - name: Create daemonset with nodeSelector and not existing label
   k8s:
@@ -145,7 +145,7 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-daemonset-not-existing-label
-    k8s_pod_image: docker.io/busybox:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:1
     k8s_pod_command:
       - sleep
       - "600"
@@ -187,7 +187,7 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-sts
-    k8s_pod_image: docker.io/busybox:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:1
    k8s_pod_command:
       - sleep
       - "600"
@@ -251,7 +251,7 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-sts
-    k8s_pod_image: docker.io/alpine:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:3
     k8s_pod_command:
       - sleep
       - "600"
@@ -272,7 +272,7 @@
   assert:
     that:
       - sts.result.spec.replicas == sts.result.status.readyReplicas
-      - updated_sts_pods.resources[0].spec.containers[0].image == 'docker.io/alpine:latest'
+      - updated_sts_pods.resources[0].spec.containers[0].image.endswith(":3")
 - name: Add a crashing pod
   k8s:
@@ -288,11 +288,11 @@
     wait_timeout: 30
   vars:
     k8s_pod_name: wait-crash-pod
-    k8s_pod_image: busybox:latest
+    k8s_pod_image: alpine:3.8
     k8s_pod_command:
       - /bin/false
   register: crash_pod
-  ignore_errors: true
+  ignore_errors: yes
 - name: Check that task failed
   assert:
@@ -315,7 +315,7 @@
     k8s_pod_name: wait-no-image-pod
     k8s_pod_image: i_made_this_up:and_this_too
   register: no_image_pod
-  ignore_errors: true
+  ignore_errors: yes
 - name: Check that task failed
   assert:
@@ -340,11 +340,12 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-deploy
-    k8s_pod_image: docker.io/nginx:latest
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:1
     k8s_pod_ports:
       - containerPort: 8080
         name: http
         protocol: TCP
   register: deploy
 - name: Check that deployment wait worked
@@ -370,7 +371,7 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-deploy
-    k8s_pod_image: docker.io/nginx:stable-alpine
+    k8s_pod_image: gcr.io/kuar-demo/kuard-amd64:2
     k8s_pod_ports:
       - containerPort: 8080
         name: http
         protocol: TCP
@@ -393,7 +394,7 @@
     field_selectors:
       - status.phase=Running
   register: updated_deploy_pods
-  until: updated_deploy_pods.resources[0].spec.containers[0].image == 'docker.io/nginx:stable-alpine'
+  until: updated_deploy_pods.resources[0].spec.containers[0].image.endswith(':2')
   retries: 6
   delay: 5
@@ -473,11 +474,11 @@
     wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
   vars:
     k8s_pod_name: wait-crash-deploy
-    k8s_pod_image: docker.io/nginx:latest
+    k8s_pod_image: alpine:3.8
     k8s_pod_command:
       - /bin/false
   register: wait_crash_deploy
-  ignore_errors: true
+  ignore_errors: yes
 - name: Check that task failed
   assert:
@@ -494,7 +495,7 @@
     wait: yes
     wait_sleep: 2
     wait_timeout: 5
-  ignore_errors: true
+  ignore_errors: yes
   register: short_wait_remove_pod
 - name: Check that task failed
@@ -508,4 +509,4 @@
     kind: Namespace
     name: "{{ wait_namespace }}"
     state: absent
-  ignore_errors: true
+  ignore_errors: yes


@@ -1,15 +1,12 @@
plugins/module_utils/client/discovery.py import-3.11!skip plugins/module_utils/client/discovery.py import-3.11!skip
plugins/module_utils/client/discovery.py import-3.12!skip plugins/module_utils/client/discovery.py import-3.12!skip
plugins/module_utils/client/discovery.py import-3.13!skip plugins/module_utils/client/discovery.py import-3.13!skip
plugins/module_utils/client/discovery.py import-3.14!skip
plugins/module_utils/client/resource.py import-3.11!skip plugins/module_utils/client/resource.py import-3.11!skip
plugins/module_utils/client/resource.py import-3.12!skip plugins/module_utils/client/resource.py import-3.12!skip
plugins/module_utils/client/resource.py import-3.13!skip plugins/module_utils/client/resource.py import-3.13!skip
plugins/module_utils/client/resource.py import-3.14!skip
plugins/module_utils/k8sdynamicclient.py import-3.11!skip plugins/module_utils/k8sdynamicclient.py import-3.11!skip
plugins/module_utils/k8sdynamicclient.py import-3.12!skip plugins/module_utils/k8sdynamicclient.py import-3.12!skip
plugins/module_utils/k8sdynamicclient.py import-3.13!skip plugins/module_utils/k8sdynamicclient.py import-3.13!skip
plugins/module_utils/k8sdynamicclient.py import-3.14!skip
plugins/module_utils/version.py pylint!skip plugins/module_utils/version.py pylint!skip
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
@@ -31,4 +28,4 @@ plugins/modules/k8s_service.py validate-modules:return-syntax-error
plugins/modules/k8s_taint.py validate-modules:return-syntax-error plugins/modules/k8s_taint.py validate-modules:return-syntax-error
tests/integration/targets/helm_diff/files/test-chart-reuse-values/templates/configmap.yaml yamllint!skip tests/integration/targets/helm_diff/files/test-chart-reuse-values/templates/configmap.yaml yamllint!skip
tests/integration/targets/helm_registry_auth/tasks/main.yaml yamllint!skip tests/integration/targets/helm_registry_auth/tasks/main.yaml yamllint!skip
tests/integration/targets/helm_diff/files/test-chart-deployment-time/templates/configmap.yaml yamllint!skip tests/integration/targets/helm_diff/files/test-chart-deployment-time/templates/configmap.yaml yamllint!skip
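The sanity-ignore entries above follow the `ansible-test` format of one `<file path> <test name>` pair per line, where a `!skip` suffix skips the named test for that file entirely rather than waiving a single known failure. As a rough illustration of the format only (this parser is not part of the collection's tooling), the entries could be read like so:

```python
# Illustrative parser for ansible-test sanity-ignore entries
# ("<path> <test>[!skip]" per line); not the collection's actual code.
from typing import List, NamedTuple


class IgnoreEntry(NamedTuple):
    path: str   # file the entry applies to
    test: str   # sanity test name, e.g. "import-3.12" or "pylint"
    skip: bool  # True when suffixed with "!skip"


def parse_ignore_lines(lines: List[str]) -> List[IgnoreEntry]:
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        path, test = line.split(maxsplit=1)
        skip = test.endswith("!skip")
        entries.append(IgnoreEntry(path, test[:-5] if skip else test, skip))
    return entries


entries = parse_ignore_lines(
    [
        "plugins/module_utils/version.py pylint!skip",
        "plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc",
    ]
)
print(entries[0].test, entries[0].skip)  # pylint True
```

Removing the `import-3.14` lines above means those files are no longer exempt from the Python 3.14 import sanity test.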


@@ -1,36 +0,0 @@
-plugins/module_utils/client/discovery.py import-3.11!skip
-plugins/module_utils/client/discovery.py import-3.12!skip
-plugins/module_utils/client/discovery.py import-3.13!skip
-plugins/module_utils/client/discovery.py import-3.14!skip
-plugins/module_utils/client/resource.py import-3.11!skip
-plugins/module_utils/client/resource.py import-3.12!skip
-plugins/module_utils/client/resource.py import-3.13!skip
-plugins/module_utils/client/resource.py import-3.14!skip
-plugins/module_utils/k8sdynamicclient.py import-3.11!skip
-plugins/module_utils/k8sdynamicclient.py import-3.12!skip
-plugins/module_utils/k8sdynamicclient.py import-3.13!skip
-plugins/module_utils/k8sdynamicclient.py import-3.14!skip
-plugins/module_utils/version.py pylint!skip
-plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
-plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
-plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
-tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
-tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
-tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
-tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip
-tests/unit/module_utils/fixtures/pods.yml yamllint!skip
-tests/integration/targets/helm/files/appversionless-chart-v2/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm/files/appversionless-chart/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm/files/test-chart-v2/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm/files/test-chart/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm_diff/files/test-chart/templates/configmap.yaml yamllint!skip
-tests/integration/targets/k8s_scale/files/deployment.yaml yamllint!skip
-plugins/modules/k8s.py validate-modules:return-syntax-error
-plugins/modules/k8s_scale.py validate-modules:return-syntax-error
-plugins/modules/k8s_service.py validate-modules:return-syntax-error
-plugins/modules/k8s_taint.py validate-modules:return-syntax-error
-tests/integration/targets/helm_diff/files/test-chart-reuse-values/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm_registry_auth/tasks/main.yaml yamllint!skip
-tests/integration/targets/helm_diff/files/test-chart-deployment-time/templates/configmap.yaml yamllint!skip
-plugins/modules/helm.py validate-modules:bad-return-value-key
-plugins/modules/helm_info.py validate-modules:bad-return-value-key


@@ -1,35 +0,0 @@
-plugins/module_utils/client/discovery.py import-3.11!skip
-plugins/module_utils/client/discovery.py import-3.12!skip
-plugins/module_utils/client/discovery.py import-3.13!skip
-plugins/module_utils/client/discovery.py import-3.14!skip
-plugins/module_utils/client/resource.py import-3.11!skip
-plugins/module_utils/client/resource.py import-3.12!skip
-plugins/module_utils/client/resource.py import-3.13!skip
-plugins/module_utils/client/resource.py import-3.14!skip
-plugins/module_utils/k8sdynamicclient.py import-3.11!skip
-plugins/module_utils/k8sdynamicclient.py import-3.12!skip
-plugins/module_utils/k8sdynamicclient.py import-3.13!skip
-plugins/module_utils/k8sdynamicclient.py import-3.14!skip
-plugins/module_utils/version.py pylint!skip
-plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
-plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
-plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
-tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
-tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
-tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
-tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip
-tests/unit/module_utils/fixtures/pods.yml yamllint!skip
-tests/integration/targets/helm/files/appversionless-chart-v2/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm/files/appversionless-chart/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm/files/test-chart-v2/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm/files/test-chart/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm_diff/files/test-chart/templates/configmap.yaml yamllint!skip
-tests/integration/targets/k8s_scale/files/deployment.yaml yamllint!skip
-plugins/modules/k8s.py validate-modules:return-syntax-error
-plugins/modules/k8s_scale.py validate-modules:return-syntax-error
-plugins/modules/k8s_service.py validate-modules:return-syntax-error
-plugins/modules/k8s_taint.py validate-modules:return-syntax-error
-tests/integration/targets/helm_diff/files/test-chart-reuse-values/templates/configmap.yaml yamllint!skip
-tests/integration/targets/helm_diff/files/test-chart-deployment-time/templates/configmap.yaml yamllint!skip
-plugins/modules/helm.py validate-modules:bad-return-value-key
-plugins/modules/helm_info.py validate-modules:bad-return-value-key


@@ -10,6 +10,7 @@ import ansible.module_utils.basic
 import pytest
 from ansible.module_utils._text import to_bytes
 from ansible.module_utils.common._collections_compat import MutableMapping
+from ansible.module_utils.six import string_types
 @pytest.fixture
@@ -19,7 +20,7 @@ def stdin(mocker, request):
     old_argv = sys.argv
     sys.argv = ["ansible_unittest"]
-    if isinstance(request.param, str):
+    if isinstance(request.param, string_types):
         args = request.param
     elif isinstance(request.param, MutableMapping):
         if "ANSIBLE_MODULE_ARGS" not in request.param:


@@ -443,46 +443,3 @@ def test_module_get_helm_set_values_args(set_values, expected):
     result = helm_module.get_helm_set_values_args(set_values)
     assert " ".join(expected) == result
-@pytest.mark.parametrize(
-    "helm_version,should_fail",
-    [
-        ("3.0.0", False),
-        ("3.5.0", False),
-        ("3.10.3", False),
-        ("3.15.0", False),
-        ("3.17.0", False),
-        ("2.9.0", True),
-        ("2.17.0", True),
-        ("4.0.0", True),
-        ("4.1.0", True),
-        ("5.0.0", True),
-    ],
-)
-def test_module_validate_helm_version(_ansible_helm_module, helm_version, should_fail):
-    _ansible_helm_module.get_helm_version = MagicMock()
-    _ansible_helm_module.get_helm_version.return_value = helm_version
-    if should_fail:
-        with pytest.raises(SystemExit):
-            _ansible_helm_module.validate_helm_version()
-        _ansible_helm_module.fail_json.assert_called_once()
-        call_args = _ansible_helm_module.fail_json.call_args
-        assert "Helm version must be >=3.0.0,<4.0.0" in call_args[1]["msg"]
-        assert helm_version in call_args[1]["msg"]
-    else:
-        _ansible_helm_module.validate_helm_version()
-        _ansible_helm_module.fail_json.assert_not_called()
-def test_module_validate_helm_version_none(_ansible_helm_module):
-    _ansible_helm_module.get_helm_version = MagicMock()
-    _ansible_helm_module.get_helm_version.return_value = None
-    with pytest.raises(SystemExit):
-        _ansible_helm_module.validate_helm_version()
-    _ansible_helm_module.fail_json.assert_called_once_with(
-        msg="Unable to determine Helm version"
-    )
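The tests removed here asserted a Helm version window of `>=3.0.0,<4.0.0`. As a hedged sketch of that kind of range check (the helper names are illustrative, not the module's actual `validate_helm_version` implementation), the comparison can be done with version tuples:

```python
# Illustrative version-window check like the one the removed tests
# exercised; assumes plain dotted-integer versions such as "3.10.3".
def version_tuple(version: str) -> tuple:
    # "3.10.3" -> (3, 10, 3)
    return tuple(int(part) for part in version.split("."))


def helm_version_in_range(version: str) -> bool:
    # Python compares tuples element-wise, so (3, 10, 3) < (4, 0, 0)
    return (3, 0, 0) <= version_tuple(version) < (4, 0, 0)


assert helm_version_in_range("3.10.3")
assert not helm_version_in_range("2.17.0")
assert not helm_version_in_range("4.0.0")
```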


@@ -1,401 +0,0 @@
# Copyright [2025] [Red Hat, Inc.]
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import warnings
import pytest
from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
extract_sensitive_values_from_kubeconfig,
)
@pytest.fixture
def mock_kubeconfig_with_sensitive_data():
return {
"apiVersion": "v1",
"kind": "Config",
"clusters": [
{
"name": "test-cluster",
"cluster": {
"server": "https://test-cluster.example.com",
"certificate-authority-data": "LS0tLS1CRUdJTi...fake-cert-data",
"insecure-skip-tls-verify": False,
},
}
],
"contexts": [
{
"name": "test-context",
"context": {
"cluster": "test-cluster",
"user": "test-user",
"namespace": "default",
},
}
],
"current-context": "test-context",
"users": [
{
"name": "test-user",
"user": {
"client-certificate-data": "LS0tLS1CRUdJTi...fake-client-cert",
"client-key-data": "LS0tLS1CRUdJTi...fake-client-key",
"token": "eyJhbGciOiJSUzI1NiIs...fake-token",
"password": "fake-password-123",
"username": "testuser",
},
},
{
"name": "service-account-user",
"user": {
"token": "eyJhbGciOiJSUzI1NiIs...fake-service-token",
},
},
],
}
@pytest.fixture
def mock_kubeconfig_with_nested_sensitive_data():
return {
"apiVersion": "v1",
"kind": "Config",
"clusters": [
{
"name": "cluster-1",
"cluster": {
"certificate-authority-data": "fake-ca-data-1",
},
},
{
"name": "cluster-2",
"cluster": {
"certificate-authority-data": "fake-ca-data-2",
},
},
],
"users": [
{
"name": "user-1",
"user": {
"client-certificate-data": "fake-cert-1",
"client-key-data": "fake-key-1",
"token": "fake-token-1",
"secret": "fake-secret-1",
"api_key": "fake-api-key-1",
"access-token": "fake-access-token-1",
"refresh-token": "fake-refresh-token-1",
},
},
],
}
@pytest.fixture
def mock_kubeconfig_without_sensitive_data():
return {
"apiVersion": "v1",
"kind": "Config",
"clusters": [
{
"name": "test-cluster",
"cluster": {
"server": "https://test-cluster.example.com",
"insecure-skip-tls-verify": True,
},
}
],
"users": [
{
"name": "test-user",
"user": {
"username": "testuser",
},
}
],
}
@pytest.fixture
def mock_kubeconfig_v2():
"""Mock kubeconfig with API version v2 to test warning behavior."""
return {
"apiVersion": "v2",
"kind": "Config",
"clusters": [
{
"name": "test-cluster",
"cluster": {
"server": "https://test-cluster.example.com",
"certificate-authority-data": "fake-ca-data-v2",
},
}
],
"users": [
{
"name": "test-user",
"user": {
"token": "fake-token-v2",
"password": "fake-password-v2",
},
}
],
}
def test_extract_sensitive_values_basic(mock_kubeconfig_with_sensitive_data):
result = extract_sensitive_values_from_kubeconfig(
mock_kubeconfig_with_sensitive_data
)
# Should extract all sensitive string values
expected_values = {
"LS0tLS1CRUdJTi...fake-cert-data", # certificate-authority-data
"LS0tLS1CRUdJTi...fake-client-cert", # client-certificate-data
"LS0tLS1CRUdJTi...fake-client-key", # client-key-data
"eyJhbGciOiJSUzI1NiIs...fake-token", # token
"fake-password-123", # password
"eyJhbGciOiJSUzI1NiIs...fake-service-token", # second token
}
assert result == expected_values
def test_extract_sensitive_values_nested(mock_kubeconfig_with_nested_sensitive_data):
result = extract_sensitive_values_from_kubeconfig(
mock_kubeconfig_with_nested_sensitive_data
)
# Should extract all sensitive values from multiple clusters and users
expected_values = {
"fake-ca-data-1",
"fake-ca-data-2",
"fake-cert-1",
"fake-key-1",
"fake-token-1",
"fake-secret-1",
"fake-api-key-1",
"fake-access-token-1",
"fake-refresh-token-1",
}
assert result == expected_values
def test_extract_sensitive_values_no_sensitive_data(
mock_kubeconfig_without_sensitive_data,
):
result = extract_sensitive_values_from_kubeconfig(
mock_kubeconfig_without_sensitive_data
)
# Should return empty set since there is no sensitive data
assert result == set()
def test_redaction_placeholder_appears_in_output(mock_kubeconfig_with_sensitive_data):
"""Test that sensitive values are replaced with VALUE_SPECIFIED_IN_NO_LOG_PARAMETER in output."""
sensitive_values = extract_sensitive_values_from_kubeconfig(
mock_kubeconfig_with_sensitive_data
)
# Create a mock module output that would contain sensitive data
mock_output = {
"kubeconfig": mock_kubeconfig_with_sensitive_data,
"result": "success",
"sensitive_token": "eyJhbGciOiJSUzI1NiIs...fake-token",
"sensitive_password": "fake-password-123",
}
# Simulate what Ansible does when no_log_values is set
json_str = json.dumps(mock_output)
for sensitive_value in sensitive_values:
json_str = json_str.replace(
f'"{sensitive_value}"', '"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"'
)
redacted_output = json.loads(json_str)
# Verify that sensitive values are replaced with the redaction placeholder
assert redacted_output["sensitive_token"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert (
redacted_output["sensitive_password"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
# Verify that non-sensitive data remains unchanged
assert redacted_output["result"] == "success"
assert redacted_output["kubeconfig"]["users"][0]["user"]["username"] == "testuser"
def test_redaction_placeholder_appears_in_nested_output(
mock_kubeconfig_with_nested_sensitive_data,
):
# Extract sensitive values
sensitive_values = extract_sensitive_values_from_kubeconfig(
mock_kubeconfig_with_nested_sensitive_data
)
# Create a mock module output that would contain nested sensitive data
mock_output = {
"kubeconfig": mock_kubeconfig_with_nested_sensitive_data,
"result": "success",
"cluster_ca_data": "fake-ca-data-1",
"user_cert_data": "fake-cert-1",
"user_key_data": "fake-key-1",
"user_token": "fake-token-1",
"user_secret": "fake-secret-1",
"api_key": "fake-api-key-1",
"access_token": "fake-access-token-1",
"refresh_token": "fake-refresh-token-1",
}
# Simulate what Ansible does when no_log_values is set
json_str = json.dumps(mock_output)
for sensitive_value in sensitive_values:
json_str = json_str.replace(
f'"{sensitive_value}"', '"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"'
)
redacted_output = json.loads(json_str)
# Verify that sensitive values are replaced with the redaction placeholder
assert redacted_output["cluster_ca_data"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["user_cert_data"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["user_key_data"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["user_token"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["user_secret"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["api_key"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["access_token"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
assert redacted_output["refresh_token"] == "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
# Verify that non-sensitive data remains unchanged
assert redacted_output["result"] == "success"
assert redacted_output["kubeconfig"]["apiVersion"] == "v1"
assert redacted_output["kubeconfig"]["kind"] == "Config"
assert redacted_output["kubeconfig"]["clusters"][0]["name"] == "cluster-1"
assert redacted_output["kubeconfig"]["clusters"][1]["name"] == "cluster-2"
assert redacted_output["kubeconfig"]["users"][0]["name"] == "user-1"
# Verify that sensitive values within the nested kubeconfig structure are also redacted
assert (
redacted_output["kubeconfig"]["clusters"][0]["cluster"][
"certificate-authority-data"
]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["clusters"][1]["cluster"][
"certificate-authority-data"
]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["client-certificate-data"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["client-key-data"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["token"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["secret"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["api_key"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["access-token"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
assert (
redacted_output["kubeconfig"]["users"][0]["user"]["refresh-token"]
== "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
)
def test_warning_for_non_v1_api_version(mock_kubeconfig_v2):
with pytest.warns(UserWarning) as warning_list:
result = extract_sensitive_values_from_kubeconfig(mock_kubeconfig_v2)
# Verify that exactly one warning was raised
assert len(warning_list) == 1
# Verify the warning message content
warning = warning_list[0]
assert "Kubeconfig API version 'v2' is not 'v1'" in str(warning.message)
# Verify that the function still works and extracts sensitive values
expected_values = {
"fake-ca-data-v2",
"fake-token-v2",
"fake-password-v2",
}
assert result == expected_values
def test_no_warning_for_v1_api_version(mock_kubeconfig_with_sensitive_data):
with warnings.catch_warnings(record=True) as warning_list:
warnings.simplefilter("always") # Capture all warnings
result = extract_sensitive_values_from_kubeconfig(
mock_kubeconfig_with_sensitive_data
)
# Filter for UserWarning specifically (our warning type)
user_warnings = [w for w in warning_list if issubclass(w.category, UserWarning)]
assert len(user_warnings) == 0
# Verify that the function still works normally
assert len(result) > 0
def test_no_warning_for_missing_api_version():
"""Test that no warning is raised when apiVersion field is missing (defaults to v1)."""
kubeconfig_no_version = {
"kind": "Config",
"clusters": [
{
"name": "test-cluster",
"cluster": {
"server": "https://test-cluster.example.com",
"certificate-authority-data": "fake-ca-data",
},
}
],
"users": [
{
"name": "test-user",
"user": {
"token": "fake-token",
},
}
],
}
with warnings.catch_warnings(record=True) as warning_list:
warnings.simplefilter("always") # Capture all warnings
result = extract_sensitive_values_from_kubeconfig(kubeconfig_no_version)
# Filter for UserWarning specifically (our warning type)
user_warnings = [w for w in warning_list if issubclass(w.category, UserWarning)]
assert len(user_warnings) == 0
# Verify that the function still works normally
expected_values = {"fake-ca-data", "fake-token"}
assert result == expected_values
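The removed tests above describe `extract_sensitive_values_from_kubeconfig` walking a kubeconfig dict and collecting string values stored under sensitive keys, so Ansible's `no_log` machinery can redact them as `VALUE_SPECIFIED_IN_NO_LOG_PARAMETER`. A minimal sketch of that behavior follows; the key list is inferred from the test fixtures and the function body is illustrative, not the module's exact implementation:

```python
# Minimal sketch of recursively collecting sensitive kubeconfig values.
# SENSITIVE_KEYS is an assumption inferred from the removed test fixtures.
SENSITIVE_KEYS = {
    "certificate-authority-data", "client-certificate-data",
    "client-key-data", "token", "password", "secret", "api_key",
    "access-token", "refresh-token",
}


def collect_sensitive_values(node):
    found = set()
    if isinstance(node, dict):
        for key, value in node.items():
            if key in SENSITIVE_KEYS and isinstance(value, str):
                found.add(value)
            else:
                found |= collect_sensitive_values(value)
    elif isinstance(node, list):
        for item in node:
            found |= collect_sensitive_values(item)
    return found


kubeconfig = {
    "users": [{"name": "u", "user": {"token": "t0ken", "username": "alice"}}],
    "clusters": [{"cluster": {"certificate-authority-data": "cadata"}}],
}
assert collect_sensitive_values(kubeconfig) == {"t0ken", "cadata"}
```

Non-sensitive fields such as `username` or `server` are deliberately left out of the collected set, matching what the removed tests verified.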


@@ -43,21 +43,15 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
     def test_dependency_update_option_not_defined(self):
         set_module_args({"chart_ref": "/tmp/path"})
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm_template.main()
-            # Check the last call was the actual helm template command
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm template /tmp/path"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm template /tmp/path", environ_update={}, data=None
+            )
         assert result.exception.args[0]["command"] == "/usr/bin/helm template /tmp/path"
@@ -70,21 +64,17 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
             }
         )
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm_template.main()
-            # Check the last call was the actual helm template command
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm template test --repo=https://charts.com/test"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm template test --repo=https://charts.com/test",
+                environ_update={},
+                data=None,
+            )
         assert (
             result.exception.args[0]["command"]
@@ -96,21 +86,17 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
             {"chart_ref": "https://charts/example.tgz", "dependency_update": True}
         )
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm_template.main()
-            # Check the last call was the actual helm template command
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm template https://charts/example.tgz --dependency-update"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm template https://charts/example.tgz --dependency-update",
+                environ_update={},
+                data=None,
+            )
         assert (
             result.exception.args[0]["command"]
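The old test code in this file mocks the first `run_command` call with a `version.BuildInfo{...}` string, which is what `helm version` prints. As an illustration of pulling the semantic version out of that string (the regex is an assumption about the BuildInfo format, not code from the collection):

```python
# Illustrative parsing of the "helm version" output mocked in the old tests.
import re

build_info = 'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}'
match = re.search(r'Version:"v?(\d+\.\d+\.\d+)"', build_info)
version = match.group(1) if match else None
assert version == "3.10.0"
```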


@@ -7,7 +7,7 @@ from __future__ import absolute_import, division, print_function
 __metaclass__ = type
 import unittest
-from unittest.mock import MagicMock, patch
+from unittest.mock import MagicMock, call, patch
 from ansible.module_utils import basic
 from ansible_collections.kubernetes.core.plugins.modules import helm
@@ -77,22 +77,18 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
         helm.run_dep_update = MagicMock()
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm.main()
             helm.run_dep_update.assert_not_called()
-            # Check the last call (actual helm command, after version check)
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'",
+                environ_update={"HELM_NAMESPACE": "test"},
+                data=None,
+            )
         assert (
             result.exception.args[0]["command"]
@@ -112,22 +108,18 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
         helm.run_dep_update = MagicMock()
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm.main()
             helm.run_dep_update.assert_not_called()
-            # Check the last call (actual helm command, after version check)
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'",
+                environ_update={"HELM_NAMESPACE": "test"},
+                data=None,
+            )
         assert (
             result.exception.args[0]["command"]
@@ -147,23 +139,19 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_with_dep)
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = 0, "configuration updated", ""
             with patch.object(basic.AnsibleModule, "warn") as mock_warn:
                 with self.assertRaises(AnsibleExitJson) as result:
                     helm.main()
                 mock_warn.assert_not_called()
-            # Check calls include the actual helm command (after version check)
-            assert any(
-                "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'" in str(call)
-                for call in mock_run_command.call_args_list
-            )
+            mock_run_command.assert_has_calls(
+                [
+                    call(
+                        "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'",
+                        environ_update={"HELM_NAMESPACE": "test"},
+                        data=None,
+                    )
+                ]
+            )
         assert (
             result.exception.args[0]["command"]
@@ -182,23 +170,23 @@ class TestDependencyUpdateWithoutChartRepoUrlOption(unittest.TestCase):
         helm.get_release_status = MagicMock(return_value=None)
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with patch.object(basic.AnsibleModule, "warn") as mock_warn:
                 with self.assertRaises(AnsibleExitJson) as result:
                     helm.main()
                 mock_warn.assert_called_once()
-            # Check calls include the actual helm command (after version check)
-            assert any(
-                "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'" in str(call)
-                for call in mock_run_command.call_args_list
-            )
+            mock_run_command.assert_has_calls(
+                [
+                    call(
+                        "/usr/bin/helm upgrade -i --reset-values test '/tmp/path'",
+                        environ_update={"HELM_NAMESPACE": "test"},
+                        data=None,
+                    )
+                ]
+            )
         assert (
             result.exception.args[0]["command"]
@@ -257,21 +245,17 @@ class TestDependencyUpdateWithChartRepoUrlOption(unittest.TestCase):
         helm.get_release_status = MagicMock(return_value=None)
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm.main()
-            # Check the last call (actual helm command, after version check)
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test 'chart1'"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test 'chart1'",
+                environ_update={"HELM_NAMESPACE": "test"},
+                data=None,
+            )
         assert (
             result.exception.args[0]["command"]
@@ -291,21 +275,17 @@ class TestDependencyUpdateWithChartRepoUrlOption(unittest.TestCase):
         helm.get_release_status = MagicMock(return_value=None)
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleExitJson) as result:
                 helm.main()
-            # Check the last call (actual helm command, after version check)
-            assert (
-                mock_run_command.call_args_list[-1][0][0]
-                == "/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test 'chart1'"
-            )
+            mock_run_command.assert_called_once_with(
+                "/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test 'chart1'",
+                environ_update={"HELM_NAMESPACE": "test"},
+                data=None,
+            )
         assert (
             result.exception.args[0]["command"]
@@ -325,15 +305,11 @@ class TestDependencyUpdateWithChartRepoUrlOption(unittest.TestCase):
         helm.get_release_status = MagicMock(return_value=None)
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_with_dep)
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
-                    "",
-                ),
-                (0, "configuration updated", ""),
-            ]
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
             with self.assertRaises(AnsibleFailJson) as result:
                 helm.main()
             # mock_run_command.assert_called_once_with('/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test chart1',
@@ -358,21 +334,17 @@ class TestDependencyUpdateWithChartRepoUrlOption(unittest.TestCase):
         helm.get_release_status = MagicMock(return_value=None)
         helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
         with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
-            # Mock responses: first call is helm version, second is the actual command
-            mock_run_command.side_effect = [
-                (
-                    0,
-                    'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}',
+            mock_run_command.return_value = (
+                0,
+                "configuration updated",
+                "",
+            )  # successful execution
"",
),
(0, "configuration updated", ""),
]
with self.assertRaises(AnsibleExitJson) as result: with self.assertRaises(AnsibleExitJson) as result:
helm.main() helm.main()
# Check the last call (actual helm command, after version check) mock_run_command.assert_called_once_with(
assert ( "/usr/bin/helm --repo=http://repo.example/charts install --dependency-update --replace test 'chart1'",
mock_run_command.call_args_list[-1][0][0] environ_update={"HELM_NAMESPACE": "test"},
== "/usr/bin/helm --repo=http://repo.example/charts install --dependency-update --replace test 'chart1'" data=None,
) )
assert ( assert (
result.exception.args[0]["command"] result.exception.args[0]["command"]
@@ -430,21 +402,17 @@ class TestDependencyUpdateWithChartRefIsUrl(unittest.TestCase):
helm.get_release_status = MagicMock(return_value=None) helm.get_release_status = MagicMock(return_value=None)
helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep) helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
with patch.object(basic.AnsibleModule, "run_command") as mock_run_command: with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
# Mock responses: first call is helm version, second is the actual command mock_run_command.return_value = (
mock_run_command.side_effect = [ 0,
( "configuration updated",
0, "",
'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}', ) # successful execution
"",
),
(0, "configuration updated", ""),
]
with self.assertRaises(AnsibleExitJson) as result: with self.assertRaises(AnsibleExitJson) as result:
helm.main() helm.main()
# Check the last call (actual helm command, after version check) mock_run_command.assert_called_once_with(
assert ( "/usr/bin/helm upgrade -i --reset-values test 'http://repo.example/charts/application.tgz'",
mock_run_command.call_args_list[-1][0][0] environ_update={"HELM_NAMESPACE": "test"},
== "/usr/bin/helm upgrade -i --reset-values test 'http://repo.example/charts/application.tgz'" data=None,
) )
assert ( assert (
result.exception.args[0]["command"] result.exception.args[0]["command"]
@@ -463,21 +431,17 @@ class TestDependencyUpdateWithChartRefIsUrl(unittest.TestCase):
helm.get_release_status = MagicMock(return_value=None) helm.get_release_status = MagicMock(return_value=None)
helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep) helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
with patch.object(basic.AnsibleModule, "run_command") as mock_run_command: with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
# Mock responses: first call is helm version, second is the actual command mock_run_command.return_value = (
mock_run_command.side_effect = [ 0,
( "configuration updated",
0, "",
'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}', ) # successful execution
"",
),
(0, "configuration updated", ""),
]
with self.assertRaises(AnsibleExitJson) as result: with self.assertRaises(AnsibleExitJson) as result:
helm.main() helm.main()
# Check the last call (actual helm command, after version check) mock_run_command.assert_called_once_with(
assert ( "/usr/bin/helm upgrade -i --reset-values test 'http://repo.example/charts/application.tgz'",
mock_run_command.call_args_list[-1][0][0] environ_update={"HELM_NAMESPACE": "test"},
== "/usr/bin/helm upgrade -i --reset-values test 'http://repo.example/charts/application.tgz'" data=None,
) )
assert ( assert (
result.exception.args[0]["command"] result.exception.args[0]["command"]
@@ -496,15 +460,11 @@ class TestDependencyUpdateWithChartRefIsUrl(unittest.TestCase):
helm.get_release_status = MagicMock(return_value=None) helm.get_release_status = MagicMock(return_value=None)
helm.fetch_chart_info = MagicMock(return_value=self.chart_info_with_dep) helm.fetch_chart_info = MagicMock(return_value=self.chart_info_with_dep)
with patch.object(basic.AnsibleModule, "run_command") as mock_run_command: with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
# Mock responses: first call is helm version, second is the actual command mock_run_command.return_value = (
mock_run_command.side_effect = [ 0,
( "configuration updated",
0, "",
'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}', ) # successful execution
"",
),
(0, "configuration updated", ""),
]
with self.assertRaises(AnsibleFailJson) as result: with self.assertRaises(AnsibleFailJson) as result:
helm.main() helm.main()
# mock_run_command.assert_called_once_with('/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test chart1', # mock_run_command.assert_called_once_with('/usr/bin/helm --repo=http://repo.example/charts upgrade -i --reset-values test chart1',
@@ -528,21 +488,17 @@ class TestDependencyUpdateWithChartRefIsUrl(unittest.TestCase):
helm.get_release_status = MagicMock(return_value=None) helm.get_release_status = MagicMock(return_value=None)
helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep) helm.fetch_chart_info = MagicMock(return_value=self.chart_info_without_dep)
with patch.object(basic.AnsibleModule, "run_command") as mock_run_command: with patch.object(basic.AnsibleModule, "run_command") as mock_run_command:
# Mock responses: first call is helm version, second is the actual command mock_run_command.return_value = (
mock_run_command.side_effect = [ 0,
( "configuration updated",
0, "",
'version.BuildInfo{Version:"v3.10.0", GitCommit:"", GoVersion:"go1.18"}', ) # successful execution
"",
),
(0, "configuration updated", ""),
]
with self.assertRaises(AnsibleExitJson) as result: with self.assertRaises(AnsibleExitJson) as result:
helm.main() helm.main()
# Check the last call (actual helm command, after version check) mock_run_command.assert_called_once_with(
assert ( "/usr/bin/helm install --dependency-update --replace test 'http://repo.example/charts/application.tgz'",
mock_run_command.call_args_list[-1][0][0] environ_update={"HELM_NAMESPACE": "test"},
== "/usr/bin/helm install --dependency-update --replace test 'http://repo.example/charts/application.tgz'" data=None,
) )
assert ( assert (
result.exception.args[0]["command"] result.exception.args[0]["command"]
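The pattern this diff toggles between — a single `return_value` checked with `assert_called_once_with` versus a `side_effect` sequence checked via `call_args_list[-1]` — can be sketched with a plain `unittest.mock.Mock` (a minimal standalone illustration, not the collection's test code; the command strings are examples):

```python
from unittest.mock import Mock

# A single return_value answers every call with the same (rc, out, err)
# tuple, so assert_called_once_with can verify the one helm invocation.
run_command = Mock(return_value=(0, "configuration updated", ""))
rc, out, err = run_command("/usr/bin/helm upgrade -i test chart1")
run_command.assert_called_once_with("/usr/bin/helm upgrade -i test chart1")

# A side_effect list answers calls in order -- useful when the code under
# test runs "helm version" before the real command -- but then the mock is
# called more than once, so only the last call can be checked, via
# call_args_list[-1][0][0] (first positional argument of the last call).
run_command = Mock(
    side_effect=[
        (0, 'version.BuildInfo{Version:"v3.10.0"}', ""),
        (0, "configuration updated", ""),
    ]
)
run_command("/usr/bin/helm version")
rc, out, err = run_command("/usr/bin/helm upgrade -i test chart1")
assert run_command.call_args_list[-1][0][0] == "/usr/bin/helm upgrade -i test chart1"
```

With `return_value` the assertion is stricter (exact arguments of the single call, including keyword arguments like `environ_update`), which is why the revert also restores `assert_called_once_with` in place of the `call_args_list[-1]` check.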