Compare commits


10 Commits

Author SHA1 Message Date
patchback[bot]
2fb6546b03 docs: add until example to k8s_info (#885) (#1119)
I would have liked to have an example like this when I was using the
documentation.

(cherry picked from commit 4d7dc2a7d1)

Co-authored-by: Birger Johan Nordølum <33870508+MindTooth@users.noreply.github.com>
2026-05-06 10:16:32 -04:00
patchback[bot]
ef9fb67688 Add kubeconfig module for managing Kubernetes config files (#1104) (#1122)
* Add kubeconfig module for managing Kubernetes config files

* Remove unnecessary requirement & Change version

* Move functions to module_utils

* Add unit tests

* Avoid linter errors

* Improve documentation clarity

* Redact sensitive kubeconfig information

* Improve verbosity

* Move import statement for to_native to avoid linter check failure

* Fix linting error

---------


(cherry picked from commit e79ed52a4d)

Co-authored-by: Youssef Ali <154611350+YoussefKhalidAli@users.noreply.github.com>
Co-authored-by: Bianca Henderson <bianca@redhat.com>
2026-05-06 10:15:42 -04:00
Bianca Henderson
90bc4c4b3b Release prep for 6.4.0 (#1101)
SUMMARY

Prep kubernetes.core 6.4.0 release

COMPONENT NAME

Multiple

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
Reviewed-by: Bikouo Aubin
Reviewed-by: Hannah DeFazio <h2defazio@gmail.com>
2026-04-22 21:04:10 +00:00
patchback[bot]
1590c6a4cc Update URL reference in integration-test CI file (#1112) (#1114)
(cherry picked from commit 210467b26d)

Co-authored-by: Bianca Henderson <bianca@redhat.com>
2026-04-22 14:14:06 -04:00
patchback[bot]
76eccacab6 ci: conditionally test turbo mode and cloud.common (#1109) (#1111)
The cloud.common collection is incompatible with ansible-core >= 2.19.0.
With the current testing matrix using Python 3.12 and the ansible
milestone (currently 2.22), this incompatibility causes integration
tests to fail.

Instead of completely removing turbo mode from the testing matrix, this
commit adds ansible-core 2.18 to the matrix and excludes the combination
of the ansible milestone and turbo mode. The checkout and installation
of the cloud.common collection are now conditionally executed only when
turbo mode is enabled.

(cherry picked from commit 11f619b69e)

Co-authored-by: Yuriy Novostavskiy <yuriy@novostavskiy.kiev.ua>
2026-04-21 14:37:48 -04:00
patchback[bot]
620abbac26 trivial(doc): post #1090 cosmetic update (#1097) (#1107)
SUMMARY
Names of the Helm plays in the integration test framework updated to reflect the actual version of Helm (addresses comments in #1090 (review))
Updated documentation for the modules changed in this PR using https://github.com/ansible-network/collection_prep, as per CONTRIBUTING.md
ISSUE TYPE

Docs Pull Request

COMPONENT NAME

tests/integration/targets/helm_v3_*/play.yaml
docs/kubernetes.core.helm*.rst

ADDITIONAL INFORMATION
Only cosmetic changes in this PR, so the skip-changelog label is suggested.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
(cherry picked from commit 16e92a20e8)

Co-authored-by: Yuriy Novostavskiy <yuriy@novostavskiy.kyiv.ua>
2026-04-21 11:31:11 +02:00
patchback[bot]
66a820e03a Add sanity test ignores for ansible-core 2.22 (#1102) (#1106)
The `devel` and `milestone` branches for ansible-core have been bumped to
`2.22.0.dev0` as the `stable-2.21` branch was created. Testing against `devel`
and `milestone` now uses 2.22, which requires creation of the
`tests/sanity/ignore-2.22.txt` file in all maintained collection branches.

(cherry picked from commit 58f8f2e6e9)

Co-authored-by: Yuriy Novostavskiy <yuriy@novostavskiy.kiev.ua>
2026-04-20 18:34:35 -04:00
patchback[bot]
a3f2438e9d Ensure compatibility with Helm v4 for the collection (#1090) (#1096)
This is a backport of PR #1090 as merged into main (e6076e5).
SUMMARY

Ensure compatibility with Helm v4 for modules helm_plugin and helm_plugin_info
Partially addresses #1038

ISSUE TYPE


Feature Pull Request

COMPONENT NAME

helm_plugin
helm_plugin_info
helm_info
helm_pull
helm_registry_auth
helm
helm_template

Reviewed-by: Bikouo Aubin
Reviewed-by: Matthew Johnson
2026-03-17 15:03:02 +00:00
patchback[bot]
0709ea31c9 Support take_ownership parameter in helm installation (#1034) (#1092)
This is a backport of PR #1034 as merged into main (42acb4f).
SUMMARY
Adds support for the take_ownership parameter for initial release installation operations.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
plugins/modules/helm.py
ADDITIONAL INFORMATION
I recently had to migrate a k8s namespace from flat manifest installation into a Helm release.
I was so glad to see the take_ownership feature but realized that it works only after the first installation of the release.
Seeing no reason to deny this use case, I suggest these very simple changes.
To reproduce it:

Create a new namespace in any cluster.
Create a secret.
Install any helm chart that deploys the same secret using take_ownership: true.
2026-02-19 16:40:12 +00:00
patchback[bot]
44ab1fc478 Add check_mode support for k8s_drain module (#1086) (#1091)
This is a backport of PR #1086 as merged into main (d239adb).
SUMMARY

Closes #1037

added support for check_mode
Converted warnings into informational displays when the user has explicitly requested to delete DaemonSet-managed pods, unmanaged pods, or pods with local storage


ISSUE TYPE


Feature Pull Request

COMPONENT NAME

k8s_drain

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
2026-02-16 20:35:22 +00:00
116 changed files with 2964 additions and 1403 deletions


@@ -55,13 +55,14 @@ jobs:
strategy:
fail-fast: false
matrix:
ansible-version:
- milestone
# Ref must match a branch/tag on github.com/ansible/ansible (e.g. stable-2.18, not 2.18).
ansible-version: ["stable-2.18", "milestone"]
enable-turbo-mode: [true, false]
exclude:
- ansible-version: "milestone"
enable-turbo-mode: true
python-version:
- "3.12"
enable-turbo-mode:
- true
- false
workflow-id: ${{ fromJson(needs.splitter.outputs.test_jobs) }}
name: "integration-py${{ matrix.python-version }}-${{ matrix.ansible-version }}-${{ matrix.workflow-id }}-enable_turbo=${{ matrix.enable-turbo-mode }}"
steps:
@@ -107,6 +108,7 @@ jobs:
source_path: ${{ env.source }}
- name: checkout ansible-collections/cloud.common
if: ${{ matrix.enable-turbo-mode == true }}
uses: ansible-network/github_actions/.github/actions/checkout_dependency@main
with:
repository: ansible-collections/cloud.common
@@ -128,6 +130,7 @@ jobs:
ref: main
- name: install cloud.common collection
if: ${{ matrix.enable-turbo-mode == true }}
uses: ansible-network/github_actions/.github/actions/build_install_collection@main
with:
install_python_dependencies: true

.gitignore

@@ -17,6 +17,7 @@ tests/integration/cloud-config-*
# Helm charts
tests/integration/*-chart-*.tgz
tests/integration/targets/*/*.tgz
# ansible-test generated file
tests/integration/inventory


@@ -4,6 +4,32 @@ Kubernetes Collection Release Notes
.. contents:: Topics
v6.4.0
======
Release Summary
---------------
This release adds Helm v4 compatibility across the Helm modules and improves ``k8s_drain`` with check mode. When you explicitly allow evicting unmanaged pods, pods with local storage, or pods managed by a ``DaemonSet``, those cases are reported as informational output instead of module warnings.
Minor Changes
-------------
- helm_info - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_plugin - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_plugin_info - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_pull - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_registry_auth - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_registry_auth - add new option plain_http to allow insecure http connection when running ``helm registry login`` (https://github.com/ansible-collections/kubernetes.core/pull/1090).
- helm_repository - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- k8s_drain - Add support for ``check_mode`` (https://github.com/ansible-collections/kubernetes.core/pull/1086).
- k8s_drain - Convert module warnings into informational displays when users explicitly request the deletion of unmanaged pods, pods with local storage, or those managed by a `DaemonSet` (https://github.com/ansible-collections/kubernetes.core/issues/1037).
Bugfixes
--------
- Helm - Allow taking ownership of existing Kubernetes resources on the first installation of a Helm release. Previously, the ``take_ownership`` parameter was always disabled during the initial install, preventing resource adoption (https://github.com/ansible-collections/kubernetes.core/pull/1034).
v6.3.0
======
@@ -118,7 +144,7 @@ This release updates the ``helm_registry_auth`` module to match the behavior of
Minor Changes
-------------
- Module ``helm_registry_auth`` does not support idempotency with ``helm >= 3.18.0`` (https://github.com/ansible-collections/kubernetes.core/pull/946)
- Module ``helm_registry_auth`` does not support idempotency with ``helm >= 3.18.0`` (https://github.com/ansible-collections/kubernetes.core/pull/946).
v5.3.0
======


@@ -1,5 +1,5 @@
# Also needs to be updated in galaxy.yml
VERSION = 6.3.0
VERSION = 6.4.0
TEST_ARGS ?= ""
PYTHON_VERSION ?= `python -c 'import platform; print(".".join(platform.python_version_tuple()[0:2]))'`


@@ -32,7 +32,7 @@ PEP440 is the schema used to describe the versions of Ansible.
### Helm Version Compatibility
Helm modules in this collection are compatible with Helm v3.x and are not yet compatible with Helm v4. Individual modules and their parameters may support a more specific range of Helm versions.
This collection supports Helm v3.x and newer. Please note that specific modules or certain parameters may have additional version requirements.
### Python Support
@@ -103,7 +103,7 @@ You can also include it in a `requirements.yml` file and install it via `ansible
---
collections:
- name: kubernetes.core
version: 6.3.0
version: 6.4.0
```
### Installing the Kubernetes Python Library


@@ -1027,15 +1027,15 @@ releases:
changes:
bugfixes:
- module_utils/k8s/service - fix issue when trying to delete resource using
`delete_options` and `check_mode=true` (https://github.com/ansible-collections/kubernetes.core/issues/892).
``delete_options`` and ``check_mode=true`` (https://github.com/ansible-collections/kubernetes.core/issues/892).
minor_changes:
- Bump version of ansible-lint to 25.1.2 (https://github.com/ansible-collections/kubernetes.core/pull/919).
- Bump version of ``ansible-lint`` to 25.1.2 (https://github.com/ansible-collections/kubernetes.core/pull/919).
- action/k8s_info - update templating mechanism with changes from ``ansible-core
2.19`` (https://github.com/ansible-collections/kubernetes.core/pull/888).
- helm - add reset_then_reuse_values support to helm module (https://github.com/ansible-collections/kubernetes.core/issues/803).
- helm - add ``reset_then_reuse_values`` support to helm module (https://github.com/ansible-collections/kubernetes.core/issues/803).
- helm - add support for ``insecure_skip_tls_verify`` option to helm and helm_repository(https://github.com/ansible-collections/kubernetes.core/issues/694).
release_summary: This release includes minor changes, bug fixes and also bumps
ansible-lint version to ``25.1.2``.
``ansible-lint`` version to ``25.1.2``.
fragments:
- 20250324-k8s_info-templating.yaml
- 5.3.0.yml
@@ -1062,7 +1062,7 @@ releases:
changes:
bugfixes:
- Remove ``ansible.module_utils.six`` imports to avoid warnings (https://github.com/ansible-collections/kubernetes.core/pull/998).
- Update the `k8s_cp` module to also work for init containers (https://github.com/ansible-collections/kubernetes.core/pull/971).
- Update the ``k8s_cp`` module to also work for init containers (https://github.com/ansible-collections/kubernetes.core/pull/971).
- module_utils/k8s/service - hide fields first before creating diffs (https://github.com/ansible-collections/kubernetes.core/pull/915).
release_summary: This release includes bugfixes for k8s service field handling,
k8s_cp init containers support, and removes deprecated ansible.module_utils.six
@@ -1086,9 +1086,9 @@ releases:
bugfixes:
- module_utils/k8s/service - hide fields first before creating diffs (https://github.com/ansible-collections/kubernetes.core/pull/915).
minor_changes:
- Module helm_registry_auth do not support idempotency with `helm >= 3.18.0`
(https://github.com/ansible-collections/kubernetes.core/pull/946)
- Module k8s_json_patch - Add support for `hidden_fields` (https://github.com/ansible-collections/kubernetes.core/pull/964).
- Module ``helm_registry_auth`` does not support idempotency with `helm >= 3.18.0`
(https://github.com/ansible-collections/kubernetes.core/pull/946).
- Module k8s_json_patch - Add support for ``hidden_fields`` (https://github.com/ansible-collections/kubernetes.core/pull/964).
- helm - Parameter plain_http added for working with insecure OCI registries
(https://github.com/ansible-collections/kubernetes.core/pull/934).
- helm - Parameter take_ownership added (https://github.com/ansible-collections/kubernetes.core/pull/957).
@@ -1173,3 +1173,33 @@ releases:
- 20260108-fix-sanity-failures.yml
- 6-3-0.yaml
release_date: '2026-02-03'
6.4.0:
changes:
bugfixes:
- Helm - Allow taking ownership of existing Kubernetes resources on the first
installation of a Helm release. Previously, the ``take_ownership`` parameter
was always disabled during the initial install, preventing resource adoption
(https://github.com/ansible-collections/kubernetes.core/pull/1034).
minor_changes:
- helm_info - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_plugin - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_plugin_info - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_pull - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_registry_auth - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- helm_registry_auth - add new option plain_http to allow insecure http connection
when running ``helm registry login`` (https://github.com/ansible-collections/kubernetes.core/pull/1090).
- helm_repository - Ensure compatibility with Helm v4 (https://github.com/ansible-collections/kubernetes.core/issues/1038).
- k8s_drain - Add support for ``check_mode`` (https://github.com/ansible-collections/kubernetes.core/pull/1086).
- k8s_drain - Convert module warnings into informational displays when users
explicitly request the deletion of unmanaged pods, pods with local storage,
or those managed by a ``DaemonSet`` (https://github.com/ansible-collections/kubernetes.core/issues/1037).
release_summary: This release adds Helm v4 compatibility across the Helm modules
and improves ``k8s_drain`` with check mode. When you explicitly allow evicting
unmanaged pods, pods with local storage, or pods managed by a ``DaemonSet``,
those cases are reported as informational output instead of module warnings.
fragments:
- 20251224-take-ownership-helm-initialization.yaml
- 20260203-k8s_drain-warning-fixes.yaml
- 20260213-support-helm-v4-for-helm-plugin-modules.yaml
- release-6-4-0.yml
release_date: '2026-04-22'


@@ -25,7 +25,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases)
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
- yaml (https://pypi.org/project/PyYAML/)
@@ -268,7 +268,7 @@ Examples
Return Values
-------------
Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html
@@ -410,6 +410,23 @@ Common return values are documented `here <https://docs.ansible.com/ansible/late
<br/>
</td>
</tr>
<tr>
<td class="elbow-placeholder">&nbsp;</td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="return-"></div>
<b>release_values</b>
<a class="ansibleOptionLink" href="#return-" title="Permalink to this return value"></a>
<div style="font-size: small">
<span style="color: purple">dictionary</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.3.0</div>
</td>
<td>always</td>
<td>
<div>Dict of Values used to deploy.</div>
<br/>
</td>
</tr>
<tr>
<td class="elbow-placeholder">&nbsp;</td>
<td colspan="1">
@@ -465,12 +482,13 @@ Common return values are documented `here <https://docs.ansible.com/ansible/late
<b>values</b>
<a class="ansibleOptionLink" href="#return-" title="Permalink to this return value"></a>
<div style="font-size: small">
<span style="color: purple">string</span>
<span style="color: purple">dictionary</span>
</div>
</td>
<td>always</td>
<td>
<div>Dict of Values used to deploy</div>
<div>This return value has been deprecated and will be removed in a release after 2027-01-08. Use RV(status.release_values) instead.</div>
<br/>
</td>
</tr>


@@ -25,7 +25,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases)
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
- yaml (https://pypi.org/project/PyYAML/)
@@ -660,7 +660,7 @@ Parameters
</ul>
</td>
<td>
<div>When upgrading, Helm will ignore the check for helm annotations and take ownership of the existing resources</div>
<div>Helm will ignore the check for helm annotations and take ownership of the existing resources</div>
<div>This feature requires helm &gt;= 3.17.0</div>
</td>
</tr>
@@ -920,7 +920,7 @@ Examples
Return Values
-------------
Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html
@@ -1026,6 +1026,23 @@ Common return values are documented `here <https://docs.ansible.com/ansible/late
<br/>
</td>
</tr>
<tr>
<td class="elbow-placeholder">&nbsp;</td>
<td colspan="1">
<div class="ansibleOptionAnchor" id="return-"></div>
<b>release_values</b>
<a class="ansibleOptionLink" href="#return-" title="Permalink to this return value"></a>
<div style="font-size: small">
<span style="color: purple">dictionary</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.3.0</div>
</td>
<td>always</td>
<td>
<div>Dict of Values used to deploy.</div>
<br/>
</td>
</tr>
<tr>
<td class="elbow-placeholder">&nbsp;</td>
<td colspan="1">
@@ -1081,12 +1098,13 @@ Common return values are documented `here <https://docs.ansible.com/ansible/late
<b>values</b>
<a class="ansibleOptionLink" href="#return-" title="Permalink to this return value"></a>
<div style="font-size: small">
<span style="color: purple">string</span>
<span style="color: purple">dictionary</span>
</div>
</td>
<td>always</td>
<td>
<div>Dict of Values used to deploy</div>
<div>Dict of Values used to deploy.</div>
<div>This return value has been deprecated and will be removed in a release after 2027-01-08. Use RV(status.release_values) instead.</div>
<br/>
</td>
</tr>


@@ -25,7 +25,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases)
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
Parameters
@@ -196,7 +196,7 @@ Examples
Return Values
-------------
Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html


@@ -25,7 +25,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases)
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
Parameters
@@ -231,6 +231,28 @@ Parameters
<div style="font-size: small; color: darkgreen"><br/>aliases: verify_ssl</div>
</td>
</tr>
<tr>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>verify</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.4.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li>no</li>
<li><div style="color: blue"><b>yes</b>&nbsp;&larr;</div></li>
</ul>
</td>
<td>
<div>Verify the plugin signature before installing.</div>
<div>This option requires helm version &gt;= 4.0.0</div>
<div>Used with <em>state=present</em>.</div>
</td>
</tr>
</table>
<br/>
@@ -272,7 +294,7 @@ Examples
Return Values
-------------
Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html


@@ -27,7 +27,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm >= 3.0, <4.0.0 (https://github.com/helm/helm/releases)
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
Parameters


@@ -25,7 +25,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases) >= 3.8.0, <4.0.0
- helm (https://github.com/helm/helm/releases) >= 3.8.0
Parameters
@@ -151,6 +151,27 @@ Parameters
<div style="font-size: small; color: darkgreen"><br/>aliases: repo_password</div>
</td>
</tr>
<tr>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>plain_http</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">boolean</span>
</div>
<div style="font-style: italic; font-size: small; color: darkgreen">added in 6.4.0</div>
</td>
<td>
<ul style="margin: 0; padding: 0"><b>Choices:</b>
<li><div style="color: blue"><b>no</b>&nbsp;&larr;</div></li>
<li>yes</li>
</ul>
</td>
<td>
<div>Use insecure HTTP connections for <code>helm registry login</code>.</div>
<div>Requires Helm &gt;= 3.18.0</div>
</td>
</tr>
<tr>
<td colspan="1">
<div class="ansibleOptionAnchor" id="parameter-"></div>


@@ -25,7 +25,7 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
- helm (https://github.com/helm/helm/releases)
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
- yaml (https://pypi.org/project/PyYAML/)
@@ -336,7 +336,7 @@ Examples
Return Values
-------------
Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html


@@ -20,6 +20,13 @@ Synopsis
Requirements
------------
The below requirements are needed on the host that executes this module.
- helm >= 3.0.0 (https://github.com/helm/helm/releases)
- yaml (https://pypi.org/project/PyYAML/)
Parameters
----------
@@ -430,7 +437,7 @@ Examples
Return Values
-------------
Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
Common return values are documented `here <https://docs.ansible.com/projects/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_, the following are the fields unique to this module:
.. raw:: html


@@ -701,6 +701,21 @@ Examples
wait_sleep: 10
wait_timeout: 360
- name: Wait for OpenShift bootstrap to complete
kubernetes.core.k8s_info:
api_version: v1
kind: ConfigMap
name: bootstrap
namespace: kube-system
register: ocp_bootstrap_status
until: >
ocp_bootstrap_status.resources is defined and
(ocp_bootstrap_status.resources | length > 0) and
(ocp_bootstrap_status.resources[0].data.status is defined) and
(ocp_bootstrap_status.resources[0].data.status == 'complete')
retries: 60
delay: 15
Return Values


@@ -25,7 +25,7 @@ tags:
- openshift
- okd
- cluster
version: 6.3.0
version: 6.4.0
build_ignore:
- .DS_Store
- "*.tar.gz"


@@ -40,16 +40,20 @@ def parse_helm_plugin_list(output=None):
if not output:
return ret
parsing_grammar = None
for line in output:
if line.startswith("NAME"):
parsing_grammar = [s.strip().lower() for s in line.split("\t")]
continue
name, version, description = line.split("\t", 3)
name = name.strip()
version = version.strip()
description = description.strip()
if name == "":
if parsing_grammar is None:
continue
ret.append((name, version, description))
plugin = {
parsing_grammar[i]: v.strip()
for i, v in enumerate(line.split("\t", len(parsing_grammar)))
}
if plugin["name"] == "":
continue
ret.append(plugin)
return ret
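The new parser above keys each plugin row off the column names in the `NAME` header line, so it keeps working if Helm v4 reorders or renames columns. A minimal, self-contained sketch of that approach; the sample lines are made up for illustration rather than taken from a real `helm plugin list` run:

```python
def parse_plugin_list(lines):
    plugins = []
    grammar = None  # column names, learned from the header line
    for line in lines:
        if line.startswith("NAME"):
            grammar = [col.strip().lower() for col in line.split("\t")]
            continue
        if grammar is None:
            continue  # ignore anything printed before the header
        fields = line.split("\t", len(grammar))
        plugin = {grammar[i]: v.strip() for i, v in enumerate(fields)}
        if plugin.get("name", "") == "":
            continue
        plugins.append(plugin)
    return plugins

sample = [
    "NAME\tVERSION\tDESCRIPTION",
    "diff\t3.14.0\tPreview helm upgrade changes as a diff",
]
print(parse_plugin_list(sample))
```

Because the keys come from the header rather than fixed positions, callers such as `get_plugin_version` can look up `plugin["version"]` by name, which is what the later hunk in this diff switches to.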
@@ -202,21 +206,35 @@ class AnsibleHelmModule(object):
return m.group(1)
return None
def validate_helm_version(self):
def is_helm_v4(self):
helm_version = self.get_helm_version()
if helm_version is None:
return False
return LooseVersion(helm_version) >= LooseVersion("4.0.0")
def is_helm_version_compatible_with_helm_diff(self, helm_diff_version):
"""
Validate that Helm version is >=3.0.0 and <4.0.0.
Helm 4 is not yet supported.
Return true if the helm version is compatible with the helm diff version
Helm v4 requires helm diff v3.14.0
"""
if not helm_diff_version:
return False
if self.is_helm_v4():
return LooseVersion(helm_diff_version) >= LooseVersion("3.14.0")
return True
def validate_helm_version(self, version="3.0.0"):
"""
Validate that Helm version is >= version (default version=3.0.0).
"""
helm_version = self.get_helm_version()
if helm_version is None:
self.fail_json(msg="Unable to determine Helm version")
if (LooseVersion(helm_version) < LooseVersion("3.0.0")) or (
LooseVersion(helm_version) >= LooseVersion("4.0.0")
):
if LooseVersion(helm_version) < LooseVersion(version):
self.fail_json(
msg="Helm version must be >=3.0.0,<4.0.0, current version is {0}".format(
helm_version
msg="Helm version must be >= {0}, current version is {1}".format(
version, helm_version
)
)
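The compatibility rule above can be condensed into a small sketch. This uses plain integer tuples instead of `LooseVersion` (an assumption for illustration, not the module's actual helper): helm-diff must be >= 3.14.0 when Helm itself is v4, while Helm v3 accepts any installed helm-diff version.

```python
def parse_version(v):
    # Compare the first three numeric components, e.g. "3.14.0" -> (3, 14, 0)
    return tuple(int(p) for p in v.split(".")[:3])

def diff_plugin_compatible(helm_version, helm_diff_version):
    if not helm_diff_version:
        return False  # plugin not installed at all
    if parse_version(helm_version) >= (4, 0, 0):
        # Helm v4 requires helm-diff >= 3.14.0
        return parse_version(helm_diff_version) >= (3, 14, 0)
    return True  # any helm-diff works with Helm v3

print(diff_plugin_compatible("4.0.0", "3.13.2"))   # -> False
print(diff_plugin_compatible("3.17.0", "3.4.1"))   # -> True
```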


@@ -0,0 +1,91 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import hashlib
import os
import traceback
try:
import yaml
IMP_YAML = True
IMP_YAML_ERR = None
except ImportError:
IMP_YAML = False
IMP_YAML_ERR = traceback.format_exc()
def load_yaml_file(path):
if not path or not os.path.exists(path):
return {}
with open(path, "r") as f:
return yaml.safe_load(f) or {}
def deep_merge(base, updates):
result = base.copy()
for key, value in updates.items():
if key in result and isinstance(result[key], dict) and isinstance(value, dict):
result[key] = deep_merge(result[key], value)
else:
result[key] = value
return result
def merge_by_name(existing, new):
merged = {}
for item in existing:
if isinstance(item, dict) and "name" in item:
merged[item["name"]] = item
for item in new:
if not isinstance(item, dict) or "name" not in item:
continue
name = item["name"]
behavior = item.get("behavior", "merge")
item_copy = {k: v for k, v in item.items() if k != "behavior"}
if name in merged:
if behavior == "keep":
continue
elif behavior == "replace":
merged[name] = item_copy
else:
result = {"name": name}
for key in ["cluster", "user", "context"]:
if key in merged[name] or key in item_copy:
existing_config = merged[name].get(key, {})
new_config = item_copy.get(key, {})
result[key] = deep_merge(existing_config, new_config)
for key in merged[name]:
if key not in ["name", "cluster", "user", "context"]:
result[key] = merged[name][key]
for key in item_copy:
if (
key not in ["name", "cluster", "user", "context"]
and key not in result
):
result[key] = item_copy[key]
merged[name] = result
else:
merged[name] = item_copy
return list(merged.values())
def hash_data(data):
"""Generate SHA-256 hash for idempotency checking."""
return hashlib.sha256(yaml.safe_dump(data, sort_keys=True).encode()).hexdigest()
def write_file(dest, data):
if not dest:
return False
with open(dest, "w") as f:
yaml.safe_dump(data, f, sort_keys=False)
return True
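To illustrate the merge helpers above: `deep_merge` is the same recursive dict merge, and a simplified, hypothetical `merge_named` shows how the per-item `behavior` key (`keep`, `replace`, or the default `merge`) decides what happens to a named kubeconfig entry. This sketch omits the special handling of the `cluster`/`user`/`context` sub-keys that the real `merge_by_name` performs.

```python
def deep_merge(base, updates):
    # Recursively merge nested dicts; scalar values in updates win.
    result = base.copy()
    for key, value in updates.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

def merge_named(existing, new):
    merged = {item["name"]: item for item in existing}
    for item in new:
        name = item["name"]
        behavior = item.get("behavior", "merge")
        entry = {k: v for k, v in item.items() if k != "behavior"}
        if name not in merged or behavior == "replace":
            merged[name] = entry
        elif behavior == "merge":
            merged[name] = deep_merge(merged[name], entry)
        # behavior == "keep": leave the existing entry untouched
    return list(merged.values())

clusters = [{"name": "dev", "cluster": {"server": "https://old", "insecure-skip-tls-verify": True}}]
update = [{"name": "dev", "behavior": "merge", "cluster": {"server": "https://new"}}]
print(merge_named(clusters, update))
```

With `behavior: merge`, the updated `server` replaces the old one while the untouched `insecure-skip-tls-verify` flag survives, which is what makes the module safe to run against an existing kubeconfig.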


@@ -21,7 +21,7 @@ author:
- Matthieu Diehr (@d-matt)
requirements:
- "helm (https://github.com/helm/helm/releases)"
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
- "yaml (https://pypi.org/project/PyYAML/)"
description:
@@ -246,7 +246,7 @@ options:
version_added: 6.1.0
take_ownership:
description:
- When upgrading, Helm will ignore the check for helm annotations and take ownership of the existing resources
- Helm will ignore the check for helm annotations and take ownership of the existing resources
- This feature requires helm >= 3.17.0
type: bool
default: False
@@ -500,9 +500,13 @@ def get_release_status(module, release_name, all_status=False):
"--filter",
release_name,
]
if all_status:
if all_status and not module.is_helm_v4():
# --all has been removed from `helm list` command on helm v4
list_command.append("--all")
elif not all_status:
# The default behavior to display only deployed releases has been removed from
# Helm v4
list_command.append("--deployed")
rc, out, err = module.run_helm_command(list_command)
release = get_release(yaml.safe_load(out), release_name)
@@ -739,8 +743,8 @@ def get_plugin_version(plugin):
return None
for line in out:
if line[0] == plugin:
return line[1]
if line["name"] == plugin:
return line["version"]
return None
@@ -928,7 +932,7 @@ def main():
if not IMP_YAML:
module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)
# Validate Helm version >=3.0.0,<4.0.0
# Validate Helm version >=3.0.0
module.validate_helm_version()
changed = False
@@ -1010,8 +1014,7 @@ def main():
if wait:
helm_version = module.get_helm_version()
if LooseVersion(helm_version) < LooseVersion("3.7.0"):
opt_result["warnings"] = []
opt_result["warnings"].append(
module.warn(
"helm uninstall support option --wait for helm release >= 3.7.0"
)
wait = False
@@ -1092,20 +1095,28 @@ def main():
reset_then_reuse_values=reset_then_reuse_values,
insecure_skip_tls_verify=insecure_skip_tls_verify,
plain_http=plain_http,
take_ownership=take_ownership,
skip_schema_validation=skip_schema_validation,
)
changed = True
else:
helm_diff_version = get_plugin_version("diff")
if helm_diff_version and (
not chart_repo_url
or (
chart_repo_url
and LooseVersion(helm_diff_version) >= LooseVersion("3.4.1")
helm_version_compatible = module.is_helm_version_compatible_with_helm_diff(
helm_diff_version
)
if (
helm_diff_version
and helm_version_compatible
and (
not chart_repo_url
or (
chart_repo_url
and LooseVersion(helm_diff_version) >= LooseVersion("3.4.1")
)
)
):
(would_change, prepared) = helmdiff_check(
would_change, prepared = helmdiff_check(
module,
release_name,
chart_ref,
@@ -1126,10 +1137,18 @@ def main():
if would_change and module._diff:
opt_result["diff"] = {"prepared": prepared}
else:
module.warn(
"The default idempotency check can fail to report changes in certain cases. "
"Install helm diff >= 3.4.1 for better results."
)
if helm_diff_version and not helm_version_compatible:
module.warn(
"Idempotency checks are currently disabled due to a version mismatch."
f" Helm version {module.get_helm_version()} requires helm-diff >= 3.14.0,"
f" but the environment is currently running {helm_diff_version}."
" Please align the plugin versions to restore standard behavior."
)
else:
module.warn(
"The default idempotency check can fail to report changes in certain cases. "
"Install helm diff >= 3.4.1 for better results."
)
would_change = default_check(
release_status, chart_info, release_values, values_files
)

View File

@@ -20,7 +20,7 @@ author:
- Lucas Boisserie (@LucasBoisserie)
requirements:
- "helm (https://github.com/helm/helm/releases)"
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
- "yaml (https://pypi.org/project/PyYAML/)"
description:
@@ -245,7 +245,7 @@ def main():
if not IMP_YAML:
module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)
# Validate Helm version >=3.0.0,<4.0.0
# Validate Helm version >=3.0.0
module.validate_helm_version()
release_name = module.params.get("release_name")

View File

@@ -16,7 +16,7 @@ version_added: 1.0.0
author:
- Abhijeet Kasurde (@Akasurde)
requirements:
- "helm (https://github.com/helm/helm/releases)"
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
description:
- Manages Helm plugins.
options:
@@ -48,6 +48,14 @@ options:
required: false
type: str
version_added: 2.3.0
verify:
description:
- Verify the plugin signature before installing.
- This option requires helm version >= 4.0.0
- Used with I(state=present).
type: bool
default: true
version_added: 6.4.0
extends_documentation_fragment:
- kubernetes.core.helm_common_options
"""
@@ -118,6 +126,9 @@ from ansible_collections.kubernetes.core.plugins.module_utils.helm_args_common i
HELM_AUTH_ARG_SPEC,
HELM_AUTH_MUTUALLY_EXCLUSIVE,
)
from ansible_collections.kubernetes.core.plugins.module_utils.version import (
LooseVersion,
)
def argument_spec():
@@ -138,6 +149,10 @@ def argument_spec():
default="present",
choices=["present", "absent", "latest"],
),
verify=dict(
type="bool",
default=True,
),
)
)
return arg_spec
@@ -161,7 +176,7 @@ def main():
mutually_exclusive=mutually_exclusive(),
)
# Validate Helm version >=3.0.0,<4.0.0
# Validate helm version >= 3.0.0
module.validate_helm_version()
state = module.params.get("state")
@@ -171,8 +186,19 @@ def main():
if state == "present":
helm_cmd_common += " install %s" % module.params.get("plugin_path")
plugin_version = module.params.get("plugin_version")
verify = module.params.get("verify")
if plugin_version is not None:
helm_cmd_common += " --version=%s" % plugin_version
if not verify:
helm_version = module.get_helm_version()
if LooseVersion(helm_version) < LooseVersion("4.0.0"):
module.warn(
"verify parameter requires helm >= 4.0.0, current version is {0}".format(
helm_version
)
)
else:
helm_cmd_common += " --verify=false"
if not module.check_mode:
rc, out, err = module.run_helm_command(
helm_cmd_common, fails_on_error=False
@@ -211,9 +237,9 @@ def main():
elif state == "absent":
plugin_name = module.params.get("plugin_name")
rc, output, err, command = module.get_helm_plugin_list()
out = parse_helm_plugin_list(output=output.splitlines())
plugins = parse_helm_plugin_list(output=output.splitlines())
if not out:
if not plugins:
module.exit_json(
failed=False,
changed=False,
@@ -224,12 +250,7 @@ def main():
rc=rc,
)
found = False
for line in out:
if line[0] == plugin_name:
found = True
break
if not found:
if all(plugin["name"] != plugin_name for plugin in plugins):
module.exit_json(
failed=False,
changed=False,
@@ -267,9 +288,9 @@ def main():
elif state == "latest":
plugin_name = module.params.get("plugin_name")
rc, output, err, command = module.get_helm_plugin_list()
out = parse_helm_plugin_list(output=output.splitlines())
plugins = parse_helm_plugin_list(output=output.splitlines())
if not out:
if not plugins:
module.exit_json(
failed=False,
changed=False,
@@ -280,12 +301,7 @@ def main():
rc=rc,
)
found = False
for line in out:
if line[0] == plugin_name:
found = True
break
if not found:
if all(plugin["name"] != plugin_name for plugin in plugins):
module.exit_json(
failed=False,
changed=False,

View File

@@ -16,7 +16,7 @@ version_added: 1.0.0
author:
- Abhijeet Kasurde (@Akasurde)
requirements:
- "helm (https://github.com/helm/helm/releases)"
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
description:
- Gather information about Helm plugins installed in namespace.
options:
@@ -98,29 +98,16 @@ def main():
supports_check_mode=True,
)
# Validate Helm version >=3.0.0,<4.0.0
# Validate helm version >= 3.0.0
module.validate_helm_version()
plugin_name = module.params.get("plugin_name")
plugin_list = []
rc, output, err, command = module.get_helm_plugin_list()
out = parse_helm_plugin_list(output=output.splitlines())
for line in out:
if plugin_name is None:
plugin_list.append(
{"name": line[0], "version": line[1], "description": line[2]}
)
continue
if plugin_name == line[0]:
plugin_list.append(
{"name": line[0], "version": line[1], "description": line[2]}
)
break
plugins = parse_helm_plugin_list(output=output.splitlines())
if plugin_name is not None:
plugins = [plugin for plugin in plugins if plugin.get("name") == plugin_name]
module.exit_json(
changed=True,
@@ -128,7 +115,7 @@ def main():
stdout=output,
stderr=err,
rc=rc,
plugin_list=plugin_list,
plugin_list=plugins,
)

View File

@@ -21,7 +21,7 @@ description:
- There are options for unpacking the chart after download.
requirements:
- "helm >= 3.0, <4.0.0 (https://github.com/helm/helm/releases)"
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
options:
chart_ref:
@@ -372,7 +372,7 @@ def main():
mutually_exclusive=[("chart_version", "chart_devel")],
)
# Validate Helm version >=3.0.0,<4.0.0
# Validate Helm version >=3.0.0
module.validate_helm_version()
helm_version = module.get_helm_version()

View File

@@ -20,7 +20,7 @@ author:
- Yuriy Novostavskiy (@yurnov)
requirements:
- "helm (https://github.com/helm/helm/releases) >= 3.8.0, <4.0.0"
- "helm (https://github.com/helm/helm/releases) >= 3.8.0"
description:
- Helm registry authentication module allows you to login C(helm registry login) and logout C(helm registry logout) from a Helm registry.
@@ -75,6 +75,14 @@ options:
- Path to the CA certificate SSL file for verify registry server certificate.
required: false
type: path
plain_http:
description:
- Use insecure HTTP connections for C(helm registry login).
- Requires Helm >= 3.18.0
required: false
type: bool
default: False
version_added: 6.4.0
binary_path:
description:
- The path of a helm binary to use.
@@ -148,6 +156,7 @@ def arg_spec():
key_file=dict(type="path", required=False),
cert_file=dict(type="path", required=False),
ca_file=dict(type="path", required=False),
plain_http=dict(type="bool", default=False),
)
@@ -160,6 +169,7 @@ def login(
key_file,
cert_file,
ca_file,
plain_http,
):
login_command = command + " registry login " + host
@@ -177,6 +187,8 @@ def login(
if ca_file is not None:
login_command += " --ca-file=" + ca_file
if plain_http:
login_command += " --plain-http"
return login_command
@@ -194,8 +206,8 @@ def main():
supports_check_mode=True,
)
# Validate Helm version >=3.0.0,<4.0.0
module.validate_helm_version()
# Validate Helm version >=3.8.0
module.validate_helm_version(version="3.8.0")
changed = False
@@ -207,6 +219,19 @@ def main():
key_file = module.params.get("key_file")
cert_file = module.params.get("cert_file")
ca_file = module.params.get("ca_file")
plain_http = module.params.get("plain_http")
helm_version = module.get_helm_version()
if plain_http:
if LooseVersion(helm_version) < LooseVersion("3.18.0"):
module.warn(
"plain_http option requires helm >= 3.18.0, current version is {0}".format(
helm_version
)
)
# reset option
plain_http = False
helm_cmd = module.get_helm_binary()
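The warn-and-reset pattern used for O(plain_http) above can be shown in isolation: rather than failing on an older Helm binary, the module emits a warning and silently drops the option. `gate_plain_http` is a hypothetical helper mirroring that logic with a simplified major/minor version compare:

```python
def gate_plain_http(plain_http, helm_version, warn):
    """Drop plain_http with a warning when helm is older than 3.18.0 (sketch)."""
    major, minor = (int(p) for p in helm_version.split(".")[:2])
    if plain_http and (major, minor) < (3, 18):
        warn(
            "plain_http option requires helm >= 3.18.0, current version is %s"
            % helm_version
        )
        return False  # reset option instead of failing
    return plain_http


warnings = []
assert gate_plain_http(True, "3.17.2", warnings.append) is False
assert gate_plain_http(True, "3.18.0", warnings.append) is True
assert len(warnings) == 1
```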
@@ -215,7 +240,15 @@ def main():
changed = True
elif state == "present":
helm_cmd = login(
helm_cmd, host, insecure, username, password, key_file, cert_file, ca_file
helm_cmd,
host,
insecure,
username,
password,
key_file,
cert_file,
ca_file,
plain_http,
)
changed = True
@@ -238,7 +271,6 @@ def main():
command=helm_cmd,
)
helm_version = module.get_helm_version()
if LooseVersion(helm_version) >= LooseVersion("3.18.0") and state == "absent":
# https://github.com/ansible-collections/kubernetes.core/issues/944
module.warn(

View File

@@ -20,7 +20,7 @@ author:
- Lucas Boisserie (@LucasBoisserie)
requirements:
- "helm (https://github.com/helm/helm/releases)"
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
- "yaml (https://pypi.org/project/PyYAML/)"
description:
@@ -295,7 +295,7 @@ def main():
if not IMP_YAML:
module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)
# Validate Helm version >=3.0.0,<4.0.0
# Validate Helm version >= 3.0.0
module.validate_helm_version()
changed = False

View File

@@ -21,6 +21,10 @@ author:
description:
- Render chart templates to an output directory or as text of concatenated yaml documents.
requirements:
- "helm >= 3.0.0 (https://github.com/helm/helm/releases)"
- "yaml (https://pypi.org/project/PyYAML/)"
options:
binary_path:
description:
@@ -347,7 +351,7 @@ def main():
if not IMP_YAML:
module.fail_json(msg=missing_required_lib("yaml"), exception=IMP_YAML_ERR)
# Validate Helm version >=3.0.0,<4.0.0
# Validate Helm version >=3.0.0
module.validate_helm_version()
helm_cmd = module.get_helm_binary()

View File

@@ -230,7 +230,7 @@ def filter_pods(pods, force, ignore_daemonset, delete_emptydir_data):
else:
to_delete.append((pod.metadata.namespace, pod.metadata.name))
warnings, errors = [], []
warnings, errors, info = [], [], []
if unmanaged:
pod_names = ",".join([pod[0] + "/" + pod[1] for pod in unmanaged])
if not force:
@@ -242,7 +242,7 @@ def filter_pods(pods, force, ignore_daemonset, delete_emptydir_data):
)
else:
# Pod not managed will be deleted as 'force' is true
warnings.append(
info.append(
"Deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: {0}.".format(
pod_names
)
@@ -264,7 +264,7 @@ def filter_pods(pods, force, ignore_daemonset, delete_emptydir_data):
"cannot delete Pods with local storage: {0}.".format(pod_names)
)
else:
warnings.append("Deleting Pods with local storage: {0}.".format(pod_names))
info.append("Deleting Pods with local storage: {0}.".format(pod_names))
for pod in localStorage:
to_delete.append((pod[0], pod[1]))
@@ -278,8 +278,8 @@ def filter_pods(pods, force, ignore_daemonset, delete_emptydir_data):
)
)
else:
warnings.append("Ignoring DaemonSet-managed Pods: {0}.".format(pod_names))
return to_delete, warnings, errors
info.append("Ignoring DaemonSet-managed Pods: {0}.".format(pod_names))
return to_delete, warnings, errors, info
class K8sDrainAnsible(object):
@@ -334,18 +334,19 @@ class K8sDrainAnsible(object):
def evict_pods(self, pods):
for namespace, name in pods:
try:
if self._drain_options.get("disable_eviction"):
self._api_instance.delete_namespaced_pod(
name=name, namespace=namespace, body=self._delete_options
)
else:
body = v1_eviction(
delete_options=self._delete_options,
metadata=V1ObjectMeta(name=name, namespace=namespace),
)
self._api_instance.create_namespaced_pod_eviction(
name=name, namespace=namespace, body=body
)
if not self._module.check_mode:
if self._drain_options.get("disable_eviction"):
self._api_instance.delete_namespaced_pod(
name=name, namespace=namespace, body=self._delete_options
)
else:
body = v1_eviction(
delete_options=self._delete_options,
metadata=V1ObjectMeta(name=name, namespace=namespace),
)
self._api_instance.create_namespaced_pod_eviction(
name=name, namespace=namespace, body=body
)
self._changed = True
except ApiException as exc:
if exc.reason != "Not Found":
@@ -362,11 +363,7 @@ class K8sDrainAnsible(object):
)
def list_pods(self):
params = {
"field_selector": "spec.nodeName={name}".format(
name=self._module.params.get("name")
)
}
params = {"field_selector": "spec.nodeName=" + self._module.params.get("name")}
pod_selectors = self._module.params.get("pod_selectors")
if pod_selectors:
params["label_selector"] = ",".join(pod_selectors)
@@ -376,7 +373,8 @@ class K8sDrainAnsible(object):
# Mark node as unschedulable
result = []
if not node_unschedulable:
self.patch_node(unschedulable=True)
if not self._module.check_mode:
self.patch_node(unschedulable=True)
result.append(
"node {0} marked unschedulable.".format(self._module.params.get("name"))
)
@@ -391,7 +389,8 @@ class K8sDrainAnsible(object):
def _revert_node_patch():
if self._changed:
self._changed = False
self.patch_node(unschedulable=False)
if not self._module.check_mode:
self.patch_node(unschedulable=False)
try:
pod_list = self.list_pods()
@@ -401,7 +400,7 @@ class K8sDrainAnsible(object):
delete_emptydir_data = self._drain_options.get(
"delete_emptydir_data", False
)
pods, warnings, errors = filter_pods(
pods, warnings, errors, info = filter_pods(
pod_list.items, force, ignore_daemonset, delete_emptydir_data
)
if errors:
@@ -431,18 +430,25 @@ class K8sDrainAnsible(object):
if pods:
self.evict_pods(pods)
number_pod = len(pods)
if self._drain_options.get("wait_timeout") is not None:
warn = self.wait_for_pod_deletion(
pods,
self._drain_options.get("wait_timeout"),
self._drain_options.get("wait_sleep"),
if self._module.check_mode:
result.append(
"Would have deleted {0} Pod(s) from node if not in check mode.".format(
number_pod
)
)
if warn:
warnings.append(warn)
result.append("{0} Pod(s) deleted from node.".format(number_pod))
else:
wait_timeout = self._drain_options.get("wait_timeout")
wait_sleep = self._drain_options.get("wait_sleep")
if wait_timeout is not None:
warn = self.wait_for_pod_deletion(pods, wait_timeout, wait_sleep)
if warn:
warnings.append(warn)
result.append("{0} Pod(s) deleted from node.".format(number_pod))
if warnings:
for warning in warnings:
self._module.warn(warning)
for line in info:
self._module.debug(line)
return dict(result=" ".join(result))
def patch_node(self, unschedulable):
@@ -483,7 +489,8 @@ class K8sDrainAnsible(object):
self._module.exit_json(
result="node {0} already marked unschedulable.".format(name)
)
self.patch_node(unschedulable=True)
if not self._module.check_mode:
self.patch_node(unschedulable=True)
result["result"] = "node {0} marked unschedulable.".format(name)
self._changed = True
@@ -492,7 +499,8 @@ class K8sDrainAnsible(object):
self._module.exit_json(
result="node {0} already marked schedulable.".format(name)
)
self.patch_node(unschedulable=False)
if not self._module.check_mode:
self.patch_node(unschedulable=False)
result["result"] = "node {0} marked schedulable.".format(name)
self._changed = True
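These drain changes wrap every mutating API call in an `if not self._module.check_mode:` guard, the standard Ansible pattern for check-mode support: report what would change without touching the cluster. A stripped-down sketch of the cordon step (all names hypothetical):

```python
class FakeModule:
    """Minimal stand-in for an AnsibleModule exposing check_mode."""

    def __init__(self, check_mode):
        self.check_mode = check_mode


def cordon_node(module, patch_node):
    """Mark a node unschedulable, honoring check_mode like k8s_drain does."""
    if not module.check_mode:
        patch_node(unschedulable=True)  # only mutate outside check mode
    # The result message is the same either way, so `--check` previews it.
    return "node marked unschedulable."


calls = []
msg = cordon_node(FakeModule(check_mode=True), lambda **kw: calls.append(kw))
assert calls == []  # check mode: no API call issued
assert msg == "node marked unschedulable."

cordon_node(FakeModule(check_mode=False), lambda **kw: calls.append(kw))
assert calls == [{"unschedulable": True}]
```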
@@ -535,7 +543,9 @@ def argspec():
def main():
module = AnsibleK8SModule(module_class=AnsibleModule, argument_spec=argspec())
module = AnsibleK8SModule(
module_class=AnsibleModule, argument_spec=argspec(), supports_check_mode=True
)
if not HAS_EVICTION_API:
module.fail_json(

View File

@@ -120,6 +120,21 @@ EXAMPLES = r"""
namespace: default
wait_sleep: 10
wait_timeout: 360
- name: Wait for OpenShift bootstrap to complete
  kubernetes.core.k8s_info:
    api_version: v1
    kind: ConfigMap
    name: bootstrap
    namespace: kube-system
  register: ocp_bootstrap_status
  until: >
    ocp_bootstrap_status.resources is defined and
    (ocp_bootstrap_status.resources | length > 0) and
    (ocp_bootstrap_status.resources[0].data.status is defined) and
    (ocp_bootstrap_status.resources[0].data.status == 'complete')
  retries: 60
  delay: 15
"""
RETURN = r"""

View File

@@ -0,0 +1,441 @@
#!/usr/bin/python
#
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: kubeconfig
short_description: Generate, update, and optionally write Kubernetes kubeconfig files
version_added: "6.5.0"
author: "Youssef Khalid Ali (@YoussefKhalidAli)"
description:
  - Build, update, and manage Kubernetes kubeconfig files using structured input.
  - Supports loading an existing kubeconfig file and merging clusters, users, and contexts.
  - Can optionally write the resulting kubeconfig to a destination path.
  - Ensures idempotent behavior by only updating files when changes occur.
requirements:
  - "PyYAML >= 5.1"
notes:
  - Input data is merged by resource name (cluster, user, context).
  - Updates under O(clusters), O(users), and O(contexts) are matched by C(name) against the kubeconfig loaded from O(path).
  - For an existing C(name), each entry's C(behavior) suboption controls the update.
  - The default is V(merge), which merges nested C(cluster), C(user), and C(context) data so unspecified keys are preserved.
  - With V(replace), the previous entry for that name is dropped and only the new definition is used.
  - With V(keep), the existing entry is left unchanged.
  - This can be used to move kubeconfig files to a different location with different content.
  - This module does not validate cluster connectivity or authentication.
  - The module supports C(check_mode) and will not write files when enabled.
  - The structure follows the standard Kubernetes kubeconfig format as defined in the Kubernetes documentation.
  - Tokens and sensitive data should be protected using ansible-vault or environment variables.
options:
  path:
    description:
      - Path to an existing kubeconfig file to load and merge from.
      - If the file does not exist, a new kubeconfig will be created.
      - This becomes the default destination if O(dest) is not specified.
    type: str
    required: true
  dest:
    description:
      - Destination path where the final kubeconfig should be written.
      - If not specified, the kubeconfig will be saved to O(path).
      - Allows copying and modifying a kubeconfig to a new location.
    type: str
    required: false
  clusters:
    description:
      - List of cluster definitions to merge into the kubeconfig.
      - Each cluster is identified by its C(name).
      - When C(name) matches an existing cluster, the default C(behavior) is V(merge).
      - See the C(behavior) suboption for V(replace) and V(keep).
    type: list
    elements: dict
    required: false
    default: []
    suboptions:
      name:
        description:
          - Unique name identifier for the cluster.
        type: str
        required: true
      behavior:
        description:
          - How to handle merging if a cluster with this name already exists.
          - C(merge) - Update only the specified fields, preserve others (default).
          - C(replace) - Replace the entire cluster definition.
          - C(keep) - Keep existing cluster, skip this entry.
        type: str
        choices: ['merge', 'replace', 'keep']
        default: merge
      cluster:
        description:
          - Cluster configuration details.
        type: dict
        required: true
        suboptions:
          server:
            description:
              - Kubernetes API server URL (e.g., C(https://k8s.example.com:6443)).
            type: str
            required: true
          certificate-authority:
            description:
              - Path to a CA certificate file for validating the API server certificate.
            type: str
          certificate-authority-data:
            description:
              - Base64 encoded CA certificate data.
              - Use this instead of C(certificate-authority) for embedded certificates.
            type: str
          insecure-skip-tls-verify:
            description:
              - If true, the server's certificate will not be validated.
            type: bool
          proxy-url:
            description:
              - Optional proxy URL for cluster connections.
            type: str
          tls-server-name:
            description:
              - Server name to use for server certificate validation.
            type: str
  users:
    description:
      - List of user authentication configurations.
      - Each user is identified by its C(name).
      - When C(name) matches an existing user, the default C(behavior) is V(merge).
      - See the C(behavior) suboption for V(replace) and V(keep).
    type: list
    elements: dict
    required: false
    default: []
    suboptions:
      name:
        description:
          - Unique name identifier for the user.
        type: str
        required: true
      behavior:
        description:
          - How to handle merging if a user with this name already exists.
          - C(merge) - Update only the specified fields, preserve others (default).
          - C(replace) - Replace the entire user definition.
          - C(keep) - Keep existing user, skip this entry.
        type: str
        choices: ['merge', 'replace', 'keep']
        default: merge
      user:
        description:
          - User authentication configuration.
        type: dict
        required: true
        suboptions:
          token:
            description:
              - Bearer token for authentication.
            type: str
          username:
            description:
              - Username for basic authentication.
            type: str
          password:
            description:
              - Password for basic authentication.
            type: str
          client-certificate:
            description:
              - Path to client certificate file.
              - Used for certificate-based authentication.
            type: str
          client-key:
            description:
              - Path to client private key file.
              - Must be provided with C(client-certificate).
            type: str
          client-certificate-data:
            description:
              - Base64 encoded client certificate.
              - Use instead of C(client-certificate) for embedded certificates.
            type: str
          client-key-data:
            description:
              - Base64 encoded client private key.
              - Use instead of C(client-key) for embedded keys.
            type: str
          auth-provider:
            description:
              - Authentication provider configuration (e.g., for GCP, Azure).
            type: dict
          exec:
            description:
              - Exec-based credential plugin configuration.
              - Used for external authentication providers.
            type: dict
  contexts:
    description:
      - List of context definitions linking users and clusters.
      - Each context is identified by its C(name).
      - When C(name) matches an existing context, the default C(behavior) is V(merge).
      - See the C(behavior) suboption for V(replace) and V(keep).
    type: list
    elements: dict
    required: false
    default: []
    suboptions:
      name:
        description:
          - Unique name identifier for the context.
        type: str
        required: true
      behavior:
        description:
          - How to handle merging if a context with this name already exists.
          - C(merge) - Update only the specified fields, preserve others (default).
          - C(replace) - Replace the entire context definition.
          - C(keep) - Keep existing context, skip this entry.
        type: str
        choices: ['merge', 'replace', 'keep']
        default: merge
      context:
        description:
          - Context configuration linking cluster and user.
        type: dict
        required: true
        suboptions:
          cluster:
            description:
              - Name of the cluster to use (must match a cluster name in O(clusters)).
            type: str
            required: true
          user:
            description:
              - Name of the user to authenticate as (must match a user name in O(users)).
            type: str
            required: true
          namespace:
            description:
              - Default namespace to use for this context.
              - If not specified, defaults to C(default).
            type: str
  preferences:
    description:
      - Kubeconfig preferences.
      - Used for client-side settings like color output, default editor, etc.
    type: dict
    required: false
    default: {}
  current_context:
    description:
      - Name of the context to set as current/active.
      - This context will be used by default when using kubectl.
      - Must match one of the context names defined in O(contexts).
    type: str
    required: false
seealso:
  - name: Kubernetes kubeconfig documentation
    description: Official Kubernetes documentation for kubeconfig files
    link: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
  - name: kubectl config documentation
    description: kubectl commands for working with kubeconfig files
    link: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/
"""
EXAMPLES = r"""
# Create a new kubeconfig file with a single cluster
- name: Create basic kubeconfig
  kubernetes.core.kubeconfig:
    path: /home/user/.kube/config
    clusters:
      - name: production-cluster
        cluster:
          server: https://prod.k8s.example.com:6443
          certificate-authority-data: LS0tLS1CRUdJTi...
    users:
      - name: admin-user
        user:
          token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
    contexts:
      - name: prod-admin
        context:
          cluster: production-cluster
          user: admin-user
          namespace: production
    current_context: prod-admin

- name: Copy and modify kubeconfig
  kubernetes.core.kubeconfig:
    path: /home/user/.kube/config
    dest: /home/user/.kube/config-backup
    clusters:
      - name: new-cluster
        cluster:
          server: https://new.example.com:6443

- name: Switch current context
  kubernetes.core.kubeconfig:
    path: ~/.kube/config
    current_context: prod-context

- name: Update user credentials
  kubernetes.core.kubeconfig:
    path: ~/.kube/config
    users:
      - name: admin-user
        user:
          token: "{{ new_admin_token }}"
"""
RETURN = r"""
kubeconfig:
  description: The complete kubeconfig data structure.
  type: dict
  returned: always
dest:
  description: The path where the kubeconfig was written.
  type: str
  returned: always
  sample: /home/user/.kube/config
"""
import os
import traceback

from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native

from ansible_collections.kubernetes.core.plugins.module_utils.args_common import (
    extract_sensitive_values_from_kubeconfig,
)
from ansible_collections.kubernetes.core.plugins.module_utils.kubeconfig import (
    hash_data,
    load_yaml_file,
    merge_by_name,
    write_file,
)

try:
    import yaml

    IMP_YAML = True
    IMP_YAML_ERR = None
except ImportError:
    IMP_YAML = False
    IMP_YAML_ERR = traceback.format_exc()
def run_module():
    module_args = dict(
        path=dict(type="str", required=True),
        dest=dict(type="str", required=False),
        clusters=dict(type="list", elements="dict", required=False, default=[]),
        users=dict(type="list", elements="dict", required=False, default=[]),
        contexts=dict(type="list", elements="dict", required=False, default=[]),
        preferences=dict(type="dict", required=False, default={}),
        current_context=dict(type="str", required=False),
    )

    module = AnsibleModule(argument_spec=module_args, supports_check_mode=True)

    path = module.params["path"]
    dest = module.params["dest"] or path
    clusters_input = module.params["clusters"]
    users_input = module.params["users"]
    contexts_input = module.params["contexts"]
    preferences = module.params["preferences"]
    current_context = module.params["current_context"]

    # Load existing kubeconfig
    try:
        if not IMP_YAML:
            module.fail_json(
                msg=missing_required_lib("pyyaml"),
                exception=IMP_YAML_ERR,
            )
        existing = load_yaml_file(path) if path else {}
    except Exception as e:
        module.fail_json(
            msg="Failed to load existing kubeconfig: %s" % to_native(e),
            exception=traceback.format_exc(),
        )

    clusters = merge_by_name(existing.get("clusters", []), clusters_input)
    users = merge_by_name(existing.get("users", []), users_input)
    contexts = merge_by_name(existing.get("contexts", []), contexts_input)

    # Build final kubeconfig
    kubeconfig = {
        "apiVersion": "v1",
        "kind": "Config",
        "preferences": preferences or existing.get("preferences", {}),
        "clusters": clusters,
        "users": users,
        "contexts": contexts,
        "current-context": current_context or existing.get("current-context") or "",
    }

    changed = False
    old_data = {}
    if os.path.exists(dest):
        try:
            with open(dest, "r") as f:
                old_data = yaml.safe_load(f) or {}
        except Exception as e:
            module.fail_json(
                msg="Failed to read destination file: %s" % to_native(e),
                exception=traceback.format_exc(),
            )

    old_hash = hash_data(old_data)
    new_hash = hash_data(kubeconfig)

    if old_hash != new_hash:
        if not module.check_mode:
            try:
                write_file(dest, kubeconfig)
            except Exception as e:
                module.fail_json(
                    msg="Failed to write kubeconfig: %s" % to_native(e),
                    exception=traceback.format_exc(),
                )
        changed = True

    if isinstance(kubeconfig, dict):
        module.no_log_values.update(
            extract_sensitive_values_from_kubeconfig(kubeconfig)
        )

    module.exit_json(
        changed=changed,
        kubeconfig=kubeconfig,
        dest=dest,
        msg=(
            "Kubeconfig file has been updated."
            if changed
            else "Kubeconfig file is already up to date."
        ),
    )


def main():
    run_module()


if __name__ == "__main__":
    main()
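The merge semantics documented above (per-entry V(merge), V(replace), and V(keep) behavior, keyed by C(name)) can be sketched as follows. This is an illustration of the documented contract, not the exact module_utils implementation:

```python
import copy


def merge_by_name(existing, updates):
    """Merge update entries into existing entries keyed by `name` (sketch)."""
    merged = {item["name"]: copy.deepcopy(item) for item in existing}
    for item in updates:
        item_copy = copy.deepcopy(item)
        behavior = item_copy.pop("behavior", "merge")
        name = item_copy["name"]
        if name in merged and behavior == "keep":
            continue  # leave the existing entry untouched
        if name in merged and behavior == "merge":
            # Update only the supplied keys; merge nested dicts shallowly.
            for key, value in item_copy.items():
                if isinstance(value, dict) and isinstance(merged[name].get(key), dict):
                    merged[name][key].update(value)
                else:
                    merged[name][key] = value
        else:
            merged[name] = item_copy  # replace, or a brand-new entry
    return list(merged.values())


existing = [
    {"name": "prod", "cluster": {"server": "https://old:6443", "proxy-url": "http://p"}}
]

# merge (default): unspecified keys such as proxy-url are preserved
out = merge_by_name(existing, [{"name": "prod", "cluster": {"server": "https://new:6443"}}])
assert out[0]["cluster"] == {"server": "https://new:6443", "proxy-url": "http://p"}

# replace: the old definition is dropped entirely
out = merge_by_name(
    existing, [{"name": "prod", "behavior": "replace", "cluster": {"server": "https://x:6443"}}]
)
assert out[0]["cluster"] == {"server": "https://x:6443"}

# keep: the update is ignored for an existing name
out = merge_by_name(
    existing, [{"name": "prod", "behavior": "keep", "cluster": {"server": "https://y:6443"}}]
)
assert out[0]["cluster"]["server"] == "https://old:6443"
```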

View File

@@ -1,4 +1 @@
time=100
helm_info
helm_repository
helm_template
disabled # used by test targets helm_vX_XX_XX

View File

@@ -9,25 +9,25 @@ chart_test_version: 4.2.4
chart_test_version_local_path: 1.32.0
chart_test_version_upgrade: 4.2.5
chart_test_version_upgrade_local_path: 1.33.0
chart_test_repo: "https://kubernetes.github.io/ingress-nginx"
chart_test_repo: "https://stenic.github.io/k8status/"
chart_test_git_repo: "http://github.com/helm/charts.git"
chart_test_values:
revisionHistoryLimit: 0
myValue: "changed"
test_namespace:
- "helm-test-crds"
- "helm-uninstall"
- "helm-read-envvars"
- "helm-dep-update"
- "helm-local-path-001"
- "helm-local-path-002"
- "helm-local-path-003"
- "helm-from-repository"
- "helm-from-url"
- "helm-reuse-values"
- "helm-chart-with-space-into-name"
- "helm-reset-then-reuse-values"
- "helm-insecure"
- "helm-test-take-ownership"
- "helm-skip-schema-validation"
- "helm-test-crds-{{ helm_version | replace('.', '-') }}"
- "helm-uninstall-{{ helm_version | replace('.', '-') }}"
- "helm-read-envvars-{{ helm_version | replace('.', '-') }}"
- "helm-dep-update-{{ helm_version | replace('.', '-') }}"
- "helm-local-path-001-{{ helm_version | replace('.', '-') }}"
- "helm-local-path-002-{{ helm_version | replace('.', '-') }}"
- "helm-local-path-003-{{ helm_version | replace('.', '-') }}"
- "helm-from-repository-{{ helm_version | replace('.', '-') }}"
- "helm-from-url-{{ helm_version | replace('.', '-') }}"
- "helm-reuse-values-{{ helm_version | replace('.', '-') }}"
- "helm-chart-with-space-into-name-{{ helm_version | replace('.', '-') }}"
- "helm-reset-then-reuse-values-{{ helm_version | replace('.', '-') }}"
- "helm-insecure-{{ helm_version | replace('.', '-') }}"
- "helm-test-take-ownership-{{ helm_version | replace('.', '-') }}"
- "helm-skip-schema-validation-{{ helm_version | replace('.', '-') }}"

View File

@@ -52,7 +52,9 @@ import json
import subprocess
import time
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.kubernetes.core.plugins.module_utils.helm import (
AnsibleHelmModule,
)
class HelmReleaseNotFoundError(Exception):
@@ -60,7 +62,9 @@ class HelmReleaseNotFoundError(Exception):
super().__init__(message)
def create_pending_install_release(helm_binary, chart_ref, chart_release, namespace):
def create_pending_install_release(
module, helm_binary, chart_ref, chart_release, namespace
):
# create pending-install release
command = [
helm_binary,
@@ -78,13 +82,14 @@ def create_pending_install_release(helm_binary, chart_ref, chart_release, namesp
command = [
helm_binary,
"list",
"--all",
"--output=json",
"--namespace",
namespace,
"--filter",
chart_release,
]
if not module.is_helm_v4():
command.append("--all")
cmd = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = cmd.communicate()
@@ -92,11 +97,11 @@ def create_pending_install_release(helm_binary, chart_ref, chart_release, namesp
if not data:
error = "Release %s not found." % chart_release
raise HelmReleaseNotFoundError(message=error)
return data[0]["status"] == "pending-install", data[0]["status"]
return data[0]["status"] in ("pending-install", "failed"), data[0]["status"]
def main():
module = AnsibleModule(
module = AnsibleHelmModule(
argument_spec=dict(
binary_path=dict(type="path", required=True),
chart_ref=dict(type="str", required=True),
@@ -106,6 +111,7 @@ def main():
)
params = dict(
module=module,
helm_binary=module.params.get("binary_path"),
chart_release=module.params.get("chart_release"),
chart_ref=module.params.get("chart_ref"),
@@ -116,7 +122,7 @@ def main():
result, status = create_pending_install_release(**params)
if not result:
module.fail_json(
msg="unable to create pending-install release, current status is %s"
msg="unable to create pending-install/failed release, current status is %s"
% status
)
module.exit_json(changed=True, msg="Release created with status '%s'" % status)
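The hunk above stops passing `--all` to `helm list` when the binary is Helm v4, via the module's `is_helm_v4()` helper. A minimal sketch of that gating, assuming a hypothetical version parser in place of the collection's real helper in `module_utils/helm.py`:

```python
import re


def is_helm_v4(version_string):
    # Parse output such as "v4.0.0" or "v3.16.0+g13654a5" and report
    # whether the major version is 4 or later (illustrative only).
    match = re.search(r"v?(\d+)\.(\d+)\.(\d+)", version_string)
    if not match:
        raise ValueError("unrecognized helm version: %r" % version_string)
    return int(match.group(1)) >= 4


def build_list_command(helm_binary, namespace, release, helm_is_v4):
    # Per the hunk above, "--all" is only appended for pre-v4 binaries.
    command = [
        helm_binary, "list", "--output=json",
        "--namespace", namespace, "--filter", release,
    ]
    if not helm_is_v4:
        command.append("--all")
    return command
```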

View File

@@ -1,5 +1,3 @@
---
collections:
- kubernetes.core
dependencies:
- remove_namespace

View File

@@ -1,7 +0,0 @@
---
- connection: local
gather_facts: true
hosts: localhost
roles:
- helm

View File

@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_CALLBACKS_ENABLED=profile_tasks
export ANSIBLE_ROLES_PATH=../
ansible-playbook playbook.yaml "$@"

View File

@@ -1,15 +0,0 @@
---
- name: Init Helm folders
file:
path: /tmp/helm/
state: directory
- name: Unarchive Helm binary
unarchive:
src: 'https://get.helm.sh/{{ helm_archive_name | default(helm_default_archive_name) }}'
dest: /tmp/helm/
remote_src: yes
retries: 10
delay: 5
register: result
until: result is not failed

View File

@@ -1,10 +1,22 @@
---
- name: Ensure helm is not installed
file:
path: "{{ item }}"
state: absent
with_items:
- "/tmp/helm"
- name: Check failure when helm is not installed
include_tasks: test_helm_not_installed.yml
- name: Install Helm v3.6.0
ansible.builtin.include_role:
name: install_helm
vars:
helm_version: v3.6.0
- name: Test helm uninstall
ansible.builtin.include_tasks: test_helm_uninstall.yml
- name: Run tests
include_tasks: run_test.yml
loop_control:
loop_var: helm_version
with_items:
- "v3.15.4"
- "v3.16.0"
- "v3.17.0"
- "v4.0.0"

View File

@@ -1,25 +1,19 @@
---
- name: Ensure helm is not installed
file:
path: "{{ item }}"
state: absent
with_items:
- "/tmp/helm"
- name: Check failure when helm is not installed
include_tasks: test_helm_not_installed.yml
- name: "Install {{ helm_version }}"
include_role:
name: install_helm
- name: Main helm tests with Helm v3
when: helm_version != "v4.0.0"
- name: Main helm tests
block:
- name: Install helm-diff plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_path: https://github.com/databus23/helm-diff
plugin_version: "{{ helm_version is version('v4.0.0', '>=') | ternary('v3.14.0', 'v3.10.0') }}"
verify: false
- name: "Ensure we honor the environment variables"
include_tasks: test_read_envvars.yml
when: helm_version != "v4.0.0"
- name: Deploy charts
include_tasks: "tests_chart/{{ test_chart_type }}.yml"
@@ -39,9 +33,6 @@
- name: test helm dependency update
include_tasks: test_up_dep.yml
- name: Test helm uninstall
include_tasks: test_helm_uninstall.yml
- name: Test helm install with chart name containing space
include_tasks: test_helm_with_space_into_chart_name.yml
@@ -58,12 +49,15 @@
- name: Test helm skip_schema_validation
include_tasks: test_skip_schema_validation.yml
- name: Test helm version
include_tasks: test_helm_version.yml
always:
- name: Remove helm-diff plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_name: diff
state: absent
ignore_errors: true
- name: Clean helm install
file:
path: "{{ item }}"
state: absent
with_items:
- "/tmp/helm/"
- name: Clean helm install
ansible.builtin.file:
path: "/tmp/helm/"
state: absent

View File

@@ -5,7 +5,7 @@
name: test
chart_ref: "{{ chart_test }}"
namespace: "helm-test"
ignore_errors: yes
ignore_errors: true
register: helm_missing_binary
- name: Assert that helm is not installed

View File

@@ -38,6 +38,28 @@
- '"--reset-then-reuse-values" not in install.command'
- release_value["status"]["release_values"] == chart_release_values
# We need to provide the actual redis password otherwise the update command
# will fail with the following:
# Error: execution error at (redis/templates/replicas/application.yaml:55:35):
# PASSWORDS ERROR: You must provide your current passwords when upgrading the release.
# Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims.
# Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases
# 'global.redis.password' must not be empty, please add '--set global.redis.password=$REDIS_PASSWORD' to the command. To get the current value:
- name: Retrieve release password
kubernetes.core.k8s_info:
namespace: "{{ helm_namespace }}"
kind: Secret
name: test-redis
register: redis_secret
- ansible.builtin.set_fact:
chart_reset_then_reuse_values: "{{ chart_reset_then_reuse_values | combine(redis_global_password) }}"
vars:
redis_global_password:
global:
redis:
password: "{{ redis_secret.resources.0.data['redis-password'] | b64decode }}"
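Secret `data` fields arrive base64-encoded, which is why the lookup above pipes the value through `b64decode` before merging it into the release values. A standalone sketch of the same decode-and-merge, using plain dicts in place of the `k8s_info` result (helper names here are illustrative, not part of the collection):

```python
import base64


def decode_secret_value(secret_data, key):
    # Kubernetes stores Secret values base64-encoded; decode back to text.
    return base64.b64decode(secret_data[key]).decode("utf-8")


def combine(base, extra):
    # Shallow merge mimicking Ansible's default (non-recursive) combine
    # filter: top-level keys in `extra` replace those in `base`.
    merged = dict(base)
    merged.update(extra)
    return merged
```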
- name: Upgrade chart using reset_then_reuse_values=true
helm:
binary_path: "{{ helm_binary }}"
@@ -73,3 +95,4 @@
kind: Namespace
name: "{{ helm_namespace }}"
state: absent
wait: false

View File

@@ -38,6 +38,21 @@
- '"--reuse-values=True" not in install.command'
- release_value["status"]["release_values"] == chart_release_values
- name: Retrieve release password
kubernetes.core.k8s_info:
namespace: "{{ helm_namespace }}"
kind: Secret
name: test-redis
register: redis_secret
- ansible.builtin.set_fact:
chart_reuse_values: "{{ chart_reuse_values | combine(redis_global_password) }}"
vars:
redis_global_password:
global:
redis:
password: "{{ redis_secret.resources.0.data['redis-password'] | b64decode }}"
- name: Upgrade chart using reuse_values=true
helm:
binary_path: "{{ helm_binary }}"

View File

@@ -19,6 +19,20 @@
- install is changed
- '"--take-ownership" not in install.command'
# We need to provide the actual redis password otherwise the update command
# will fail with the following:
# Error: execution error at (redis/templates/replicas/application.yaml:55:35):
# PASSWORDS ERROR: You must provide your current passwords when upgrading the release.
# Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims.
# Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases
# 'global.redis.password' must not be empty, please add '--set global.redis.password=$REDIS_PASSWORD' to the command. To get the current value:
- name: Retrieve release password
kubernetes.core.k8s_info:
namespace: "{{ helm_namespace }}"
kind: Secret
name: test-take-ownership-redis
register: redis_secret
- name: Upgrade chart (take-ownership flag set)
helm:
binary_path: "{{ helm_binary }}"
@@ -29,6 +43,9 @@
values:
commonLabels:
take-onwership: "set"
global:
redis:
password: "{{ redis_secret.resources.0.data['redis-password'] | b64decode }}"
register: upgrade
ignore_errors: true
@@ -55,6 +72,9 @@
values:
commonLabels:
take-onwership: "not-set"
global:
redis:
password: "{{ redis_secret.resources.0.data['redis-password'] | b64decode }}"
register: upgrade
ignore_errors: true

View File

@@ -31,26 +31,18 @@
- name: assert warning has been raised
assert:
that:
- uninstall.warnings
- uninstall.warnings is defined
- '"helm uninstall support option --wait for helm release >= 3.7.0" in uninstall.warnings'
- name: Create temp directory
tempfile:
state: directory
suffix: .test
register: _result
- set_fact:
helm_tmp_dir: "{{ _result.path }}"
- name: Unarchive Helm binary
unarchive:
src: 'https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz'
dest: "{{ helm_tmp_dir }}"
remote_src: yes
- name: Install Helm v4
ansible.builtin.include_role:
name: install_helm
vars:
helm_version: v4.0.0
- name: Install chart
helm:
binary_path: "{{ helm_tmp_dir }}/linux-amd64/helm"
binary_path: "{{ helm_binary }}"
name: "{{ chart_name }}"
chart_ref: "{{ chart_source }}"
namespace: "{{ helm_namespace }}"
@@ -59,7 +51,7 @@
- name: uninstall chart again using recent version
helm:
state: absent
binary_path: "{{ helm_tmp_dir }}/linux-amd64/helm"
binary_path: "{{ helm_binary }}"
name: "{{ chart_name }}"
namespace: "{{ helm_namespace }}"
wait: yes
@@ -96,12 +88,6 @@
- _info.status is undefined
always:
- name: Delete temp directory
file:
path: "{{ helm_tmp_dir }}"
state: absent
ignore_errors: true
- name: Remove namespace
k8s:
kind: Namespace

View File

@@ -1,47 +0,0 @@
---
- name: Test helm reuse_values
vars:
helm_namespace: "{{ test_namespace[14] }}"
chart_release_values:
replica:
replicaCount: 3
master:
count: 1
kind: Deployment
chart_reuse_values:
replica:
replicaCount: 1
master:
count: 3
block:
- name: Initial chart installation
helm:
binary_path: "{{ helm_binary }}"
chart_ref: oci://registry-1.docker.io/bitnamicharts/redis
release_name: test-redis
release_namespace: "{{ helm_namespace }}"
create_namespace: true
release_values: "{{ chart_release_values }}"
register: install
ignore_errors: true
when: helm_version == "v4.0.0"
- name: Debug install result
debug:
var: install
when: helm_version == "v4.0.0"
- name: Ensure helm installation failed for v4.0.0
assert:
that:
- install is failed
- "'Helm version must be >=3.0.0,<4.0.0' in install.msg"
when: helm_version == "v4.0.0"
always:
- name: Remove helm namespace
k8s:
api_version: v1
kind: Namespace
name: "{{ helm_namespace }}"
state: absent

View File

@@ -30,9 +30,9 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
ignore_errors: yes
ignore_errors: true
register: install_fail
- name: "Assert that install of {{ chart_test }} from {{ source }} fails"
@@ -46,7 +46,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
create_namespace: true
register: install_check_mode
@@ -64,17 +64,18 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
create_namespace: true
register: install
- name: "Assert that {{ chart_test }} chart is installed from {{ source }}"
- name: "Assert that {{ chart_test }} chart version {{ chart_test_version }} is installed from {{ source }}"
assert:
that:
- install is changed
- install.status.chart == chart_test+"-"+chart_test_version
- install.status.status | lower == 'deployed'
- install.status.release_values == {}
- name: Check helm_info content
helm_info:
@@ -92,7 +93,7 @@
- deployed
register: release_state_content_info
- name: "Assert that {{ chart_test }} is installed from {{ source }} with helm_info"
- name: "Assert that {{ chart_test }} chart version {{ chart_test_version }} is installed from {{ source }} with helm_info"
assert:
that:
- content_info.status.chart == chart_test+"-"+chart_test_version
@@ -104,9 +105,10 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
register: install
diff: true
- name: Assert idempotency
assert:
@@ -120,7 +122,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
values: "{{ chart_test_values }}"
register: install
@@ -131,17 +133,18 @@
- install is changed
- install.status.status | lower == 'deployed'
- install.status.chart == chart_test+"-"+chart_test_version
- "install.status['release_values'].revisionHistoryLimit == 0"
- install.status['release_values'] == chart_test_values
- name: Check idempotency after adding vars
helm:
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
values: "{{ chart_test_values }}"
register: install
diff: true
- name: Assert idempotency after add vars
assert:
@@ -149,14 +152,14 @@
- install is not changed
- install.status.status | lower == 'deployed'
- install.status.chart == chart_test+"-"+chart_test_version
- "install.status['release_values'].revisionHistoryLimit == 0"
- install.status['release_values'] == chart_test_values
- name: "Remove Vars to {{ chart_test }} from {{ source }}"
helm:
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
register: install
@@ -173,9 +176,10 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
register: install
diff: true
- name: Assert idempotency after removing vars
assert:
@@ -190,7 +194,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source_upgrade | default(chart_source) }}"
chart_version: "{{ chart_source_version_upgrade | default(omit) }}"
chart_version: "{{ chart_test_version_upgrade }}"
namespace: "{{ helm_namespace }}"
register: install
@@ -206,9 +210,10 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source_upgrade | default(chart_source) }}"
chart_version: "{{ chart_source_version_upgrade | default(omit) }}"
chart_version: "{{ chart_test_version_upgrade }}"
namespace: "{{ helm_namespace }}"
register: install
diff: true
- name: Assert idempotency after upgrade
assert:
@@ -237,6 +242,7 @@
name: "{{ chart_release_name }}"
namespace: "{{ helm_namespace }}"
register: install
diff: true
- name: Assert idempotency
assert:
@@ -249,7 +255,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_replaced_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
register: install
@@ -277,7 +283,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_replaced_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
replace: True
register: install
@@ -305,7 +311,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
values_files:
- "{{ role_path }}/files/values.yaml"
@@ -324,7 +330,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
values_files:
- "{{ role_path }}/files/values.yaml"
@@ -346,7 +352,7 @@
helm_template:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
output_dir: "{{ temp_dir }}"
values_files:
- "{{ role_path }}/files/values.yaml"
@@ -372,7 +378,7 @@
helm_template:
binary_path: "{{ helm_binary }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
disable_hook: True
release_name: "myrelease"
release_namespace: "myreleasenamespace"
@@ -398,7 +404,7 @@
binary_path: "{{ helm_binary }}"
name: "{{ chart_release_name }}"
chart_ref: "{{ chart_source }}"
chart_version: "{{ chart_source_version | default(omit) }}"
chart_version: "{{ chart_test_version }}"
namespace: "{{ helm_namespace }}"
create_namespace: true
context: does-not-exist
@@ -417,6 +423,7 @@
state: absent
path: "{{ temp_dir }}"
ignore_errors: true
when: temp_dir is defined
- name: Remove helm namespace
k8s:

View File

@@ -5,18 +5,38 @@
name: test_helm
repo_url: "{{ chart_test_repo }}"
- name: Install Chart from repository
include_tasks: "../tests_chart.yml"
vars:
source: repository
chart_source: "test_helm/{{ chart_test }}"
chart_source_version: "{{ chart_test_version }}"
chart_source_version_upgrade: "{{ chart_test_version_upgrade }}"
helm_namespace: "{{ test_namespace[7] }}"
- name: Create temporary file to save values in
ansible.builtin.tempfile:
suffix: .helm_values
register: value_file
- name: Remove chart repo
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm
repo_url: "{{ chart_test_repo }}"
state: absent
- vars:
source: repository
chart_test: k8status
chart_source: "test_helm/k8status"
chart_test_version: "0.16.1"
chart_test_version_upgrade: "0.16.2"
helm_namespace: "{{ test_namespace[7] }}"
chart_test_values:
replicaCount: 3
block:
- name: Save values into file
ansible.builtin.copy:
content: "{{ chart_test_values }}"
dest: "{{ value_file.path }}"
- name: Install Chart from repository
ansible.builtin.include_tasks: "../tests_chart.yml"
always:
- name: Remove temporary file
ansible.builtin.file:
state: absent
path: "{{ value_file.path }}"
- name: Remove chart repo
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm
repo_url: "{{ chart_test_repo }}"
state: absent

View File

@@ -3,6 +3,11 @@
include_tasks: "../tests_chart.yml"
vars:
source: url
chart_source: "https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-{{ chart_test_version }}/{{ chart_test }}-{{ chart_test_version }}.tgz"
chart_source_upgrade: "https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-{{ chart_test_version_upgrade }}/{{ chart_test }}-{{ chart_test_version_upgrade }}.tgz"
chart_test: "k8status"
chart_test_values:
replicaCount: 3
chart_test_version: "0.16.1"
chart_test_version_upgrade: "0.16.2"
chart_source: https://github.com/stenic/k8status/releases/download/k8status-0.16.1/k8status-0.16.1.tgz
chart_source_upgrade: https://github.com/stenic/k8status/releases/download/k8status-0.16.2/k8status-0.16.2.tgz
helm_namespace: "{{ test_namespace[8] }}"

View File

@@ -239,6 +239,7 @@
vars:
chart_local_path: '{{ _tmpd.path }}/test-chart-deployment-time'
chart_repo_path: 'testing'
helm_binary_path: "{{ helm_binary }}"
always:
- name: Delete temporary directory
ansible.builtin.file:

View File

@@ -92,25 +92,11 @@
path: /tmp/helm/
state: absent
- name: Init Helm folders
file:
path: /tmp/helm
state: directory
- name: Set Helm old version
set_fact:
helm_archive_name: "helm-v3.8.0-linux-amd64.tar.gz"
helm_diff_old_version: "3.8.0"
- name: Unarchive Helm binary
unarchive:
src: "https://get.helm.sh/{{ helm_archive_name | default(helm_default_archive_name) }}"
dest: /tmp/helm/
remote_src: yes
retries: 10
delay: 5
register: result
until: result is not failed
- name: Install old version of helm
ansible.builtin.include_role:
name: install_helm
vars:
helm_version: "v3.8.0"
- name: Upgrade helm release (with reset_then_reuse_values=true)
kubernetes.core.helm:
@@ -140,7 +126,7 @@
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/databus23/helm-diff
plugin_version: "{{ helm_diff_old_version }}"
plugin_version: "3.8.0"
- name: Upgrade helm release (with reset_then_reuse_values=true)
kubernetes.core.helm:
@@ -166,6 +152,11 @@
- '"reset_then_reuse_values requires helm diff >= 3.9.12, current version is" in helm_upgrade.msg'
always:
- name: Delete Helm folders
file:
path: /tmp/helm/
state: absent
- name: Remove temporary directory
file:
path: "{{ helm_dir.path }}"

View File

@@ -1,7 +1,10 @@
---
helm_binary: "/tmp/helm/{{ ansible_system | lower }}-amd64/helm"
default_kubeconfig_path: "~/.kube/config"
test_namespace:
- "helm-in-memory-kubeconfig"
- "helm-kubeconfig-with-ca-cert"
- "helm-kubeconfig-with-insecure-skip-tls-verify"
helm_versions:
- v3.10.3
- v3.16.4
- v4.0.0

View File

@@ -1,4 +0,0 @@
---
dependencies:
- remove_namespace
- install_helm

View File

@@ -57,7 +57,7 @@
assert:
that:
- _install is failed
- '"Error: Kubernetes cluster unreachable" in _install.msg'
- '"error: kubernetes cluster unreachable" in _install.msg | lower()'
- name: Test helm modules using in-memory kubeconfig
include_tasks: "tests_helm_auth.yml"
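The assertion now lowercases the message before matching, presumably because the casing of the "Kubernetes cluster unreachable" error varies across Helm versions. The `in _install.msg | lower()` check (with a lowercased needle) boils down to a case-insensitive containment test:

```python
def message_contains(msg, needle):
    # Case-insensitive substring check, mirroring the assertion above.
    return needle.lower() in msg.lower()
```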

View File

@@ -48,7 +48,7 @@
assert:
that:
- _install is failed
- '"Error: Kubernetes cluster unreachable" in _install.msg'
- '"error: kubernetes cluster unreachable" in _install.msg | lower()'
- name: Test helm modules using in-memory kubeconfig
include_tasks: "tests_helm_auth.yml"

View File

@@ -1,21 +1,5 @@
---
- name: Test helm with in-memory kubeconfig
include_tasks: "from_in_memory_kubeconfig.yml"
- ansible.builtin.include_tasks: run_tests.yml
loop: "{{ helm_versions }}"
loop_control:
loop_var: test_helm_version
with_items:
- "v3.10.3"
- name: Test helm with custom kubeconfig and validate_certs=false
include_tasks: "from_kubeconfig_with_validate_certs.yml"
loop_control:
loop_var: test_helm_version
with_items:
- "v3.10.3"
- name: Test helm with custom kubeconfig and ca_cert
include_tasks: "from_kubeconfig_with_cacert.yml"
loop_control:
loop_var: test_helm_version
with_items:
- "v3.10.3"
loop_var: helm_version

View File

@@ -0,0 +1,15 @@
---
- name: Run tests with helm version "{{ helm_version }}"
block:
- name: "Install Helm"
ansible.builtin.include_role:
name: install_helm
- name: Test helm with in-memory kubeconfig
ansible.builtin.include_tasks: "from_in_memory_kubeconfig.yml"
- name: Test helm with custom kubeconfig and validate_certs=false
include_tasks: "from_kubeconfig_with_validate_certs.yml"
- name: Test helm with custom kubeconfig and ca_cert
include_tasks: "from_kubeconfig_with_cacert.yml"

View File

@@ -5,16 +5,6 @@
suffix: .helm
register: _dir
- name: Install helm binary
block:
- name: "Install {{ test_helm_version }}"
include_role:
name: install_helm
vars:
helm_version: "{{ test_helm_version }}"
when: test_helm_version is defined
- set_fact:
saved_kubeconfig_path: "{{ _dir.path }}/config"
@@ -44,6 +34,7 @@
ca_cert: "{{ test_ca_cert | default(omit) }}"
state: present
plugin_path: https://github.com/hydeenoble/helm-subenv
verify: false
register: plugin
- assert:

View File

@@ -0,0 +1 @@
redis*

View File

@@ -1,3 +1,4 @@
[all]
helm-3.12.3 helm_version=v3.12.3 test_namespace=helm-plain-http-v3-12-3 tests_should_failed=true
helm-3.18.2 helm_version=v3.18.2 test_namespace=helm-plain-http-v3-18-2 tests_should_failed=false
helm-4.0.0 helm_version=v4.0.0 test_namespace=helm-plain-http-v4-0-0 tests_should_failed=false

View File

@@ -1,6 +1,7 @@
- name: Run test for helm plain http option
hosts: all
gather_facts: true
strategy: free
vars:
ansible_connection: local
@@ -8,7 +9,7 @@
chart_test_oci: "oci://registry-1.docker.io/bitnamicharts/redis"
roles:
- setup_namespace
- role: setup_namespace
tasks:
- ansible.builtin.include_tasks: tasks/test.yaml

View File

@@ -13,10 +13,6 @@
vars:
helm_install_path: "{{ install_path.path }}"
- name: Set helm binary path
ansible.builtin.set_fact:
helm_binary: "{{ install_path.path }}/{{ ansible_system | lower }}-amd64/helm"
# helm
- name: Run helm with plain_http
kubernetes.core.helm:

View File

@@ -1,3 +0,0 @@
---
dependencies:
- install_helm

View File

@@ -1,165 +1,8 @@
---
- name: Install env plugin in check mode
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/adamreese/helm-env
register: check_install_env
check_mode: true
- assert:
that:
- check_install_env.changed
- name: Install env plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/adamreese/helm-env
register: install_env
- assert:
that:
- install_env.changed
- name: Gather info about all plugin
helm_plugin_info:
binary_path: "{{ helm_binary }}"
register: plugin_info
- assert:
that:
- plugin_info.plugin_list is defined
- name: Install env plugin again
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/adamreese/helm-env
register: install_env
- assert:
that:
- not install_env.changed
- name: Uninstall env plugin in check mode
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
register: check_uninstall_env
check_mode: true
- assert:
that:
- check_uninstall_env.changed
- name: Uninstall env plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
register: uninstall_env
- assert:
that:
- uninstall_env.changed
- name: Uninstall env plugin again
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
register: uninstall_env
- assert:
that:
- not uninstall_env.changed
# https://github.com/ansible-collections/community.kubernetes/issues/399
- block:
- name: Copy required plugin files
copy:
src: "files/sample_plugin"
dest: "/tmp/helm_plugin_test/"
- name: Install sample_plugin from the directory
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: "/tmp/helm_plugin_test/sample_plugin"
register: sample_plugin_output
- name: Assert that sample_plugin is installed
assert:
that:
- sample_plugin_output.changed
- name: Gather Helm plugin info
helm_plugin_info:
binary_path: "{{ helm_binary }}"
register: r
- name: Set sample_plugin version
set_fact:
plugin_version: "{{ ( r.plugin_list | selectattr('name', 'equalto', plugin_name) | list )[0].version }}"
vars:
plugin_name: "sample_plugin"
- name: Assert if sample_plugin with multiline comment is installed
assert:
that:
- plugin_version == "0.0.1"
always:
- name: Uninstall sample_plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: sample_plugin
ignore_errors: yes
- block:
- name: uninstall helm plugin secrets
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_name: secrets
state: absent
- name: install helm-secrets on a specific version
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_path: https://github.com/jkroepke/helm-secrets
plugin_version: 3.4.1
state: present
- name: list helm plugin
helm_plugin_info:
plugin_name: secrets
binary_path: "{{ helm_binary }}"
register: plugin_list
- name: assert that secrets has been installed with specified version
assert:
that:
- plugin_list.plugin_list[0].version == "3.4.1"
- name: Update helm plugin version to latest
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_name: secrets
state: latest
register: _update
- name: assert update was performed
assert:
that:
- _update.changed
- '"Updated plugin: secrets" in _update.stdout'
always:
- name: Uninstall sample_plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: secrets
ignore_errors: yes
- name: Run tests
include_tasks: run_tests.yml
loop_control:
loop_var: helm_version
with_items:
- "v3.17.0"
- "v4.0.0"

View File

@@ -0,0 +1,195 @@
---
- name: "Install {{ helm_version }}"
include_role:
name: install_helm
- block:
- name: Install env plugin in check mode
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/adamreese/helm-env
verify: false
register: check_install_env
check_mode: true
- assert:
that:
- check_install_env.changed
- name: Install env plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/adamreese/helm-env
verify: false
register: install_env
- assert:
that:
- install_env.changed
- name: Gather info about all plugin
helm_plugin_info:
binary_path: "{{ helm_binary }}"
register: plugin_info
- assert:
that:
- plugin_info.plugin_list is defined
- name: Install env plugin again
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: https://github.com/adamreese/helm-env
verify: false
register: install_env
- assert:
that:
- not install_env.changed
- name: Uninstall env plugin in check mode
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
verify: false
register: check_uninstall_env
check_mode: true
- assert:
that:
- check_uninstall_env.changed
- name: Uninstall env plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
register: uninstall_env
- assert:
that:
- uninstall_env.changed
- name: Uninstall env plugin again
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
register: uninstall_env
- assert:
that:
- not uninstall_env.changed
always:
- name: Uninstall env plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: env
# https://github.com/ansible-collections/community.kubernetes/issues/399
- block:
- name: Copy required plugin files
copy:
src: "files/sample_plugin"
dest: "/tmp/helm_plugin_test/"
- name: Install sample_plugin from the directory
helm_plugin:
binary_path: "{{ helm_binary }}"
state: present
plugin_path: "/tmp/helm_plugin_test/sample_plugin"
register: sample_plugin_output
- name: Assert that sample_plugin is installed
assert:
that:
- sample_plugin_output.changed
- name: Gather Helm plugin info
helm_plugin_info:
binary_path: "{{ helm_binary }}"
register: r
- name: Set sample_plugin version
set_fact:
plugin_version: "{{ ( r.plugin_list | selectattr('name', 'equalto', plugin_name) | list )[0].version }}"
vars:
plugin_name: "sample_plugin"
- name: Assert if sample_plugin with multiline comment is installed
assert:
that:
- plugin_version == "0.0.1"
always:
- name: Uninstall sample_plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: sample_plugin
ignore_errors: true
- block:
- name: uninstall helm plugin unittest
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_name: unittest
state: absent
- name: install helm-unittest on a specific version
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_path: https://github.com/helm-unittest/helm-unittest
plugin_version: v1.0.1
verify: false
state: present
- name: list helm plugin
helm_plugin_info:
plugin_name: unittest
binary_path: "{{ helm_binary }}"
register: plugin_list
- name: assert that unittest has been installed with specified version
assert:
that:
- plugin_list.plugin_list[0].version == "1.0.1"
- name: Update helm plugin version to latest (check mode)
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_name: unittest
state: latest
register: _update_checkmode
check_mode: true
- name: Assert that module reported change while running in check mode
assert:
that:
- _update_checkmode.changed
- '"Updated plugin: unittest" not in _update_checkmode.stdout'
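In check mode the module predicts the change but never shells out to helm, so `changed` is true while stdout stays empty; the assertions above rely on exactly that split. A rough sketch of the pattern (assumed shape for illustration, not the module's actual code):

```python
def update_plugin(run_command, plugin_name, check_mode=False):
    # In check mode, report the would-be change without invoking helm,
    # so stdout never contains the "Updated plugin" banner.
    if check_mode:
        return {"changed": True, "stdout": ""}
    rc, out, err = run_command(["helm", "plugin", "update", plugin_name])
    if rc != 0:
        raise RuntimeError(err)
    return {"changed": True, "stdout": out}
```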
- name: Update helm plugin version to latest
helm_plugin:
binary_path: "{{ helm_binary }}"
plugin_name: unittest
state: latest
register: _update
- name: assert update was performed
assert:
that:
- _update.changed
- '"Updated plugin: unittest" in _update.stdout'
always:
- name: Uninstall sample_plugin
helm_plugin:
binary_path: "{{ helm_binary }}"
state: absent
plugin_name: unittest

View File

@@ -5,7 +5,7 @@
- 3.8.0
- 3.1.0
- 3.0.0
- 2.3.0
- 4.0.0
- block:
- name: Create temp directory for helm tests
@@ -20,37 +20,13 @@
- set_fact:
destination: "{{ temp_dir }}"
- name: Create Helm directories
file:
state: directory
path: "{{ temp_dir }}/{{ item }}"
with_items: "{{ helm_versions }}"
- name: Unarchive Helm binary
unarchive:
src: "https://get.helm.sh/helm-v{{ item }}-linux-amd64.tar.gz"
dest: "{{ temp_dir }}/{{ item }}"
remote_src: yes
with_items: "{{ helm_versions }}"
# Testing helm pull with helm version == 2.3.0
- block:
- name: Assert that helm pull failed with helm <= 3.0.0
helm_pull:
binary_path: "{{ helm_path }}"
chart_ref: https://github.com/grafana/helm-charts/releases/download/grafana-5.6.0/grafana-5.6.0.tgz
destination: "{{ destination }}"
ignore_errors: true
register: _result
- name: assert that module failed with proper message
assert:
that:
- _result is failed
- _result.msg == "Helm version must be >=3.0.0,<4.0.0, current version is 2.3.0"
- name: Install Helm versions
ansible.builtin.include_role:
name: install_helm
loop: "{{ helm_versions }}"
vars:
helm_path: "{{ temp_dir }}/2.3.0/linux-amd64/helm"
helm_version: "v{{ item }}"
helm_install_path: "{{ temp_dir }}/{{ item }}"
# Testing helm pull with helm version == 3.0.0
- block:
@@ -103,7 +79,7 @@
- _result.msg == "Parameter chart_ca_cert requires helm >= 3.1.0, current version is 3.0.0"
vars:
helm_path: "{{ temp_dir }}/3.0.0/linux-amd64/helm"
helm_path: "{{ temp_dir }}/3.0.0/helm"
# Testing helm pull with helm version == 3.1.0
- block:
@@ -143,7 +119,7 @@
- _result.msg == "Parameter skip_tls_certs_check requires helm >= 3.3.0, current version is 3.1.0"
vars:
helm_path: "{{ temp_dir }}/3.1.0/linux-amd64/helm"
helm_path: "{{ temp_dir }}/3.1.0/helm"
# Testing helm pull with helm version == 3.8.0
- block:
@@ -317,7 +293,7 @@
- _chart_after_force.stat.isdir
vars:
helm_path: "{{ temp_dir }}/3.8.0/linux-amd64/helm"
helm_path: "{{ temp_dir }}/3.8.0/helm"
always:


@@ -5,5 +5,9 @@ username: testuser
password: testpassword
wrong_password: 'WrongPassword'
registry_name: oci_registry
registry_port: 5000
registry_port: 5002
test_chart: https://github.com/grafana/helm-charts/releases/download/k8s-monitoring-1.6.8/k8s-monitoring-1.6.8.tgz
helm_versions:
- v3.17.0
- v3.20.0
- v4.0.0


@@ -1,3 +0,0 @@
---
dependencies:
- install_helm


@@ -1,182 +0,0 @@
---
- name: Run module test
# using the shell and command modules to run the tests, as tests can be non-idempotent
# and this avoids installing any additional dependencies
block:
- name: Ensure that helm is installed
ansible.builtin.shell: helm version --client --short | grep v3
register: _helm_version
failed_when: _helm_version.rc != 0
- name: Ensure that Docker daemon is running
ansible.builtin.command: "docker info"
register: _docker_info
failed_when: _docker_info.rc != 0
- name: Create a tmpfile htpasswd directory
ansible.builtin.tempfile:
state: directory
suffix: .httppasswd
register: _tmpfile
- name: Copy htpasswd to the tmpfile directory
ansible.builtin.copy:
src: registry.password
dest: "{{ _tmpfile.path }}/registry.password"
- name: Setup the registry
ansible.builtin.command: >-
docker run -d --rm
-p {{ registry_port }}:5000
--name "{{ registry_name }}"
-v "{{ _tmpfile.path }}:/auth"
-e "REGISTRY_AUTH=htpasswd"
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm"
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/registry.password
registry:2
register: _setup_registry
failed_when: _setup_registry.rc != 0
- name: Ensure that the registry is running and reachable
ansible.builtin.wait_for:
host: localhost
port: "{{ registry_port }}"
- name: Test the registry with correct credentials to ensure that the registry is running
ansible.builtin.shell: >-
echo {{ password | quote }} | helm registry login localhost:{{ registry_port }}
-u {{ username }} --password-stdin
register: _login_correct
failed_when: _login_correct.rc != 0
- name: Clean up credentials to run test on clean environment
ansible.builtin.shell: >-
helm registry logout localhost:{{ registry_port }}
register: _logout
failed_when: _logout.rc != 0
- name: Create directory for helm chart
ansible.builtin.tempfile:
state: directory
suffix: ".helm"
register: _destination
- name: Pull test helm chart
ansible.builtin.uri:
url: "{{ test_chart }}"
dest: "{{ _destination.path }}/k8s-monitoring-1.6.8.tgz"
return_content: no
status_code: 200
- name: Test module helm_registry_auth with correct credentials
helm_registry_auth:
username: "{{ username }}"
password: "{{ password }}"
host: localhost:{{ registry_port }}
state: present
register: _helm_registry_auth_correct
- name: Assert that the registry is logged in
# Helm binary prints the message to stderr, reference: https://github.com/helm/helm/issues/13464
assert:
that:
- "'Login Succeeded' in _helm_registry_auth_correct.stderr"
- "'{{ password }}' not in _helm_registry_auth_correct.command"
- "'{{ password }}' not in _helm_registry_auth_correct.stdout"
- "'{{ password }}' not in _helm_registry_auth_correct.stderr"
- name: Ensure that push to the registry is working
ansible.builtin.shell: >-
helm push "{{ _destination.path }}/k8s-monitoring-1.6.8.tgz" oci://localhost:{{ registry_port }}/test/
register: _save_chart
failed_when: _save_chart.rc != 0
- name: Assert that the chart is saved
# Helm binary prints the message to stderr, reference: https://github.com/helm/helm/issues/13464
assert:
that: "'Pushed: localhost:{{ registry_port }}/test/k8s-monitoring' in _save_chart.stderr"
- name: Test logout
helm_registry_auth:
host: localhost:{{ registry_port }}
state: absent
register: _helm_registry_auth_logout
- name: Assert logout
# Helm binary prints the message to stderr
assert:
that: "'Removing login credentials' in _helm_registry_auth_logout.stderr"
- name: Test idempotency of logout with helm < 3.18.0
when: _helm_version.stdout is ansible.builtin.version('v3.18.0', '<')
block:
- name: Test logout idempotency
helm_registry_auth:
host: localhost:{{ registry_port }}
state: absent
register: _helm_registry_auth_logout_idempotency
- name: Assert logout operation did not report change
ansible.builtin.assert:
that: _helm_registry_auth_logout_idempotency is not changed
- name: Ensure that not able to push to the registry
ansible.builtin.shell: >-
helm push "{{ _destination.path }}/k8s-monitoring-1.6.8.tgz" oci://localhost:{{ registry_port }}/test/
register: _save_chart
failed_when: _save_chart.rc == 0
- name: Read content of ~/.config/helm/registry/config.json
ansible.builtin.slurp:
src: ~/.config/helm/registry/config.json
register: _config_json
- name: Assert that auth data is removed and the chart is not saved
# Helm binary prints the message to stderr
ansible.builtin.assert:
that:
- "'push access denied' in _save_chart.stderr or 'basic credential not found' in _save_chart.stderr"
- "_save_chart.rc != 0"
- "'localhost:{{ registry_port }}' not in _config_json.content | b64decode"
- name: Test module helm_registry_auth with wrong credentials
helm_registry_auth:
username: "{{ username }}"
password: "{{ wrong_password }}"
host: localhost:{{ registry_port }}
state: present
register: _helm_registry_auth_wrong
ignore_errors: true
- name: Read content of ~/.config/helm/registry/config.json
ansible.builtin.slurp:
src: ~/.config/helm/registry/config.json
register: _config_json
- name: Assert that the registry is not logged in and auth data is not saved
ansible.builtin.assert:
that:
- "'401' in _helm_registry_auth_wrong.stderr"
- "'unauthorized' in _helm_registry_auth_wrong.stderr | lower"
- "'{{ wrong_password }}' not in _helm_registry_auth_wrong.command"
- "'{{ wrong_password }}' not in _helm_registry_auth_wrong.stdout"
- "'{{ wrong_password }}' not in _helm_registry_auth_wrong.stderr"
- "'localhost:{{ registry_port }}' not in _config_json.content | b64decode"
# Clean up
always:
- name: Stop and remove the registry
ansible.builtin.command: docker stop {{ registry_name }}
ignore_errors: true
- name: Remove the tmpfile
ansible.builtin.file:
state: absent
path: "{{ item }}"
force: true
loop:
- "{{ _tmpfile.path }}"
- "{{ _destination.path }}"
ignore_errors: true


@@ -0,0 +1,60 @@
---
- name: Run tests for helm_registry_auth with different helm versions
block:
- name: Create temporary directory to install helm binaries
ansible.builtin.tempfile:
state: directory
suffix: .helm
register: _tmpdir
- name: Ensure that Docker daemon is running
ansible.builtin.command: "docker info"
register: _docker_info
failed_when: _docker_info.rc != 0
- name: Copy htpasswd to the tmpfile directory
ansible.builtin.copy:
src: registry.password
dest: "{{ _tmpdir.path }}/registry.password"
- name: Pull test helm chart
ansible.builtin.uri:
url: "{{ test_chart }}"
dest: "{{ _tmpdir.path }}/k8s-monitoring-1.6.8.tgz"
return_content: no
status_code: 200
- name: Setup the registry
ansible.builtin.command: >-
docker run -d --rm
-p {{ registry_port }}:5000
--name "{{ registry_name }}"
-v "{{ _tmpdir.path }}:/auth"
-e "REGISTRY_AUTH=htpasswd"
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm"
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/registry.password
registry:2
register: _setup_registry
failed_when: _setup_registry.rc != 0
- name: Ensure that the registry is running and reachable
ansible.builtin.wait_for:
host: localhost
port: "{{ registry_port }}"
- name: Run tests
ansible.builtin.include_tasks: run_tests.yml
loop: "{{ helm_versions }}"
loop_control:
loop_var: helm_version
always:
- name: Stop and remove the registry
ansible.builtin.command: docker stop {{ registry_name }}
ignore_errors: true
- name: Delete temporary directory
ansible.builtin.file:
state: absent
path: "{{ _tmpdir.path }}"
ignore_errors: true


@@ -0,0 +1,108 @@
---
- block:
- name: Install helm versions
ansible.builtin.include_role:
name: install_helm
# - name: Test the registry with correct credentials to ensure that the registry is running
# ansible.builtin.shell: >-
# echo {{ password | quote }} | {{ helm_binary }} registry login localhost:{{ registry_port }}
# -u {{ username }} --password-stdin --plain-http
# register: _login_correct
# failed_when: _login_correct.rc != 0
# - name: Clean up credentials to run test on clean environment
# ansible.builtin.shell: >-
# {{ helm_binary }} registry logout localhost:{{ registry_port }}
# register: _logout
# failed_when: _logout.rc != 0
- name: Test module helm_registry_auth with correct credentials
helm_registry_auth:
binary_path: "{{ helm_binary }}"
username: "{{ username }}"
password: "{{ password }}"
host: localhost:{{ registry_port }}
plain_http: true
state: present
register: _helm_registry_auth_correct
- name: Assert that the registry is logged in
# Helm binary prints the message to stderr, reference: https://github.com/helm/helm/issues/13464
assert:
that:
- "'Login Succeeded' in _helm_registry_auth_correct.stdout_lines + _helm_registry_auth_correct.stderr_lines"
- password not in _helm_registry_auth_correct.command
- password not in _helm_registry_auth_correct.stdout
- password not in _helm_registry_auth_correct.stderr
- name: Ensure that push to the registry is working
ansible.builtin.shell: >-
{{ helm_binary }} push --plain-http "{{ _tmpdir.path }}/k8s-monitoring-1.6.8.tgz" oci://localhost:{{ registry_port }}/test/
register: _save_chart
failed_when: _save_chart.rc != 0
- name: Assert that the chart is saved
# Helm binary prints the message to stderr, reference: https://github.com/helm/helm/issues/13464
assert:
that: "'Pushed: localhost:' + registry_port | string + '/test/k8s-monitoring' in _save_chart.stderr"
- name: Test logout
helm_registry_auth:
binary_path: "{{ helm_binary }}"
host: localhost:{{ registry_port }}
state: absent
register: _helm_registry_auth_logout
- name: Assert logout
# Helm binary prints the message to stderr
assert:
that: "'Removing login credentials' in _helm_registry_auth_logout.stderr"
- name: Ensure that not able to push to the registry
ansible.builtin.shell: >-
{{ helm_binary }} push --plain-http "{{ _tmpdir.path }}/k8s-monitoring-1.6.8.tgz" oci://localhost:{{ registry_port }}/test/
register: _save_chart
failed_when: _save_chart.rc == 0
- name: Assert that auth data is removed and the chart is not saved
# Helm binary prints the message to stderr
ansible.builtin.assert:
that:
- "'push access denied' in _save_chart.stderr or 'basic credential not found' in _save_chart.stderr"
- "_save_chart.rc != 0"
- name: Test idempotency of logout with helm < 3.18.0
when: helm_version is ansible.builtin.version('v3.18.0', '<')
block:
- name: Test logout idempotency
helm_registry_auth:
binary_path: "{{ helm_binary }}"
host: localhost:{{ registry_port }}
state: absent
register: _helm_registry_auth_logout_idempotency
- name: Assert logout operation did not report change
ansible.builtin.assert:
that: _helm_registry_auth_logout_idempotency is not changed
- name: Test module helm_registry_auth with wrong credentials
helm_registry_auth:
binary_path: "{{ helm_binary }}"
username: "{{ username }}"
password: "{{ wrong_password }}"
host: localhost:{{ registry_port }}
state: present
plain_http: true
register: _helm_registry_auth_wrong
ignore_errors: true
- name: Assert that the registry is not logged in and auth data is not saved
ansible.builtin.assert:
that:
- "'401' in _helm_registry_auth_wrong.stderr"
- "'unauthorized' in _helm_registry_auth_wrong.stderr | lower"
- "'{{ wrong_password }}' not in _helm_registry_auth_wrong.command"
- "'{{ wrong_password }}' not in _helm_registry_auth_wrong.stdout"
- "'{{ wrong_password }}' not in _helm_registry_auth_wrong.stderr"


@@ -1,5 +1,2 @@
time=20
helm_repository
helm_info
helm
helm_template
time=1
helm_repository


@@ -1,3 +1,5 @@
---
chart_test_repo: "https://kubernetes.github.io/ingress-nginx"
helm_binary: "/tmp/helm/{{ ansible_system | lower }}-amd64/helm"
helm_versions:
- v3.20.0
- v4.0.0


@@ -1,5 +0,0 @@
---
collections:
- kubernetes.core
dependencies:
- install_helm


@@ -1,101 +1,6 @@
---
- name: "Ensure test_helm_repo doesn't exist"
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
- name: Add test_helm_repo chart repository
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
register: repository
- name: Assert that test_helm_repo repository is added
assert:
that:
- repository is changed
- '"--insecure-skip-tls-verify" not in repository.command'
- name: Check idempotency
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
register: repository
- name: Assert idempotency
assert:
that:
- repository is not changed
- name: Failed to add repository with the same name
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "https://other-charts.url"
register: repository_errors
ignore_errors: yes
- name: Assert that adding repository with the same name failed
assert:
that:
- repository_errors is failed
- name: Successfully add repository with the same name when forcing
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
force: true
register: repository
- name: Assert that test_helm_repo repository is changed
assert:
that:
- repository is changed
- name: Remove test_helm_repo chart repository
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
register: repository
- name: Assert that test_helm_repo repository is removed
assert:
that:
- repository is changed
- name: Check idempotency after remove
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
register: repository
- name: Assert idempotency
assert:
that:
- repository is not changed
- name: Add test_helm_repo chart repository as insecure
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
insecure_skip_tls_verify: true
register: repository
- name: Assert that repository added and flag set
assert:
that:
- repository is changed
- '"--insecure-skip-tls-verify" in repository.command'
- name: Clean test_helm_repo chart repository
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
- name: Run test for helm_repository module
ansible.builtin.include_tasks: run_tests.yml
loop: "{{ helm_versions }}"
loop_control:
loop_var: helm_version


@@ -0,0 +1,105 @@
---
- name: "Install helm version {{ helm_version }}"
ansible.builtin.include_role:
name: install_helm
- name: "Ensure test_helm_repo doesn't exist"
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
- name: Add test_helm_repo chart repository
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
register: repository
- name: Assert that test_helm_repo repository is added
assert:
that:
- repository is changed
- '"--insecure-skip-tls-verify" not in repository.command'
- name: Check idempotency
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
register: repository
- name: Assert idempotency
assert:
that:
- repository is not changed
- name: Failed to add repository with the same name
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "https://other-charts.url"
register: repository_errors
ignore_errors: true
- name: Assert that adding repository with the same name failed
assert:
that:
- repository_errors is failed
- name: Successfully add repository with the same name when forcing
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
force: true
register: repository
- name: Assert that test_helm_repo repository is changed
assert:
that:
- repository is changed
- name: Remove test_helm_repo chart repository
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
register: repository
- name: Assert that test_helm_repo repository is removed
assert:
that:
- repository is changed
- name: Check idempotency after remove
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent
register: repository
- name: Assert idempotency
assert:
that:
- repository is not changed
- name: Add test_helm_repo chart repository as insecure
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
repo_url: "{{ chart_test_repo }}"
insecure_skip_tls_verify: true
register: repository
- name: Assert that repository added and flag set
assert:
that:
- repository is changed
- '"--insecure-skip-tls-verify" in repository.command'
- name: Clean test_helm_repo chart repository
helm_repository:
binary_path: "{{ helm_binary }}"
name: test_helm_repo
state: absent


@@ -1,3 +1,5 @@
---
helm_binary: "/tmp/helm/{{ ansible_system | lower }}-amd64/helm"
helm_namespace: helm-set-values
helm_versions:
- v3.10.3
- v4.0.0


@@ -25,79 +25,14 @@
- user_values.status["release_values"]["phase"] == "integration"
- user_values.status["release_values"]["versioned"] is false
# install chart using set_values and release_values
- name: Install helm binary (> 3.10.0, required to use set-json)
include_role:
name: install_helm
vars:
helm_version: "v3.10.3"
- name: Install helm using set_values parameters
helm:
binary_path: "{{ helm_binary }}"
chart_ref: oci://registry-1.docker.io/bitnamicharts/apache
release_name: test-apache
release_namespace: "{{ helm_namespace }}"
create_namespace: true
set_values:
- value: 'master.image={"registry": "docker.io", "repository": "bitnami/apache", "tag": "2.4.54-debian-11-r74"}'
value_type: json
release_values:
replicaCount: 3
- name: Get release info
helm_info:
binary_path: "{{ helm_binary }}"
release_name: test-apache
release_namespace: "{{ helm_namespace }}"
register: values
- name: Assert that release was created with user-defined variables
assert:
that:
- values.status["release_values"].replicaCount == 3
- values.status["release_values"].master.image.registry == "docker.io"
- values.status["release_values"].master.image.repository == "bitnami/apache"
- values.status["release_values"].master.image.tag == "2.4.54-debian-11-r74"
# install chart using set_values and values_files
- name: create temporary file to save values in
tempfile:
suffix: .yml
register: ymlfile
- block:
- name: copy content into values file
copy:
content: |
---
mode: distributed
dest: "{{ ymlfile.path }}"
- name: create temporary file to save values in
tempfile:
suffix: .yml
register: ymlfile
- name: Install helm using set_values parameters
helm:
binary_path: "{{ helm_binary }}"
chart_ref: oci://registry-1.docker.io/bitnamicharts/minio
release_name: test-minio
release_namespace: "{{ helm_namespace }}"
create_namespace: true
set_values:
- value: 'disableWebUI=true'
values_files:
- "{{ ymlfile.path }}"
- name: Get release info
helm_info:
binary_path: "{{ helm_binary }}"
release_name: test-minio
release_namespace: "{{ helm_namespace }}"
register: values
- name: Assert that release was created with user-defined variables
assert:
that:
- values.status["release_values"].mode == "distributed"
- values.status["release_values"].disableWebUI is true
- ansible.builtin.include_tasks: run_tests.yml
loop: "{{ helm_versions }}"
always:
- name: Delete temporary file


@@ -0,0 +1,68 @@
---
# install chart using set_values and release_values
- name: 'Install helm binary (> 3.10.0, required to use set-json) (helm_version={{ item }})'
include_role:
name: install_helm
vars:
helm_version: "{{ item }}"
- name: Install helm using set_values parameters
helm:
binary_path: "{{ helm_binary }}"
chart_ref: oci://registry-1.docker.io/bitnamicharts/apache
release_name: 'test-apache-{{ item }}'
release_namespace: "{{ helm_namespace }}"
create_namespace: true
set_values:
- value: 'master.image={"registry": "docker.io", "repository": "bitnami/apache", "tag": "2.4.54-debian-11-r74"}'
value_type: json
release_values:
replicaCount: 3
- name: Get release info
helm_info:
binary_path: "{{ helm_binary }}"
release_name: 'test-apache-{{ item }}'
release_namespace: "{{ helm_namespace }}"
register: values
- name: Assert that release was created with user-defined variables
assert:
that:
- values.status["release_values"].replicaCount == 3
- values.status["release_values"].master.image.registry == "docker.io"
- values.status["release_values"].master.image.repository == "bitnami/apache"
- values.status["release_values"].master.image.tag == "2.4.54-debian-11-r74"
# install chart using set_values and values_files
- name: copy content into values file
copy:
content: |
---
mode: distributed
dest: "{{ ymlfile.path }}"
- name: Install helm using set_values parameters
helm:
binary_path: "{{ helm_binary }}"
chart_ref: oci://registry-1.docker.io/bitnamicharts/minio
release_name: 'test-minio-{{ item }}'
release_namespace: "{{ helm_namespace }}"
create_namespace: true
set_values:
- value: 'disableWebUI=true'
values_files:
- "{{ ymlfile.path }}"
- name: Get release info
helm_info:
binary_path: "{{ helm_binary }}"
release_name: 'test-minio-{{ item }}'
release_namespace: "{{ helm_namespace }}"
register: values
- name: Assert that release was created with user-defined variables
assert:
that:
- values.status["release_values"].mode == "distributed"
- values.status["release_values"].disableWebUI is true


@@ -0,0 +1,4 @@
helm
helm_template
helm_info
helm_repository


@@ -0,0 +1,2 @@
[all]
v3.15.4


@@ -0,0 +1,11 @@
- name: Run tests for Helm v3.15.4
hosts: all
connection: local
gather_facts: true
vars:
ansible_python_interpreter: "{{ ansible_playbook_python }}"
helm_version: "{{ inventory_hostname }}"
roles:
- role: helm


@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=../
ansible-playbook play.yaml -i inventory.ini "$@"


@@ -0,0 +1,4 @@
helm
helm_template
helm_info
helm_repository


@@ -0,0 +1,2 @@
[all]
v3.16.0


@@ -0,0 +1,11 @@
- name: Run tests for Helm v3.16.0
hosts: all
connection: local
gather_facts: true
vars:
ansible_python_interpreter: "{{ ansible_playbook_python }}"
helm_version: "{{ inventory_hostname }}"
roles:
- role: helm


@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=../
ansible-playbook play.yaml -i inventory.ini "$@"


@@ -0,0 +1,4 @@
helm
helm_template
helm_info
helm_repository


@@ -0,0 +1,2 @@
[all]
v3.17.0


@@ -0,0 +1,11 @@
- name: Run tests for Helm v3.17.0
hosts: all
connection: local
gather_facts: true
vars:
ansible_python_interpreter: "{{ ansible_playbook_python }}"
helm_version: "{{ inventory_hostname }}"
roles:
- role: helm


@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=../
ansible-playbook play.yaml -i inventory.ini "$@"


@@ -0,0 +1,4 @@
helm
helm_template
helm_info
helm_repository


@@ -0,0 +1,2 @@
[all]
v4.0.0


@@ -0,0 +1,11 @@
- name: Run tests for Helm v4.0.0
hosts: all
connection: local
gather_facts: true
vars:
ansible_python_interpreter: "{{ ansible_playbook_python }}"
helm_version: "{{ inventory_hostname }}"
roles:
- role: helm


@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=../
ansible-playbook play.yaml -i inventory.ini "$@"


@@ -1,4 +1,4 @@
---
helm_version: v3.16.4
helm_install_path: /tmp/helm
helm_default_archive_name: "helm-{{ helm_version }}-{{ ansible_system | lower }}-amd64.tar.gz"
helm_default_archive_name: "https://get.helm.sh/helm-{{ helm_version }}-{{ ansible_system | lower }}-{{ ansible_architecture | lower }}.tar.gz"


@@ -4,12 +4,27 @@
path: "{{ helm_install_path }}"
state: directory
- name: Unarchive Helm binary
unarchive:
src: "https://get.helm.sh/{{ helm_archive_name | default(helm_default_archive_name) }}"
dest: "{{ helm_install_path }}"
remote_src: yes
retries: 10
delay: 5
register: result
until: result is not failed
- ansible.builtin.set_fact:
os_path: "{{ lookup('env', 'PATH') }}"
- name: Download the Helm install script
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-{{ major_version }}"
dest: /tmp/get_helm.sh
mode: '0700'
vars:
major_version: "{{ helm_version | split('.') | first | replace('v', '') }}"
- name: Run the install script (helm version = {{ helm_version }})
ansible.builtin.command: /tmp/get_helm.sh
environment:
DESIRED_VERSION: "{{ helm_version }}"
HELM_INSTALL_DIR: "{{ helm_install_path }}"
PATH: "{{ os_path }}:{{ helm_install_path }}"
VERIFY_CHECKSUM: "false"
register: helm_install_result
changed_when: "'is already at the latest version' not in helm_install_result.stdout"
- name: Save Helm binary path for later use
ansible.builtin.set_fact:
helm_binary: "{{ helm_install_path }}/helm"


@@ -1,3 +1,7 @@
---
test_namespace: "drain"
k8s_wait_timeout: 400
daemonset_name: promotheus
deployment_name: busybox-emptydir
pod1_name: "busybox-1"
pod2_name: "busybox-2"


@@ -0,0 +1,65 @@
---
- name: Cordon node (check mode)
k8s_drain:
state: cordon
name: '{{ node_to_drain }}'
register: cordon_check_mode
check_mode: true
- name: assert that module reported change while running in check_mode
assert:
that:
- cordon_check_mode is changed
- name: Ensure the node remain schedulable (cordon run on check mode)
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Cordon node
k8s_drain:
state: cordon
name: '{{ node_to_drain }}'
register: cordon
- name: assert that cordon is changed
assert:
that:
- cordon is changed
- name: Ensure the node is unschedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
- name: Test cordon idempotency (check_mode=true)
k8s_drain:
state: cordon
name: '{{ node_to_drain }}'
register: cordon_checkmode_idempotency
check_mode: true
- name: Assert that module is idempotent while running in check mode
assert:
that:
- cordon_checkmode_idempotency is not changed
- name: Test cordon idempotency
k8s_drain:
state: cordon
name: '{{ node_to_drain }}'
register: cordon
- name: assert that cordon is not changed
assert:
that:
- cordon is not changed
- name: Get pods
k8s_info:
kind: Pod
namespace: '{{ test_namespace }}'
register: Pod
- name: assert that pods are running on cordoned node
assert:
that:
- Pod.resources | selectattr('status.phase', 'equalto', 'Running') | selectattr('spec.nodeName', 'equalto', node_to_drain) | list | length > 0


@@ -0,0 +1,389 @@
---
# Drain the node (should fail)
- name: Drain node with expected failure (check_mode=true)
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
ignore_errors: true
register: drain_failed_check_mode
check_mode: true
- name: Assert that drain failed due to DaemonSet managed Pods
assert:
that:
- drain_failed_check_mode is failed
- '"cannot delete DaemonSet-managed Pods" in drain_failed_check_mode.msg'
- '"cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet" in drain_failed_check_mode.msg'
- '"cannot delete Pods with local storage" in drain_failed_check_mode.msg'
- name: Ensure that the node remains schedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Drain node with expected failure
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
ignore_errors: true
register: drain_failed
- name: Assert that drain failed due to DaemonSet managed Pods
assert:
that:
- drain_failed is failed
- '"cannot delete DaemonSet-managed Pods" in drain_failed.msg'
- '"cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet" in drain_failed.msg'
- '"cannot delete Pods with local storage" in drain_failed.msg'
- name: Ensure that the node remains schedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
# Drain the node ignoring non-candidate Pods
# check_mode
- name: Drain node using ignore_daemonsets, force, and delete_emptydir_data options (check_mode=true)
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
ignore_daemonsets: true
delete_emptydir_data: true
wait_timeout: 0
register: drain_force_check_mode
check_mode: true
- name: Assert that module reported changed while node was not drained
assert:
that:
- drain_force_check_mode is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_force_check_mode.result'
- name: Ensure node remains schedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Assert that running with check_mode did not delete any Pod
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- "{{ item }}"
register: pods
failed_when: pods.resources | length == 0
loop:
- drain=unmanaged-pod
- drain=daemonset-pod
- drain=emptyDir
# Apply
- name: Drain node using ignore_daemonsets, force, and delete_emptydir_data options
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
ignore_daemonsets: true
delete_emptydir_data: true
wait_timeout: 0
register: drain_force
- name: Assert that module reported changed
assert:
that:
- drain_force is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_force.result'
- name: Ensure node is now unschedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
- name: Assert that unmanaged Pods were deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=unmanaged-pod
register: pods
failed_when: pods.resources | length > 0
- name: Assert that Pods with local storage are now Pending
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=emptyDir
register: pods
failed_when: pods.resources | map(attribute='status.phase') | unique != ['Pending']
- name: Assert that DaemonSet-managed Pods were not deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=daemonset-pod
register: pods
failed_when: pods.resources | length == 0
# Idempotency
- name: Test drain idempotency (check_mode=true)
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
ignore_daemonsets: true
delete_emptydir_data: true
register: drain_force_idempotency_check_mode
check_mode: true
- name: Validate idempotency with check_mode
assert:
that:
- drain_force_idempotency_check_mode is not changed
- name: Ensure node remains unschedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
- name: Assert that DaemonSet-managed Pods were not deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=daemonset-pod
register: pods
failed_when: pods.resources | length == 0
# Drain with disable_eviction = true
# check_mode
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
- name: Create once again the Pod deleted before
k8s:
namespace: '{{ test_namespace }}'
wait: true
wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
template: pod1.yml.j2
- name: Drain node using disable_eviction (check_mode)
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
disable_eviction: true
terminate_grace_period: 0
ignore_daemonsets: true
wait_timeout: 0
delete_emptydir_data: true
register: disable_evict_check_mode
check_mode: true
- name: Assert that node has been drained
assert:
that:
- disable_evict_check_mode is changed
- '"node "+node_to_drain+" marked unschedulable." in disable_evict_check_mode.result'
- name: Ensure node remains schedulable (check_mode)
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Assert that unmanaged Pods were not deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=unmanaged-pod
register: pods
failed_when: pods.resources | length == 0
# apply
- name: Drain node using disable_eviction
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
disable_eviction: true
terminate_grace_period: 0
ignore_daemonsets: true
wait_timeout: 0
delete_emptydir_data: true
register: disable_evict
- name: Assert that node has been drained
assert:
that:
- disable_evict is changed
- '"node "+node_to_drain+" marked unschedulable." in disable_evict.result'
- name: Ensure the node is unschedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
# Drain using pod_selectors
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
- name: Create Pod with label selector
k8s:
namespace: "{{ test_namespace }}"
wait: true
template: pod1.yml.j2
# check_mode
- name: Drain the node using pod_selectors matching no Pod
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
pod_selectors:
- drain=no_match_selector
delete_options:
terminate_grace_period: 0
delete_emptydir_data: true
force: true
ignore_daemonsets: true
register: drain_pod_selector_no_match_check_mode
check_mode: true
- name: Assert that module reported change while running in check_mode
assert:
that:
- drain_pod_selector_no_match_check_mode is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_pod_selector_no_match_check_mode.result'
- name: Ensure that the node remains schedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Validate that Pods are still running
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=unmanaged-pod
field_selectors:
- status.phase=Running
register: pods
failed_when: pods.resources | length == 0
# apply
- name: Drain the node using pod_selectors matching no Pod
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
pod_selectors:
- drain=no_match_selector
delete_options:
terminate_grace_period: 0
delete_emptydir_data: true
force: true
ignore_daemonsets: true
register: drain_pod_selector_no_match
- name: Assert that node has been drained
assert:
that:
- drain_pod_selector_no_match is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_pod_selector_no_match.result'
- name: Ensure the node is unschedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
- name: Validate that Pods are still running
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=unmanaged-pod
field_selectors:
- status.phase=Running
register: pods
failed_when: pods.resources | length == 0
# Drain the node using matching pod_selector
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
# check_mode
- name: Drain the node using matching pod_selectors
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
pod_selectors:
- drain=unmanaged-pod
delete_options:
terminate_grace_period: 0
delete_emptydir_data: true
force: true
ignore_daemonsets: true
register: drain_pod_selector_match_check_mode
check_mode: true
- name: Assert that module reported change while running in check_mode
assert:
that:
- drain_pod_selector_match_check_mode is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_pod_selector_match_check_mode.result'
- name: Ensure that the node remains schedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Validate that Pods are still running
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=unmanaged-pod
field_selectors:
- status.phase=Running
register: pods
failed_when: pods.resources | length == 0
# apply
- name: Drain the node using matching pod_selectors
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
pod_selectors:
- drain=unmanaged-pod
delete_options:
terminate_grace_period: 0
delete_emptydir_data: true
force: true
ignore_daemonsets: true
wait_timeout: 0
register: drain_pod_selector_match
- name: Assert that node has been drained
assert:
that:
- drain_pod_selector_match is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_pod_selector_match.result'
- name: Ensure the node is unschedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
- name: Validate that Pods are no longer running
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- drain=unmanaged-pod
field_selectors:
- status.phase=Running
register: pods
failed_when: pods.resources | length > 0
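The helper included throughout these tasks, `tasks/validate_node_status.yml`, is not shown in this diff. Judging by how it is called (bare include for the unschedulable case, `schedulable: true` otherwise), a minimal sketch might look like the following; the variable default and the `spec.unschedulable` check are assumptions, not the collection's actual implementation:

```yaml
---
# Hypothetical tasks/validate_node_status.yml: assert that the node's
# schedulability matches the expected state. "schedulable" is assumed
# to default to false (node expected to be cordoned/drained).
- name: Read node status
  kubernetes.core.k8s_info:
    kind: Node
    name: "{{ node_to_drain }}"
  register: node_info

- name: Assert that node schedulability matches the expectation
  ansible.builtin.assert:
    that:
      # spec.unschedulable is absent or false on schedulable nodes
      - (node_info.resources[0].spec.unschedulable | default(false)) == not (schedulable | default(false))
```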


@@ -1,11 +1,5 @@
---
- block:
- name: Set common facts
set_fact:
drain_daemonset_name: "prometheus-dset"
drain_pod_name: "pod-drain"
drain_deployment_emptydir_name: "deployment-emptydir-drain"
# It seems that the default ServiceAccount can take a bit to be created
# right after a cluster is brought up. This can lead to the ServiceAccount
# admission controller rejecting a Pod creation request because the
@@ -35,407 +29,23 @@
set_fact:
node_to_drain: '{{ uncordoned_nodes[0] }}'
- name: Deploy daemonset on cluster
- name: Create resources
k8s:
namespace: '{{ test_namespace }}'
definition:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: '{{ drain_daemonset_name }}'
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- '{{ node_to_drain }}'
selector:
matchLabels:
name: prometheus-exporter
template:
metadata:
labels:
name: prometheus-exporter
spec:
containers:
- name: prometheus
image: prom/node-exporter
ports:
- containerPort: 80
- name: Create Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.
k8s:
namespace: '{{ test_namespace }}'
wait: yes
wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
definition:
apiVersion: v1
kind: Pod
metadata:
name: '{{ drain_pod_name }}'
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- '{{ node_to_drain }}'
containers:
- name: c0
image: busybox
command:
- /bin/sh
- -c
- while true;do date;sleep 5; done
- name: Create Deployment with an emptyDir volume.
k8s:
namespace: '{{ test_namespace }}'
wait: yes
wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ drain_deployment_emptydir_name }}'
spec:
replicas: 1
selector:
matchLabels:
drain: emptyDir
template:
metadata:
labels:
drain: emptyDir
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- '{{ node_to_drain }}'
containers:
- name: c0
image: busybox
command:
- /bin/sh
- -c
- while true;do date;sleep 5; done
volumeMounts:
- mountPath: /emptydir
name: emptydir
volumes:
- name: emptydir
emptyDir: {}
- name: Register emptyDir Pod name
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- "drain = emptyDir"
register: emptydir_pod_result
failed_when:
- emptydir_pod_result.resources | length != 1
- name: Cordon node
k8s_drain:
state: cordon
name: '{{ node_to_drain }}'
register: cordon
- name: assert that cordon is changed
assert:
that:
- cordon is changed
- name: Test cordon idempotency
k8s_drain:
state: cordon
name: '{{ node_to_drain }}'
register: cordon
- name: assert that cordon is not changed
assert:
that:
- cordon is not changed
- name: Get pods
k8s_info:
kind: Pod
namespace: '{{ test_namespace }}'
register: pod_list
- name: assert that pods are running on the cordoned node
assert:
that:
- pod_list.resources | selectattr('status.phase', 'equalto', 'Running') | selectattr('spec.nodeName', 'equalto', node_to_drain) | list | length > 0
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
register: uncordon
- name: assert that uncordon is changed
assert:
that:
- uncordon is changed
- name: Test uncordon idempotency
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
register: uncordon
- name: assert that uncordon is not changed
assert:
that:
- uncordon is not changed
- name: Drain node
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
ignore_errors: true
register: drain_result
- name: assert that drain failed due to DaemonSet-managed, unmanaged, and local-storage Pods
assert:
that:
- drain_result is failed
- '"cannot delete DaemonSet-managed Pods" in drain_result.msg'
- '"cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet" in drain_result.msg'
- '"cannot delete Pods with local storage" in drain_result.msg'
- name: Drain node using ignore_daemonsets, force, and delete_emptydir_data options
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
ignore_daemonsets: true
delete_emptydir_data: true
wait_timeout: 0
register: drain_result
- name: assert that node has been drained
assert:
that:
- drain_result is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_result.result'
- name: assert that the unmanaged Pod was deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
name: '{{ drain_pod_name }}'
register: _result
failed_when: _result.resources | length > 0
- name: assert that emptyDir pod was deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
name: "{{ emptydir_pod_result.resources[0].metadata.name }}"
register: _result
failed_when: _result.resources | length != 0
- name: Test drain idempotency
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
ignore_daemonsets: true
delete_emptydir_data: true
register: drain_result
- name: Check idempotency
assert:
that:
- drain_result is not changed
- name: Get DaemonSet
k8s_info:
kind: DaemonSet
namespace: '{{ test_namespace }}'
name: '{{ drain_daemonset_name }}'
register: dset_result
- name: assert that daemonset managed pods were not removed
assert:
that:
- dset_result.resources | list | length > 0
# test: drain using disable_eviction=true
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
- name: Create another Pod
k8s:
namespace: '{{ test_namespace }}'
wait: yes
wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
definition:
apiVersion: v1
kind: Pod
metadata:
name: '{{ drain_pod_name }}-01'
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- '{{ node_to_drain }}'
containers:
- name: c0
image: busybox
command:
- /bin/sh
- -c
- while true;do date;sleep 5; done
volumeMounts:
- mountPath: /emptydir
name: emptydir
volumes:
- name: emptydir
emptyDir: {}
- name: Drain node using disable_eviction set to yes
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
delete_options:
force: true
disable_eviction: yes
terminate_grace_period: 0
ignore_daemonsets: yes
wait_timeout: 0
delete_emptydir_data: true
register: disable_evict
- name: assert that node has been drained
assert:
that:
- disable_evict is changed
- '"node "+node_to_drain+" marked unschedulable." in disable_evict.result'
- name: assert that the unmanaged Pod was deleted
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
name: '{{ drain_pod_name }}-01'
register: _result
failed_when: _result.resources | length > 0
# test: drain using pod_selectors
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
- name: create a Pod for test
k8s:
namespace: '{{ test_namespace }}'
wait: true
wait_timeout: "{{ k8s_wait_timeout | default(omit) }}"
definition:
apiVersion: v1
kind: Pod
metadata:
name: 'ansible-drain-pod'
labels:
app: ansible-drain
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- '{{ node_to_drain }}'
containers:
- name: ansible-container
image: busybox
command:
- '/bin/sh'
- '-c'
- 'while true; do echo $(date); sleep 10; done'
namespace: "{{ test_namespace }}"
template:
- daemonset.yml.j2
- deployment.yml.j2
- pod1.yml.j2
- name: Drain node using pod_selectors 'app!=ansible-drain'
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
pod_selectors:
- app!=ansible-drain
delete_options:
terminate_grace_period: 0
delete_emptydir_data: true
force: true
ignore_daemonsets: true
register: drain_pod_selector
- name: Test Cordon node
ansible.builtin.include_tasks: tasks/cordon.yml
- name: assert that node has been drained
assert:
that:
- drain_pod_selector is changed
- '"node "+node_to_drain+" marked unschedulable." in drain_pod_selector.result'
- name: Test Uncordon node
ansible.builtin.include_tasks: tasks/uncordon.yml
- name: assert that the Pod created earlier is still running
k8s_info:
namespace: '{{ test_namespace }}'
kind: Pod
label_selectors:
- app=ansible-drain
field_selectors:
- status.phase=Running
register: pods
failed_when: pods.resources == []
- name: Drain node using pod_selectors 'app=ansible-drain'
k8s_drain:
state: drain
name: '{{ node_to_drain }}'
pod_selectors:
- app=ansible-drain
delete_options:
terminate_grace_period: 0
force: true
register: drain_pod_selector_equal
- name: assert that node was not drained
assert:
that:
- drain_pod_selector_equal is changed
- '"node "+node_to_drain+" already marked unschedulable." in drain_pod_selector_equal.result'
- '"Deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: "+test_namespace+"/ansible-drain-pod." in drain_pod_selector_equal.warnings'
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
- name: Test drain node
ansible.builtin.include_tasks: tasks/drain.yml
always:
- name: Uncordon node


@@ -0,0 +1,54 @@
---
- name: Uncordon node (check_mode=true)
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
register: uncordon_check_mode
check_mode: true
- name: Assert that module reported change while running in check_mode
assert:
that:
- uncordon_check_mode is changed
- name: Ensure the node is still unschedulable (uncordon run in check_mode)
ansible.builtin.include_tasks: tasks/validate_node_status.yml
- name: Uncordon node
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
register: uncordon
- name: Assert that module reported change
assert:
that:
- uncordon is changed
- name: Ensure the node is now schedulable
ansible.builtin.include_tasks: tasks/validate_node_status.yml
vars:
schedulable: true
- name: Test uncordon idempotency (check_mode=true)
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
register: uncordon_checkmode_idempotency
check_mode: true
- name: Assert that uncordon is not changed (idempotency with check_mode)
assert:
that:
- uncordon_checkmode_idempotency is not changed
- name: Test uncordon idempotency
k8s_drain:
state: uncordon
name: '{{ node_to_drain }}'
register: uncordon
- name: Assert that uncordon is not changed
assert:
that:
- uncordon is not changed
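The earlier tasks also include a `tasks/cordon.yml` helper that this diff does not show. Assuming it mirrors the uncordon helper above (check_mode first, then the real call, then an idempotency check), a plausible sketch is:

```yaml
---
# Hypothetical tasks/cordon.yml, mirroring the uncordon helper:
# cordon in check_mode, verify nothing changed on the cluster,
# then cordon for real and confirm idempotency.
- name: Cordon node (check_mode=true)
  kubernetes.core.k8s_drain:
    state: cordon
    name: '{{ node_to_drain }}'
  register: cordon_check_mode
  check_mode: true

- name: Assert that module reported change while running in check_mode
  ansible.builtin.assert:
    that:
      - cordon_check_mode is changed

- name: Ensure the node is still schedulable (cordon run in check_mode)
  ansible.builtin.include_tasks: tasks/validate_node_status.yml
  vars:
    schedulable: true

- name: Cordon node
  kubernetes.core.k8s_drain:
    state: cordon
    name: '{{ node_to_drain }}'
  register: cordon

- name: Assert that module reported change
  ansible.builtin.assert:
    that:
      - cordon is changed

- name: Test cordon idempotency
  kubernetes.core.k8s_drain:
    state: cordon
    name: '{{ node_to_drain }}'
  register: cordon_idempotency

- name: Assert that cordon is not changed
  ansible.builtin.assert:
    that:
      - cordon_idempotency is not changed
```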

Some files were not shown because too many files have changed in this diff.