Compare commits

4 Commits
5.1.0 ... 5.2.0

Author SHA1 Message Date
Mike Graves
0eff03dd19 Prep 5.2.0 release (#891)
SUMMARY

Prep 5.2.0 release

Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bikouo Aubin
Reviewed-by: Alina Buzachis
2025-03-31 17:20:25 +00:00
patchback[bot]
81fb8662da waiter.py Add ClusterOperator Test (#879) (#882)
This is a backport of PR #879 as merged into main (7cdf0d0).
SUMMARY
Fixes #869
During an OpenShift installation, one of the checks that the cluster is ready to proceed with configuration is verifying that the Cluster Operators are in an Available: True, Degraded: False, Progressing: False state. While you can currently use the k8s_info module to get a JSON response, the resulting JSON needs to be iterated over several times to extract the appropriate status.
This PR adds functionality to waiter.py that loops over all ClusterOperator resource instances. If any of them is not ready, the waiter returns False and the task fails. If the task succeeds, you can assume that all the cluster operators are healthy.
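The readiness rule described above can be sketched as a standalone predicate over the JSON that k8s_info returns (plain dicts here, in place of the collection's ResourceInstance objects; the helper name is illustrative, not the collection's API):

```python
def cluster_operators_ready(resources):
    """True only when every ClusterOperator reports Available=True,
    Degraded=False and Progressing=False in its status conditions."""
    if not resources:
        return False
    for resource in resources:
        # Index the conditions list by condition type for easy lookup
        conditions = resource.get("status", {}).get("conditions", [])
        status = {c.get("type", ""): c.get("status") for c in conditions}
        if not (
            status.get("Available") == "True"
            and status.get("Degraded") == "False"
            and status.get("Progressing") == "False"
        ):
            return False
    return True
```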


ISSUE TYPE


Feature Pull Request

COMPONENT NAME

waiter.py
ADDITIONAL INFORMATION



A simple playbook triggers waiter.py to watch the ClusterOperator objects:

---
- name: get operators
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get cluster operators
      kubernetes.core.k8s_info:
        api_version: v1
        kind: ClusterOperator
        kubeconfig: "/home/ocp/one/auth/kubeconfig"
        wait: true
        wait_timeout: 30
      register: cluster_operators


If everything is functioning properly, this produces a simple response:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

If the timeout is reached:
PLAY [get operators] *************************************************************************************************

TASK [Get cluster operators] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds"}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

UNSOLVED: How to know which Operators are failing

Reviewed-by: Bikouo Aubin
2025-03-26 15:50:36 +00:00
patchback[bot]
f74ee14d71 Extend hidden_fields to allow more complicated field definitions (#872) (#887)
This is a backport of PR #872 as merged into main (9ec6912).
SUMMARY
This allows us to ignore e.g. the last-applied-configuration annotation by specifying
metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
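As a rough illustration of what such a field expression does, a hypothetical helper (not the collection's implementation) can parse a dotted path with bracketed keys, where bracketed segments may themselves contain dots and slashes, and delete the addressed value:

```python
import re

def hide_path(obj, path):
    """Delete the value addressed by a dotted path with optional
    [bracketed] keys, e.g.
    metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]."""
    # Split into keys; bracketed segments may contain dots and slashes.
    keys = [a or b for a, b in re.findall(r"\[([^\]]+)\]|([^.\[\]]+)", path)]
    current = obj
    for key in keys[:-1]:
        if not isinstance(current, dict) or key not in current:
            return obj  # path not present; nothing to hide
        current = current[key]
    if isinstance(current, dict):
        current.pop(keys[-1], None)
    return obj
```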
ISSUE TYPE

Feature Pull Request

COMPONENT NAME
hidden_fields
This replaces #643 as I no longer have permissions to push to branches in this repo

Reviewed-by: Bikouo Aubin
2025-03-25 13:36:12 +00:00
patchback[bot]
6f75d86954 Fix linters in CI (#873) (#876)
This is a backport of PR #873 as merged into main (91df2f1).
SUMMARY
It seems that recent updates to the linters broke CI. Closes #874
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
CI

Reviewed-by: Mike Graves <mgraves@redhat.com>
2025-02-06 15:45:54 +00:00
25 changed files with 621 additions and 50 deletions

5
.ansible-lint-ignore Normal file
View File

@@ -0,0 +1,5 @@
# https://docs.ansible.com/ansible-lint/docs/rules/
# no-changed-when is not requried for examples
plugins/connection/kubectl.py no-changed-when
# false positive result
plugins/connection/kubectl.py var-naming[no-reserved]

View File

@@ -1,2 +0,0 @@
# no-changed-when is not requried for examples
plugins/connection/kubectl.py no-changed-when

1
.gitignore vendored
View File

@@ -13,6 +13,7 @@ changelogs/.plugin-cache.yaml
 tests/output
 tests/integration/cloud-config-*
 .cache
+.ansible
 # Helm charts
 tests/integration/*-chart-*.tgz

View File

@@ -5,16 +5,24 @@ rules:
   braces:
     max-spaces-inside: 1
     level: error
   brackets:
     max-spaces-inside: 1
     level: error
+  comments:
+    min-spaces-from-content: 1
+  comments-indentation: false
   document-start: disable
   line-length: disable
   truthy: disable
   indentation:
     spaces: 2
     indent-sequences: consistent
+  octal-values:
+    forbid-implicit-octal: true
+    forbid-explicit-octal: true
 ignore: |
   .cache
   .tox
+  .ansible
   tests/output

View File

@@ -4,22 +4,41 @@ Kubernetes Collection Release Notes
 .. contents:: Topics

+v5.2.0
+======
+
+Release Summary
+---------------
+
+This release adds more functionality to the hidden_fields option and support for waiting on ClusterOperators to reach a ready state.
+
+Minor Changes
+-------------
+
+- k8s - Extend hidden_fields to allow the expression of more complex field types to be hidden (https://github.com/ansible-collections/kubernetes.core/pull/872)
+- k8s_info - Extend hidden_fields to allow the expression of more complex field types to be hidden (https://github.com/ansible-collections/kubernetes.core/pull/872)
+- waiter.py - add ClusterOperator support. The module can now check OpenShift cluster health by verifying ClusterOperator status requiring 'Available: True', 'Degraded: False', and 'Progressing: False' for success. (https://github.com/ansible-collections/kubernetes.core/issues/869)
+
 v5.1.0
 ======

+Release Summary
+---------------
+
+This release came with new module ``helm_registry_auth``, improvements to the error messages in the k8s_drain module, new parameter ``insecure_registry`` for ``helm_template`` module and several bug fixes.
+
 Minor Changes
 -------------

 - Bump version of ansible-lint to minimum 24.7.0 (https://github.com/ansible-collections/kubernetes.core/pull/765).
 - Parameter insecure_registry added to helm_template as equivalent of insecure-skip-tls-verify (https://github.com/ansible-collections/kubernetes.core/pull/805).
-- connection/kubectl.py - Added an example of using the kubectl connection plugin to the documentation (https://github.com/ansible-collections/kubernetes.core/pull/741).
 - k8s_drain - Improve error message for pod disruption budget when draining a node (https://github.com/ansible-collections/kubernetes.core/issues/797).

 Bugfixes
 --------

 - helm - Helm version checks did not support RC versions. They now accept any version tags. (https://github.com/ansible-collections/kubernetes.core/pull/745).
-- helm_pull - Apply no_log=True to pass_credentials to silence false positive warning.. (https://github.com/ansible-collections/kubernetes.core/pull/796).
+- helm_pull - Apply no_log=True to pass_credentials to silence false positive warning. (https://github.com/ansible-collections/kubernetes.core/pull/796).
 - k8s_drain - Fix k8s_drain does not wait for single pod (https://github.com/ansible-collections/kubernetes.core/issues/769).
 - k8s_drain - Fix k8s_drain runs into a timeout when evicting a pod which is part of a stateful set (https://github.com/ansible-collections/kubernetes.core/issues/792).
 - kubeconfig option should not appear in module invocation log (https://github.com/ansible-collections/kubernetes.core/issues/782).
@@ -42,6 +61,7 @@ This major release drops support for ``ansible-core<2.15``.
 Minor Changes
 -------------

+- connection/kubectl.py - Added an example of using the kubectl connection plugin to the documentation (https://github.com/ansible-collections/kubernetes.core/pull/741).
 - inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0 (https://github.com/ansible-collections/kubernetes.core/pull/734).

 Breaking Changes / Porting Guide
@@ -84,15 +104,22 @@ Bugfixes
 v3.3.0
 ======

+Release Summary
+---------------
+
+This release comes with improvements to the error messages in the k8s_drain module and several bug fixes.
+
 Minor Changes
 -------------

+- inventory/k8s.py - Defer removal of k8s inventory plugin to version 5.0 (https://github.com/ansible-collections/kubernetes.core/pull/723).
+- inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0 (https://github.com/ansible-collections/kubernetes.core/pull/734).
 - k8s_drain - Improve error message for pod disruption budget when draining a node (https://github.com/ansible-collections/kubernetes.core/issues/797).

 Bugfixes
 --------

 - helm - Helm version checks did not support RC versions. They now accept any version tags. (https://github.com/ansible-collections/kubernetes.core/pull/745).
-- helm_pull - Apply no_log=True to pass_credentials to silence false positive warning.. (https://github.com/ansible-collections/kubernetes.core/pull/796).
+- helm_pull - Apply no_log=True to pass_credentials to silence false positive warning. (https://github.com/ansible-collections/kubernetes.core/pull/796).
 - k8s_drain - Fix k8s_drain does not wait for single pod (https://github.com/ansible-collections/kubernetes.core/issues/769).
 - k8s_drain - Fix k8s_drain runs into a timeout when evicting a pod which is part of a stateful set (https://github.com/ansible-collections/kubernetes.core/issues/792).
 - kubeconfig option should not appear in module invocation log (https://github.com/ansible-collections/kubernetes.core/issues/782).
@@ -104,13 +131,15 @@ v3.2.0
 Release Summary
 ---------------

 This release comes with documentation updates.

 Minor Changes
 -------------

-- inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0 (https://github.com/ansible-collections/kubernetes.core/pull/734).
 - connection/kubectl.py - Added an example of using the kubectl connection plugin to the documentation (https://github.com/ansible-collections/kubernetes.core/pull/741).
+- inventory/k8s.py - Defer removal of k8s inventory plugin to version 5.0 (https://github.com/ansible-collections/kubernetes.core/pull/723).
+- inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0 (https://github.com/ansible-collections/kubernetes.core/pull/734).

 v3.1.0
 ======

View File

@@ -1,5 +1,5 @@
 # Also needs to be updated in galaxy.yml
-VERSION = 5.1.0
+VERSION = 5.2.0
 TEST_ARGS ?= ""
 PYTHON_VERSION ?= `python -c 'import platform; print(".".join(platform.python_version_tuple()[0:2]))'`

View File

@@ -106,7 +106,7 @@ You can also include it in a `requirements.yml` file and install it via `ansible
 ---
 collections:
   - name: kubernetes.core
-    version: 5.1.0
+    version: 5.2.0
 ```
 ### Installing the Kubernetes Python Library

View File

@@ -859,15 +859,15 @@ releases:
     minor_changes:
     - connection/kubectl.py - Added an example of using the kubectl connection plugin
       to the documentation (https://github.com/ansible-collections/kubernetes.core/pull/741).
+    - inventory/k8s.py - Defer removal of k8s inventory plugin to version 5.0 (https://github.com/ansible-collections/kubernetes.core/pull/723).
     - inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0
       (https://github.com/ansible-collections/kubernetes.core/pull/734).
-    - inventory/k8s.py - Defer removal of k8s inventory plugin to version 5.0 (https://github.com/ansible-collections/kubernetes.core/pull/723).
     release_summary: This release comes with documentation updates.
     fragments:
     - 20240530-defer-removal-and-ansible-core-support-update.yaml
     - 20240601-doc-example-of-using-kubectl.yaml
-    - inventory-update_removal_date.yml
     - 3.2.0.yml
+    - inventory-update_removal_date.yml
     release_date: '2024-06-14'
   3.3.0:
     changes:
@@ -885,7 +885,8 @@ releases:
     minor_changes:
     - k8s_drain - Improve error message for pod disruption budget when draining
       a node (https://github.com/ansible-collections/kubernetes.core/issues/797).
-    release_summary: This release comes with improvements to the error messages in the k8s_drain module and several bug fixes.
+    release_summary: This release comes with improvements to the error messages
+      in the k8s_drain module and several bug fixes.
     fragments:
     - 20240530-ansible-core-support-update.yaml
     - 20240611-helm-rc-version.yaml
@@ -946,10 +947,10 @@ releases:
     breaking_changes:
     - Remove support for ``ansible-core<2.15`` (https://github.com/ansible-collections/kubernetes.core/pull/737).
     minor_changes:
-    - inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0
-      (https://github.com/ansible-collections/kubernetes.core/pull/734).
     - connection/kubectl.py - Added an example of using the kubectl connection plugin
       to the documentation (https://github.com/ansible-collections/kubernetes.core/pull/741).
+    - inventory/k8s.py - Defer removal of k8s inventory plugin to version 6.0.0
+      (https://github.com/ansible-collections/kubernetes.core/pull/734).
     release_summary: This major release drops support for ``ansible-core<2.15``.
     fragments:
     - 20240530-ansible-core-support-update.yaml
@@ -976,8 +977,8 @@ releases:
     - k8s_drain - Improve error message for pod disruption budget when draining
       a node (https://github.com/ansible-collections/kubernetes.core/issues/797).
     release_summary: This release came with new module ``helm_registry_auth``, improvements
-      to the error messages in the k8s_drain module, new parameter ``insecure_registry`` for
-      ``helm_template`` module and several bug fixes.
+      to the error messages in the k8s_drain module, new parameter ``insecure_registry``
+      for ``helm_template`` module and several bug fixes.
     fragments:
     - 0-readme.yml
     - 20240601-doc-example-of-using-kubectl.yaml
@@ -999,3 +1000,20 @@ releases:
       name: helm_registry_auth
       namespace: ''
     release_date: '2025-01-20'
+  5.2.0:
+    changes:
+      minor_changes:
+      - k8s - Extend hidden_fields to allow the expression of more complex field types
+        to be hidden (https://github.com/ansible-collections/kubernetes.core/pull/872)
+      - k8s_info - Extend hidden_fields to allow the expression of more complex field
+        types to be hidden (https://github.com/ansible-collections/kubernetes.core/pull/872)
+      - 'waiter.py - add ClusterOperator support. The module can now check OpenShift
+        cluster health by verifying ClusterOperator status requiring ''Available:
+        True'', ''Degraded: False'', and ''Progressing: False'' for success. (https://github.com/ansible-collections/kubernetes.core/issues/869)'
+      release_summary: This release adds more functionality to the hidden_fields option
+        and support for waiting on ClusterOperators to reach a ready state.
+    fragments:
+    - 5.2.0.yml
+    - 643-extend-hidden-fields.yaml
+    - 879-clusteroperator-waiter.py.yaml
+    release_date: '2025-03-27'

View File

@@ -25,7 +25,7 @@ tags:
 - openshift
 - okd
 - cluster
-version: 5.1.0
+version: 5.2.0
 build_ignore:
 - .DS_Store
 - "*.tar.gz"

View File

@@ -4,7 +4,7 @@
 import copy
 from json import loads
 from re import compile
-from typing import Any, Dict, List, Optional, Tuple
+from typing import Any, Dict, List, Optional, Tuple, Union
 from ansible.module_utils.common.dict_transformations import dict_merge
 from ansible_collections.kubernetes.core.plugins.module_utils.hashes import (
@@ -501,47 +501,107 @@ def diff_objects(
     result["before"] = diff[0]
     result["after"] = diff[1]

-    if list(result["after"].keys()) != ["metadata"] or list(
-        result["before"].keys()
-    ) != ["metadata"]:
-        return False, result
-
-    # If only metadata.generation and metadata.resourceVersion changed, ignore it
-    ignored_keys = set(["generation", "resourceVersion"])
-
-    if not set(result["after"]["metadata"].keys()).issubset(ignored_keys):
-        return False, result
-    if not set(result["before"]["metadata"].keys()).issubset(ignored_keys):
-        return False, result
+    if list(result["after"].keys()) == ["metadata"] and list(
+        result["before"].keys()
+    ) == ["metadata"]:
+        # If only metadata.generation and metadata.resourceVersion changed, ignore it
+        ignored_keys = set(["generation", "resourceVersion"])
+        if set(result["after"]["metadata"].keys()).issubset(ignored_keys) and set(
+            result["before"]["metadata"].keys()
+        ).issubset(ignored_keys):
+            return True, result

     result["before"] = hide_fields(result["before"], hidden_fields)
     result["after"] = hide_fields(result["after"], hidden_fields)

-    return True, result
+    return False, result


-def hide_fields(definition: dict, hidden_fields: Optional[list]) -> dict:
-    if not hidden_fields:
-        return definition
-    result = copy.deepcopy(definition)
-    for hidden_field in hidden_fields:
-        result = hide_field(result, hidden_field)
-    return result
-
-
-# hide_field is not hugely sophisticated and designed to cope
-# with e.g. status or metadata.managedFields rather than e.g.
-# spec.template.spec.containers[0].env[3].value
-def hide_field(definition: dict, hidden_field: str) -> dict:
-    split = hidden_field.split(".", 1)
-    if split[0] in definition:
-        if len(split) == 2:
-            definition[split[0]] = hide_field(definition[split[0]], split[1])
-        else:
-            del definition[split[0]]
-    return definition
+def hide_field_tree(hidden_field: str) -> List[str]:
+    result = []
+    key, rest = hide_field_split2(hidden_field)
+    result.append(key)
+    while rest:
+        key, rest = hide_field_split2(rest)
+        result.append(key)
+    return result
+
+
+def build_hidden_field_tree(hidden_fields: List[str]) -> Dict[str, Any]:
+    """Group hidden field targeting the same json key
+    Example:
+        Input: ['env[3]', 'env[0]']
+        Output: {'env': [0, 3]}
+    """
+    output = {}
+    for hidden_field in hidden_fields:
+        current = output
+        tree = hide_field_tree(hidden_field)
+        for idx, key in enumerate(tree):
+            if current.get(key, "") is None:
+                break
+            if idx == (len(tree) - 1):
+                current[key] = None
+            elif key not in current:
+                current[key] = {}
+            current = current[key]
+    return output
+
+
+# hide_field should be able to cope with simple or more complicated
+# field definitions
+# e.g. status or metadata.managedFields or
+# spec.template.spec.containers[0].env[3].value or
+# metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
+def hide_field(
+    definition: Union[Dict[str, Any], List[Any]], hidden_field: Dict[str, Any]
+) -> Dict[str, Any]:
+    def dict_contains_key(obj: Dict[str, Any], key: str) -> bool:
+        return key in obj
+
+    def list_contains_key(obj: List[Any], key: str) -> bool:
+        return int(key) < len(obj)
+
+    hidden_keys = list(hidden_field.keys())
+    field_contains_key = dict_contains_key
+    field_get_key = str
+    if isinstance(definition, list):
+        # Sort with reverse=true so that when we delete an item from the list, the order is not changed
+        hidden_keys = sorted(
+            [k for k in hidden_field.keys() if k.isdecimal()], reverse=True
+        )
+        field_contains_key = list_contains_key
+        field_get_key = int
+    for key in hidden_keys:
+        if field_contains_key(definition, key):
+            value = hidden_field.get(key)
+            convert_key = field_get_key(key)
+            if value is None:
+                del definition[convert_key]
+            else:
+                definition[convert_key] = hide_field(definition[convert_key], value)
+                if (
+                    definition[convert_key] == dict()
+                    or definition[convert_key] == list()
+                ):
+                    del definition[convert_key]
+    return definition
+
+
+def hide_fields(
+    definition: Dict[str, Any], hidden_fields: Optional[List[str]]
+) -> Dict[str, Any]:
+    if not hidden_fields:
+        return definition
+    result = copy.deepcopy(definition)
+    hidden_field_tree = build_hidden_field_tree(hidden_fields)
+    return hide_field(result, hidden_field_tree)


 def decode_response(resp) -> Tuple[Dict, List[str]]:
     """
     This function decodes unserialized responses from the Kubernetes python
@@ -620,3 +680,35 @@ def parse_quoted_string(quoted_string: str) -> Tuple[str, str]:
         raise ValueError("invalid quoted string: missing closing quote")
     return "".join(result), remainder
# hide_field_split2 returns the first key in hidden_field and the rest of the hidden_field
# We expect the first key to either be in brackets, to be terminated by the start of a left
# bracket, or to be terminated by a dot.
# examples would be:
# field.another.next -> (field, another.next)
# field[key].value -> (field, [key].value)
# [key].value -> (key, value)
# [one][two] -> (one, [two])
def hide_field_split2(hidden_field: str) -> Tuple[str, str]:
lbracket = hidden_field.find("[")
rbracket = hidden_field.find("]")
dot = hidden_field.find(".")
if lbracket == 0:
# skip past right bracket and any following dot
rest = hidden_field[rbracket + 1 :] # noqa: E203
if rest and rest[0] == ".":
rest = rest[1:]
return (hidden_field[lbracket + 1 : rbracket], rest) # noqa: E203
if lbracket != -1 and (dot == -1 or lbracket < dot):
return (hidden_field[:lbracket], hidden_field[lbracket:])
split = hidden_field.split(".", 1)
if len(split) == 1:
return split[0], ""
return split
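The split behaviour documented in the comment above can be checked with a standalone copy of the function (identical to the diff except the final return, which is normalised to a tuple here so all branches return the same type):

```python
from typing import Tuple

def hide_field_split2(hidden_field: str) -> Tuple[str, str]:
    # Return the first key of the field expression and the remainder.
    lbracket = hidden_field.find("[")
    rbracket = hidden_field.find("]")
    dot = hidden_field.find(".")
    if lbracket == 0:
        # Leading bracketed key: skip past the right bracket and any following dot
        rest = hidden_field[rbracket + 1 :]
        if rest and rest[0] == ".":
            rest = rest[1:]
        return (hidden_field[lbracket + 1 : rbracket], rest)
    if lbracket != -1 and (dot == -1 or lbracket < dot):
        # Key terminated by a left bracket
        return (hidden_field[:lbracket], hidden_field[lbracket:])
    # Key terminated by a dot (or the whole string)
    split = hidden_field.split(".", 1)
    if len(split) == 1:
        return split[0], ""
    return tuple(split)
```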

View File

@@ -117,11 +117,34 @@ def exists(resource: Optional[ResourceInstance]) -> bool:
     return bool(resource) and not empty_list(resource)
def cluster_operator_ready(resource: ResourceInstance) -> bool:
"""
Predicate to check if a single ClusterOperator is healthy.
Returns True if:
- "Available" is True
- "Degraded" is False
- "Progressing" is False
"""
if not resource:
return False
# Extract conditions from the resource's status
conditions = resource.get("status", {}).get("conditions", [])
status = {x.get("type", ""): x.get("status") for x in conditions}
return (
(status.get("Degraded") == "False")
and (status.get("Progressing") == "False")
and (status.get("Available") == "True")
)
 RESOURCE_PREDICATES = {
     "DaemonSet": daemonset_ready,
     "Deployment": deployment_ready,
     "Pod": pod_ready,
     "StatefulSet": statefulset_ready,
+    "ClusterOperator": cluster_operator_ready,
 }

View File

@@ -188,7 +188,8 @@ options:
     description:
     - Hide fields matching this option in the result
     - An example might be C(hidden_fields=[metadata.managedFields])
-    - Only field definitions that don't reference list items are supported (so V(spec.containers[0]) would not work)
+      or V(hidden_fields=[spec.containers[0].env[3].value])
+      or V(hidden_fields=[metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]])
     type: list
     elements: str
     version_added: 3.0.0

View File

@@ -48,7 +48,8 @@ options:
     description:
     - Hide fields matching any of the field definitions in the result
     - An example might be C(hidden_fields=[metadata.managedFields])
-    - Only field definitions that don't reference list items are supported (so V(spec.containers[0]) would not work)
+      or V(hidden_fields=[spec.containers[0].env[3].value])
+      or V(hidden_fields=[metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]])
     type: list
     elements: str
     version_added: 3.0.0

View File

@@ -77,6 +77,7 @@
     definition: "{{ hide_fields_base_configmap | combine({'data':{'anew':'value'}}) }}"
     hidden_fields:
     - data
+    - metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
     apply: true
   register: hf6
   diff: true
@@ -86,6 +87,22 @@
     that:
     - hf6.changed
- name: Ensure hidden fields are not present
assert:
that:
- >-
'annotations' not in hf6.result.metadata or
'kubectl.kubernetes.io/last-applied-configuration'
not in hf6.result.metadata.annotations
- >-
'annotations' not in hf6.diff.before.metadata or
'kubectl.kubernetes.io/last-applied-configuration'
not in hf6.diff.before.metadata.annotations
- >-
'annotations' not in hf6.diff.after.metadata or
'kubectl.kubernetes.io/last-applied-configuration'
not in hf6.diff.after.metadata.annotations
 - name: Hidden field should not show up in deletion
   k8s:
     definition: "{{ hide_fields_base_configmap}}"

View File

@@ -23,7 +23,7 @@
 - name: Update directory permissions
   file:
     path: "{{ manifests_dir.path }}"
-    mode: 0755
+    mode: '0755'
 - name: Create manifests files
   copy:

View File

@@ -45,7 +45,7 @@
 - name: make script as executable
   file:
     path: "{{ tmp_dir_path }}/install_kustomize.sh"
-    mode: 0755
+    mode: '0755'
 - name: Install kustomize
   command: "{{ tmp_dir_path }}/install_kustomize.sh"

View File

@@ -10,6 +10,7 @@ plugins/module_utils/k8sdynamicclient.py import-3.11!skip
 plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
 plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
 plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
+tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
 tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
 tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
 tests/unit/module_utils/fixtures/pods.yml yamllint!skip

View File

@@ -11,6 +11,7 @@ plugins/module_utils/version.py pylint!skip
 plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
 plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
 plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
+tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
 tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
 tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
 tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip


@@ -14,6 +14,7 @@ plugins/module_utils/version.py pylint!skip
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip


@@ -14,6 +14,7 @@ plugins/module_utils/version.py pylint!skip
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip


@@ -11,6 +11,7 @@ plugins/module_utils/version.py pylint!skip
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip


@@ -11,6 +11,7 @@ plugins/module_utils/version.py pylint!skip
plugins/modules/k8s.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_scale.py validate-modules:parameter-type-not-in-doc
plugins/modules/k8s_service.py validate-modules:parameter-type-not-in-doc
tests/unit/module_utils/fixtures/clusteroperator.yml yamllint!skip
tests/unit/module_utils/fixtures/definitions.yml yamllint!skip
tests/unit/module_utils/fixtures/deployments.yml yamllint!skip
tests/integration/targets/k8s_delete/files/deployments.yaml yamllint!skip


@@ -0,0 +1,99 @@
---
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: authentication
spec: {}
status:
  conditions:
    - message: All is well
      reason: AsExpected
      status: 'False'
      type: Degraded
    - message: 'AuthenticatorCertKeyProgressing: All is well'
      reason: AsExpected
      status: 'False'
      type: Progressing
    - message: All is well
      reason: AsExpected
      status: 'True'
      type: Available
    - message: All is well
      reason: AsExpected
      status: 'True'
      type: Upgradeable
    - reason: NoData
      status: Unknown
      type: EvaluationConditionsDetected
---
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: dns
spec: {}
status:
  conditions:
    - message: DNS "default" is available.
      reason: AsExpected
      status: 'True'
      type: Available
    - message: 'DNS "default" reports Progressing=True: "Have 2 available node-resolver pods, want 3."'
      reason: DNSReportsProgressingIsTrue
      status: 'True'
      type: Progressing
    - reason: DNSNotDegraded
      status: 'False'
      type: Degraded
    - message: 'DNS default is upgradeable: DNS Operator can be upgraded'
      reason: DNSUpgradeable
      status: 'True'
      type: Upgradeable
---
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: dns
spec: {}
status:
  conditions:
    - message: DNS "default" is available.
      reason: AsExpected
      status: 'True'
      type: Available
    - message: 'DNS "default" reports Progressing=True: "Have 2 available node-resolver pods, want 3."'
      reason: DNSReportsProgressingIsTrue
      status: 'False'
      type: Progressing
    - reason: DNSNotDegraded
      status: 'True'
      type: Degraded
    - message: 'DNS default is upgradeable: DNS Operator can be upgraded'
      reason: DNSUpgradeable
      status: 'False'
      type: Upgradeable
---
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: dns
spec: {}
status:
  conditions:
    - message: DNS "default" is available.
      reason: AsExpected
      status: 'False'
      type: Available
    - message: 'DNS "default" reports Progressing=True: "Have 2 available node-resolver pods, want 3."'
      reason: DNSReportsProgressingIsTrue
      status: 'True'
      type: Progressing
    - reason: DNSNotDegraded
      status: 'True'
      type: Degraded
    - message: 'DNS default is upgradeable: DNS Operator can be upgraded'
      reason: DNSUpgradeable
      status: 'False'
      type: Upgradeable


@@ -0,0 +1,264 @@
# Copyright [2025] [Red Hat, Inc.]
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest

from ansible_collections.kubernetes.core.plugins.module_utils.k8s.service import (
    build_hidden_field_tree,
    hide_fields,
)


def test_hiding_missing_field_does_nothing():
    output = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=dict(one="1", two="2")
    )
    hidden_fields = ["doesnotexist"]
    assert hide_fields(output, hidden_fields) == output


def test_hiding_simple_field():
    output = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=dict(one="1", two="2")
    )
    hidden_fields = ["metadata"]
    expected = dict(kind="ConfigMap", data=dict(one="1", two="2"))
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_only_key_in_dict_removes_dict():
    output = dict(kind="ConfigMap", metadata=dict(name="foo"), data=dict(one="1"))
    hidden_fields = ["data.one"]
    expected = dict(kind="ConfigMap", metadata=dict(name="foo"))
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_all_keys_in_dict_removes_dict():
    output = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=dict(one="1", two="2")
    )
    hidden_fields = ["data.one", "data.two"]
    expected = dict(kind="ConfigMap", metadata=dict(name="foo"))
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_multiple_fields():
    output = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=dict(one="1", two="2")
    )
    hidden_fields = ["metadata", "data.one"]
    expected = dict(kind="ConfigMap", data=dict(two="2"))
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_dict_key():
    output = dict(
        kind="ConfigMap",
        metadata=dict(
            name="foo",
            annotations={
                "kubectl.kubernetes.io/last-applied-configuration": '{"testvalue"}'
            },
        ),
        data=dict(one="1", two="2"),
    )
    hidden_fields = [
        "metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]",
    ]
    expected = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=dict(one="1", two="2")
    )
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_list_value_key():
    output = dict(
        kind="Pod",
        metadata=dict(name="foo"),
        spec=dict(
            containers=[
                dict(
                    name="containers",
                    image="busybox",
                    env=[
                        dict(name="ENV1", value="env1"),
                        dict(name="ENV2", value="env2"),
                        dict(name="ENV3", value="env3"),
                    ],
                )
            ]
        ),
    )
    hidden_fields = ["spec.containers[0].env[1].value"]
    expected = dict(
        kind="Pod",
        metadata=dict(name="foo"),
        spec=dict(
            containers=[
                dict(
                    name="containers",
                    image="busybox",
                    env=[
                        dict(name="ENV1", value="env1"),
                        dict(name="ENV2"),
                        dict(name="ENV3", value="env3"),
                    ],
                )
            ]
        ),
    )
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_last_list_item():
    output = dict(
        kind="Pod",
        metadata=dict(name="foo"),
        spec=dict(
            containers=[
                dict(
                    name="containers",
                    image="busybox",
                    env=[
                        dict(name="ENV1", value="env1"),
                    ],
                )
            ]
        ),
    )
    hidden_fields = ["spec.containers[0].env[0]"]
    expected = dict(
        kind="Pod",
        metadata=dict(name="foo"),
        spec=dict(
            containers=[
                dict(
                    name="containers",
                    image="busybox",
                )
            ]
        ),
    )
    assert hide_fields(output, hidden_fields) == expected


def test_hiding_nested_dicts_using_brackets():
    output = dict(
        kind="Pod",
        metadata=dict(name="foo"),
        spec=dict(
            containers=[
                dict(
                    name="containers",
                    image="busybox",
                    securityContext=dict(runAsUser=101),
                )
            ]
        ),
    )
    hidden_fields = ["spec.containers[0][securityContext][runAsUser]"]
    expected = dict(
        kind="Pod",
        metadata=dict(name="foo"),
        spec=dict(
            containers=[
                dict(
                    name="containers",
                    image="busybox",
                )
            ]
        ),
    )
    assert hide_fields(output, hidden_fields) == expected


def test_using_jinja_syntax():
    output = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=["0", "1", "2", "3"]
    )
    hidden_fields = ["data.2"]
    expected = dict(kind="ConfigMap", metadata=dict(name="foo"), data=["0", "1", "3"])
    assert hide_fields(output, hidden_fields) == expected


def test_remove_multiple_items_from_list():
    output = dict(
        kind="ConfigMap", metadata=dict(name="foo"), data=["0", "1", "2", "3"]
    )
    hidden_fields = ["data[0]", "data[2]"]
    expected = dict(kind="ConfigMap", metadata=dict(name="foo"), data=["1", "3"])
    assert hide_fields(output, hidden_fields) == expected


def test_hide_dict_and_nested_dict():
    output = {
        "kind": "Pod",
        "metadata": {
            "labels": {
                "control-plane": "controller-manager",
                "pod-template-hash": "687b856498",
            },
            "annotations": {
                "kubectl.kubernetes.io/default-container": "awx-manager",
                "creationTimestamp": "2025-01-16T12:40:43Z",
            },
        },
    }
    hidden_fields = ["metadata.labels.pod-template-hash", "metadata.labels"]
    expected = {
        "kind": "Pod",
        "metadata": {
            "annotations": {
                "kubectl.kubernetes.io/default-container": "awx-manager",
                "creationTimestamp": "2025-01-16T12:40:43Z",
            }
        },
    }
    assert hide_fields(output, hidden_fields) == expected


@pytest.mark.parametrize(
    "hidden_fields,expected",
    [
        (
            [
                "data[0]",
                "data[1]",
                "metadata.annotation",
                "metadata.annotation[0].name",
            ],
            {"data": {"0": None, "1": None}, "metadata": {"annotation": None}},
        ),
        (
            [
                "data[0]",
                "data[1]",
                "metadata.annotation[0].name",
                "metadata.annotation",
            ],
            {"data": {"0": None, "1": None}, "metadata": {"annotation": None}},
        ),
        (
            [
                "data[0]",
                "data[1]",
                "data",
                "metadata.annotation[0].name",
                "metadata.annotation",
            ],
            {"data": None, "metadata": {"annotation": None}},
        ),
    ],
)
def test_build_hidden_field_tree(hidden_fields, expected):
    assert build_hidden_field_tree(hidden_fields) == expected
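The simpler cases pinned down above (dotted paths into nested dicts, with dicts emptied by a removal being pruned away) can be illustrated with a minimal stand-in. The `hide_fields_sketch` below is a hypothetical simplification written for illustration only, not the collection's actual `hide_fields` implementation; it handles plain dot-separated dict paths and silently ignores missing paths, and nothing more (no bracket syntax, no list indices).

```python
import copy


def hide_fields_sketch(resource, hidden_fields):
    """Illustrative stand-in: remove dot-separated dict paths, pruning
    any dicts that the removal leaves empty. NOT the real hide_fields."""
    result = copy.deepcopy(resource)  # never mutate the caller's structure
    for path in hidden_fields:
        parts = path.split(".")
        node, parents, ok = result, [], True
        for key in parts[:-1]:  # walk down to the parent of the target key
            if not isinstance(node, dict) or key not in node:
                ok = False  # missing paths are a no-op, as the tests expect
                break
            parents.append((node, key))
            node = node[key]
        if ok and isinstance(node, dict) and parts[-1] in node:
            del node[parts[-1]]
            while not node and parents:  # prune dicts emptied by the delete
                parent, key = parents.pop()
                del parent[key]
                node = parent
    return result
```

Under these assumptions, hiding `data.one` from a ConfigMap whose `data` holds only that key removes the now-empty `data` dict as well, mirroring `test_hiding_only_key_in_dict_removes_dict`.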


@@ -9,6 +9,7 @@ from ansible_collections.kubernetes.core.plugins.module_utils.k8s.waiter import
    DummyWaiter,
    Waiter,
    clock,
    cluster_operator_ready,
    custom_condition,
    deployment_ready,
    exists,
@@ -29,6 +30,7 @@ def resources(filepath):
RESOURCES = resources("fixtures/definitions.yml")
PODS = resources("fixtures/pods.yml")
DEPLOYMENTS = resources("fixtures/deployments.yml")
CLUSTER_OPERATOR = resources("fixtures/clusteroperator.yml")
def test_clock_times_out():
@@ -119,3 +121,10 @@ def test_get_waiter_returns_correct_waiter():
    ).predicate.func
    == custom_condition
)
@pytest.mark.parametrize(
    "clusteroperator,expected", zip(CLUSTER_OPERATOR, [True, False, False, False])
)
def test_cluster_operator(clusteroperator, expected):
    assert cluster_operator_ready(clusteroperator) is expected
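The parametrized test expects exactly one of the four fixtures to be ready: the `authentication` operator reports Available=True, Progressing=False, Degraded=False, while each `dns` variant violates one or more of those. The readiness rule described in the PR can be sketched as a small predicate; `cluster_operator_ready_sketch` below is a hypothetical illustration, not the collection's actual `cluster_operator_ready` from waiter.py.

```python
def cluster_operator_ready_sketch(resource):
    """Illustrative predicate: a ClusterOperator counts as ready only when
    its conditions report Available=True, Progressing=False, Degraded=False."""
    conditions = resource.get("status", {}).get("conditions", [])
    # Map condition type -> reported status string, e.g. {"Available": "True"}
    seen = {c.get("type"): c.get("status") for c in conditions}
    wanted = {"Available": "True", "Progressing": "False", "Degraded": "False"}
    return all(seen.get(ctype) == status for ctype, status in wanted.items())
```

The waiter can then treat the whole cluster as ready only when this predicate holds for every ClusterOperator instance, which is what lets the playbook in the PR description succeed with a single `wait: true` task instead of iterating over the `k8s_info` JSON by hand.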