This is a backport of PR #915 as merged into main (6a0635a).
SUMMARY
By hiding fields before creating a diff, hidden fields will not be shown in the resulting diffs and will therefore not trigger the changed condition.
The issue can only be reproduced when a mutating webhook changes the object while the kubernetes.core.k8s module is working with it.
kubevirt/kubevirt.core#145
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
kubernetes.core.module_utils.k8s.service
ADDITIONAL INFORMATION
Run kubernetes.core.k8s and create an object with hidden fields. Afterwards, run kubernetes.core.k8s again and let a webhook mutate the object that the module is working with. The module should return changed: no.
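A minimal reproduction sketch, assuming a mutating webhook is already installed in the cluster; the manifest file and hidden field are illustrative:
# manifest.yml is a hypothetical manifest for an object the webhook mutates
- name: Create an object with a field hidden from diffs
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('file', 'manifest.yml') | from_yaml }}"
    hidden_fields:
      - metadata.annotations

- name: Run again after the webhook has mutated the object (should report changed=no)
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('file', 'manifest.yml') | from_yaml }}"
    hidden_fields:
      - metadata.annotations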
Reviewed-by: Alina Buzachis
This is a backport of PR #899 as merged into main (1705ced).
This reverts commit 6efabd3.
SUMMARY
Fixes #870
A better solution is necessary to address #782. The current code makes getting manifests practically unusable. We need to revert this commit until a better solution is found.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
kubeconfig
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
This PR is a rebase of #898 so that CI can pass.
Thanks @efussi for your collaboration.
Closes #892
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
SUMMARY
This allows us to ignore e.g. the last-applied-configuration annotation by specifying
metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
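A usage sketch (the ConfigMap name and namespace are illustrative):
- name: Read a ConfigMap while ignoring the last-applied-configuration annotation
  kubernetes.core.k8s_info:
    kind: ConfigMap
    name: example-cm
    namespace: example
    hidden_fields:
      - metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]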
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
hidden_fields
This replaces #643 as I no longer have permissions to push to branches in this repo
Reviewed-by: Bikouo Aubin
Reviewed-by: Helen Bailey <hebailey@redhat.com>
Reviewed-by: GomathiselviS <gomathiselvi@gmail.com>
Reviewed-by: Alina Buzachis
SUMMARY
Fixes #869
During an OpenShift installation, one of the checks that the cluster is ready to proceed with configuration is to ensure that the Cluster Operators are in an Available: True, Degraded: False, Progressing: False state. While you can currently use the k8s_info module to get a JSON response, the resulting JSON needs to be iterated over several times to get the appropriate status.
This PR adds functionality to waiter.py which loops over all resource instances of the cluster operators. If any of them is not ready, the waiter returns False and the task fails. If the task returns successfully, you can assume that all the cluster operators are healthy.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
waiter.py
ADDITIONAL INFORMATION
A simple playbook will trigger the waiter.py to watch the ClusterOperator object
---
- name: get operators
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get cluster operators
      kubernetes.core.k8s_info:
        api_version: v1
        kind: ClusterOperator
        kubeconfig: "/home/ocp/one/auth/kubeconfig"
        wait: true
        wait_timeout: 30
      register: cluster_operators
This will produce a simple response if everything is functioning properly:
PLAY [get operators] *************************************************************************************************
TASK [Get cluster operators] *****************************************************************************************
ok: [localhost]
PLAY RECAP ***********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
If the timeout is reached:
PLAY [get operators] *************************************************************************************************
TASK [Get cluster operators] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to gather information about ClusterOperator(s) even after waiting for 30 seconds"}
PLAY RECAP ***********************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
UNSOLVED: How to know which Operators are failing
Reviewed-by: Mandar Kulkarni <mandar242@gmail.com>
Reviewed-by: Bikouo Aubin
* new module helm_registry_auth
* Initial integration tests
* final update copyright and integration test before pr
* update link to pr in changelog fragment
* reformat plugins/module_utils/helm.py with black
to fix linters in actions
* attempt to fix unit test
unit test was missing initially
* fix https://pycqa.github.io/isort/ linter
* next attempt to fix unit test
* remove unused and unsupported helm_args_common
* remove unused imports and fix other linters errors
* another fix for unit test
* fix issue introduced by commit ff02893a12a31f9c44b5c48f9a8bf85057295961
* add binary_path to arg_spec
* return helm_cmd in the output of check mode
remove changelog fragment
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* description suggestion from reviewer/maintainer
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* remove changed from module return
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* remove redundant code
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* Update plugins/modules/helm_registry_auth.py
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* consider support of logout when user is not logged in
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* consider support helm < 3.0.0
* Revert "consider support helm < 3.0.0"
This reverts commit f20004d196.
* reintroduce support of helm version less than 3.8.0
reference: https://helm.sh/docs/topics/registries/#enabling-oci-support-prior-to-v380
* revert reintroducing support of helm < 3.8.0
reason: didn't find a quick way to deal with tests
* update documentation with the recent module updates
* Update plugins/modules/helm_registry_auth.py
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* add test of logout idempotency
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
* fix linters
* fix indentation in the integration tests
* create tests/integration/targets/helm_registry_auth/aliases
* fix integration test (typo)
* fix integration tests (test wrong cred)
* add stderr when module fail
* another attempt to fix integration test
* fix assertion in integration test so it is not affected by #830
---------
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
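A minimal usage sketch of the new helm_registry_auth module; the registry host and credentials are illustrative, and only the core options are shown:
- name: Log in to an OCI registry
  kubernetes.core.helm_registry_auth:
    state: present
    host: registry.example.com
    username: alice
    password: "{{ registry_password }}"

# Logout is handled gracefully even if the user is not logged in
- name: Log out of the registry
  kubernetes.core.helm_registry_auth:
    state: absent
    host: registry.example.com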
SUMMARY
Fixes #792.
The function wait_for_pod_deletion in k8s_drain never checks on which node a pod is actually running:
try:
    response = self._api_instance.read_namespaced_pod(
        namespace=pod[0], name=pod[1]
    )
    if not response:
        pod = None
    time.sleep(wait_sleep)
This means that if a pod is successfully evicted and restarted with the same name on a new node, k8s_drain does not notice and thinks that the original pod is still running. This is the case for pods which are part of a stateful set.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
k8s_drain
Reviewed-by: Mike Graves <mgraves@redhat.com>
SUMMARY
If the helm CLI version includes -rc.1, for example, the version check fails due to an incomplete regex.
The error can be triggered if you use helm v3.15.0-rc.1, for example, and apply a helm chart with wait: true.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
helm
helm_pull
ADDITIONAL INFORMATION
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Eric G.
Reviewed-by: Mike Graves <mgraves@redhat.com>
Remove deprecated function from module_utils/common.py
SUMMARY
Remove deprecated functions and class from module_utils/common.py in order to prepare release 4.0.0
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
module_utils/common.py
Reviewed-by: Alina Buzachis
k8s - remove support for merge_type=json
SUMMARY
Support for merge_type=json has been removed in version 4.0.0. Please use kubernetes.core.k8s_json_patch instead.
ISSUE TYPE
Bugfix Pull Request
Docs Pull Request
Feature Pull Request
New Module Pull Request
COMPONENT NAME
k8s.py
ADDITIONAL INFORMATION
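For reference, a sketch of an equivalent JSON patch using the replacement module (the target resource and patch are illustrative):
- name: Scale a Deployment with k8s_json_patch instead of merge_type=json
  kubernetes.core.k8s_json_patch:
    kind: Deployment
    name: example
    namespace: default
    patch:
      - op: replace
        path: /spec/replicas
        value: 2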
Reviewed-by: Bikouo Aubin
Reviewed-by: Mike Graves <mgraves@redhat.com>
k8s: Display warnings to users
SUMMARY
This changes K8sService and the k8s module so warnings returned by the K8S API are displayed to the user.
Fixes kubevirt/kubevirt.core#30, fixes kubevirt/kubevirt.core#31
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
k8s module
K8sService
ADDITIONAL INFORMATION
Before:
TASK [Create VM] **********************************************************************************************************************************************
ok: [localhost]
After:
TASK [Create VM] **********************************************************************************************************************************************
[WARNING]: unknown field "spec.template.spec.disk"
[WARNING]: unknown field "spec.template.spec.domain.bogus"
ok: [localhost]
Reviewed-by: Adam Miller <admiller@redhat.com>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Felix Matouschek <felix@matouschek.org>
helm - expand kubeconfig path with user's home dir
SUMMARY
Currently the helm module fails when providing the default kubeconfig path explicitly, while the same path is fine for the k8s module.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
helm
ADDITIONAL INFORMATION
- name: Deploy kubelet-csr-approver
  delegate_to: client
  run_once: true
  kubernetes.core.helm:
    update_repo_cache: true
    kubeconfig: "~/.kube/config"
    state: present
    name: kubelet-csr-approver
    namespace: kubelet-csr-approver
    create_namespace: true
    chart_ref: kubelet-csr-approver/kubelet-csr-approver
    chart_version: 1.0.5
    values: "{{ lookup('template', 'values.yaml.j2') | from_yaml }}"
    atomic: true
Before change:
TASK [kubernetes/kubelet_csr_approver : Deploy kubelet-csr-approver] ***
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: FileNotFoundError: [Errno 2] No such file or directory: '~/.kube/config'
fatal: [node-1 -> client(192.168.121.56)]: FAILED! => {"changed": false, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1697293347.7135417-118207-9805169252135/AnsiballZ_helm.py\", line 107, in <module>\r\n _ansiballz_main()\r\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1697293347.7135417-118207-9805169252135/AnsiballZ_helm.py\", line 99, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1697293347.7135417-118207-9805169252135/AnsiballZ_helm.py\", line 47, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.kubernetes.core.plugins.modules.helm', init_globals=dict(_module_fqn='ansible_collections.kubernetes.core.plugins.modules.helm', _modlib_path=modlib_path),\r\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py\", line 924, in <module>\r\n File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py\", line 737, in main\r\n File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/modules/helm.py\", line 435, in run_repo_update\r\n File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py\", line 169, in run_helm_command\r\n File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py\", line 162, in env_update\r\n File \"/tmp/ansible_kubernetes.core.helm_payload_o8s36dti/ansible_kubernetes.core.helm_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/helm.py\", line 120, in _prepare_helm_environment\r\nFileNotFoundError: [Errno 2] No such file or directory: '~/.kube/config'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
After change:
TASK [kubernetes/kubelet_csr_approver : Deploy kubelet-csr-approver] ***
changed: [node-1 -> client(192.168.121.56)]
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
* Cleanup gha
* test by removing matrix excludes
* Rename sanity tests
* trigger integration tests
* Fix ansible-lint workflow
* Fix concurrency
* Add ansible-lint config
* Add ansible-lint config
* Fix integration and lint issues
* integration wf
* fix yamllint issues
* fix yamllint issues
* update readme and add ignore-2.16.txt
* fix ansible-doc
* Add version
* Use /dev/random to generate random data
The GHA environment has difficulty generating entropy. Trying to read
from /dev/urandom just blocks forever. We don't care if the random data
is cryptographically secure; it's just garbage data for the test. Read
from /dev/random, instead. This is only used during the k8s_copy test
target.
This also removes the custom test module that was being used to generate
the files. It's not worth maintaining this for two tasks that can be
replaced with some simple command/shell tasks.
* Fix sanity errors
* test github_action fix
* Address review comments
* Remove default types
* review comments
* isort fixes
* remove tags
* Add setuptools to venv
* Test gh changes
* update changelog
* update ignore-2.16
* Fix indentation in inventory plugin example
* Update .github/workflows/integration-tests.yaml
* Update integration-tests.yaml
---------
Co-authored-by: Mike Graves <mgraves@redhat.com>
Co-authored-by: Bikouo Aubin <79859644+abikouo@users.noreply.github.com>
Provide a mechanism to hide fields from output
SUMMARY
The k8s and k8s_info modules can be a little noisy in verbose mode, and most of that is due to managedFields.
If we can provide a mechanism to hide managedFields, the output is a lot more useful.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
k8s, k8s_info
ADDITIONAL INFORMATION
Before
ANSIBLE_COLLECTIONS_PATH=../../.. ansible -m k8s_info -a 'kind=ConfigMap name=hide-fields-cm namespace=hide-fields' localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"api_found": true,
"changed": false,
"resources": [
{
"apiVersion": "v1",
"data": {
"another": "value",
"hello": "world"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"another\":\"value\",\"hello\":\"world\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"hide-fields-cm\",\"namespace\":\"hide-fields\"}}\n"
},
"creationTimestamp": "2023-06-13T01:47:47Z",
"managedFields": [
{
"apiVersion": "v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:data": {
".": {},
"f:another": {},
"f:hello": {}
},
"f:metadata": {
"f:annotations": {
".": {},
"f:kubectl.kubernetes.io/last-applied-configuration": {}
}
}
},
"manager": "kubectl-client-side-apply",
"operation": "Update",
"time": "2023-06-13T01:47:47Z"
}
],
"name": "hide-fields-cm",
"namespace": "hide-fields",
"resourceVersion": "2557394",
"uid": "f233da63-6374-4079-9825-3562c0ed123c"
}
}
]
}
After
ANSIBLE_COLLECTIONS_PATH=../../.. ansible -m k8s_info -a 'kind=ConfigMap name=hide-fields-cm namespace=hide-fields hidden_fields=metadata.managedFields' localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"api_found": true,
"changed": false,
"resources": [
{
"apiVersion": "v1",
"data": {
"another": "value",
"hello": "world"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"another\":\"value\",\"hello\":\"world\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"hide-fields-cm\",\"namespace\":\"hide-fields\"}}\n"
},
"creationTimestamp": "2023-06-13T01:47:47Z",
"name": "hide-fields-cm",
"namespace": "hide-fields",
"resourceVersion": "2557394",
"uid": "f233da63-6374-4079-9825-3562c0ed123c"
}
}
]
}
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Will Thames
make name optional to delete all resources for the specified resource type
SUMMARY
closes #504
The k8s module should allow deleting all resources of the specified resource type in a namespace.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
k8s
ADDITIONAL INFORMATION
Delete all Pods from namespace test:
- k8s:
    namespace: test
    kind: Pod
    api_version: v1
    delete_all: true
    state: absent
Reviewed-by: Gonéri Le Bouder <goneri@lebouder.net>
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
k8s_info - fix issue with kubernetes-client caching when api-server was available
SUMMARY
closes #508
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
k8s_info
ADDITIONAL INFORMATION
Reviewed-by: Mike Graves <mgraves@redhat.com>
helm - add support for --set options when running helm install
SUMMARY
Support setting the options --set, --set-string, --set-file and --set-json when running helm install.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
helm
ADDITIONAL INFORMATION
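A sketch of how these map onto the module's set_values option (the chart and values are illustrative; value_type selects among the raw/string/file/json behaviors):
- name: Install a chart with helm --set style options
  kubernetes.core.helm:
    name: example
    chart_ref: example-repo/example-chart
    release_namespace: default
    set_values:
      # equivalent to --set replicaCount=2
      - value: replicaCount=2
        value_type: raw
      # equivalent to --set-string image.tag=1.0.0
      - value: image.tag=1.0.0
        value_type: string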
Reviewed-by: Alina Buzachis
Reviewed-by: Bikouo Aubin
Reviewed-by: Mike Graves <mgraves@redhat.com>
Helm - Fix issue with alternative kubeconfig
SUMMARY
closes #538
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
helm modules
Reviewed-by: Mike Graves <mgraves@redhat.com>
fix multiple issues with dry_run logic
SUMMARY
Fix multiple issues with dry_run logic
The parameter value passed to the client is now set to dry_run=All instead of dry_run=True.
Add a conditional check on the Kubernetes release before setting dry_run.
Add an integration test that checks that server-side dry run is being used during check mode.
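A sketch of a task exercising this path; with check mode enabled, the module should now perform a server-side dry run on clusters that support it (the namespace name is illustrative):
- name: Create a namespace in check mode
  kubernetes.core.k8s:
    state: present
    kind: Namespace
    name: dry-run-test
  check_mode: true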
ISSUE TYPE
Bugfix Pull Request
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Jill R
k8s_cp - fix issue when directory contains space in its name
Depends-On: #549
SUMMARY
There is a remaining issue not addressed by #512: when copying a directory from a Pod to the local filesystem, if the directory contains a space in its name, it is not copied.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
k8s_cp
ADDITIONAL INFORMATION
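A sketch of the previously failing operation (paths are illustrative):
- name: Copy a directory whose name contains a space from a Pod to the local filesystem
  kubernetes.core.k8s_cp:
    namespace: default
    pod: example-pod
    state: from_pod
    remote_path: "/tmp/dir with space"
    local_path: "/tmp/local copy"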
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
k8s - fix issue with server side apply
SUMMARY
Fixes #548 and #547
ISSUE TYPE
Bugfix Pull Request
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
Added possibility to get all values by helm_info module
SUMMARY
A get_all_values parameter has been added, which is passed to the get_values function. It defaults to False and is not required.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
helm_info
ADDITIONAL INFORMATION
Unfortunately, the helm_info module lacks the ability to get all the values of a helm release, including the default ones. This restricts upgrade and config migration capabilities. A get_all_values parameter has been added. This parameter, if set, adds the -a flag to the helm get values call. The parameter is not required and defaults to False, so backwards compatibility is preserved.
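A usage sketch (the release name and namespace are illustrative):
- name: Get all values of a release, including chart defaults
  kubernetes.core.helm_info:
    release_name: example
    release_namespace: default
    get_all_values: true
  register: release_info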
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Bikouo Aubin
Honor aliases for lookup and inventory plugins
This rebases and extends PR #71.
ISSUE TYPE
Bugfix Pull Request
Reviewed-by: Mike Graves <mgraves@redhat.com>
Ensure CoreExceptions are handled gracefully
SUMMARY
CoreExceptions, when raised, should have a reasonably helpful and
actionable message associated with them. This adds a final check in
module execution to gracefully fail from these exceptions. A new
fail_from_exception method is added both to simplify exiting the module,
and to ensure that any chained exceptions are available when using -vvv.
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Alina Buzachis
Reviewed-by: Joseph Torcasso
Port changes from main to refactored branch
Depends-on: ansible/ansible-zuul-jobs#1563
SUMMARY
This PR contains several commits that complete the rebase of the 2.x-refactor branch onto main. Most of the changes here had to be manually backported after rebasing as the original changes were to code that will be deprecated. In addition, rather than trying to manually sort out conflicts and changes to the sanity ignores, I rewrote the refresh_ignore_files script to fully automate the management of ignore files. Previously, these files were both manually edited and auto-generated. This should no longer be the case, and these files should never be manually edited going forward.
For the purposes of reviewing and history, I kept all changes in separate commits tied to the original commit being backported.
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Jill R
This primarily moves the diff and wait logic from the various service
methods to perform_action to eliminate code duplication. I also moved
the diff_objects function out of the service object and moved most of
the find_resource logic to a new resource client method. We ended up
with several modules creating a service object just to use one of these
methods, so it seemed to make sense to make these more accessible.
Move module dependency functions outside of module
SUMMARY
This moves the has_at_least and requires functions that had been on the
module to top level functions. The functions on the module now call
these with a few added bits of functionality.
Moving these functions to the top level and removing their requirement
on having a module makes them usable in situations where we may not yet
have a module, such as during client creation.
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Alina Buzachis
K8sService class
SUMMARY
This refactors the perform_action() logic from common.py into a separate K8sService class.
TODO:
Unit tests.
ISSUE TYPE
New Module Pull Request
COMPONENT NAME
service.py
Reviewed-by: Abhijeet Kasurde
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
Add new waiter
SUMMARY
This refactors the waiter logic from common.py into a separate module.
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Alina Buzachis
Initial work K8S client class
SUMMARY
Initial work on K8SClient Class.
Reviewed-by: Mike Graves <mgraves@redhat.com>
Reviewed-by: Alina Buzachis
Add resource definition refactor
SUMMARY
This refactors most of the logic around creating a list of functional
resource definitions based on input parameters for the module.
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Alina Buzachis
Reviewed-by: Abhijeet Kasurde
Reviewed-by: Mike Graves <mgraves@redhat.com>
* Add new AnsibleK8SModule class
This class is intended to replace part of the K8SAnsibleMixin class and
is part of a larger refactoring effort.
* Fix sanity errors
* Fix unit tests
* Add mock to test requirements
k8s - fix issue when trying to delete resources using label_selectors
SUMMARY
The kubernetes dynamic client has a label_selector parameter for the delete method; however, based on the REST API documentation, resources cannot be deleted using the labelSelector option. This fix updates the way resources are deleted: the list of matching resources is deleted one after another, as in the kubectl go client.
Fixes #428
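With the fix, deleting by label works as in this sketch, each matching resource being deleted in turn (namespace and label are illustrative):
- name: Delete all Pods matching a label
  kubernetes.core.k8s:
    state: absent
    kind: Pod
    namespace: test
    label_selectors:
      - app=example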
ISSUE TYPE
Bugfix Pull Request
Reviewed-by: Abhijeet Kasurde
Upgrade black version
SUMMARY
Move off of beta version of black and pin to current calendar year
version.
The only manual changes here are to tox.ini. Everything else is from running the new version of black.
ISSUE TYPE
COMPONENT NAME
ADDITIONAL INFORMATION
Reviewed-by: Abhijeet Kasurde