add support of kubectl_local_env_vars (#698)

SUMMARY

Support local environment variables that may need to be set on the Ansible controller before the connection is established and that are used by the kubectl command. This PR addresses #698.

The main idea is to support additional local environment variables that may be required by kubectl itself, e.g. for authorization against public cloud Kubernetes environments.

ISSUE TYPE

Feature Pull Request

COMPONENT NAME

kubernetes.core.kubectl connection plugin

ADDITIONAL INFORMATION

This PR implements local environment variable support for the kubectl connection plugin, which is useful when running kubectl against a public cloud Kubernetes environment that requires additional authorization (e.g. the aws CLI) on top of the kubeconfig file. More detail in #698.

The output below shows the connection plugin passing local environment variables to the kubectl command (captured with some debug output that was used during development and removed afterwards). Test playbook:

```yaml
# test.yaml
- hosts: localhost
  gather_facts: no
  any_errors_fatal: yes
  vars:
    ansible_connection: "kubectl"
    ansible_kubectl_namespace: "test"
    ansible_kubectl_config: "/.kube/config"
    ansible_kubectl_pod: "ubuntu"
    ansible_kubectl_container: "ubuntu"
    ansible_kubectl_local_env_vars:
      TESTVAR1: "test"
      TESTVAR2: "test"
      TESTVAR3: "test"
  environment:
    TEST_ENV1: value1
    TEST_ENV2: value2
  tasks:
    - name: test
      ansible.builtin.shell: env
      register: result
    - debug:
        var: result.stdout_lines
```

Running `ansible-playbook test.yaml -vvv` shows TESTVAR1-3 present in the local environment used to invoke kubectl:

```
<127.0.0.1> ESTABLISH kubectl CONNECTION
<127.0.0.1> ENV: TESTVAR1=test
<127.0.0.1> ENV: TESTVAR2=test
<127.0.0.1> ENV: TESTVAR3=test
<127.0.0.1> EXEC ['/usr/local/bin/kubectl', '-n', 'test', '--kubeconfig', '/.kube/config', 'exec', '-i', 'ubuntu', '-c', 'ubuntu', '--', '/bin/sh', '-c', "/bin/sh -c 'echo ~ && sleep 0'"]
```

while the env output inside the pod contains TEST_ENV1/TEST_ENV2 (passed via the environment keyword) but none of the TESTVAR* variables, confirming that the local variables are applied only to the controller-side kubectl process:

```
"result.stdout_lines": [
    "KUBERNETES_PORT=tcp://10.96.0.1:443",
    "KUBERNETES_SERVICE_PORT=443",
    "HOSTNAME=ubuntu",
    "HOME=/root",
    "LC_CTYPE=C.UTF-8",
    "TEST_ENV1=value1",
    "TEST_ENV2=value2",
    "TERM=xterm",
    "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "KUBERNETES_PORT_443_TCP_PORT=443",
    "KUBERNETES_PORT_443_TCP_PROTO=tcp",
    "KUBERNETES_SERVICE_PORT_HTTPS=443",
    "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
    "KUBERNETES_SERVICE_HOST=10.96.0.1",
    "PWD=/"
]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

Reviewed-by: Bikouo Aubin
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Mike Graves <mgraves@redhat.com>
Kubernetes Collection for Ansible
This repository hosts the kubernetes.core (formerly known as community.kubernetes) Ansible Collection.
The collection includes a variety of Ansible content to help automate the management of applications in Kubernetes and OpenShift clusters, as well as the provisioning and maintenance of clusters themselves.
Ansible version compatibility
This collection has been tested against the following Ansible versions: >=2.14.0.
For collections that support Ansible 2.9, please ensure you update your playbooks to use
fully qualified collection names (for example, kubernetes.core.k8s).
Plugins and modules within a collection may be tested with only specific Ansible versions.
A collection may contain metadata that identifies these versions.
PEP440 is the schema used to describe the versions of Ansible.
Python Support
- This collection supports Python 3.9 and newer.
Note: Python 2 reached end of life on January 1, 2020. Please switch to Python 3.
Kubernetes Version Support
This collection supports Kubernetes versions >= 1.24.
Included content
Click on the name of a plugin or module to view that content's documentation:
Connection plugins
| Name | Description |
|---|---|
| kubernetes.core.kubectl | Execute tasks in pods running on Kubernetes. |
K8s filter plugins
| Name | Description |
|---|---|
| kubernetes.core.k8s_config_resource_name | Generate resource name for the given resource of type ConfigMap, Secret |
Inventory plugins
| Name | Description |
|---|---|
| kubernetes.core.k8s | Kubernetes (K8s) inventory source |
Lookup plugins
| Name | Description |
|---|---|
| kubernetes.core.k8s | Query the K8s API |
| kubernetes.core.kustomize | Build a set of kubernetes resources using a 'kustomization.yaml' file. |
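For instance, the k8s lookup can read a resource inline in a task; a minimal sketch, where the namespace and resource names are placeholders:

```yaml
- name: Read a ConfigMap through the k8s lookup plugin
  ansible.builtin.debug:
    msg: "{{ lookup('kubernetes.core.k8s', kind='ConfigMap', namespace='myapp', resource_name='myapp-config') }}"
```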
Modules
| Name | Description |
|---|---|
| kubernetes.core.helm | Manages Kubernetes packages with the Helm package manager |
| kubernetes.core.helm_info | Get information from Helm package deployed inside the cluster |
| kubernetes.core.helm_plugin | Manage Helm plugins |
| kubernetes.core.helm_plugin_info | Gather information about Helm plugins |
| kubernetes.core.helm_pull | Download a chart from a repository and (optionally) unpack it in a local directory. |
| kubernetes.core.helm_repository | Manage Helm repositories. |
| kubernetes.core.helm_template | Render chart templates |
| kubernetes.core.k8s | Manage Kubernetes (K8s) objects |
| kubernetes.core.k8s_cluster_info | Describe Kubernetes (K8s) cluster, APIs available and their respective versions |
| kubernetes.core.k8s_cp | Copy files and directories to and from a pod. |
| kubernetes.core.k8s_drain | Drain, Cordon, or Uncordon node in k8s cluster |
| kubernetes.core.k8s_exec | Execute command in Pod |
| kubernetes.core.k8s_info | Describe Kubernetes (K8s) objects |
| kubernetes.core.k8s_json_patch | Apply JSON patch operations to existing objects |
| kubernetes.core.k8s_log | Fetch logs from Kubernetes resources |
| kubernetes.core.k8s_rollback | Rollback Kubernetes (K8S) Deployments and DaemonSets |
| kubernetes.core.k8s_scale | Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job. |
| kubernetes.core.k8s_service | Manage Services on Kubernetes |
| kubernetes.core.k8s_taint | Taint a node in a Kubernetes/OpenShift cluster |
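As a quick illustration of the Helm modules, a minimal sketch installing a chart with kubernetes.core.helm (the release name, chart reference, and namespace are placeholders):

```yaml
- name: Install a chart with kubernetes.core.helm
  kubernetes.core.helm:
    name: myapp                  # release name (placeholder)
    chart_ref: bitnami/nginx     # chart reference (placeholder)
    release_namespace: myapp
    create_namespace: true
```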
Installation and Usage
Installing the Collection from Ansible Galaxy
Before using the Kubernetes collection, you need to install it with the Ansible Galaxy CLI:
```shell
ansible-galaxy collection install kubernetes.core
```
You can also include it in a requirements.yml file and install it via ansible-galaxy collection install -r requirements.yml, using the format:
```yaml
---
collections:
  - name: kubernetes.core
    version: 3.0.0
```
Installing the Kubernetes Python Library
Content in this collection requires the Kubernetes Python client to interact with Kubernetes' APIs. You can install it with:
```shell
pip3 install kubernetes
```
Using modules from the Kubernetes Collection in your playbooks
It's preferable to refer to content in this collection by its Fully Qualified Collection Name (FQCN), for example kubernetes.core.k8s_info:
```yaml
---
- hosts: localhost
  gather_facts: false
  connection: local

  tasks:
    - name: Ensure the myapp Namespace exists.
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: myapp
        state: present

    - name: Ensure the myapp Service exists in the myapp Namespace.
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: myapp
            namespace: myapp
          spec:
            type: LoadBalancer
            ports:
              - port: 8080
                targetPort: 8080
            selector:
              app: myapp

    - name: Get a list of all Services in the myapp namespace.
      kubernetes.core.k8s_info:
        kind: Service
        namespace: myapp
      register: myapp_services

    - name: Display number of Services in the myapp namespace.
      debug:
        var: myapp_services.resources | count
```
If you are upgrading older playbooks built prior to Ansible 2.10 and this collection's existence, you can also define collections in your play and refer to this collection's modules by their short names, as you did in Ansible 2.9 and below:
```yaml
---
- hosts: localhost
  gather_facts: false
  connection: local

  collections:
    - kubernetes.core

  tasks:
    - name: Ensure the myapp Namespace exists.
      k8s:
        api_version: v1
        kind: Namespace
        name: myapp
        state: present
```
For documentation on how to use individual modules and other content included in this collection, please see the links in the 'Included content' section earlier in this README.
Ansible Turbo mode Tech Preview
The kubernetes.core collection supports Ansible Turbo mode as a tech preview via the cloud.common collection. By default, this feature is disabled. To enable Turbo mode for modules, set the environment variable ENABLE_TURBO_MODE=1 on the managed node. For example:
```yaml
---
- hosts: remote
  environment:
    ENABLE_TURBO_MODE: 1
  tasks:
    ...
```
To enable Turbo mode for the k8s lookup plugin, set the environment variable ENABLE_TURBO_MODE=1 on the controller. This does not work when
defined in the playbook using the environment keyword as above; you must set it in the shell using export ENABLE_TURBO_MODE=1.
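For example (a sketch; the lookup arguments are placeholders), export the variable in the controller's shell, then use the lookup as usual:

```yaml
# In the controller's shell, before running ansible-playbook:
#   export ENABLE_TURBO_MODE=1
- name: List namespaces through the k8s lookup with Turbo mode enabled
  ansible.builtin.debug:
    msg: "{{ lookup('kubernetes.core.k8s', kind='Namespace') }}"
```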
Please read more about Ansible Turbo mode here.
Testing and Development
If you want to develop new content for this collection or improve what's already here, the easiest way to work on the collection is to clone it into one of the configured COLLECTIONS_PATHS, and work on it there.
See Contributing to kubernetes.core.
Testing with ansible-test
The tests directory contains configuration for running sanity and integration tests using ansible-test.
You can run the collection's test suites with the commands:
```shell
make test-sanity
make test-integration
make test-unit
```
Testing with molecule
There are also integration tests in the molecule directory which are meant to be run against a local Kubernetes cluster, e.g. using KinD or Minikube. To set up a local cluster using KinD and run Molecule:
```shell
kind create cluster
make test-molecule
```
Publishing New Versions
Releases are automatically built and pushed to Ansible Galaxy for any new tag. Before tagging a release, make sure to do the following:
- Update the version in the following places:
  - The `version` in `galaxy.yml`
  - This README's `requirements.yml` example
  - The `VERSION` in `Makefile`
- Update the CHANGELOG:
  - Make sure you have `antsibull-changelog` installed.
  - Make sure there are fragments for all known changes in `changelogs/fragments`.
  - Run `antsibull-changelog release`.
- Commit the changes and create a PR with the changes. Wait for tests to pass, then merge.
- Tag the version in Git and push to GitHub.
After the version is published, verify it exists on the Kubernetes Collection Galaxy page.
The process for uploading a supported release to Automation Hub is documented separately.
More Information
For more information about Ansible's Kubernetes integration, join the #ansible-kubernetes channel on libera.chat IRC, and browse the resources in the Kubernetes Working Group Community wiki page.
License
GNU General Public License v3.0 or later
See LICENSE for the full text.