Compare commits


48 Commits
7.2.0 ... 7.3.0

Author SHA1 Message Date
Felix Fontein
a357944fb0 Release 7.3.0. 2023-08-15 07:06:17 +02:00
Felix Fontein
5d7d973f6d Prepare 7.3.0 release. 2023-08-15 07:04:32 +02:00
patchback[bot]
f3a516b79d [PR #7113/6a558734 backport][stable-7] Add support for Redfish PowerCycle reset type (#7115)
Add support for Redfish PowerCycle reset type (#7113)

* Add support for Redfish PowerCycle reset type

* Add changelog fragment

(cherry picked from commit 6a558734f7)

Co-authored-by: Scott Seekamp <sseekamp@coreweave.com>
2023-08-14 22:15:44 +02:00
patchback[bot]
d4eaef2d83 [PR #7102/55cfd27b backport][stable-7] freebsd: shutdown -p ... on freebsd to power off machine (#7112)
freebsd: shutdown -p ... on freebsd to power off machine (#7102)

* freebsd: shutdown -p ... on freebsd to power off machine

* Use shutdown -p ... on FreeBSD such that the machine is halted and
  powered off (-p) otherwise the machine is halted (-h) but remains on.

* Update changelogs/fragments/7102-freebsd-shutdown-p.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 55cfd27be9)

Co-authored-by: Derek Schrock <dereks@lifeofadishwasher.com>
2023-08-14 21:19:16 +02:00
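The FreeBSD fix above amounts to choosing a different shutdown flag per platform: `shutdown -h` halts but can leave the machine powered on, while `shutdown -p` halts and powers it off. A minimal sketch of that choice (the helper name is a hypothetical illustration, not the shutdown module's actual code):

```python
# Sketch of the FreeBSD fix: pick the flag that actually powers the
# machine off. On FreeBSD, `shutdown -h` halts but can leave the power
# on; `shutdown -p` halts *and* powers off.

def choose_poweroff_command(system: str) -> str:
    """Return a shutdown invocation that powers off the given platform."""
    if system.lower() == "freebsd":
        return "shutdown -p now"   # halt and turn the power off
    return "shutdown -h now"       # typical poweroff flag elsewhere
```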
patchback[bot]
235e55fa9f [PR #7099/bf728aad backport][stable-7] chroot: add disable_root_check option (#7111)
chroot: add `disable_root_check` option (#7099)

* Initial commit

* Update plugins/connection/chroot.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add changelog fragment

* Update changelogs/fragments/7099-chroot-disable-root-check-option.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Сашка724ая <git@sashok724.net>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit bf728aadfb)

Co-authored-by: Сашка724ая <github@sashok724.net>
2023-08-14 19:19:05 +00:00
patchback[bot]
c3baaa8cfa [PR #7075/f9f5c45c backport][stable-7] ejabberd_user: use CmdRunner (#7110)
ejabberd_user: use CmdRunner (#7075)

* ejabberd_user: use CmdRunner

* add changelog frag

* regain sanity

(cherry picked from commit f9f5c45c94)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-08-13 22:15:35 +02:00
patchback[bot]
d68f6fcfff [PR #7104/eafdf87b backport][stable-7] lxc: fix remote_addr default to inventory_hostname (#7109)
lxc: fix remote_addr default to inventory_hostname (#7104)

* lxc: fix remote_addr default to inventory_hostname

* Update changelogs/fragments/7104_fix_lxc_remoteaddr_default.yml

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit eafdf87b1b)

Co-authored-by: Corubba <97832352+corubba@users.noreply.github.com>
2023-08-13 22:15:26 +02:00
patchback[bot]
70e4ae440c [PR #7103/5e27bbfd backport][stable-7] CI: FreeBSD 13.0 and 12.3 are no longer available, bump versions and disable since these versions are already tested with stable-2.15 (#7107)
CI: FreeBSD 13.0 and 12.3 are no longer available, bump versions and disable since these versions are already tested with stable-2.15 (#7103)

FreeBSD 13.0 and 12.3 are no longer available, bump versions and disable since these versions are already tested with stable-2.15.

(cherry picked from commit 5e27bbfdf6)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-08-13 20:21:44 +02:00
patchback[bot]
8b66bb9a02 [PR #7010/7496466f backport][stable-7] Update chroot.py - better documentation (#7101)
Update chroot.py - better documentation (#7010)

* Update chroot.py

Better information about sudo and env.

* Update plugins/connection/chroot.py

---------

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 7496466f9d)

Co-authored-by: Przemysław Sztoch <psztoch@finn.pl>
2023-08-12 19:35:50 +02:00
patchback[bot]
76fbb50270 [PR #7053/a0c67a88 backport][stable-7] lvol: Fix pct of origin (#7097)
lvol: Fix pct of origin (#7053)

* add support for percentage of origin size for creating snapshot volumes

* add changelog fragment

* add pull request link

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix what's not idempotent

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit a0c67a8894)

Co-authored-by: Bob Mader <bmader@redhat.com>
2023-08-12 11:39:59 +02:00
patchback[bot]
93971b292a [PR #6989/5988b9ac backport][stable-7] npm: changed to cmdrunner (#7096)
npm: changed to cmdrunner (#6989)

* npm: refactor to use CmdRunner

- initial commit, not working

* better handling of parameter "production"

* add changelog frag

* fixed command call and tests

* removed extraneous commented debug code

(cherry picked from commit 5988b9acea)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-08-12 10:26:38 +02:00
patchback[bot]
724bba79d5 [PR #7091/f7176df4 backport][stable-7] sorcery: update only specified grimoires (#7095)
sorcery: update only specified grimoires (#7091)

* sorcery: update only specified grimoires

* Update plugins/modules/sorcery.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add a flag to omit listing new repositories before add/remove

* No need to append an empty string

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f7176df480)

Co-authored-by: Vlad Glagolev <vaygr@users.noreply.github.com>
2023-08-12 10:13:45 +02:00
patchback[bot]
e44f43b4d2 [PR #7081/fe3eec01 backport][stable-7] Follow DMTF redfish deprecation on StorageControllers (#7093)
Follow DMTF redfish deprecation on StorageControllers (#7081)

* Get controller information from "Controllers" field instead of "StorageControllers" which is deprecated

* Add changelog fragment

* Changelog fragment writing guide formatting

* For consistency, get_disk_inventory and get_volume_inventory use Controllers key instead of StorageControllers to obtain controller name

---------

Co-authored-by: Pierre-yves FONTANIERE <pyf@cc.in2p3.fr>
(cherry picked from commit fe3eec0122)

Co-authored-by: Pierre-yves Fontaniere <pyfontan@cc.in2p3.fr>
2023-08-11 19:55:49 +02:00
patchback[bot]
f82422502b [PR #7061/e75dc746 backport][stable-7] bitwarden lookup fix get_field (#7090)
bitwarden lookup fix `get_field` (#7061)

* bitwarden lookup rewrite `get_field`

* add changelog fragment

* PEP8 add newline

* Update changelogs/fragments/7061-fix-bitwarden-get_field.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/bitwarden.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/bitwarden.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/bitwarden.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Simon <simonleary@umass.edu>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e75dc74613)

Co-authored-by: simonLeary42 <71396965+simonLeary42@users.noreply.github.com>
2023-08-11 13:53:36 +02:00
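The `get_field` rewrite above reflects that a Bitwarden item keeps some fields (such as `notes`) at the top level of its JSON object and others inside a custom `fields` list, so a lookup has to check both places. A simplified sketch with a made-up item shape, not the plugin's actual code:

```python
# Look a field up in both places a Bitwarden item can store it:
# top-level keys (e.g. "notes") and the custom "fields" list.
item = {
    "notes": "top-level note",
    "fields": [{"name": "api_key", "value": "s3cret"}],
}

def get_field(item, field):
    """Return a field's value from the item, or None if absent."""
    if field in item:
        return item[field]
    for custom in item.get("fields", []):
        if custom.get("name") == field:
            return custom.get("value")
    return None
```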
patchback[bot]
5588ce3741 [PR #7067/e7a6412e backport][stable-7] Fix KeycloakAPI's missing http_agent, timeout, and validate_certs open_url() parameters (#7088)
Fix KeycloakAPI's missing http_agent, timeout, and validate_certs open_url() parameters (#7067)

* Fix KeycloakAPI's missing http_agent, timeout, and validate_certs open_url() parameters

* Add changelog fragment

* Update changelogs/fragments/7067-keycloak-api-paramerter-fix.yml

Following suggestion

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e7a6412ec4)

Co-authored-by: Loric Vandentempel <loricvdt@gmail.com>
2023-08-11 13:39:46 +02:00
patchback[bot]
719ecc9e85 [PR #7085/a8809401 backport][stable-7] Avoid direct type comparisons (#7086)
Avoid direct type comparisons (#7085)

Avoid direct type comparisons.

(cherry picked from commit a8809401ee)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-08-11 10:23:58 +02:00
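The "avoid direct type comparisons" cleanup swaps `type(x) == sometype` checks for `isinstance()`, which also accepts subclasses. A minimal illustration (the stand-in class is hypothetical, not collection code):

```python
# type(x) == list is an exact-type check and misses subclasses;
# isinstance() honours inheritance.

class TaggedList(list):
    """A list subclass, e.g. something a library hands back."""

items = TaggedList([1, 2, 3])

direct = type(items) == list        # False: exact-type comparison fails
flexible = isinstance(items, list)  # True: subclasses still count
```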
patchback[bot]
1a801323a8 [PR #7049/2089769c backport][stable-7] [proxmox_vm_info] Return empty list when requested VM doesn't exist (#7079)
[proxmox_vm_info] Return empty list when requested VM doesn't exist (#7049)

* [proxmox_vm_info] Return empty list when requested VM doesn't exist

* Update documentation

* Add changelog fragment

* Address review comments

* Allow to filter by empty name

* Update plugins/modules/proxmox_vm_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 2089769ccc)

Co-authored-by: Sergei Antipov <greendayonfire@gmail.com>
2023-08-09 16:54:35 +02:00
patchback[bot]
7ebb301930 [PR #7012/d7442558 backport][stable-7] Add grimoire management to sorcery module (#7078)
Add grimoire management to sorcery module (#7012)

* Add grimoire management to sorcery module

* Add changelog fragment

* Bump copyright year

* Separate update_cache and latest state

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add note on latest state and cache_update link

* Unblock execution of multiple stages

* Update plugins/modules/sorcery.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update Codex logic to match Sorcery

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit d74425580b)

Co-authored-by: Vlad Glagolev <vaygr@users.noreply.github.com>
2023-08-09 16:25:41 +02:00
patchback[bot]
fb5047b605 [PR #6931/91152cb1 backport][stable-7] Keycloak client secret (#7077)
Keycloak client secret (#6931)

* fix missing secret at creation

* Update doc

* changelogs

* Default protocol only when creation

* Fix sanity test

* Add documentation

* Update plugins/modules/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 91152cb123)

Co-authored-by: desand01 <desrosiers.a@hotmail.com>
2023-08-09 16:15:15 +02:00
patchback[bot]
b7977b8fa9 [PR #7019/4fda040e backport][stable-7] ipa_config: add user and group objectclasses parameters (#7071)
ipa_config: add user and group objectclasses parameters (#7019)

* ipa_config: add user and group objectclasses parameters

* fix typo

* add changelog fragments and fix version_added

* fix changelog fragment permissions

* Update changelogs/fragments/7019-ipa_config-user-and-group-objectclasses.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Dmitriy Usachev <dmitrii.usachev@hyperus.team>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 4fda040e9e)

Co-authored-by: Dmitriy Usachev <diman-110@list.ru>
2023-08-08 13:45:47 +02:00
patchback[bot]
bae1440425 [PR #7051/c6393cb2 backport][stable-7] ipa_config: add idp choice to ipauserauthtype (#7072)
ipa_config: add idp choice to ipauserauthtype (#7051)

* ipa_config: add idp choice to ipauserauthtype

* ipa_config: edit ipauserauthtype description

Co-authored-by: Felix Fontein <felix@fontein.de>

* Changelog Fragment - 7051

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c6393cb2ac)

Co-authored-by: Christer Warén <cwchristerw@gmail.com>
2023-08-08 13:45:35 +02:00
patchback[bot]
04f3dd2b56 [PR #7033/c1f2f126 backport][stable-7] ejabberd_user: bug fixes + tests (#7070)
ejabberd_user: bug fixes + tests (#7033)

* ejabberd_user: bug fixes + tests

* fix changed property

* add license to handler file

* adjustments to test

* add needs/target/setup_epel to aliases

* further adjustments to integration tests

* add target to integration tests

* add some skips to test

* skip centos as it has no ejabberd

* skip fedora as it has no ejabberd

* discard unused epel setup

* add changelog frag

* remove ejabberd before tests

* fix typo

(cherry picked from commit c1f2f126cf)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-08-08 13:45:25 +02:00
patchback[bot]
99e3965ece [PR #7036/d08924d7 backport][stable-7] Documentation to reflect newer Proxmox VE boot string format (#7068)
Documentation to reflect newer Proxmox VE boot string format (#7036)

Co-authored-by: jonathan lung <lungj@heresjono.com>
(cherry picked from commit d08924d759)

Co-authored-by: Jonathan Lung <lungj@users.noreply.github.com>
2023-08-08 13:45:10 +02:00
patchback[bot]
14625a214a [PR #6814/47865284 backport][stable-7] Adding DeleteVolumes functionality (#7066)
Adding DeleteVolumes functionality (#6814)

* Adding DeleteAllVolumes functionality

* Adding changelog fragment and sanity fix

* Sanity Fix

* Updating as per PR suggestions

* Sanity fix

* Adjust version_added.

---------

Co-authored-by: Kushal <t-s.kushal@hpe.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 478652843f)

Co-authored-by: TSKushal <44438079+TSKushal@users.noreply.github.com>
2023-08-05 20:52:55 +02:00
patchback[bot]
3c067aa2c3 [PR #7064/4b17fd42 backport][stable-7] Snap add test (#7065)
Snap add test (#7064)

* add test for 3 dashes in description

* remove extraneous comment

(cherry picked from commit 4b17fd4265)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-08-04 22:31:08 +02:00
patchback[bot]
01004bd27b [PR #7046/c7fa11d5 backport][stable-7] snap: Only treat --- as an info separator when it's preceded by newline (#7063)
snap: Only treat `---` as an info separator when it's preceded by newline (#7046)

* Only treat `---` as an info separator when it's preceded by newline

The code for splitting the output of `snap info` for multiple snaps
can't assume that `---` separates snaps any time it appears in the
output; it needs to treat that as the separator only when it's at the
start of a line. Otherwise it breaks if any snap uses `---` in its
description text, which, e.g., the `bw` snap does as of the date of
this commit.

Fixes #7045.

* Add changelog fragment

* Add a comment explaining why \n is necessary before ---

* Update changelogs/fragments/7046-snap-newline-before-separator.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Jonathan Kamens <jik@jik5.kamens.us>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c7fa11d576)

Co-authored-by: Jonathan Kamens <jik@kamens.us>
2023-08-04 08:43:53 +02:00
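The separator bug above can be reproduced in a few lines: splitting on every `---` also splits inside a snap's description, while splitting only on a `---` that sits on its own line keeps one chunk per snap. The sample text is a simplified stand-in for real `snap info` output:

```python
import re

# Simplified stand-in for `snap info` output covering two snaps,
# where the first description itself contains "---".
output = (
    "name: bw\n"
    "description: contains --- in its text\n"
    "---\n"
    "name: core\n"
    "description: the core snap\n"
)

naive = output.split("---")           # 3 chunks: the description got split too
fixed = re.split(r"\n---\n", output)  # 2 chunks: one per snap
```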
patchback[bot]
f8265ecc4e [PR #7043/d17ec06d backport][stable-7] ejabberd_user: deprecate parameter logging (#7058)
ejabberd_user: deprecate parameter logging (#7043)

* ejabberd_user: deprecate parameter logging

* add changelog frag

* Update plugins/modules/ejabberd_user.py

(cherry picked from commit d17ec06d2a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-08-02 11:46:13 +02:00
patchback[bot]
2e355bef9f [PR #7055/fd9d9482 backport][stable-7] CI: ansible-core devel only supports Alpine 3.18 VMs, no longer Alpine 3.17 VMs (#7057)
CI: ansible-core devel only supports Alpine 3.18 VMs, no longer Alpine 3.17 VMs (#7055)

ansible-core devel only supports Alpine 3.18 VMs, no longer Alpine 3.17 VMs.

(cherry picked from commit fd9d948267)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-08-02 10:55:58 +02:00
Felix Fontein
e6f65634fe Revert "[stable-7] Revert new features to be able to do 7.2.1 release (#7042)"
This reverts commit 7cf834fb3c.
2023-07-31 16:29:37 +02:00
Felix Fontein
61314898ca The next expected release will be 7.3.0. 2023-07-31 16:29:32 +02:00
Felix Fontein
301711e0d3 Release 7.2.1. 2023-07-31 15:49:21 +02:00
Felix Fontein
7cf834fb3c [stable-7] Revert new features to be able to do 7.2.1 release (#7042)
* Revert "[PR #7020/b46d5d81 backport][stable-7] redfish_utils: Add support for "nextLink" property tag pagination (#7026)"

This reverts commit 1dad95370e.

* Revert "[PR #6914/17b4219b backport][stable-7] proxmox_kvm: enable 'force' restart of vm (as documented) (#6997)"

This reverts commit 7d68af57af.

* Revert "[PR #6976/d7c1a814 backport][stable-7] [proxmox_vm_info] Re-use cluster resources API to use module without requiring node param (#6993)"

This reverts commit fb3768aada.
2023-07-31 15:45:08 +02:00
patchback[bot]
eda3d160fa [PR #6983/a942545d backport][stable-7] Rundeck - fix TypeError on 404 api response (#7041)
Rundeck - fix TypeError on 404 api response (#6983)

* fix TypeError on 404 api response

* add changelog fragment

* Update changelogs/fragments/6983-rundeck-fix-typerrror-on-404-api-response.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Vincent CHARLES <vincent.charles@swatchgroup.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit a942545dd2)

Co-authored-by: Vincent CHARLES <124702855+MehuiSeklayr@users.noreply.github.com>
2023-07-31 15:44:55 +02:00
Felix Fontein
b71d8813b2 Prepare 7.2.1 release. 2023-07-31 08:47:47 +02:00
patchback[bot]
a4c0df1ded [PR #6991/00bfc3e1 backport][stable-7] Add example for ECS Fargate/EFS Jenkins authentication (#7016)
Add example for ECS Fargate/EFS Jenkins authentication (#6991)

* Add example for ECS Fargate/EFS Jenkins authentication

Since ECS Fargate is serverless, one cannot access its jenkins_home other than from a machine (EC2, for example) that actually mounts and owns its EFS storage.

That way we provide the user/group of a default local user which has the same uid/gid 1000 as the default jenkins user inside the container and can also authenticate at the Jenkins URL.

This is not obvious from the docs, and someone might benefit from such an example being present.

* Added an empty line

* Float value now in single quotes

* Use UID/GID instead user/group name

(cherry picked from commit 00bfc3e131)

Co-authored-by: TeekWan <74403302+teekwan@users.noreply.github.com>
2023-07-31 08:37:15 +02:00
patchback[bot]
a2b1756bea [PR #7027/87053e52 backport][stable-7] Add tarka to ignore list. (#7032)
Add tarka to ignore list. (#7027)

* Add tarka to ignore list.

* Remove tarka from maintainers.

(cherry picked from commit 87053e5266)

Co-authored-by: Steve Smith <tarkasteve@gmail.com>
2023-07-29 21:51:25 +02:00
patchback[bot]
08d89a2f85 [PR #7028/3a7044e2 backport][stable-7] ejabberd_user: better error when command not installed (#7030)
ejabberd_user: better error when command not installed (#7028)

* ejabberd_user: better error when command not installed

* add changelog frag

(cherry picked from commit 3a7044e2b8)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-07-29 21:40:06 +02:00
patchback[bot]
1dad95370e [PR #7020/b46d5d81 backport][stable-7] redfish_utils: Add support for "nextLink" property tag pagination (#7026)
redfish_utils: Add support for "nextLink" property tag pagination (#7020)

* Add support for Redfish "nextLink" property tag pagination for
FirmwareInventory

* Add changelog fragment

* Fix indention

* Updated fragment per suggestion

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit b46d5d8197)

Co-authored-by: Scott Seekamp <sseekamp@coreweave.com>
2023-07-28 21:30:40 +02:00
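Redfish paginates large collections by returning a partial `Members` list plus a `Members@odata.nextLink` annotation pointing at the next page, so a client has to keep fetching until no link remains. A sketch of that loop, with an in-memory fake standing in for the HTTP GET (the `pages`/`fetch` names are illustrative, not redfish_utils code):

```python
# Fake paginated Redfish collection: two pages of firmware inventory.
pages = {
    "/Inventory": {"Members": [{"Id": "fw1"}],
                   "Members@odata.nextLink": "/Inventory?page=2"},
    "/Inventory?page=2": {"Members": [{"Id": "fw2"}]},
}

def fetch(uri):
    """Stand-in for an HTTP GET against the Redfish service."""
    return pages[uri]

def collect_members(uri):
    """Follow @odata.nextLink annotations until the collection is complete."""
    members = []
    while uri:
        body = fetch(uri)
        members.extend(body.get("Members", []))
        uri = body.get("Members@odata.nextLink")  # absent on the last page
    return members
```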
patchback[bot]
200b858b36 [PR #7022/5d7899b3 backport][stable-7] setup_docker: handlers to stop service and remove requests (#7024)
setup_docker: handlers to stop service and remove requests (#7022)

(cherry picked from commit 5d7899b341)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-07-28 20:19:27 +02:00
patchback[bot]
342a5a14f9 [PR #6995/cc5e1b6f backport][stable-7] Skip java_cert and java_keystore tests on RHEL (#7000)
Skip java_cert and java_keystore tests on RHEL (#6995)

Skip java_cert and java_keystore tests on RHEL.

(cherry picked from commit cc5e1b6fe7)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-07-24 08:27:25 +02:00
patchback[bot]
4609907367 [PR #6994/e8150408 backport][stable-7] crypt is still deprecated in Python 3.12 (#7002)
crypt is still deprecated in Python 3.12 (#6994)

crypt is still deprecated in Python 3.12.

(cherry picked from commit e815040877)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-07-24 08:27:08 +02:00
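Since `crypt` was deprecated by PEP 594 in Python 3.11 and remains deprecated in 3.12 ahead of its removal, code importing it typically has to both silence the `DeprecationWarning` and tolerate the module's absence. A defensive-import sketch in the common Ansible style, not a specific file from this collection:

```python
import warnings

# Import crypt without tripping the PEP 594 DeprecationWarning, and
# degrade gracefully on interpreters where the module no longer exists.
try:
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        import crypt  # noqa: F401
    HAS_CRYPT = True
except ImportError:
    HAS_CRYPT = False
```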
patchback[bot]
7d68af57af [PR #6914/17b4219b backport][stable-7] proxmox_kvm: enable 'force' restart of vm (as documented) (#6997)
proxmox_kvm: enable 'force' restart of vm (as documented) (#6914)

* enable 'force' restart of vm

* added changelog fragment

* Update changelogs/fragments/6914-proxmox_kvm-enable-force-restart.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 17b4219b8b)

Co-authored-by: Jeff Turner <jeff@torusoft.com>
2023-07-23 22:53:02 +02:00
patchback[bot]
fb3768aada [PR #6976/d7c1a814 backport][stable-7] [proxmox_vm_info] Re-use cluster resources API to use module without requiring node param (#6993)
[proxmox_vm_info] Re-use cluster resources API to use module without requiring node param (#6976)

* [proxmox_vm_info] Re-use cluster resources API to use module without requiring node param

* More concise if

* Fix use case when requesting all vms from specific node

* Add changelog fragment

(cherry picked from commit d7c1a814ea)

Co-authored-by: Sergei Antipov <greendayonfire@gmail.com>
2023-07-23 21:55:32 +02:00
patchback[bot]
93f990a1b9 [PR #6981/f9448574 backport][stable-7] [proxmox_kvm] Don't create VM if name is used without vmid (#6992)
[proxmox_kvm] Don't create VM if name is used without vmid (#6981)

* [proxmox_kvm] Don't create VM if name is used without vmid

* Add changelog and unit tests

(cherry picked from commit f9448574bd)

Co-authored-by: Sergei Antipov <greendayonfire@gmail.com>
2023-07-23 21:48:36 +02:00
patchback[bot]
f003833c1a [PR #6980/796ad356 backport][stable-7] [proxmox] Use proxmoxer_version instead of server API version (#6982)
[proxmox] Use proxmoxer_version instead of server API version (#6980)

* Use proxmoxer_version instead of server API version

* Add changelog fragment

(cherry picked from commit 796ad3565e)

Co-authored-by: Sergei Antipov <greendayonfire@gmail.com>
2023-07-21 17:03:41 +02:00
patchback[bot]
8eb94dc36b [PR #6985/d9951cbc backport][stable-7] CI: move FreeBSD 12.4 from ansible-core devel to stable-2.15 (#6987)
CI: move FreeBSD 12.4 from ansible-core devel to stable-2.15 (#6985)

Move FreeBSD 12.4 from ansible-core devel to stable-2.15.

(cherry picked from commit d9951cbc32)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-07-21 16:56:03 +02:00
patchback[bot]
7bf155284f [PR #6968/f6714eda backport][stable-7] cmd_runner module utils: fix bug when argument spec has implicit type (#6978)
cmd_runner module utils: fix bug when argument spec has implicit type (#6968)

* cmd_runner module utils: fix bug when argument spec has implicit type

* add changelog frag

(cherry picked from commit f6714edabb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-07-20 08:34:39 +02:00
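The cmd_runner bug follows a common pattern: in an Ansible `argument_spec`, an option with no explicit `type` is implicitly `str`, so indexing `spec["type"]` directly raises `KeyError` for such options; looking it up with a default fixes it. A simplified stand-in for the fix, not the actual `CmdRunner` code:

```python
# An argument_spec entry may omit "type", which means str implicitly.
argument_spec = {
    "name": {},                  # implicit type: str
    "force": {"type": "bool"},
}

def arg_type(option):
    """Resolve an option's type, defaulting to the implicit "str"."""
    # argument_spec[option]["type"] would raise KeyError for "name".
    return argument_spec[option].get("type", "str")
```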
Felix Fontein
0f5f00f41a Next expected release is 7.3.0. 2023-07-17 12:30:31 +02:00
50 changed files with 1708 additions and 452 deletions


@@ -171,8 +171,8 @@ stages:
     parameters:
       testFormat: devel/{0}
       targets:
-        - name: Alpine 3.17
-          test: alpine/3.17
+        - name: Alpine 3.18
+          test: alpine/3.18
         # - name: Fedora 38
         #   test: fedora/38
         - name: Ubuntu 22.04
@@ -195,8 +195,6 @@ stages:
           test: rhel/8.8
         - name: FreeBSD 13.2
           test: freebsd/13.2
-        - name: FreeBSD 12.4
-          test: freebsd/12.4
       groups:
         - 1
         - 2
@@ -217,6 +215,8 @@ stages:
           test: rhel/7.9
         - name: FreeBSD 13.1
           test: freebsd/13.1
+        - name: FreeBSD 12.4
+          test: freebsd/12.4
       groups:
         - 1
         - 2
@@ -231,8 +231,8 @@ stages:
       targets:
         - name: RHEL 9.0
           test: rhel/9.0
-        - name: FreeBSD 12.3
-          test: freebsd/12.3
+        #- name: FreeBSD 12.4
+        #  test: freebsd/12.4
       groups:
         - 1
         - 2
@@ -249,8 +249,8 @@ stages:
           test: macos/12.0
         - name: RHEL 8.5
           test: rhel/8.5
-        - name: FreeBSD 13.0
-          test: freebsd/13.0
+        #- name: FreeBSD 13.1
+        #  test: freebsd/13.1
       groups:
         - 1
         - 2

.github/BOTMETA.yml

@@ -679,9 +679,9 @@ files:
   $modules/jenkins_script.py:
     maintainers: hogarthj
   $modules/jira.py:
-    ignore: DWSR
+    ignore: DWSR tarka
     labels: jira
-    maintainers: Slezhuk tarka pertoft
+    maintainers: Slezhuk pertoft
   $modules/kdeconfig.py:
     maintainers: smeso
   $modules/kernel_blacklist.py:


@@ -6,6 +6,70 @@ Community General Release Notes
 
 This changelog describes changes after version 6.0.0.
 
+v7.3.0
+======
+
+Release Summary
+---------------
+
+Feature and bugfix release.
+
+Minor Changes
+-------------
+
+- chroot connection plugin - add ``disable_root_check`` option (https://github.com/ansible-collections/community.general/pull/7099).
+- ejabberd_user - module now using ``CmdRunner`` to execute external command (https://github.com/ansible-collections/community.general/pull/7075).
+- ipa_config - add module parameters to manage FreeIPA user and group objectclasses (https://github.com/ansible-collections/community.general/pull/7019).
+- ipa_config - adds ``idp`` choice to ``ipauserauthtype`` parameter's choices (https://github.com/ansible-collections/community.general/pull/7051).
+- npm - module now using ``CmdRunner`` to execute external commands (https://github.com/ansible-collections/community.general/pull/6989).
+- proxmox_kvm - enabled force restart of VM, bringing the ``force`` parameter functionality in line with what is described in the docs (https://github.com/ansible-collections/community.general/pull/6914).
+- proxmox_vm_info - ``node`` parameter is no longer required. Information can be obtained for the whole cluster (https://github.com/ansible-collections/community.general/pull/6976).
+- proxmox_vm_info - non-existing provided by name/vmid VM would return empty results instead of failing (https://github.com/ansible-collections/community.general/pull/7049).
+- redfish_config - add ``DeleteAllVolumes`` command to allow deletion of all volumes on servers (https://github.com/ansible-collections/community.general/pull/6814).
+- redfish_utils - use ``Controllers`` key in redfish data to obtain Storage controllers properties (https://github.com/ansible-collections/community.general/pull/7081).
+- redfish_utils module utils - add support for ``PowerCycle`` reset type for ``redfish_command`` responses feature (https://github.com/ansible-collections/community.general/issues/7083).
+- redfish_utils module utils - add support for following ``@odata.nextLink`` pagination in ``software_inventory`` responses feature (https://github.com/ansible-collections/community.general/pull/7020).
+- shutdown - use ``shutdown -p ...`` with FreeBSD to halt and power off machine (https://github.com/ansible-collections/community.general/pull/7102).
+- sorcery - add grimoire (repository) management support (https://github.com/ansible-collections/community.general/pull/7012).
+
+Deprecated Features
+-------------------
+
+- ejabberd_user - deprecate the parameter ``logging`` in favour of producing more detailed information in the module output (https://github.com/ansible-collections/community.general/pull/7043).
+
+Bugfixes
+--------
+
+- bitwarden lookup plugin - the plugin made assumptions about the structure of a Bitwarden JSON object which may have been broken by an update in the Bitwarden API. Remove assumptions, and allow queries for general fields such as ``notes`` (https://github.com/ansible-collections/community.general/pull/7061).
+- ejabberd_user - module was failing to detect whether user was already created and/or password was changed (https://github.com/ansible-collections/community.general/pull/7033).
+- keycloak module util - fix missing ``http_agent``, ``timeout``, and ``validate_certs`` ``open_url()`` parameters (https://github.com/ansible-collections/community.general/pull/7067).
+- keycloak_client inventory plugin - fix missing client secret (https://github.com/ansible-collections/community.general/pull/6931).
+- lvol - add support for percentage of origin size specification when creating snapshot volumes (https://github.com/ansible-collections/community.general/issues/1630, https://github.com/ansible-collections/community.general/pull/7053).
+- lxc connection plugin - now handles ``remote_addr`` defaulting to ``inventory_hostname`` correctly (https://github.com/ansible-collections/community.general/pull/7104).
+- oci_utils module utils - avoid direct type comparisons (https://github.com/ansible-collections/community.general/pull/7085).
+- proxmox_user_info - avoid direct type comparisons (https://github.com/ansible-collections/community.general/pull/7085).
+- snap - fix crash when multiple snaps are specified and one has ``---`` in its description (https://github.com/ansible-collections/community.general/pull/7046).
+- sorcery - fix interruption of the multi-stage process (https://github.com/ansible-collections/community.general/pull/7012).
+- sorcery - fix queue generation before the whole system rebuild (https://github.com/ansible-collections/community.general/pull/7012).
+- sorcery - latest state no longer triggers update_cache (https://github.com/ansible-collections/community.general/pull/7012).
+
+v7.2.1
+======
+
+Release Summary
+---------------
+
+Bugfix release.
+
+Bugfixes
+--------
+
+- cmd_runner module utils - when a parameter in ``argument_spec`` has no type, meaning it is implicitly a ``str``, ``CmdRunner`` would fail trying to find the ``type`` key in that dictionary (https://github.com/ansible-collections/community.general/pull/6968).
+- ejabberd_user - provide meaningful error message when the ``ejabberdctl`` command is not found (https://github.com/ansible-collections/community.general/pull/7028, https://github.com/ansible-collections/community.general/issues/6949).
+- proxmox module utils - fix proxmoxer library version check (https://github.com/ansible-collections/community.general/issues/6974, https://github.com/ansible-collections/community.general/issues/6975, https://github.com/ansible-collections/community.general/pull/6980).
+- proxmox_kvm - when ``name`` option is provided without ``vmid`` and VM with that name already exists then no new VM will be created (https://github.com/ansible-collections/community.general/issues/6911, https://github.com/ansible-collections/community.general/pull/6981).
+- rundeck - fix ``TypeError`` on 404 API response (https://github.com/ansible-collections/community.general/pull/6983).
+
 v7.2.0
 ======

@@ -1271,3 +1271,107 @@ releases:
name: bitwarden_secrets_manager
namespace: null
release_date: '2023-07-17'
7.2.1:
changes:
bugfixes:
- cmd_runner module utils - when a parameter in ``argument_spec`` has no type,
meaning it is implicitly a ``str``, ``CmdRunner`` would fail trying to find
the ``type`` key in that dictionary (https://github.com/ansible-collections/community.general/pull/6968).
- ejabberd_user - provide meaningful error message when the ``ejabberdctl``
command is not found (https://github.com/ansible-collections/community.general/pull/7028,
https://github.com/ansible-collections/community.general/issues/6949).
- proxmox module utils - fix proxmoxer library version check (https://github.com/ansible-collections/community.general/issues/6974,
https://github.com/ansible-collections/community.general/issues/6975, https://github.com/ansible-collections/community.general/pull/6980).
- proxmox_kvm - when ``name`` option is provided without ``vmid`` and VM with
that name already exists then no new VM will be created (https://github.com/ansible-collections/community.general/issues/6911,
https://github.com/ansible-collections/community.general/pull/6981).
- rundeck - fix ``TypeError`` on 404 API response (https://github.com/ansible-collections/community.general/pull/6983).
release_summary: Bugfix release.
fragments:
- 6949-ejabberdctl-error.yml
- 6968-cmdrunner-implicit-type.yml
- 6980-proxmox-fix-token-auth.yml
- 6981-proxmox-fix-vm-creation-when-only-name-provided.yml
- 6983-rundeck-fix-typerrror-on-404-api-response.yml
- 7.2.1.yml
release_date: '2023-07-31'
7.3.0:
changes:
bugfixes:
- bitwarden lookup plugin - the plugin made assumptions about the structure
of a Bitwarden JSON object which may have been broken by an update in the
Bitwarden API. Remove assumptions, and allow queries for general fields such
as ``notes`` (https://github.com/ansible-collections/community.general/pull/7061).
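A hedged sketch of the new field resolution order (custom fields first, then the ``login`` sub-object, then top-level keys such as ``notes``); the record layout mirrors Bitwarden CLI JSON, and the helper name is illustrative:

```python
# Illustrative helper, not the lookup plugin's actual code.
def resolve_field(record, field):
    # 1. Custom fields attached to the item
    for custom in record.get("fields", []):
        if custom["name"] == field:
            return custom["value"]
    # 2. Fields nested under the "login" sub-object
    if field in record.get("login", {}):
        return record["login"][field]
    # 3. General top-level fields such as "notes"
    if field in record:
        return record[field]
    return None

record = {
    "notes": "top-level note",
    "login": {"username": "alice"},
    "fields": [{"name": "pin", "value": "1234"}],
}
print(resolve_field(record, "pin"))       # 1234
print(resolve_field(record, "username"))  # alice
print(resolve_field(record, "notes"))     # top-level note
```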
- ejabberd_user - module was failing to detect whether user was already created
and/or password was changed (https://github.com/ansible-collections/community.general/pull/7033).
- keycloak module util - fix missing ``http_agent``, ``timeout``, and ``validate_certs``
``open_url()`` parameters (https://github.com/ansible-collections/community.general/pull/7067).
- keycloak_client inventory plugin - fix missing client secret (https://github.com/ansible-collections/community.general/pull/6931).
- lvol - add support for percentage of origin size specification when creating
snapshot volumes (https://github.com/ansible-collections/community.general/issues/1630,
https://github.com/ansible-collections/community.general/pull/7053).
- lxc connection plugin - now handles ``remote_addr`` defaulting to ``inventory_hostname``
correctly (https://github.com/ansible-collections/community.general/pull/7104).
- oci_utils module utils - avoid direct type comparisons (https://github.com/ansible-collections/community.general/pull/7085).
- proxmox_user_info - avoid direct type comparisons (https://github.com/ansible-collections/community.general/pull/7085).
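The sanity fix behind both entries replaces direct type comparisons with ``isinstance()``, which also accepts subclasses; a minimal illustration:

```python
# Exact type comparison misses subclasses; isinstance() respects inheritance.
class TaggedList(list):
    pass

value = TaggedList([1, 2])
print(type(value) == list)      # False: exact type check rejects the subclass
print(isinstance(value, list))  # True
```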
- snap - fix crash when multiple snaps are specified and one has ``---`` in
its description (https://github.com/ansible-collections/community.general/pull/7046).
- sorcery - fix interruption of the multi-stage process (https://github.com/ansible-collections/community.general/pull/7012).
- sorcery - fix queue generation before the whole system rebuild (https://github.com/ansible-collections/community.general/pull/7012).
- sorcery - latest state no longer triggers update_cache (https://github.com/ansible-collections/community.general/pull/7012).
deprecated_features:
- ejabberd_user - deprecate the parameter ``logging`` in favour of producing
more detailed information in the module output (https://github.com/ansible-collections/community.general/pull/7043).
minor_changes:
- chroot connection plugin - add ``disable_root_check`` option (https://github.com/ansible-collections/community.general/pull/7099).
- ejabberd_user - module now uses ``CmdRunner`` to execute the external command
(https://github.com/ansible-collections/community.general/pull/7075).
- ipa_config - add module parameters to manage FreeIPA user and group objectclasses
(https://github.com/ansible-collections/community.general/pull/7019).
- ipa_config - adds ``idp`` choice to ``ipauserauthtype`` parameter's choices
(https://github.com/ansible-collections/community.general/pull/7051).
- npm - module now uses ``CmdRunner`` to execute external commands (https://github.com/ansible-collections/community.general/pull/6989).
- proxmox_kvm - enabled force restart of VM, bringing the ``force`` parameter
functionality in line with what is described in the docs (https://github.com/ansible-collections/community.general/pull/6914).
- proxmox_vm_info - ``node`` parameter is no longer required. Information can
be obtained for the whole cluster (https://github.com/ansible-collections/community.general/pull/6976).
- proxmox_vm_info - a VM specified by name/vmid that does not exist now returns
empty results instead of failing (https://github.com/ansible-collections/community.general/pull/7049).
- redfish_config - add ``DeleteAllVolumes`` command to allow deletion of all
volumes on servers (https://github.com/ansible-collections/community.general/pull/6814).
- redfish_utils - use ``Controllers`` key in redfish data to obtain Storage
controllers properties (https://github.com/ansible-collections/community.general/pull/7081).
- redfish_utils module utils - add support for the ``PowerCycle`` reset type
in ``redfish_command`` (https://github.com/ansible-collections/community.general/issues/7083).
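The mapping from a ``redfish_command`` power command to a Redfish ``ResetType`` can be sketched as follows; this mirrors the logic added in the PR but is not the module's exact code:

```python
# Commands are stripped of the "Power" prefix, except PowerCycle, which is a
# Redfish ResetType in its own right.
def to_reset_type(command):
    if not command.startswith("Power"):
        raise ValueError("Invalid Command (%s)" % command)
    return command if command == "PowerCycle" else command[5:]

print(to_reset_type("PowerForceOff"))  # ForceOff
print(to_reset_type("PowerCycle"))     # PowerCycle
```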
- redfish_utils module utils - add support for following ``@odata.nextLink``
pagination in ``software_inventory`` responses (https://github.com/ansible-collections/community.general/pull/7020).
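The pagination change can be sketched as a loop that keeps fetching while the payload advertises a ``Members@odata.nextLink``; here ``get_page`` stands in for the utils' HTTP layer and is hypothetical:

```python
# Hedged sketch of nextLink pagination; page contents are illustrative.
def collect_members(get_page, first_uri):
    members, uri = [], first_uri
    while uri:
        page = get_page(uri)
        members.extend(page.get("Members", []))
        # Absent nextLink ends the loop after the last page.
        uri = page.get("Members@odata.nextLink")
    return members

pages = {
    "/inv": {"Members": [{"@odata.id": "/fw/1"}],
             "Members@odata.nextLink": "/inv?page=2"},
    "/inv?page=2": {"Members": [{"@odata.id": "/fw/2"}]},
}
print(len(collect_members(pages.get, "/inv")))  # 2
```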
- shutdown - use ``shutdown -p ...`` with FreeBSD to halt and power off machine
(https://github.com/ansible-collections/community.general/pull/7102).
- sorcery - add grimoire (repository) management support (https://github.com/ansible-collections/community.general/pull/7012).
release_summary: Feature and bugfix release.
fragments:
- 6814-redfish-config-add-delete-all-volumes.yml
- 6914-proxmox_kvm-enable-force-restart.yml
- 6931-keycloak_client-inventory-bugfix.yml
- 6976-proxmox-vm-info-not-require-node.yml
- 6989-npm-cmdrunner.yml
- 7.3.0.yml
- 7012-sorcery-grimoire-mgmt.yml
- 7019-ipa_config-user-and-group-objectclasses.yml
- 7020-redfish-utils-pagination.yml
- 7033-ejabberd-user-bugs.yml
- 7043-ejabberd-user-deprecate-logging.yml
- 7046-snap-newline-before-separator.yml
- 7049-proxmox-vm-info-empty-results.yml
- 7051-ipa-config-new-choice-idp-to-ipauserauthtype.yml
- 7061-fix-bitwarden-get_field.yml
- 7067-keycloak-api-paramerter-fix.yml
- 7075-ejabberd-user-cmdrunner.yml
- 7081-redfish-utils-fix-for-storagecontrollers-deprecated-key.yaml
- 7085-sanity.yml
- 7099-chroot-disable-root-check-option.yml
- 7102-freebsd-shutdown-p.yml
- 7104_fix_lxc_remoteaddr_default.yml
- 7113-redfish-utils-power-cycle.yml
- lvol-pct-of-origin.yml
release_date: '2023-08-15'


@@ -5,7 +5,7 @@
namespace: community
name: general
version: 7.2.0
version: 7.3.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -45,7 +45,7 @@ class ActionModule(ActionBase):
SHUTDOWN_COMMAND_ARGS = {
'alpine': '',
'void': '-h +{delay_min} "{message}"',
'freebsd': '-h +{delay_sec}s "{message}"',
'freebsd': '-p +{delay_sec}s "{message}"',
'linux': DEFAULT_SHUTDOWN_COMMAND_ARGS,
'macosx': '-h +{delay_min} "{message}"',
'openbsd': '-h +{delay_min} "{message}"',


@@ -46,11 +46,26 @@ DOCUMENTATION = '''
vars:
- name: ansible_chroot_exe
default: chroot
disable_root_check:
description:
- Do not check that the user is root.
ini:
- section: chroot_connection
key: disable_root_check
env:
- name: ANSIBLE_CHROOT_DISABLE_ROOT_CHECK
vars:
- name: ansible_chroot_disable_root_check
default: false
type: bool
version_added: 7.3.0
'''
EXAMPLES = r"""
# Static inventory file
# Plugin requires root privileges for chroot, -E preserves your env (and location of ~/.ansible):
# sudo -E ansible-playbook ...
#
# Static inventory file
# [chroots]
# /path/to/debootstrap
# /path/to/febootstrap
@@ -100,11 +115,7 @@ class Connection(ConnectionBase):
self.chroot = self._play_context.remote_addr
if os.geteuid() != 0:
raise AnsibleError("chroot connection requires running as root")
# we're running as root on the local system so do some
# trivial checks for ensuring 'host' is actually a chroot'able dir
# do some trivial checks for ensuring 'host' is actually a chroot'able dir
if not os.path.isdir(self.chroot):
raise AnsibleError("%s is not a directory" % self.chroot)
@@ -118,6 +129,11 @@ class Connection(ConnectionBase):
def _connect(self):
""" connect to the chroot """
if not self.get_option('disable_root_check') and os.geteuid() != 0:
raise AnsibleError(
"chroot connection requires running as root. "
"You can override this check with the `disable_root_check` option.")
if os.path.isabs(self.get_option('chroot_exe')):
self.chroot_cmd = self.get_option('chroot_exe')
else:


@@ -19,6 +19,7 @@ DOCUMENTATION = '''
- Container identifier
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_lxc_host
executable:


@@ -132,20 +132,29 @@ class Bitwarden(object):
If field is None, return the whole record for each match.
"""
matches = self._get_matches(search_value, search_field, collection_id)
if field in ['autofillOnPageLoad', 'password', 'passwordRevisionDate', 'totp', 'uris', 'username']:
return [match['login'][field] for match in matches]
elif not field:
if not field:
return matches
else:
custom_field_matches = []
for match in matches:
field_matches = []
for match in matches:
# if there are no custom fields, then `match` has no key 'fields'
if 'fields' in match:
custom_field_found = False
for custom_field in match['fields']:
if custom_field['name'] == field:
custom_field_matches.append(custom_field['value'])
if matches and not custom_field_matches:
raise AnsibleError("Custom field {field} does not exist in {search_value}".format(field=field, search_value=search_value))
return custom_field_matches
if field == custom_field['name']:
field_matches.append(custom_field['value'])
custom_field_found = True
break
if custom_field_found:
continue
if 'login' in match and field in match['login']:
field_matches.append(match['login'][field])
continue
if field in match:
field_matches.append(match[field])
continue
if matches and not field_matches:
raise AnsibleError("field {field} does not exist in {search_value}".format(field=field, search_value=search_value))
return field_matches
class LookupModule(LookupBase):


@@ -208,7 +208,7 @@ class CmdRunner(object):
for mod_param_name, spec in iteritems(module.argument_spec):
if mod_param_name not in self.arg_formats:
self.arg_formats[mod_param_name] = _Format.as_default_type(spec['type'], mod_param_name)
self.arg_formats[mod_param_name] = _Format.as_default_type(spec.get('type', 'str'), mod_param_name)
def __call__(self, args_order=None, output_process=None, ignore_value_none=True, check_mode_skip=False, check_mode_return=None, **kwargs):
if output_process is None:


@@ -777,7 +777,8 @@ class KeycloakAPI(object):
users_url += '?username=%s&exact=true' % username
try:
userrep = None
users = json.loads(to_native(open_url(users_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
users = json.loads(to_native(open_url(users_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
for user in users:
if user['username'] == username:
@@ -803,7 +804,8 @@ class KeycloakAPI(object):
service_account_user_url = URL_CLIENT_SERVICE_ACCOUNT_USER.format(url=self.baseurl, realm=realm, id=cid)
try:
return json.loads(to_native(open_url(service_account_user_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(service_account_user_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain the service-account-user for realm %s and client_id %s: %s'
@@ -1347,7 +1349,8 @@ class KeycloakAPI(object):
clientsecret_url = URL_CLIENTSECRET.format(url=self.baseurl, realm=realm, id=id)
try:
return json.loads(to_native(open_url(clientsecret_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(clientsecret_url, method='POST', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -1370,7 +1373,8 @@ class KeycloakAPI(object):
clientsecret_url = URL_CLIENTSECRET.format(url=self.baseurl, realm=realm, id=id)
try:
return json.loads(to_native(open_url(clientsecret_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(clientsecret_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -2678,7 +2682,9 @@ class KeycloakAPI(object):
open_url(
user_url,
method='GET',
headers=self.restheaders))
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
return userrep
except Exception as e:
self.module.fail_json(msg='Could not get user %s in realm %s: %s'
@@ -2700,8 +2706,10 @@ class KeycloakAPI(object):
realm=realm)
open_url(users_url,
method='POST',
headers=self.restheaders,
data=json.dumps(userrep))
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(userrep),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
created_user = self.get_user_by_username(
username=userrep['username'],
realm=realm)
@@ -2744,8 +2752,10 @@ class KeycloakAPI(object):
open_url(
user_url,
method='PUT',
headers=self.restheaders,
data=json.dumps(userrep))
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(userrep),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
updated_user = self.get_user_by_id(
user_id=userrep['id'],
realm=realm)
@@ -2769,7 +2779,9 @@ class KeycloakAPI(object):
return open_url(
user_url,
method='DELETE',
headers=self.restheaders)
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not delete user %s in realm %s: %s'
% (user_id, realm, str(e)))
@@ -2791,7 +2803,9 @@ class KeycloakAPI(object):
open_url(
user_groups_url,
method='GET',
headers=self.restheaders))
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for user_group in user_groups:
groups.append(user_group["name"])
return groups
@@ -2816,7 +2830,9 @@ class KeycloakAPI(object):
return open_url(
user_group_url,
method='PUT',
headers=self.restheaders)
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not add user %s in group %s in realm %s: %s'
% (user_id, group_id, realm, str(e)))
@@ -2838,7 +2854,9 @@ class KeycloakAPI(object):
return open_url(
user_group_url,
method='DELETE',
headers=self.restheaders)
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not remove user %s from group %s in realm %s: %s'
% (user_id, group_id, realm, str(e)))


@@ -570,7 +570,7 @@ def are_lists_equal(s, t):
s = to_dict(s)
t = to_dict(t)
if type(s[0]) == dict:
if isinstance(s[0], dict):
# Handle list of dicts. Dictionary returned by the API may have additional keys. For example, a get call on
# service gateway has an attribute `services` which is a list of `ServiceIdResponseDetails`. This has a key
# `service_name` which is not provided in the list of `services` by a user while making an update call; only
@@ -604,9 +604,9 @@ def get_attr_to_update(get_fn, kwargs_get, module, update_attributes):
user_provided_attr_value = module.params.get(attr, None)
unequal_list_attr = (
type(resources_attr_value) == list or type(user_provided_attr_value) == list
isinstance(resources_attr_value, list) or isinstance(user_provided_attr_value, list)
) and not are_lists_equal(user_provided_attr_value, resources_attr_value)
unequal_attr = type(resources_attr_value) != list and to_dict(
unequal_attr = not isinstance(resources_attr_value, list) and to_dict(
resources_attr_value
) != to_dict(user_provided_attr_value)
if unequal_list_attr or unequal_attr:
@@ -936,9 +936,9 @@ def tuplize(d):
list_of_tuples = []
key_list = sorted(list(d.keys()))
for key in key_list:
if type(d[key]) == list:
if isinstance(d[key], list):
# Convert a value which is itself a list of dict to a list of tuples.
if d[key] and type(d[key][0]) == dict:
if d[key] and isinstance(d[key][0], dict):
sub_tuples = []
for sub_dict in d[key]:
sub_tuples.append(tuplize(sub_dict))
@@ -948,7 +948,7 @@ def tuplize(d):
list_of_tuples.append((sub_tuples is None, key, sub_tuples))
else:
list_of_tuples.append((d[key] is None, key, d[key]))
elif type(d[key]) == dict:
elif isinstance(d[key], dict):
tupled_value = tuplize(d[key])
list_of_tuples.append((tupled_value is None, key, tupled_value))
else:
@@ -969,13 +969,13 @@ def sort_dictionary(d):
"""
sorted_d = {}
for key in d:
if type(d[key]) == list:
if d[key] and type(d[key][0]) == dict:
if isinstance(d[key], list):
if d[key] and isinstance(d[key][0], dict):
sorted_value = sort_list_of_dictionary(d[key])
sorted_d[key] = sorted_value
else:
sorted_d[key] = sorted(d[key])
elif type(d[key]) == dict:
elif isinstance(d[key], dict):
sorted_d[key] = sort_dictionary(d[key])
else:
sorted_d[key] = d[key]
@@ -1044,7 +1044,7 @@ def check_if_user_value_matches_resources_attr(
if (
user_provided_value_for_attr
and type(user_provided_value_for_attr[0]) == dict
and isinstance(user_provided_value_for_attr[0], dict)
):
# Process a list of dict
sorted_user_provided_value_for_attr = sort_list_of_dictionary(
@@ -1547,7 +1547,7 @@ def delete_and_wait(
except ServiceError as ex:
# DNS API throws a 400 InvalidParameter when a zone id is provided for zone_name_or_id and if the zone
# resource is not available, instead of the expected 404. So working around this for now.
if type(client) == oci.dns.DnsClient:
if isinstance(client, oci.dns.DnsClient):
if ex.status == 400 and ex.code == "InvalidParameter":
_debug(
"Resource {0} with {1} already deleted. So returning changed=False".format(


@@ -80,8 +80,8 @@ class ProxmoxAnsible(object):
module.fail_json(msg=missing_required_lib('proxmoxer'), exception=PROXMOXER_IMP_ERR)
self.module = module
self.proxmox_api = self._connect()
self.proxmoxer_version = proxmoxer_version
self.proxmox_api = self._connect()
# Test token validity
try:
self.proxmox_api.version.get()
@@ -100,7 +100,7 @@ class ProxmoxAnsible(object):
if api_password:
auth_args['password'] = api_password
else:
if self.version() < LooseVersion('1.1.0'):
if self.proxmoxer_version < LooseVersion('1.1.0'):
self.module.fail_json('Using "token_name" and "token_value" require proxmoxer>=1.1.0')
auth_args['token_name'] = api_token_id
auth_args['token_value'] = api_token_secret


@@ -717,7 +717,8 @@ class RedfishUtils(object):
properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers',
'Location', 'Manufacturer', 'Model', 'Name', 'Id',
'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status']
key = "StorageControllers"
key = "Controllers"
deprecated_key = "StorageControllers"
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
@@ -745,7 +746,30 @@ class RedfishUtils(object):
data = response['data']
if key in data:
controller_list = data[key]
controllers_uri = data[key][u'@odata.id']
response = self.get_request(self.root_uri + controllers_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data[u'Members']:
for controller_member in data[u'Members']:
controller_member_uri = controller_member[u'@odata.id']
response = self.get_request(self.root_uri + controller_member_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
controller_result = {}
for property in properties:
if property in data:
controller_result[property] = data[property]
controller_results.append(controller_result)
elif deprecated_key in data:
controller_list = data[deprecated_key]
for controller in controller_list:
controller_result = {}
for property in properties:
@@ -800,7 +824,25 @@ class RedfishUtils(object):
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
if 'Controllers' in data:
controllers_uri = data['Controllers'][u'@odata.id']
response = self.get_request(self.root_uri + controllers_uri)
if response['ret'] is False:
return response
result['ret'] = True
cdata = response['data']
if cdata[u'Members']:
controller_member_uri = cdata[u'Members'][0][u'@odata.id']
response = self.get_request(self.root_uri + controller_member_uri)
if response['ret'] is False:
return response
result['ret'] = True
cdata = response['data']
controller_name = cdata['Name']
elif 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
@@ -904,15 +946,7 @@ class RedfishUtils(object):
return response
data = response['data']
controller_name = 'Controller %s' % str(idx)
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
elif 'Controllers' in data:
if 'Controllers' in data:
response = self.get_request(self.root_uri + data['Controllers'][u'@odata.id'])
if response['ret'] is False:
return response
@@ -930,6 +964,14 @@ class RedfishUtils(object):
else:
controller_id = member_data.get('Id', '1')
controller_name = 'Controller %s' % controller_id
elif 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
volume_results = []
volume_list = []
if 'Volumes' in data:
@@ -1032,7 +1074,12 @@ class RedfishUtils(object):
# command should be PowerOn, PowerForceOff, etc.
if not command.startswith('Power'):
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
reset_type = command[5:]
# Commands (except PowerCycle) will be stripped of the 'Power' prefix
if command == 'PowerCycle':
reset_type = command
else:
reset_type = command[5:]
# map Reboot to a ResetType that does a reboot
if reset_type == 'Reboot':
@@ -1499,29 +1546,37 @@ class RedfishUtils(object):
def _software_inventory(self, uri):
result = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
result['entries'] = []
for member in data[u'Members']:
uri = self.root_uri + member[u'@odata.id']
# Get details for each software or firmware member
response = self.get_request(uri)
while uri:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
software = {}
# Get these standard properties if present
for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
'ReleaseDate']:
if key in data:
software[key] = data.get(key)
result['entries'].append(software)
if data.get('Members@odata.nextLink'):
uri = data.get('Members@odata.nextLink')
else:
uri = None
for member in data[u'Members']:
fw_uri = self.root_uri + member[u'@odata.id']
# Get details for each software or firmware member
response = self.get_request(fw_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
software = {}
# Get these standard properties if present
for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
'ReleaseDate']:
if key in data:
software[key] = data.get(key)
result['entries'].append(software)
return result
def get_firmware_inventory(self):
@@ -3403,3 +3458,65 @@ class RedfishUtils(object):
fan_percent_min_config = hpe.get('FanPercentMinimum')
result["fan_percent_min"] = fan_percent_min_config
return result
def delete_volumes(self, storage_subsystem_id, volume_ids):
# Find the Storage resource from the requested ComputerSystem resource
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
storage_uri = data.get('Storage', {}).get('@odata.id')
if storage_uri is None:
return {'ret': False, 'msg': 'Storage resource not found'}
# Get Storage Collection
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
data = response['data']
# Collect Storage Subsystems
self.storage_subsystems_uris = [i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.storage_subsystems_uris:
return {
'ret': False,
'msg': "StorageCollection's Members array is either empty or missing"}
# Matching Storage Subsystem ID with user input
self.storage_subsystem_uri = ""
for storage_subsystem_uri in self.storage_subsystems_uris:
if storage_subsystem_uri.split("/")[-2] == storage_subsystem_id:
self.storage_subsystem_uri = storage_subsystem_uri
if not self.storage_subsystem_uri:
return {
'ret': False,
'msg': "Provided Storage Subsystem ID %s does not exist on the server" % storage_subsystem_id}
# Get Volume Collection
response = self.get_request(self.root_uri + self.storage_subsystem_uri)
if response['ret'] is False:
return response
data = response['data']
response = self.get_request(self.root_uri + data['Volumes']['@odata.id'])
if response['ret'] is False:
return response
data = response['data']
# Collect Volumes
self.volume_uris = [i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.volume_uris:
return {
'ret': True, 'changed': False,
'msg': "VolumeCollection's Members array is either empty or missing"}
# Delete each volume
for volume in self.volume_uris:
if volume.split("/")[-1] in volume_ids:
response = self.delete_request(self.root_uri + volume)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "The following volumes were deleted: %s" % str(volume_ids)}


@@ -72,7 +72,9 @@ def api_request(module, endpoint, data=None, method="GET"):
if info["status"] == 403:
module.fail_json(msg="Token authorization failed",
execution_info=json.loads(info["body"]))
if info["status"] == 409:
elif info["status"] == 404:
return None, info
elif info["status"] == 409:
module.fail_json(msg="Job executions limit reached",
execution_info=json.loads(info["body"]))
elif info["status"] >= 500:


@@ -78,6 +78,7 @@ EXAMPLES = '''
import syslog
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, cmd_runner_fmt
class EjabberdUser(object):
@@ -95,6 +96,17 @@ class EjabberdUser(object):
self.host = module.params.get('host')
self.user = module.params.get('username')
self.pwd = module.params.get('password')
self.runner = CmdRunner(
module,
command="ejabberdctl",
arg_formats=dict(
cmd=cmd_runner_fmt.as_list(),
host=cmd_runner_fmt.as_list(),
user=cmd_runner_fmt.as_list(),
pwd=cmd_runner_fmt.as_list(),
),
check_rc=False,
)
@property
def changed(self):
@@ -102,7 +114,7 @@ class EjabberdUser(object):
changed. It will return True if the user's password does not match the
supplied credentials and False if it does
"""
return self.run_command('check_password', [self.user, self.host, self.pwd])
return self.run_command('check_password', 'user host pwd', (lambda rc, out, err: bool(rc)))
@property
def exists(self):
@@ -110,7 +122,7 @@ class EjabberdUser(object):
host specified. If the user exists True is returned, otherwise False
is returned
"""
return self.run_command('check_account', [self.user, self.host])
return self.run_command('check_account', 'user host', (lambda rc, out, err: not bool(rc)))
def log(self, entry):
""" This method will log information to the local syslog facility """
@@ -118,29 +130,36 @@ class EjabberdUser(object):
syslog.openlog('ansible-%s' % self.module._name)
syslog.syslog(syslog.LOG_NOTICE, entry)
def run_command(self, cmd, options):
def run_command(self, cmd, options, process=None):
""" This method will run the any command specified and return the
returns using the Ansible common module
"""
cmd = [self.module.get_bin_path('ejabberdctl'), cmd] + options
self.log('command: %s' % " ".join(cmd))
return self.module.run_command(cmd)
def _proc(*a):
return a
if process is None:
process = _proc
with self.runner("cmd " + options, output_process=process) as ctx:
res = ctx.run(cmd=cmd, host=self.host, user=self.user, pwd=self.pwd)
self.log('command: %s' % " ".join(ctx.run_info['cmd']))
return res
def update(self):
""" The update method will update the credentials for the user provided
"""
return self.run_command('change_password', [self.user, self.host, self.pwd])
return self.run_command('change_password', 'user host pwd')
def create(self):
""" The create method will create a new user on the host with the
password provided
"""
return self.run_command('register', [self.user, self.host, self.pwd])
return self.run_command('register', 'user host pwd')
def delete(self):
""" The delete method will delete the user from the host
"""
return self.run_command('unregister', [self.user, self.host])
return self.run_command('unregister', 'user host')
def main():
@@ -150,7 +169,7 @@ def main():
username=dict(required=True, type='str'),
password=dict(type='str', no_log=True),
state=dict(default='present', choices=['present', 'absent']),
logging=dict(default=False, type='bool') # deprecate in favour of c.g.syslogger?
logging=dict(default=False, type='bool', removed_in_version='10.0.0', removed_from_collection='community.general'),
),
required_if=[
('state', 'present', ['password']),


@@ -40,6 +40,12 @@ options:
aliases: ["primarygroup"]
type: str
version_added: '2.5.0'
ipagroupobjectclasses:
description: A list of group objectclasses.
aliases: ["groupobjectclasses"]
type: list
elements: str
version_added: '7.3.0'
ipagroupsearchfields:
description: A list of fields to search in when searching for groups.
aliases: ["groupsearchfields"]
@@ -85,12 +91,20 @@ options:
elements: str
version_added: '3.7.0'
ipauserauthtype:
description: The authentication type to use by default.
description:
- The authentication type to use by default.
- The choice V(idp) has been added in community.general 7.3.0.
aliases: ["userauthtype"]
choices: ["password", "radius", "otp", "pkinit", "hardened", "disabled"]
choices: ["password", "radius", "otp", "pkinit", "hardened", "idp", "disabled"]
type: list
elements: str
version_added: '2.5.0'
ipauserobjectclasses:
description: A list of user objectclasses.
aliases: ["userobjectclasses"]
type: list
elements: str
version_added: '7.3.0'
ipausersearchfields:
description: A list of fields to search in when searching for users.
aliases: ["usersearchfields"]
@@ -235,11 +249,12 @@ class ConfigIPAClient(IPAClient):
def get_config_dict(ipaconfigstring=None, ipadefaultloginshell=None,
ipadefaultemaildomain=None, ipadefaultprimarygroup=None,
ipagroupsearchfields=None, ipahomesrootdir=None,
ipakrbauthzdata=None, ipamaxusernamelength=None,
ipapwdexpadvnotify=None, ipasearchrecordslimit=None,
ipasearchtimelimit=None, ipaselinuxusermaporder=None,
ipauserauthtype=None, ipausersearchfields=None):
ipagroupsearchfields=None, ipagroupobjectclasses=None,
ipahomesrootdir=None, ipakrbauthzdata=None,
ipamaxusernamelength=None, ipapwdexpadvnotify=None,
ipasearchrecordslimit=None, ipasearchtimelimit=None,
ipaselinuxusermaporder=None, ipauserauthtype=None,
ipausersearchfields=None, ipauserobjectclasses=None):
config = {}
if ipaconfigstring is not None:
config['ipaconfigstring'] = ipaconfigstring
@@ -249,6 +264,8 @@ def get_config_dict(ipaconfigstring=None, ipadefaultloginshell=None,
config['ipadefaultemaildomain'] = ipadefaultemaildomain
if ipadefaultprimarygroup is not None:
config['ipadefaultprimarygroup'] = ipadefaultprimarygroup
if ipagroupobjectclasses is not None:
config['ipagroupobjectclasses'] = ipagroupobjectclasses
if ipagroupsearchfields is not None:
config['ipagroupsearchfields'] = ','.join(ipagroupsearchfields)
if ipahomesrootdir is not None:
@@ -267,6 +284,8 @@ def get_config_dict(ipaconfigstring=None, ipadefaultloginshell=None,
config['ipaselinuxusermaporder'] = '$'.join(ipaselinuxusermaporder)
if ipauserauthtype is not None:
config['ipauserauthtype'] = ipauserauthtype
if ipauserobjectclasses is not None:
config['ipauserobjectclasses'] = ipauserobjectclasses
if ipausersearchfields is not None:
config['ipausersearchfields'] = ','.join(ipausersearchfields)
@@ -283,6 +302,7 @@ def ensure(module, client):
ipadefaultloginshell=module.params.get('ipadefaultloginshell'),
ipadefaultemaildomain=module.params.get('ipadefaultemaildomain'),
ipadefaultprimarygroup=module.params.get('ipadefaultprimarygroup'),
ipagroupobjectclasses=module.params.get('ipagroupobjectclasses'),
ipagroupsearchfields=module.params.get('ipagroupsearchfields'),
ipahomesrootdir=module.params.get('ipahomesrootdir'),
ipakrbauthzdata=module.params.get('ipakrbauthzdata'),
@@ -293,6 +313,7 @@ def ensure(module, client):
ipaselinuxusermaporder=module.params.get('ipaselinuxusermaporder'),
ipauserauthtype=module.params.get('ipauserauthtype'),
ipausersearchfields=module.params.get('ipausersearchfields'),
ipauserobjectclasses=module.params.get('ipauserobjectclasses'),
)
ipa_config = client.config_show()
diff = get_config_diff(client, ipa_config, module_config)
@@ -322,6 +343,8 @@ def main():
ipadefaultloginshell=dict(type='str', aliases=['loginshell']),
ipadefaultemaildomain=dict(type='str', aliases=['emaildomain']),
ipadefaultprimarygroup=dict(type='str', aliases=['primarygroup']),
ipagroupobjectclasses=dict(type='list', elements='str',
aliases=['groupobjectclasses']),
ipagroupsearchfields=dict(type='list', elements='str',
aliases=['groupsearchfields']),
ipahomesrootdir=dict(type='str', aliases=['homesrootdir']),
@@ -337,9 +360,11 @@ def main():
ipauserauthtype=dict(type='list', elements='str',
aliases=['userauthtype'],
choices=["password", "radius", "otp", "pkinit",
"hardened", "disabled"]),
"hardened", "idp", "disabled"]),
ipausersearchfields=dict(type='list', elements='str',
aliases=['usersearchfields']),
ipauserobjectclasses=dict(type='list', elements='str',
aliases=['userobjectclasses']),
)
module = AnsibleModule(

View File

@@ -27,7 +27,7 @@ options:
group:
type: str
description:
- Name of the Jenkins group on the OS.
- GID or name of the Jenkins group on the OS.
default: jenkins
jenkins_home:
type: path
@@ -47,7 +47,7 @@ options:
owner:
type: str
description:
- Name of the Jenkins user on the OS.
- UID or name of the Jenkins user on the OS.
default: jenkins
state:
type: str
@@ -195,6 +195,29 @@ EXAMPLES = '''
url_password: p4ssw0rd
url: http://localhost:8888
#
# Example of how to authenticate with serverless deployment
#
- name: Update plugins on ECS Fargate Jenkins instance
community.general.jenkins_plugin:
# plugin name and version
name: ws-cleanup
version: '0.45'
# Jenkins home path mounted on ec2-helper VM (example)
jenkins_home: "/mnt/{{ jenkins_instance }}"
# matching the UID/GID to one in official Jenkins image
owner: 1000
group: 1000
# Jenkins instance URL and admin credentials
url: "https://{{ jenkins_instance }}.com/"
url_username: admin
url_password: p4ssw0rd
# make module work from EC2 which has local access
# to EFS mount as well as Jenkins URL
delegate_to: ec2-helper
vars:
jenkins_instance: foobar
#
# Example of a Play which handles Jenkins restarts during the state changes
#

View File

@@ -247,6 +247,7 @@ options:
protocol:
description:
- Type of client.
- At creation only, the default value will be V(openid-connect) if O(protocol) is omitted.
type: str
choices: ['openid-connect', 'saml']
@@ -721,6 +722,10 @@ from ansible.module_utils.basic import AnsibleModule
import copy
PROTOCOL_OPENID_CONNECT = 'openid-connect'
PROTOCOL_SAML = 'saml'
def normalise_cr(clientrep, remove_ids=False):
""" Re-sorts any properties where the order so that diff's is minimised, and adds default values where appropriate so that the
the change detection is more effective.
@@ -779,7 +784,7 @@ def main():
consentText=dict(type='str'),
id=dict(type='str'),
name=dict(type='str'),
protocol=dict(type='str', choices=['openid-connect', 'saml']),
protocol=dict(type='str', choices=[PROTOCOL_OPENID_CONNECT, PROTOCOL_SAML]),
protocolMapper=dict(type='str'),
config=dict(type='dict'),
)
@@ -813,7 +818,7 @@ def main():
authorization_services_enabled=dict(type='bool', aliases=['authorizationServicesEnabled']),
public_client=dict(type='bool', aliases=['publicClient']),
frontchannel_logout=dict(type='bool', aliases=['frontchannelLogout']),
protocol=dict(type='str', choices=['openid-connect', 'saml']),
protocol=dict(type='str', choices=[PROTOCOL_OPENID_CONNECT, PROTOCOL_SAML]),
attributes=dict(type='dict'),
full_scope_allowed=dict(type='bool', aliases=['fullScopeAllowed']),
node_re_registration_timeout=dict(type='int', aliases=['nodeReRegistrationTimeout']),
@@ -911,6 +916,8 @@ def main():
if 'clientId' not in desired_client:
module.fail_json(msg='client_id needs to be specified when creating a new client')
if 'protocol' not in desired_client:
desired_client['protocol'] = PROTOCOL_OPENID_CONNECT
if module._diff:
result['diff'] = dict(before='', after=sanitize_cr(desired_client))
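The new defaulting step above can be shown in isolation: when a client is being created and no protocol was given, the module falls back to openid-connect. A minimal sketch; apply_protocol_default is a hypothetical helper, not part of the module:

```python
PROTOCOL_OPENID_CONNECT = 'openid-connect'
PROTOCOL_SAML = 'saml'

def apply_protocol_default(desired_client):
    # At creation only: fall back to openid-connect when protocol is omitted.
    if 'protocol' not in desired_client:
        desired_client['protocol'] = PROTOCOL_OPENID_CONNECT
    return desired_client

# Omitted protocol gets the default; an explicit protocol is left alone.
assert apply_protocol_default({'clientId': 'c1'})['protocol'] == PROTOCOL_OPENID_CONNECT
assert apply_protocol_default({'clientId': 'c2', 'protocol': PROTOCOL_SAML})['protocol'] == PROTOCOL_SAML
```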

View File

@@ -41,13 +41,13 @@ options:
description:
- The size of the logical volume, according to lvcreate(8) --size, by
default in megabytes or optionally with one of [bBsSkKmMgGtTpPeE] units; or
according to lvcreate(8) --extents as a percentage of [VG|PVS|FREE];
according to lvcreate(8) --extents as a percentage of [VG|PVS|FREE|ORIGIN];
Float values must begin with a digit.
- When resizing, apart from specifying an absolute size you may, according to
lvextend(8)|lvreduce(8) C(--size), specify the amount to extend the logical volume with
the prefix V(+) or the amount to reduce the logical volume by with prefix V(-).
- Resizing using V(+) or V(-) was not supported prior to community.general 3.0.0.
- Please note that when using V(+) or V(-), the module is B(not idempotent).
- Please note that when using V(+), V(-), or percentage of FREE, the module is B(not idempotent).
state:
type: str
description:
@@ -73,7 +73,7 @@ options:
snapshot:
type: str
description:
- The name of the snapshot volume
- The name of a snapshot volume to be configured. When creating a snapshot volume, the O(lv) parameter specifies the origin volume.
pvs:
type: str
description:
@@ -368,10 +368,10 @@ def main():
if size_percent > 100:
module.fail_json(msg="Size percentage cannot be larger than 100%")
size_whole = size_parts[1]
if size_whole == 'ORIGIN':
module.fail_json(msg="Snapshot Volumes are not supported")
elif size_whole not in ['VG', 'PVS', 'FREE']:
module.fail_json(msg="Specify extents as a percentage of VG|PVS|FREE")
if size_whole == 'ORIGIN' and snapshot is None:
module.fail_json(msg="Percentage of ORIGIN supported only for snapshot volumes")
elif size_whole not in ['VG', 'PVS', 'FREE', 'ORIGIN']:
module.fail_json(msg="Specify extents as a percentage of VG|PVS|FREE|ORIGIN")
size_opt = 'l'
size_unit = ''
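The relaxed validation above can be sketched as a standalone check: ORIGIN percentages are now accepted, but only when a snapshot volume is being created. check_percent_size is a hypothetical helper mirroring the module's logic, not the module itself:

```python
def check_percent_size(size, snapshot=None):
    # size looks like "50%FREE" or "100%ORIGIN", as accepted by lvcreate --extents
    percent, whole = size.split('%', 1)
    if float(percent) > 100:
        raise ValueError("Size percentage cannot be larger than 100%")
    if whole == 'ORIGIN' and snapshot is None:
        raise ValueError("Percentage of ORIGIN supported only for snapshot volumes")
    if whole not in ('VG', 'PVS', 'FREE', 'ORIGIN'):
        raise ValueError("Specify extents as a percentage of VG|PVS|FREE|ORIGIN")
    return percent, whole

assert check_percent_size("100%FREE") == ("100", "FREE")
assert check_percent_size("50%ORIGIN", snapshot="snap0") == ("50", "ORIGIN")

# ORIGIN without a snapshot is still rejected.
try:
    check_percent_size("50%ORIGIN")
except ValueError as e:
    assert "ORIGIN" in str(e)
```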

View File

@@ -150,6 +150,7 @@ import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, cmd_runner_fmt
class Npm(object):
@@ -172,33 +173,29 @@ class Npm(object):
else:
self.executable = [module.get_bin_path('npm', True)]
if kwargs['version'] and self.state != 'absent':
self.name_version = self.name + '@' + str(self.version)
if kwargs['version'] and kwargs['state'] != 'absent':
self.name_version = self.name + '@' + str(kwargs['version'])
else:
self.name_version = self.name
self.runner = CmdRunner(
module,
command=self.executable,
arg_formats=dict(
exec_args=cmd_runner_fmt.as_list(),
global_=cmd_runner_fmt.as_bool('--global'),
production=cmd_runner_fmt.as_bool('--production'),
ignore_scripts=cmd_runner_fmt.as_bool('--ignore-scripts'),
unsafe_perm=cmd_runner_fmt.as_bool('--unsafe-perm'),
name_version=cmd_runner_fmt.as_list(),
registry=cmd_runner_fmt.as_opt_val('--registry'),
no_optional=cmd_runner_fmt.as_bool('--no-optional'),
no_bin_links=cmd_runner_fmt.as_bool('--no-bin-links'),
)
)
def _exec(self, args, run_in_check_mode=False, check_rc=True, add_package_name=True):
if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):
cmd = self.executable + args
if self.glbl:
cmd.append('--global')
if self.production and ('install' in cmd or 'update' in cmd or 'ci' in cmd):
cmd.append('--production')
if self.ignore_scripts:
cmd.append('--ignore-scripts')
if self.unsafe_perm:
cmd.append('--unsafe-perm')
if self.name_version and add_package_name:
cmd.append(self.name_version)
if self.registry:
cmd.append('--registry')
cmd.append(self.registry)
if self.no_optional:
cmd.append('--no-optional')
if self.no_bin_links:
cmd.append('--no-bin-links')
# If path is specified, cd into that path and run the command.
cwd = None
if self.path:
@@ -208,8 +205,19 @@ class Npm(object):
self.module.fail_json(msg="path %s is not a directory" % self.path)
cwd = self.path
rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)
params = dict(self.module.params)
params['exec_args'] = args
params['global_'] = self.glbl
params['production'] = self.production and ('install' in args or 'update' in args or 'ci' in args)
params['name_version'] = self.name_version if add_package_name else None
with self.runner(
"exec_args global_ production ignore_scripts unsafe_perm name_version registry no_optional no_bin_links",
check_rc=check_rc, cwd=cwd
) as ctx:
rc, out, err = ctx.run(**params)
return out
return ''
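The arg_formats table replaces the hand-rolled flag-appending logic. How those formatters behave can be sketched in plain Python; this is an illustrative reimplementation under assumptions about the cmd_runner_fmt semantics, not the real library:

```python
def as_bool(flag):
    # emit the flag only when the value is truthy
    return lambda value: [flag] if value else []

def as_opt_val(arg):
    # emit "--arg value" pairs, skipping None
    return lambda value: [] if value is None else [arg, str(value)]

def as_list():
    # pass lists through, wrap scalars, drop None
    return lambda value: (list(value) if isinstance(value, list)
                          else [] if value is None else [str(value)])

# Build a command line the way the runner would, one formatter per parameter.
args = []
args += as_list()(["install"])
args += as_bool("--global")(True)
args += as_bool("--production")(False)          # falsy value emits nothing
args += as_opt_val("--registry")("https://registry.example")
args += as_list()("mypkg@1.0")
assert args == ["install", "--global", "--registry",
                "https://registry.example", "mypkg@1.0"]
```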
def list(self):
@@ -269,12 +277,12 @@ class Npm(object):
def main():
arg_spec = dict(
name=dict(default=None, type='str'),
path=dict(default=None, type='path'),
version=dict(default=None, type='str'),
name=dict(type='str'),
path=dict(type='path'),
version=dict(type='str'),
production=dict(default=False, type='bool'),
executable=dict(default=None, type='path'),
registry=dict(default=None, type='str'),
executable=dict(type='path'),
registry=dict(type='str'),
state=dict(default='present', choices=['present', 'absent', 'latest']),
ignore_scripts=dict(default=False, type='bool'),
unsafe_perm=dict(default=False, type='bool'),
@@ -293,25 +301,27 @@ def main():
path = module.params['path']
version = module.params['version']
glbl = module.params['global']
production = module.params['production']
executable = module.params['executable']
registry = module.params['registry']
state = module.params['state']
ignore_scripts = module.params['ignore_scripts']
unsafe_perm = module.params['unsafe_perm']
ci = module.params['ci']
no_optional = module.params['no_optional']
no_bin_links = module.params['no_bin_links']
if not path and not glbl:
module.fail_json(msg='path must be specified when not using global')
npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production,
executable=executable, registry=registry, ignore_scripts=ignore_scripts,
unsafe_perm=unsafe_perm, state=state, no_optional=no_optional, no_bin_links=no_bin_links)
npm = Npm(module,
name=name,
path=path,
version=version,
glbl=glbl,
production=module.params['production'],
executable=module.params['executable'],
registry=module.params['registry'],
ignore_scripts=module.params['ignore_scripts'],
unsafe_perm=module.params['unsafe_perm'],
state=state,
no_optional=module.params['no_optional'],
no_bin_links=module.params['no_bin_links'])
changed = False
if ci:
if module.params['ci']:
npm.ci_install()
changed = True
elif state == 'present':

View File

@@ -64,6 +64,7 @@ options:
boot:
description:
- Specify the boot order -> boot on floppy V(a), hard disk V(c), CD-ROM V(d), or network V(n).
- For newer versions of Proxmox VE, use a boot order like V(order=scsi0;net0;hostpci0).
- You can combine to set order.
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(cnd).
type: str
@@ -287,8 +288,9 @@ options:
type: int
name:
description:
- Specifies the VM name. Only used on the configuration web interface.
- Specifies the VM name. The name can be non-unique across the cluster.
- Required only for O(state=present).
- With O(state=present), if O(vmid) is not provided and a VM with that name already exists in the cluster, no changes will be made.
type: str
nameservers:
description:
@@ -1110,11 +1112,11 @@ class ProxmoxKvmAnsible(ProxmoxAnsible):
return False
return True
def restart_vm(self, vm, **status):
def restart_vm(self, vm, force, **status):
vmid = vm['vmid']
try:
proxmox_node = self.proxmox_api.nodes(vm['node'])
taskid = proxmox_node.qemu(vmid).status.reboot.post()
taskid = proxmox_node.qemu(vmid).status.reset.post() if force else proxmox_node.qemu(vmid).status.reboot.post()
if not self.wait_for_task(vm['node'], taskid):
self.module.fail_json(msg='Reached timeout while waiting for rebooting VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
@@ -1289,10 +1291,14 @@ def main():
# the cloned vm name or retrieve the next free VM id from ProxmoxAPI.
if not vmid:
if state == 'present' and not update and not clone and not delete and not revert and not migrate:
try:
vmid = proxmox.get_nextvmid()
except Exception:
module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name))
existing_vmid = proxmox.get_vmid(name, ignore_missing=True)
if existing_vmid:
vmid = existing_vmid
else:
try:
vmid = proxmox.get_nextvmid()
except Exception:
module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name))
else:
clone_target = clone or name
vmid = proxmox.get_vmid(clone_target, ignore_missing=True)
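The new vmid resolution above prefers an existing VM with the given name over allocating a fresh id, which is what makes repeated runs idempotent. A minimal sketch with hypothetical callables standing in for the Proxmox API lookups:

```python
def resolve_vmid(get_vmid, get_nextvmid, name):
    # Prefer the vmid of an existing VM with this name; otherwise
    # fall back to the next free vmid from the cluster.
    existing = get_vmid(name)
    if existing:
        return existing
    return get_nextvmid()

# Existing VM found: its id is reused, no new id is allocated.
assert resolve_vmid(lambda n: 100, lambda: 999, "web01") == 100
# No VM with that name: the next free id is used.
assert resolve_vmid(lambda n: None, lambda: 999, "new-vm") == 999
```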
@@ -1488,7 +1494,7 @@ def main():
if vm['status'] == 'stopped':
module.exit_json(changed=False, vmid=vmid, msg="VM %s is not running" % vmid, **status)
if proxmox.restart_vm(vm):
if proxmox.restart_vm(vm, force=module.params['force']):
module.exit_json(changed=True, vmid=vmid, msg="VM %s is restarted" % vmid, **status)
elif state == 'absent':

View File

@@ -193,14 +193,14 @@ class ProxmoxUser:
self.user[k] = v
elif k in ['groups', 'tokens'] and (v == '' or v is None):
self.user[k] = []
elif k == 'groups' and type(v) == str:
elif k == 'groups' and isinstance(v, str):
self.user['groups'] = v.split(',')
elif k == 'tokens' and type(v) == list:
elif k == 'tokens' and isinstance(v, list):
for token in v:
if 'privsep' in token:
token['privsep'] = proxmox_to_ansible_bool(token['privsep'])
self.user['tokens'] = v
elif k == 'tokens' and type(v) == dict:
elif k == 'tokens' and isinstance(v, dict):
self.user['tokens'] = list()
for tokenid, tokenvalues in v.items():
t = tokenvalues
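The switch from `type(v) == str` to `isinstance(v, str)` matters whenever the API client hands back a subclass of the expected type: an exact-type comparison rejects it, isinstance accepts it. A small sketch with a hypothetical subclass:

```python
class TokenList(list):
    # hypothetical subclass standing in for any list-like API result
    pass

v = TokenList(["token1"])
assert not (type(v) == list)   # exact-type check misses subclasses
assert isinstance(v, list)     # isinstance accepts them
```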

View File

@@ -20,8 +20,7 @@ author: 'Sergei Antipov (@UnderGreen) <greendayonfire at gmail dot com>'
options:
node:
description:
- Node where to get virtual machines info.
required: true
- Restrict results to a specific Proxmox VE node.
type: str
type:
description:
@@ -35,11 +34,12 @@ options:
vmid:
description:
- Restrict results to a specific virtual machine by using its ID.
- If a VM with the specified vmid does not exist in the cluster, the resulting list will be empty.
type: int
name:
description:
- Restrict results to a specific virtual machine by using its name.
- If multiple virtual machines have the same name then vmid must be used instead.
- Restrict results to specific virtual machines by using their name.
- If no VM with the specified name exists in the cluster, the resulting list will be empty.
type: str
extends_documentation_fragment:
- community.general.proxmox.documentation
@@ -97,14 +97,18 @@ proxmox_vms:
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/100",
"maxcpu": 1,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35158379,
"name": "pxe.home.arpa",
"netin": 99715803,
"netout": 14237835,
"node": "pve",
"pid": 1947197,
"status": "running",
"template": False,
"type": "qemu",
"uptime": 135530,
"vmid": 100
@@ -115,13 +119,17 @@ proxmox_vms:
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/101",
"maxcpu": 1,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"node": "pve",
"status": "stopped",
"template": False,
"type": "qemu",
"uptime": 0,
"vmid": 101
@@ -133,30 +141,55 @@ from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec,
ProxmoxAnsible,
proxmox_to_ansible_bool,
)
class ProxmoxVmInfoAnsible(ProxmoxAnsible):
def get_qemu_vms(self, node, vmid=None):
def get_vms_from_cluster_resources(self):
try:
vms = self.proxmox_api.nodes(node).qemu().get()
for vm in vms:
vm["vmid"] = int(vm["vmid"])
vm["type"] = "qemu"
if vmid is None:
return vms
return [vm for vm in vms if vm["vmid"] == vmid]
return self.proxmox_api.cluster().resources().get(type="vm")
except Exception as e:
self.module.fail_json(
msg="Failed to retrieve VMs information from cluster resources: %s" % e
)
def get_vms_from_nodes(self, vms_unfiltered, type, vmid=None, name=None, node=None):
vms = []
for vm in vms_unfiltered:
if (
type != vm["type"]
or (node and vm["node"] != node)
or (vmid and int(vm["vmid"]) != vmid)
or (name is not None and vm["name"] != name)
):
continue
vms.append(vm)
nodes = frozenset([vm["node"] for vm in vms])
for node in nodes:
if type == "qemu":
vms_from_nodes = self.proxmox_api.nodes(node).qemu().get()
else:
vms_from_nodes = self.proxmox_api.nodes(node).lxc().get()
for vmn in vms_from_nodes:
for vm in vms:
if int(vm["vmid"]) == int(vmn["vmid"]):
vm.update(vmn)
vm["vmid"] = int(vm["vmid"])
vm["template"] = proxmox_to_ansible_bool(vm["template"])
break
return vms
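The filtering step of get_vms_from_nodes can be isolated: the cluster resources listing is narrowed by type, node, vmid, and name before the per-node enrichment. A standalone sketch of that predicate (filter_vms is a hypothetical helper, and the sample dicts are simplified cluster-resource entries):

```python
def filter_vms(vms, vm_type, vmid=None, name=None, node=None):
    # Keep entries matching the type and every filter that was supplied.
    return [
        vm for vm in vms
        if vm["type"] == vm_type
        and (node is None or vm["node"] == node)
        and (vmid is None or int(vm["vmid"]) == vmid)
        and (name is None or vm["name"] == name)
    ]

cluster = [
    {"type": "qemu", "node": "pve", "vmid": "100", "name": "pxe.home.arpa"},
    {"type": "lxc", "node": "pve", "vmid": "101", "name": "ct1"},
]
assert filter_vms(cluster, "qemu") == [cluster[0]]
assert filter_vms(cluster, "lxc", vmid=101) == [cluster[1]]
assert filter_vms(cluster, "qemu", name="missing") == []
```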
def get_qemu_vms(self, vms_unfiltered, vmid=None, name=None, node=None):
try:
return self.get_vms_from_nodes(vms_unfiltered, "qemu", vmid, name, node)
except Exception as e:
self.module.fail_json(msg="Failed to retrieve QEMU VMs information: %s" % e)
def get_lxc_vms(self, node, vmid=None):
def get_lxc_vms(self, vms_unfiltered, vmid=None, name=None, node=None):
try:
vms = self.proxmox_api.nodes(node).lxc().get()
for vm in vms:
vm["vmid"] = int(vm["vmid"])
if vmid is None:
return vms
return [vm for vm in vms if vm["vmid"] == vmid]
return self.get_vms_from_nodes(vms_unfiltered, "lxc", vmid, name, node)
except Exception as e:
self.module.fail_json(msg="Failed to retrieve LXC VMs information: %s" % e)
@@ -164,7 +197,7 @@ class ProxmoxVmInfoAnsible(ProxmoxAnsible):
def main():
module_args = proxmox_auth_argument_spec()
vm_info_args = dict(
node=dict(type="str", required=True),
node=dict(type="str", required=False),
type=dict(
type="str", choices=["lxc", "qemu", "all"], default="all", required=False
),
@@ -188,28 +221,26 @@ def main():
result = dict(changed=False)
if proxmox.get_node(node) is None:
if node and proxmox.get_node(node) is None:
module.fail_json(msg="Node %s doesn't exist in PVE cluster" % node)
if not vmid and name:
vmid = int(proxmox.get_vmid(name, ignore_missing=False))
vms_cluster_resources = proxmox.get_vms_from_cluster_resources()
vms = []
vms = None
if type == "lxc":
vms = proxmox.get_lxc_vms(node, vmid=vmid)
vms = proxmox.get_lxc_vms(vms_cluster_resources, vmid, name, node)
elif type == "qemu":
vms = proxmox.get_qemu_vms(node, vmid=vmid)
vms = proxmox.get_qemu_vms(vms_cluster_resources, vmid, name, node)
else:
vms = proxmox.get_qemu_vms(node, vmid=vmid) + proxmox.get_lxc_vms(
node, vmid=vmid
)
vms = proxmox.get_qemu_vms(
vms_cluster_resources,
vmid,
name,
node,
) + proxmox.get_lxc_vms(vms_cluster_resources, vmid, name, node)
if vms or vmid is None:
result["proxmox_vms"] = vms
module.exit_json(**result)
else:
result["msg"] = "VM with vmid %s doesn't exist on node %s" % (vmid, node)
module.fail_json(**result)
result["proxmox_vms"] = vms
module.exit_json(**result)
if __name__ == "__main__":

View File

@@ -747,7 +747,7 @@ from ansible.module_utils.common.text.converters import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["PowerOn", "PowerForceOff", "PowerForceRestart", "PowerGracefulRestart",
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride",
"PowerGracefulShutdown", "PowerReboot", "PowerCycle", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride",
"IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink", "VirtualMediaInsert", "VirtualMediaEject", "VerifyBiosAttributes"],
"Chassis": ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",

View File

@@ -130,7 +130,21 @@ options:
type: dict
default: {}
version_added: '5.7.0'
storage_subsystem_id:
required: false
description:
- ID of the Storage Subsystem on which the volume is to be created.
type: str
default: ''
version_added: '7.3.0'
volume_ids:
required: false
description:
- List of IDs of volumes to be deleted.
type: list
default: []
elements: str
version_added: '7.3.0'
author:
- "Jose Delarosa (@jose-delarosa)"
- "T S Kushal (@TSKushal)"
@@ -272,6 +286,16 @@ EXAMPLES = '''
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Delete All Volumes
community.general.redfish_config:
category: Systems
command: DeleteVolumes
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
storage_subsystem_id: "DExxxxxx"
volume_ids: ["volume1", "volume2"]
'''
RETURN = '''
@@ -290,7 +314,7 @@ from ansible.module_utils.common.text.converters import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["SetBiosDefaultSettings", "SetBiosAttributes", "SetBootOrder",
"SetDefaultBootOrder", "EnableSecureBoot"],
"SetDefaultBootOrder", "EnableSecureBoot", "DeleteVolumes"],
"Manager": ["SetNetworkProtocols", "SetManagerNic", "SetHostInterface"],
"Sessions": ["SetSessionService"],
}
@@ -323,6 +347,8 @@ def main():
hostinterface_config=dict(type='dict', default={}),
hostinterface_id=dict(),
sessions_config=dict(type='dict', default={}),
storage_subsystem_id=dict(type='str', default=''),
volume_ids=dict(type='list', default=[], elements='str')
),
required_together=[
('username', 'password'),
@@ -372,6 +398,10 @@ def main():
# Sessions config options
sessions_config = module.params['sessions_config']
# Volume deletion options
storage_subsystem_id = module.params['storage_subsystem_id']
volume_ids = module.params['volume_ids']
# Build root URI
root_uri = "https://" + module.params['baseuri']
rf_utils = RedfishUtils(creds, root_uri, timeout, module,
@@ -405,6 +435,8 @@ def main():
result = rf_utils.set_default_boot_order()
elif command == "EnableSecureBoot":
result = rf_utils.enable_secure_boot()
elif command == "DeleteVolumes":
result = rf_utils.delete_volumes(storage_subsystem_id, volume_ids)
elif category == "Manager":
# execute only if we find a Manager service resource

View File

@@ -303,7 +303,10 @@ class Snap(StateModuleHelper):
return [name]
def process_many(rc, out, err):
outputs = out.split("---")
# This needs to be "\n---" instead of just "---" because otherwise
# if a snap uses "---" in its description then that will incorrectly
# be interpreted as a separator between snaps in the output.
outputs = out.split("\n---")
res = []
for sout in outputs:
res.extend(process_one(rc, sout, ""))

View File

@@ -1,7 +1,7 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015-2016, Vlad Glagolev <scm@vaygr.net>
# Copyright (c) 2015-2023, Vlad Glagolev <scm@vaygr.net>
#
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -10,7 +10,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: sorcery
short_description: Package manager for Source Mage GNU/Linux
@@ -20,8 +20,7 @@ author: "Vlad Glagolev (@vaygr)"
notes:
- When all three components are selected, the update goes by the sequence --
Sorcery -> Grimoire(s) -> Spell(s); you cannot override it.
- grimoire handling (i.e. add/remove, including SCM/rsync versions) is not
yet supported.
- Grimoire handling is supported since community.general 7.3.0.
requirements:
- bash
extends_documentation_fragment:
@@ -34,21 +33,31 @@ attributes:
options:
name:
description:
- Name of the spell
- multiple names can be given, separated by commas
- special value '*' in conjunction with states V(latest) or
- Name of the spell or grimoire.
- Multiple names can be given, separated by commas.
- Special value V(*) in conjunction with states V(latest) or
V(rebuild) will update or rebuild the whole system respectively.
aliases: ["spell"]
- The alias O(grimoire) was added in community.general 7.3.0.
aliases: ["spell", "grimoire"]
type: list
elements: str
repository:
description:
- Repository location.
- If specified, O(name) represents grimoire(s) instead of spell(s).
- Special value V(*) will pull grimoire from the official location.
- Only a single item in O(name) can be used in conjunction with V(*).
- O(state=absent) can only be used with the special value V(*).
type: str
version_added: 7.3.0
state:
description:
- Whether to cast, dispel or rebuild a package
- state V(cast) is an equivalent of V(present), not V(latest)
- state V(latest) always triggers O(update_cache=true)
- state V(rebuild) implies cast of all specified spells, not only
those existed before
- Whether to cast, dispel or rebuild a package.
- State V(cast) is an equivalent of V(present), not V(latest).
- State V(rebuild) implies cast of all specified spells, not only
those that existed before.
choices: ["present", "latest", "absent", "cast", "dispelled", "rebuild"]
default: "present"
type: str
@@ -56,12 +65,12 @@ options:
depends:
description:
- Comma-separated list of _optional_ dependencies to build a spell
(or make sure it is built) with; use +/- in front of dependency
to turn it on/off ('+' is optional though).
- this option is ignored if O(name) parameter is equal to V(*) or
(or make sure it is built) with; use V(+)/V(-) in front of dependency
to turn it on/off (V(+) is optional though).
- This option is ignored if O(name) parameter is equal to V(*) or
contains more than one spell.
- providers must be supplied in the form recognized by Sorcery, for example
'openssl(SSL)'.
- Providers must be supplied in the form recognized by Sorcery,
for example 'V(openssl(SSL\))'.
type: str
update:
@@ -148,6 +157,30 @@ EXAMPLES = '''
update_codex: true
cache_valid_time: 86400
- name: Make sure stable grimoire is present
community.general.sorcery:
name: stable
repository: '*'
state: present
- name: Make sure binary and stable-rc grimoires are removed
community.general.sorcery:
grimoire: binary,stable-rc
repository: '*'
state: absent
- name: Make sure games grimoire is pulled from rsync
community.general.sorcery:
grimoire: games
repository: "rsync://download.sourcemage.org::codex/games"
state: present
- name: Make sure a specific branch of stable grimoire is pulled from git
community.general.sorcery:
grimoire: stable.git
repository: "git://download.sourcemage.org/smgl/grimoire.git:stable.git:stable-0.62"
state: present
- name: Update only Sorcery itself
community.general.sorcery:
update: true
@@ -180,6 +213,8 @@ SORCERY = {
SORCERY_LOG_DIR = "/var/log/sorcery"
SORCERY_STATE_DIR = "/var/state/sorcery"
NA = "N/A"
def get_sorcery_ver(module):
""" Get Sorcery version. """
@@ -220,9 +255,11 @@ def codex_fresh(codex, module):
return True
def codex_list(module):
def codex_list(module, skip_new=False):
""" List valid grimoire collection. """
params = module.params
codex = {}
cmd_scribe = "%s index" % SORCERY['scribe']
@@ -241,6 +278,10 @@ def codex_list(module):
if match:
codex[match.group('grim')] = match.group('ver')
# return only specified grimoires unless requested to skip new
if params['repository'] and not skip_new:
codex = dict((x, codex.get(x, NA)) for x in params['name'])
if not codex:
module.fail_json(msg="no grimoires to operate on; add at least one")
@@ -258,8 +299,7 @@ def update_sorcery(module):
changed = False
if module.check_mode:
if not module.params['name'] and not module.params['update_cache']:
module.exit_json(changed=True, msg="would have updated Sorcery")
return (True, "would have updated Sorcery")
else:
sorcery_ver = get_sorcery_ver(module)
@@ -273,9 +313,7 @@ def update_sorcery(module):
if sorcery_ver != get_sorcery_ver(module):
changed = True
if not module.params['name'] and not module.params['update_cache']:
module.exit_json(changed=changed,
msg="successfully updated Sorcery")
return (changed, "successfully updated Sorcery")
def update_codex(module):
@@ -294,28 +332,29 @@ def update_codex(module):
fresh = codex_fresh(codex, module)
if module.check_mode:
if not params['name']:
if not fresh:
changed = True
module.exit_json(changed=changed, msg="would have updated Codex")
elif not fresh or params['name'] and params['state'] == 'latest':
# SILENT is required as a workaround for query() in libgpg
module.run_command_environ_update.update(dict(SILENT='1'))
cmd_scribe = "%s update" % SORCERY['scribe']
rc, stdout, stderr = module.run_command(cmd_scribe)
if rc != 0:
module.fail_json(msg="unable to update Codex: " + stdout)
if codex != codex_list(module):
if not fresh:
changed = True
if not params['name']:
module.exit_json(changed=changed,
msg="successfully updated Codex")
return (changed, "would have updated Codex")
else:
if not fresh:
# SILENT is required as a workaround for query() in libgpg
module.run_command_environ_update.update(dict(SILENT='1'))
cmd_scribe = "%s update" % SORCERY['scribe']
if params['repository']:
cmd_scribe += ' %s' % ' '.join(codex.keys())
rc, stdout, stderr = module.run_command(cmd_scribe)
if rc != 0:
module.fail_json(msg="unable to update Codex: " + stdout)
if codex != codex_list(module):
changed = True
return (changed, "successfully updated Codex")
def match_depends(module):
@@ -448,6 +487,65 @@ def match_depends(module):
return depends_ok
def manage_grimoires(module):
""" Add or remove grimoires. """
params = module.params
grimoires = params['name']
url = params['repository']
codex = codex_list(module, True)
if url == '*':
if params['state'] in ('present', 'latest', 'absent'):
if params['state'] == 'absent':
action = "remove"
todo = set(grimoires) & set(codex)
else:
action = "add"
todo = set(grimoires) - set(codex)
if not todo:
return (False, "all grimoire(s) are already %sed" % action[:5])
if module.check_mode:
return (True, "would have %sed grimoire(s)" % action[:5])
cmd_scribe = "%s %s %s" % (SORCERY['scribe'], action, ' '.join(todo))
rc, stdout, stderr = module.run_command(cmd_scribe)
if rc != 0:
module.fail_json(msg="failed to %s one or more grimoire(s): %s" % (action, stdout))
return (True, "successfully %sed one or more grimoire(s)" % action[:5])
else:
module.fail_json(msg="unsupported operation on '*' repository value")
else:
if params['state'] in ('present', 'latest'):
if len(grimoires) > 1:
module.fail_json(msg="using multiple items with repository is invalid")
grimoire = grimoires[0]
if grimoire in codex:
return (False, "grimoire %s already exists" % grimoire)
if module.check_mode:
return (True, "would have added grimoire %s from %s" % (grimoire, url))
cmd_scribe = "%s add %s from %s" % (SORCERY['scribe'], grimoire, url)
rc, stdout, stderr = module.run_command(cmd_scribe)
if rc != 0:
module.fail_json(msg="failed to add grimoire %s from %s: %s" % (grimoire, url, stdout))
return (True, "successfully added grimoire %s from %s" % (grimoire, url))
else:
module.fail_json(msg="unsupported operation on repository value")
def manage_spells(module):
""" Cast or dispel spells.
@@ -473,7 +571,7 @@ def manage_spells(module):
# see update_codex()
module.run_command_environ_update.update(dict(SILENT='1'))
cmd_sorcery = "%s queue"
cmd_sorcery = "%s queue" % SORCERY['sorcery']
rc, stdout, stderr = module.run_command(cmd_sorcery)
@@ -492,7 +590,7 @@ def manage_spells(module):
except IOError:
module.fail_json(msg="failed to restore the update queue")
module.exit_json(changed=True, msg="would have updated the system")
return (True, "would have updated the system")
cmd_cast = "%s --queue" % SORCERY['cast']
@@ -501,12 +599,12 @@ def manage_spells(module):
if rc != 0:
module.fail_json(msg="failed to update the system")
module.exit_json(changed=True, msg="successfully updated the system")
return (True, "successfully updated the system")
else:
module.exit_json(changed=False, msg="the system is already up to date")
return (False, "the system is already up to date")
elif params['state'] == 'rebuild':
if module.check_mode:
module.exit_json(changed=True, msg="would have rebuilt the system")
return (True, "would have rebuilt the system")
cmd_sorcery = "%s rebuild" % SORCERY['sorcery']
@@ -515,7 +613,7 @@ def manage_spells(module):
if rc != 0:
module.fail_json(msg="failed to rebuild the system: " + stdout)
module.exit_json(changed=True, msg="successfully rebuilt the system")
return (True, "successfully rebuilt the system")
else:
module.fail_json(msg="unsupported operation on '*' name value")
else:
@@ -577,39 +675,40 @@ def manage_spells(module):
if cast_queue:
if module.check_mode:
module.exit_json(changed=True, msg="would have cast spell(s)")
return (True, "would have cast spell(s)")
cmd_cast = "%s -c %s" % (SORCERY['cast'], ' '.join(cast_queue))
rc, stdout, stderr = module.run_command(cmd_cast)
if rc != 0:
module.fail_json(msg="failed to cast spell(s): %s" + stdout)
module.fail_json(msg="failed to cast spell(s): " + stdout)
module.exit_json(changed=True, msg="successfully cast spell(s)")
return (True, "successfully cast spell(s)")
elif params['state'] != 'absent':
module.exit_json(changed=False, msg="spell(s) are already cast")
return (False, "spell(s) are already cast")
if dispel_queue:
if module.check_mode:
module.exit_json(changed=True, msg="would have dispelled spell(s)")
return (True, "would have dispelled spell(s)")
cmd_dispel = "%s %s" % (SORCERY['dispel'], ' '.join(dispel_queue))
rc, stdout, stderr = module.run_command(cmd_dispel)
if rc != 0:
module.fail_json(msg="failed to dispel spell(s): %s" + stdout)
module.fail_json(msg="failed to dispel spell(s): " + stdout)
module.exit_json(changed=True, msg="successfully dispelled spell(s)")
return (True, "successfully dispelled spell(s)")
else:
module.exit_json(changed=False, msg="spell(s) are already dispelled")
return (False, "spell(s) are already dispelled")
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(default=None, aliases=['spell'], type='list', elements='str'),
name=dict(default=None, aliases=['spell', 'grimoire'], type='list', elements='str'),
repository=dict(default=None, type='str'),
state=dict(default='present', choices=['present', 'latest',
'absent', 'cast', 'dispelled', 'rebuild']),
depends=dict(default=None),
@@ -638,14 +737,33 @@ def main():
elif params['state'] in ('absent', 'dispelled'):
params['state'] = 'absent'
changed = {
'sorcery': (False, NA),
'grimoires': (False, NA),
'codex': (False, NA),
'spells': (False, NA)
}
if params['update']:
update_sorcery(module)
changed['sorcery'] = update_sorcery(module)
if params['update_cache'] or params['state'] == 'latest':
update_codex(module)
if params['name'] and params['repository']:
changed['grimoires'] = manage_grimoires(module)
if params['name']:
manage_spells(module)
if params['update_cache']:
changed['codex'] = update_codex(module)
if params['name'] and not params['repository']:
changed['spells'] = manage_spells(module)
if any(x[0] for x in changed.values()):
state_msg = "state changed"
state_changed = True
else:
state_msg = "no change in state"
state_changed = False
module.exit_json(changed=state_changed, msg=state_msg + ": " + '; '.join(x[1] for x in changed.values()))
if __name__ == '__main__':

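The refactored `main()` above switches from per-function `exit_json()` calls to each `manage_*`/`update_*` function returning a `(changed, message)` tuple, with a single aggregation at the end. A minimal sketch of that aggregation (names `summarize` and the sample messages are illustrative, not part of the module):

```python
NA = "no action performed"

def summarize(changed):
    # changed maps subsystem name -> (bool changed, message), exactly like
    # the dict built in the diff above; overall change is the OR of all flags
    state_changed = any(flag for flag, _ in changed.values())
    state_msg = "state changed" if state_changed else "no change in state"
    return state_changed, state_msg + ": " + "; ".join(msg for _, msg in changed.values())

changed = {
    'sorcery': (False, NA),
    'grimoires': (True, "successfully added grimoire stable"),
    'codex': (False, NA),
    'spells': (False, NA),
}
print(summarize(changed))
```

This is why the intermediate functions now return rather than exit: only `main()` decides the final `changed`/`msg` pair.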
View File

@@ -26,6 +26,7 @@ def main():
arg_values=dict(type="dict", default={}),
check_mode_skip=dict(type="bool", default=False),
aa=dict(type="raw"),
tt=dict(),
),
supports_check_mode=True,
)

View File

@@ -121,3 +121,20 @@ cmd_echo_tests:
- test_result.rc == None
- test_result.out == None
- test_result.err == None
- name: set aa and tt value
arg_formats:
aa:
func: as_opt_eq_val
args: [--answer]
tt:
func: as_opt_val
args: [--tt-arg]
arg_order: 'aa tt'
arg_values:
tt: potatoes
aa: 11
assertions:
- test_result.rc == 0
- test_result.out == "-- --answer=11 --tt-arg potatoes\n"
- test_result.err == ""

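The new `cmd_echo` test case exercises two argument formatters, `as_opt_eq_val` and `as_opt_val`, whose behavior can be inferred from the asserted output `-- --answer=11 --tt-arg potatoes`. A hypothetical re-implementation (not the actual cmd_runner code) showing what each formatter renders:

```python
def as_opt_eq_val(opt):
    # renders a single "--opt=value" token, e.g. --answer=11
    return lambda value: ["%s=%s" % (opt, value)]

def as_opt_val(opt):
    # renders the option and its value as separate tokens, e.g. --tt-arg potatoes
    return lambda value: [opt, str(value)]

args = as_opt_eq_val("--answer")(11) + as_opt_val("--tt-arg")("potatoes")
print(" ".join(args))  # --answer=11 --tt-arg potatoes
```

`arg_order: 'aa tt'` in the test controls which formatter runs first, which is why `--answer=11` precedes `--tt-arg potatoes` in the expected output.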
View File

@@ -0,0 +1,11 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
azp/posix/3
skip/osx
skip/macos
skip/freebsd
skip/alpine
skip/rhel
destructive

View File

@@ -0,0 +1,9 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
---
- name: Remove ejabberd
ansible.builtin.package:
name: ejabberd
state: absent

View File

@@ -0,0 +1,7 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
dependencies:
- setup_pkg_mgr

View File

@@ -0,0 +1,106 @@
---
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Bail out if not supported
ansible.builtin.meta: end_play
when: ansible_distribution in ('Alpine', 'openSUSE Leap', 'CentOS', 'Fedora')
- name: Remove ejabberd
ansible.builtin.package:
name: ejabberd
state: absent
- name: Create user without ejabberdctl installed
community.general.ejabberd_user:
host: localhost
username: alice
password: pa$$w0rd
state: present
register: user_no_ejabberdctl
ignore_errors: true
- name: Install ejabberd
ansible.builtin.package:
name: ejabberd
state: present
notify: Remove ejabberd
- ansible.builtin.service:
name: ejabberd
state: started
- name: Create user alice (check)
community.general.ejabberd_user:
host: localhost
username: alice
password: pa$$w0rd
state: present
check_mode: true
register: user_alice_check
- name: Create user alice
community.general.ejabberd_user:
host: localhost
username: alice
password: pa$$w0rd
state: present
register: user_alice
- name: Create user alice (idempotency)
community.general.ejabberd_user:
host: localhost
username: alice
password: pa$$w0rd
state: present
register: user_alice_idempot
- name: Create user alice (change password)
community.general.ejabberd_user:
host: localhost
username: alice
password: different_pa$$w0rd
state: present
register: user_alice_chgpw
- name: Remove user alice (check)
community.general.ejabberd_user:
host: localhost
username: alice
state: absent
register: remove_alice_check
check_mode: true
- name: Remove user alice
community.general.ejabberd_user:
host: localhost
username: alice
state: absent
register: remove_alice
- name: Remove user alice (idempotency)
community.general.ejabberd_user:
host: localhost
username: alice
state: absent
register: remove_alice_idempot
- name: Assertions
ansible.builtin.assert:
that:
- user_no_ejabberdctl is failed
- "'Failed to find required executable' in user_no_ejabberdctl.msg"
- user_alice_check is changed
- user_alice is changed
- user_alice_idempot is not changed
- user_alice_chgpw is changed
- remove_alice_check is changed
- remove_alice is changed
- remove_alice_idempot is not changed

View File

@@ -9,3 +9,4 @@ skip/osx
skip/macos
skip/freebsd
needs/root
skip/rhel # FIXME: keytool seems to be broken on newer RHELs

View File

@@ -9,3 +9,4 @@ skip/osx
skip/macos
skip/freebsd
needs/root
skip/rhel # FIXME: keytool seems to be broken on newer RHELs

View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: GPL-3.0-or-later
- name: "Create files to use as disk devices"
command: "dd if=/dev/zero of={{ remote_tmp_dir }}/img{{ item }} bs=1M count=10"
command: "dd if=/dev/zero of={{ remote_tmp_dir }}/img{{ item }} bs=1M count=36"
with_sequence: 'count=4'
- name: "Show next free loop device"

View File

@@ -17,6 +17,16 @@
lv: "{{ item }}"
size: 2m
- name: Create snapshot volumes of origin logical volumes
loop:
- lv1
- lv2
lvol:
vg: testvg
lv: "{{ item }}"
snapshot: "{{ item }}_snap"
size: 50%ORIGIN
- name: Collect all lv active status in testvg
shell: vgs -olv_active --noheadings testvg | xargs -n1
register: initial_lv_status_result

View File

@@ -12,10 +12,10 @@
shell: vgs -v testvg -o pv_size --noheading --units b | xargs
register: cmd_result
- name: Assert the testvg size is 8388608B
- name: Assert the testvg size is 33554432B
assert:
that:
- "'8388608B' == cmd_result.stdout"
- "'33554432B' == cmd_result.stdout"
- name: Increases size in file
command: "dd if=/dev/zero bs=8MiB count=1 of={{ remote_tmp_dir }}/img1 conv=notrunc oflag=append"
@@ -38,10 +38,10 @@
shell: vgs -v testvg -o pv_size --noheading --units b | xargs
register: cmd_result
- name: Assert the testvg size is still 8388608B
- name: Assert the testvg size is still 33554432B
assert:
that:
- "'8388608B' == cmd_result.stdout"
- "'33554432B' == cmd_result.stdout"
- name: "Reruns lvg with pvresize:yes and check_mode:yes"
lvg:
@@ -60,10 +60,10 @@
shell: vgs -v testvg -o pv_size --noheading --units b | xargs
register: cmd_result
- name: Assert the testvg size is still 8388608B
- name: Assert the testvg size is still 33554432B
assert:
that:
- "'8388608B' == cmd_result.stdout"
- "'33554432B' == cmd_result.stdout"
- name: "Reruns lvg with pvresize:yes"
lvg:
@@ -75,7 +75,7 @@
shell: vgs -v testvg -o pv_size --noheading --units b | xargs
register: cmd_result
- name: Assert the testvg size is now 16777216B
- name: Assert the testvg size is now 41943040B
assert:
that:
- "'16777216B' == cmd_result.stdout"
- "'41943040B' == cmd_result.stdout"

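The updated size assertions follow from the backing images growing from 10 MiB to 36 MiB. The numbers are consistent with LVM rounding the usable device size down to whole 4 MiB physical extents after reserving roughly 1 MiB for metadata; a sketch of that arithmetic (the 1 MiB metadata reservation and 4 MiB extent size are assumed defaults, used only to show the numbers line up):

```python
MiB = 1024 * 1024
EXTENT = 4 * MiB

def pv_size(device_bytes, metadata=MiB):
    # usable size, rounded down to a whole number of physical extents
    return (device_bytes - metadata) // EXTENT * EXTENT

print(pv_size(36 * MiB))  # 33554432  (new asserted size)
print(pv_size(10 * MiB))  # 8388608   (old asserted size)
print(pv_size(44 * MiB))  # 41943040  (after appending 8 MiB to img1)
```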
View File

@@ -3,17 +3,29 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Remove python requests
ansible.builtin.pip:
name:
- requests
state: absent
- name: Stop docker service
become: true
ansible.builtin.service:
name: docker
state: stopped
- name: Remove Docker packages
package:
ansible.builtin.package:
name: "{{ docker_packages }}"
state: absent
- name: "D-Fedora : Remove repository"
file:
ansible.builtin.file:
path: /etc/yum.repos.d/docker-ce.repo
state: absent
- name: "D-Fedora : Remove dnf-plugins-core"
package:
ansible.builtin.package:
name: dnf-plugins-core
state: absent

View File

@@ -41,6 +41,7 @@
ansible.builtin.service:
name: docker
state: started
notify: Stop docker service
- name: Cheat on the docker socket permissions
become: true
@@ -53,3 +54,4 @@
name:
- requests
state: present
notify: Remove python requests

View File

@@ -77,8 +77,8 @@
- name: Verify shutdown delay is present in seconds in FreeBSD
assert:
that:
- '"-h +100s" in shutdown_result["shutdown_command"]'
- '"-h +0s" in shutdown_result_minus["shutdown_command"]'
- '"-p +100s" in shutdown_result["shutdown_command"]'
- '"-p +0s" in shutdown_result_minus["shutdown_command"]'
when: ansible_system == 'FreeBSD'
- name: Verify shutdown delay is present in seconds in Solaris, SunOS

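The assertion change mirrors PR #7102: on FreeBSD the shutdown module now uses `-p` (halt and power off) instead of `-h` (halt only, machine stays on). A toy sketch of the platform-to-flag mapping the updated assertions verify (`shutdown_flag` is an illustrative name, not the module's actual code):

```python
def shutdown_flag(system):
    # FreeBSD needs -p to actually power the machine off; elsewhere the
    # tests still expect the generic halt flag -h
    return {'FreeBSD': '-p'}.get(system, '-h')

print(shutdown_flag('FreeBSD'))  # -p
print(shutdown_flag('Linux'))    # -h
```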
View File

@@ -17,3 +17,5 @@
ansible.builtin.include_tasks: test_channel.yml
- name: Include test_dangerous
ansible.builtin.include_tasks: test_dangerous.yml
- name: Include test_3dash
ansible.builtin.include_tasks: test_3dash.yml

View File

@@ -0,0 +1,31 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Make sure packages are not installed (3 dashes)
community.general.snap:
name:
- bw
- shellcheck
state: absent
- name: Install package with 3 dashes in description (check)
community.general.snap:
name:
- bw
- shellcheck
state: present
check_mode: true
register: install_3dash_check
- name: Remove packages (3 dashes)
community.general.snap:
name:
- bw
- shellcheck
state: absent
- assert:
that:
- install_3dash_check is changed

View File

@@ -3,6 +3,7 @@ plugins/modules/consul.py validate-modules:undocumented-parameter
plugins/modules/consul_session.py validate-modules:parameter-state-invalid-choice
plugins/modules/gconftool2.py validate-modules:parameter-state-invalid-choice # state=get - removed in 8.0.0
plugins/modules/homectl.py import-3.11 # Uses deprecated stdlib library 'crypt'
plugins/modules/homectl.py import-3.12 # Uses deprecated stdlib library 'crypt'
plugins/modules/iptables_state.py validate-modules:undocumented-parameter # params _back and _timeout used by action plugin
plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/manageiq_policies.py validate-modules:parameter-state-invalid-choice # state=list - removed in 8.0.0
@@ -18,4 +19,5 @@ plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice
plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/udm_user.py import-3.11 # Uses deprecated stdlib library 'crypt'
plugins/modules/udm_user.py import-3.12 # Uses deprecated stdlib library 'crypt'
plugins/modules/xfconf.py validate-modules:return-syntax-error

View File

@@ -17,7 +17,7 @@ class DictDataLoader(DataLoader):
def __init__(self, file_mapping=None):
file_mapping = {} if file_mapping is None else file_mapping
assert type(file_mapping) == dict
assert isinstance(file_mapping, dict)
super(DictDataLoader, self).__init__()

View File

@@ -41,7 +41,7 @@ def patch_keycloak_api(get_user_by_username=None,
with patch.object(obj, 'get_user_groups', side_effect=get_user_groups) as mock_get_user_groups:
with patch.object(obj, 'delete_user', side_effect=delete_user) as mock_delete_user:
with patch.object(obj, 'update_user', side_effect=update_user) as mock_update_user:
yield mock_get_user_by_username, mock_create_user, mock_update_user_groups_membership,\
yield mock_get_user_by_username, mock_create_user, mock_update_user_groups_membership, \
mock_get_user_groups, mock_delete_user, mock_update_user

View File

@@ -48,8 +48,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'install', '--global', 'coffee-script'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'install', '--global', 'coffee-script'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_missing(self):
@@ -67,8 +67,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'install', '--global', 'coffee-script'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'install', '--global', 'coffee-script'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_version(self):
@@ -87,8 +87,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'install', '--global', 'coffee-script@2.5.1'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'install', '--global', 'coffee-script@2.5.1'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_version_update(self):
@@ -107,8 +107,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'install', '--global', 'coffee-script@2.5.1'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'install', '--global', 'coffee-script@2.5.1'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_version_exists(self):
@@ -127,7 +127,7 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertFalse(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_absent(self):
@@ -145,8 +145,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'uninstall', '--global', 'coffee-script'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'uninstall', '--global', 'coffee-script'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_absent_version(self):
@@ -165,8 +165,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'uninstall', '--global', 'coffee-script'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'uninstall', '--global', 'coffee-script'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_absent_version_different(self):
@@ -185,8 +185,8 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None),
call(['/testbin/npm', 'uninstall', '--global', 'coffee-script'], check_rc=True, cwd=None),
call(['/testbin/npm', 'list', '--json', '--long', '--global'], check_rc=False, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
call(['/testbin/npm', 'uninstall', '--global', 'coffee-script'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_package_json(self):
@@ -203,7 +203,7 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'install', '--global'], check_rc=True, cwd=None),
call(['/testbin/npm', 'install', '--global'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_package_json_production(self):
@@ -221,7 +221,7 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'install', '--global', '--production'], check_rc=True, cwd=None),
call(['/testbin/npm', 'install', '--global', '--production'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_package_json_ci(self):
@@ -239,7 +239,7 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'ci', '--global'], check_rc=True, cwd=None),
call(['/testbin/npm', 'ci', '--global'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])
def test_present_package_json_ci_production(self):
@@ -258,5 +258,5 @@ class NPMModuleTestCase(ModuleTestCase):
self.assertTrue(result['changed'])
self.module_main_command.assert_has_calls([
call(['/testbin/npm', 'ci', '--global', '--production'], check_rc=True, cwd=None),
call(['/testbin/npm', 'ci', '--global', '--production'], check_rc=True, cwd=None, environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}),
])

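Every updated assertion in this test file adds `environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'}` to the expected `run_command` calls: the npm module now forces a C locale so the CLI output it parses is locale-independent. A sketch of the call shape with a minimal stand-in for `AnsibleModule` (the `FakeModule` class is illustrative only):

```python
class FakeModule:
    """Minimal stand-in for AnsibleModule, recording run_command calls."""
    def __init__(self):
        self.calls = []

    def run_command(self, args, check_rc=False, cwd=None, environ_update=None):
        self.calls.append((args, check_rc, cwd, environ_update))
        return 0, "", ""

def npm_list(module):
    # force a C locale so npm's output parses the same on any system locale,
    # mirroring the environ_update now asserted by the tests above
    return module.run_command(
        ['/testbin/npm', 'list', '--json', '--long', '--global'],
        check_rc=False, cwd=None,
        environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'},
    )

m = FakeModule()
npm_list(m)
print(m.calls[0][3])  # {'LANGUAGE': 'C', 'LC_ALL': 'C'}
```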
View File

@@ -12,14 +12,17 @@ import sys
import pytest
proxmoxer = pytest.importorskip('proxmoxer')
proxmoxer = pytest.importorskip("proxmoxer")
mandatory_py_version = pytest.mark.skipif(
sys.version_info < (2, 7),
reason='The proxmoxer dependency requires python2.7 or higher'
reason="The proxmoxer dependency requires python2.7 or higher",
)
from ansible_collections.community.general.plugins.modules import proxmox_kvm
from ansible_collections.community.general.tests.unit.compat.mock import patch
from ansible_collections.community.general.tests.unit.compat.mock import (
patch,
DEFAULT,
)
from ansible_collections.community.general.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,
@@ -36,10 +39,19 @@ class TestProxmoxKvmModule(ModuleTestCase):
self.module = proxmox_kvm
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect"
)
self.connect_mock.start()
).start()
self.get_node_mock = patch.object(
proxmox_utils.ProxmoxAnsible, "get_node"
).start()
self.get_vm_mock = patch.object(proxmox_utils.ProxmoxAnsible, "get_vm").start()
self.create_vm_mock = patch.object(
proxmox_kvm.ProxmoxKvmAnsible, "create_vm"
).start()
def tearDown(self):
self.create_vm_mock.stop()
self.get_vm_mock.stop()
self.get_node_mock.stop()
self.connect_mock.stop()
super(TestProxmoxKvmModule, self).tearDown()
@@ -58,18 +70,16 @@ class TestProxmoxKvmModule(ModuleTestCase):
"node": "pve",
}
)
with patch.object(proxmox_utils.ProxmoxAnsible, "get_vm") as get_vm_mock:
get_vm_mock.return_value = [{"vmid": "100"}]
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
self.get_vm_mock.return_value = [{"vmid": "100"}]
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert get_vm_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is False
assert result["msg"] == "VM with vmid <100> already exists"
assert self.get_vm_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is False
assert result["msg"] == "VM with vmid <100> already exists"
@patch.object(proxmox_kvm.ProxmoxKvmAnsible, "create_vm")
def test_vm_created_when_vmid_not_exist_but_name_already_exist(self, create_vm_mock):
def test_vm_created_when_vmid_not_exist_but_name_already_exist(self):
set_module_args(
{
"api_host": "host",
@@ -80,23 +90,79 @@ class TestProxmoxKvmModule(ModuleTestCase):
"node": "pve",
}
)
with patch.object(proxmox_utils.ProxmoxAnsible, "get_vm") as get_vm_mock:
with patch.object(proxmox_utils.ProxmoxAnsible, "get_node") as get_node_mock:
get_vm_mock.return_value = None
get_node_mock.return_value = {"node": "pve", "status": "online"}
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
self.get_vm_mock.return_value = None
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert get_vm_mock.call_count == 1
assert get_node_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "VM existing.vm.local with vmid 100 deployed"
assert self.get_vm_mock.call_count == 1
assert self.get_node_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "VM existing.vm.local with vmid 100 deployed"
def test_vm_not_created_when_name_already_exist_and_vmid_not_set(self):
set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"name": "existing.vm.local",
"node": "pve",
}
)
with patch.object(proxmox_utils.ProxmoxAnsible, "get_vmid") as get_vmid_mock:
get_vmid_mock.return_value = {
"vmid": 100,
"name": "existing.vm.local",
}
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert get_vmid_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is False
def test_vm_created_when_name_doesnt_exist_and_vmid_not_set(self):
set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"name": "existing.vm.local",
"node": "pve",
}
)
self.get_vm_mock.return_value = None
with patch.multiple(
proxmox_utils.ProxmoxAnsible, get_vmid=DEFAULT, get_nextvmid=DEFAULT
) as utils_mock:
utils_mock["get_vmid"].return_value = None
utils_mock["get_nextvmid"].return_value = 101
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert utils_mock["get_vmid"].call_count == 1
assert utils_mock["get_nextvmid"].call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "VM existing.vm.local with vmid 101 deployed"
def test_parse_mac(self):
assert proxmox_kvm.parse_mac("virtio=00:11:22:AA:BB:CC,bridge=vmbr0,firewall=1") == "00:11:22:AA:BB:CC"
assert (
proxmox_kvm.parse_mac("virtio=00:11:22:AA:BB:CC,bridge=vmbr0,firewall=1")
== "00:11:22:AA:BB:CC"
)
def test_parse_dev(self):
assert proxmox_kvm.parse_dev("local-lvm:vm-1000-disk-0,format=qcow2") == "local-lvm:vm-1000-disk-0"
assert proxmox_kvm.parse_dev("local-lvm:vm-101-disk-1,size=8G") == "local-lvm:vm-101-disk-1"
assert proxmox_kvm.parse_dev("local-zfs:vm-1001-disk-0") == "local-zfs:vm-1001-disk-0"
assert (
proxmox_kvm.parse_dev("local-lvm:vm-1000-disk-0,format=qcow2")
== "local-lvm:vm-1000-disk-0"
)
assert (
proxmox_kvm.parse_dev("local-lvm:vm-101-disk-1,size=8G")
== "local-lvm:vm-101-disk-1"
)
assert (
proxmox_kvm.parse_dev("local-zfs:vm-1001-disk-0")
== "local-zfs:vm-1001-disk-0"
)

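The proxmox_kvm test refactor moves per-test `with patch.object(...)` blocks into `setUp`, using the `patch.object(...).start()` idiom and undoing the patches in `tearDown`. A self-contained sketch of that pattern with `unittest.mock` (the `Target`/`connect` names are illustrative, not the real proxmox classes):

```python
from unittest import mock

class Target:
    def connect(self):
        raise RuntimeError("no network access in unit tests")

def demo():
    # start() installs the patch and returns the MagicMock
    connect_mock = mock.patch.object(Target, "connect").start()
    try:
        Target().connect()                  # hits the mock, not the real method
        assert connect_mock.call_count == 1
    finally:
        mock.patch.stopall()                # undo every patch installed via start()

demo()
```

`patch.stopall()` is the safety net for start()-style patching: unlike the context-manager form, patches installed with `start()` stay active until explicitly stopped.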
View File

@@ -28,107 +28,404 @@ from ansible_collections.community.general.tests.unit.plugins.modules.utils impo
)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
NODE = "pve"
LXC_VMS = [
NODE1 = "pve"
NODE2 = "pve2"
RAW_CLUSTER_OUTPUT = [
{
"uptime": 47,
"maxswap": 536870912,
"diskread": 0,
"name": "test-lxc.home.arpa",
"status": "running",
"vmid": "102",
"type": "lxc",
"swap": 0,
"cpus": 2,
"mem": 29134848,
"maxdisk": 10737418240,
"diskwrite": 0,
"netin": 35729,
"netout": 446,
"pid": 1412780,
"maxmem": 536870912,
"disk": 307625984,
"cpu": 0,
},
{
"netin": 0,
"netout": 0,
"cpu": 0,
"maxmem": 536870912,
"cpu": 0.174069059487628,
"disk": 0,
"name": "test1-lxc.home.arpa",
"diskread": 0,
"status": "stopped",
"vmid": "103",
"type": "lxc",
"swap": 0,
"uptime": 0,
"maxswap": 536870912,
"diskread": 6656,
"diskwrite": 0,
"cpus": 2,
"mem": 0,
"maxdisk": 10737418240,
},
]
QEMU_VMS = [
{
"vmid": 101,
"diskread": 0,
"status": "stopped",
"name": "test1",
"uptime": 0,
"diskwrite": 0,
"cpus": 1,
"mem": 0,
"maxdisk": 0,
"netout": 0,
"netin": 0,
"cpu": 0,
"maxmem": 536870912,
"disk": 0,
},
{
"netout": 4113,
"netin": 22738,
"pid": 1947197,
"maxmem": 4294967296,
"disk": 0,
"cpu": 0.0795350949559682,
"uptime": 41,
"vmid": 100,
"status": "running",
"diskread": 0,
"name": "pxe.home.arpa",
"cpus": 1,
"mem": 35315629,
"id": "qemu/100",
"maxcpu": 1,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35304543,
"name": "pxe.home.arpa",
"netin": 416956,
"netout": 17330,
"node": NODE1,
"status": "running",
"template": 0,
"type": "qemu",
"uptime": 669,
"vmid": 100,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/101",
"maxcpu": 1,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "qemu",
"uptime": 0,
"vmid": 101,
},
{
"cpu": 0,
"disk": 352190464,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/102",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 28192768,
"name": "test-lxc.home.arpa",
"netin": 102757,
"netout": 446,
"node": NODE1,
"status": "running",
"template": 0,
"type": "lxc",
"uptime": 161,
"vmid": 102,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/103",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 0,
"name": "test1-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "lxc",
"uptime": 0,
"vmid": 103,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/104",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 0,
"name": "test-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "lxc",
"uptime": 0,
"vmid": 104,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/105",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 0,
"name": "",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "lxc",
"uptime": 0,
"vmid": 105,
},
]
RAW_LXC_OUTPUT = [
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test1-lxc.home.arpa",
"netin": 0,
"netout": 0,
"status": "stopped",
"swap": 0,
"type": "lxc",
"uptime": 0,
"vmid": "103",
},
{
"cpu": 0,
"cpus": 2,
"disk": 352190464,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 28192768,
"name": "test-lxc.home.arpa",
"netin": 102757,
"netout": 446,
"pid": 4076752,
"status": "running",
"swap": 0,
"type": "lxc",
"uptime": 161,
"vmid": "102",
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test-lxc.home.arpa",
"netin": 0,
"netout": 0,
"status": "stopped",
"swap": 0,
"type": "lxc",
"uptime": 0,
"vmid": "104",
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "",
"netin": 0,
"netout": 0,
"status": "stopped",
"swap": 0,
"type": "lxc",
"uptime": 0,
"vmid": "105",
},
]
RAW_QEMU_OUTPUT = [
{
"cpu": 0,
"cpus": 1,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"status": "stopped",
"uptime": 0,
"vmid": 101,
},
{
"cpu": 0.174069059487628,
"cpus": 1,
"disk": 0,
"diskread": 6656,
"diskwrite": 0,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35304543,
"name": "pxe.home.arpa",
"netin": 416956,
"netout": 17330,
"pid": 4076688,
"status": "running",
"uptime": 669,
"vmid": 100,
},
]
EXPECTED_VMS_OUTPUT = [
{
"cpu": 0.174069059487628,
"cpus": 1,
"disk": 0,
"diskread": 6656,
"diskwrite": 0,
"id": "qemu/100",
"maxcpu": 1,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35304543,
"name": "pxe.home.arpa",
"netin": 416956,
"netout": 17330,
"node": NODE1,
"pid": 4076688,
"status": "running",
"template": False,
"type": "qemu",
"uptime": 669,
"vmid": 100,
},
{
"cpu": 0,
"cpus": 1,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/101",
"maxcpu": 1,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": False,
"type": "qemu",
"uptime": 0,
"vmid": 101,
},
{
"cpu": 0,
"cpus": 2,
"disk": 352190464,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/102",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 28192768,
"name": "test-lxc.home.arpa",
"netin": 102757,
"netout": 446,
"node": NODE1,
"pid": 4076752,
"status": "running",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 161,
"vmid": 102,
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/103",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test1-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 0,
"vmid": 103,
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/104",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 0,
"vmid": 104,
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/105",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 0,
"vmid": 105,
},
]
def get_module_args(type="all", vmid=None, name=None):
def get_module_args(type="all", node=None, vmid=None, name=None):
return {
"api_host": "host",
"api_user": "user",
"api_password": "password",
"node": NODE,
"node": node,
"type": type,
"vmid": vmid,
"name": name,
}
def normalized_expected_vms_output(vms):
result = [vm.copy() for vm in vms]
for vm in result:
if "type" not in vm:
# response for QEMU VMs doesn't contain type field, adding it
vm["type"] = "qemu"
vm["vmid"] = int(vm["vmid"])
return result
class TestProxmoxVmInfoModule(ModuleTestCase):
def setUp(self):
super(TestProxmoxVmInfoModule, self).setUp()
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect",
).start()
self.connect_mock.return_value.nodes.return_value.lxc.return_value.get.return_value = (
RAW_LXC_OUTPUT
)
self.connect_mock.return_value.nodes.return_value.qemu.return_value.get.return_value = (
RAW_QEMU_OUTPUT
)
self.connect_mock.return_value.cluster.return_value.resources.return_value.get.return_value = (
RAW_CLUSTER_OUTPUT
)
self.connect_mock.return_value.nodes.get.return_value = [{"node": NODE1}]
def tearDown(self):
self.connect_mock.stop()
self.module.main()
result = exc_info.value.args[0]
assert result["msg"] == "missing required arguments: api_host, api_user, node"
assert result["msg"] == "missing required arguments: api_host, api_user"
def test_get_lxc_vms_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
set_module_args(get_module_args(type="lxc"))
self.module.main()
result = exc_info.value.args[0]
assert result["changed"] is False
assert result["proxmox_vms"] == LXC_VMS
assert result["proxmox_vms"] == [
vm for vm in EXPECTED_VMS_OUTPUT if vm["type"] == "lxc"
]
def test_get_qemu_vms_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
set_module_args(get_module_args(type="qemu"))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert result["proxmox_vms"] == [
vm for vm in EXPECTED_VMS_OUTPUT if vm["type"] == "qemu"
]
def test_get_all_vms_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
set_module_args(get_module_args())
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert result["proxmox_vms"] == EXPECTED_VMS_OUTPUT
def test_vmid_is_converted_to_int(self):
with pytest.raises(AnsibleExitJson) as exc_info:
set_module_args(get_module_args(type="lxc"))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert isinstance(result["proxmox_vms"][0]["vmid"], int)
def test_get_specific_lxc_vm_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 102
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["vmid"] == vmid and vm["type"] == "lxc"
]
set_module_args(get_module_args(type="lxc", vmid=vmid))
self.module.main()
vmid = 100
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["vmid"] == vmid and vm["type"] == "qemu"
]
set_module_args(get_module_args(type="qemu", vmid=vmid))
self.module.main()
def test_get_specific_vm_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 100
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["vmid"] == vmid]
set_module_args(get_module_args(type="all", vmid=vmid))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_specific_vm_information_by_using_name(self):
name = "test-lxc.home.arpa"
name = "test1-lxc.home.arpa"
self.connect_mock.return_value.cluster.resources.get.return_value = [
{"name": name, "vmid": "102"}
{"name": name, "vmid": "103"}
]
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["name"] == name]
set_module_args(get_module_args(type="all", name=name))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_multiple_vms_with_the_same_name(self):
name = "test-lxc.home.arpa"
self.connect_mock.return_value.cluster.resources.get.return_value = [
{"name": name, "vmid": "102"},
{"name": name, "vmid": "104"},
]
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["name"] == name]
set_module_args(get_module_args(type="all", name=name))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 2
def test_get_vm_with_an_empty_name(self):
name = ""
self.connect_mock.return_value.cluster.resources.get.return_value = [
{"name": name, "vmid": "105"},
]
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["name"] == name]
set_module_args(get_module_args(type="all", name=name))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_all_lxc_vms_from_specific_node(self):
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["node"] == NODE1 and vm["type"] == "lxc"
]
set_module_args(get_module_args(type="lxc", node=NODE1))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_all_qemu_vms_from_specific_node(self):
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["node"] == NODE1 and vm["type"] == "qemu"
]
set_module_args(get_module_args(type="qemu", node=NODE1))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_all_vms_from_specific_node(self):
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["node"] == NODE1]
set_module_args(get_module_args(node=NODE1))
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 2
def test_module_returns_empty_list_when_vm_does_not_exist(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 200
set_module_args(get_module_args(type="all", vmid=vmid))
self.module.main()
result = exc_info.value.args[0]
assert result["msg"] == "VM with vmid 200 doesn't exist on node pve"
assert result["proxmox_vms"] == []
def test_module_fail_when_qemu_request_fails(self):
self.connect_mock.return_value.nodes.return_value.qemu.return_value.get.side_effect = IOError(
result = exc_info.value.args[0]
assert "Failed to retrieve LXC VMs information:" in result["msg"]
def test_module_fail_when_cluster_resources_request_fails(self):
self.connect_mock.return_value.cluster.return_value.resources.return_value.get.side_effect = IOError(
"Some mocked connection error."
)
with pytest.raises(AnsibleFailJson) as exc_info:
set_module_args(get_module_args())
self.module.main()
result = exc_info.value.args[0]
assert (
"Failed to retrieve VMs information from cluster resources:"
in result["msg"]
)
def test_module_fail_when_node_does_not_exist(self):
self.connect_mock.return_value.nodes.get.return_value = []
with pytest.raises(AnsibleFailJson) as exc_info:
set_module_args(get_module_args(type="all"))
set_module_args(get_module_args(type="all", node=NODE1))
self.module.main()
result = exc_info.value.args[0]