Compare commits


43 Commits
3.0.1 ... 3.1.0

Author SHA1 Message Date
Felix Fontein
658e95c5ca Release 3.1.0. 2021-05-18 13:09:47 +02:00
patchback[bot]
26c2876f50 pacman: add 'executable' option to use an alternative pacman binary (#2524) (#2554)
* Add 'bin' option to use an alternative pacman binary

* Add changelog entry

* Incorporate recommendations

* Update plugins/modules/packaging/os/pacman.py

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c4624d3ad8)

Co-authored-by: Andre Lehmann <aisberg@posteo.de>
2021-05-18 13:08:06 +02:00
patchback[bot]
62043463f3 iptables_state: fix per-table initialization command (#2525) (#2553)
* refactor initialize_from_null_state()

* Use a more neutral command (iptables -L) to load per-table needed modules.

* fix 'FutureWarning: Possible nested set at position ...' (re.sub)

* fix pylints (module + action plugin)

* unsubscriptable-object
* superfluous-parens
* consider-using-in
* unused-variable
* unused-import
* no-else-break

* cleanup other internal module_args if they exist

* add changelog fragment

* Apply suggestions from code review (changelog fragment)

Co-authored-by: Felix Fontein <felix@fontein.de>

* Remove useless plugin type in changelog fragment

Co-authored-by: Amin Vakil <info@aminvakil.com>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Amin Vakil <info@aminvakil.com>
(cherry picked from commit 2c1ab2d384)

Co-authored-by: quidame <quidame@poivron.org>
2021-05-18 12:27:31 +02:00
Felix Fontein
f1dab6d4a7 Prepare 3.1.0 release. 2021-05-18 11:57:35 +02:00
patchback[bot]
d43764da79 filesystem: revamp module (#2472) (#2550)
* revamp filesystem module to prepare next steps

* pass all commands to module.run_command() as lists
* refactor grow() and grow_cmd() to not need to override them so much
* refactor all existing get_fs_size() overrides to raise a ValueError if
  not able to parse command output and return an integer.
* override MKFS_FORCE_FLAGS the same way for all fstypes that require it
* improve documentation of limitations of the module regarding FreeBSD
* fix indentation in DOCUMENTATION
* add/update function/method docstrings
* fix pylint hints

filesystem: refactor integration tests

* Include *reiserfs* and *swap* in tests.
* Fix reiserfs related code and tests accordingly.
* Replace "other fs" (unhandled by this module), from *swap* to *minix*
  (both mkswap and mkfs.minix being provided by util-linux).
* Replace *dd* commands by *filesize* dedicated module.
* Use FQCNs and name the tasks.
* Update main tests conditionals.

* add a changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* declare variables as lists when lists are needed

* fix construction without useless conversion

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f6db0745fc)

Co-authored-by: quidame <quidame@poivron.org>
2021-05-18 10:49:12 +02:00
patchback[bot]
de2feb2567 ModuleHelper - cmd params now taken from self.vars instead of self.module.params (#2517) (#2549)
* cmd params now taken from self.vars instead of self.module.params

* added changelog fragment

* Update changelogs/fragments/2517-cmd-params-from-vars.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit d24fc92466)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-18 06:36:02 +02:00
patchback[bot]
6e56bae0f3 influxdb_user: allow creation of first user with auth enabled (#2364) (#2368) (#2548)
* influxdb_user: allow creation of first user with auth enabled (#2364)

* handle potential exceptions while parsing influxdb client error

* fix changelog

Co-authored-by: Felix Fontein <felix@fontein.de>

* influxdb_user: use generic exceptions to be compatible with python 2.7

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit b89eb87ad6)

Co-authored-by: Xabier Napal <xabiernapal@pm.me>
2021-05-17 19:18:33 +00:00
patchback[bot]
1f7047e725 ModuleHelper - better mechanism for customizing "changed" behaviour (#2514) (#2546)
* better mechanism for customizing "changed" behaviour

* don't drink and code: silly mistake from late at night

* added changelog fragment

(cherry picked from commit 2a376642dd)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-17 20:50:43 +02:00
patchback[bot]
b2e4485567 java_keystore: pass in secret to keytool via stdin (#2526) (#2545)
* java_keystore: pass in secret to keytool via stdin

* add changelog fragment

(cherry picked from commit 2b1eff2783)

Co-authored-by: quidame <quidame@poivron.org>
2021-05-17 20:24:09 +02:00
patchback[bot]
b78254fe24 zfs_delegate_admin: drop choices from permissions (#2540) (#2544)
instead of whitelisting some subset of known existing permissions, just
allow any string to be used as permissions. this way, any permission
supported by the underlying zfs commands can be used, e.g. 'bookmark',
'load-key', 'change-key' and all property permissions, which were
missing from the choices list.

(cherry picked from commit dc0a56141f)

Co-authored-by: Lauri Tirkkonen <lauri@hacktheplanet.fi>
2021-05-17 18:02:13 +00:00
patchback[bot]
38aa0ec8ad Add option missing to passwordstore lookup (#2500) (#2541)
Add ability to ignore an error on a missing pass file, to allow processing the
output further via other filters (mainly the default filter) without
updating the pass file itself.

It also contains the option to create the pass file, like the option
create=true does.

Finally, it also allows issuing only a warning if the pass file is not
found.

(cherry picked from commit 350380ba8c)

Co-authored-by: Jan Baier <7996094+baierjan@users.noreply.github.com>
2021-05-17 14:14:44 +02:00
patchback[bot]
42f28048a8 yum_versionlock: disable fedora34 integration test (#2536) (#2538)
* Disable yum_versionlock integration test on Fedora 34

* Remove --assumeyes and add a comment regarding this

* Update update task name

(cherry picked from commit da7e4e1dc2)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-05-17 10:36:36 +02:00
patchback[bot]
b699aaff7b Use --assumeyes with explicit yum call. (#2533) (#2535)
(cherry picked from commit 2cc848fe1a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-17 08:27:25 +02:00
patchback[bot]
af85b6c203 fix error when cache is disabled (#2518) (#2532)
(cherry picked from commit 448b8cbcda)

Co-authored-by: Dennis Israelsson <dennis.israelsson@gmail.com>
2021-05-17 08:09:58 +02:00
patchback[bot]
ec2e7cad3e Update influxdb_user.py Fixed Multiple No Privileges (#2499) (#2530)
* Update influxdb_user.py

Fixed Multiple No Privileges

* Update influxdb_user.py

Fixed line spaces

* Update influxdb_user.py

Fixed whitespace

* Create 2499-influxdb_user-fix-multiple-no-privileges.yml

Added changelog

(cherry picked from commit ea200c9d8c)

Co-authored-by: sgalea87 <43749726+sgalea87@users.noreply.github.com>
2021-05-17 08:09:48 +02:00
patchback[bot]
7753fa4219 1085 updating the hcl whitelist to include all supported options (#2495) (#2528)
* 1085 updating the hcl whitelist to include all supported options

* Update changelogs/fragments/1085-consul-acl-hcl-whitelist-update.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Dillon Gilmore <dgilmor@rei.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 5b77515308)

Co-authored-by: iridian <442359+iridian-ks@users.noreply.github.com>
2021-05-17 08:09:21 +02:00
patchback[bot]
69ea487005 Cleanup connections plugins (#2520) (#2522)
* minor refactors

* minor refactors in plugins/connection/saltstack.py

* minor refactors in plugins/connection/qubes.py

* minor refactor in plugins/connection/lxc.py

* minor refactors in plugins/connection/chroot.py

* minor refactors in plugins/connection/funcd.py

* minor refactors in plugins/connection/iocage.py

* minor refactors in plugins/connection/jail.py

* added changelog fragment

(cherry picked from commit c8f402806f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-16 13:45:26 +02:00
patchback[bot]
048f15fe68 java_keystore: New ssl_backend option for cryptography (#2485) (#2513)
* Adding cryptography as a backend for OpenSSL operations

* Updating unit tests and adding changelog fragment

* Allowing private key password option when using unprotected key

* Incorporating suggestions from initial review

* Centralizing module exit path

(cherry picked from commit a385cbb11d)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-05-14 22:47:26 +02:00
patchback[bot]
aa1aa1d540 random_pet: Random pet name generator (#2479) (#2509)
A lookup plugin to generate random pet names based
upon criteria.

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 5d0a7f40f2)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-05-14 16:25:40 +02:00
patchback[bot]
e78517ca93 proxmox_nic: set mtu on interface even if it's not virtio (#2505) (#2507)
* Set mtu on interface whatsoever

* add changelog fragment

* Revert "add changelog fragment"

This reverts commit 5f2f1e7feb.

(cherry picked from commit e2dfd42dd4)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-05-14 16:24:47 +02:00
patchback[bot]
bf185573a6 gitlab_user: add expires_at option (#2450) (#2506)
* gitlab_user: add expires_at option

* Add changelog

* Add integration test

* Add expires_at to addSshKeyToUser function

* password is required if state is set to present

* Check expires_at will not be added to a present ssh key

* add documentation about present ssh key

* add expires_at to unit tests

* Improve documentation

Co-authored-by: Felix Fontein <felix@fontein.de>

* Only pass expires_at to api when it is not None

* Emphasize on SSH public key

* Apply felixfontein suggestion

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 054eb90ae5)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-05-14 10:34:47 +02:00
patchback[bot]
145435cdd9 Deprecate nios content (#2458) (#2504)
* Deprecate nios content.

* Make 2.9's ansible-test happy.

* Add module_utils deprecation.

(cherry picked from commit ee9770cff7)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-14 09:55:46 +02:00
patchback[bot]
6013c77c2b Add groupby_as_dict filter (#2323) (#2503)
* Add groupby_as_dict filter.

* Test all error cases.

(cherry picked from commit 384655e15c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-14 09:47:53 +02:00
patchback[bot]
ad5482f63d Add proxmox_nic module (#2449) (#2502)
* Add proxmox_nic module

Add proxmox_nic module to manage NICs on Qemu(KVM) VMs in a Proxmox VE
cluster.
Update proxmox integration tests and add tests for proxmox_nic module.

This partially solves https://github.com/ansible-collections/community.general/issues/1964#issuecomment-790499397
and allows for adding/updating/deleting network interface cards after
creating/cloning a VM.

The proxmox_nic module will keep MAC-addresses the same when updating a
NIC. It only changes when explicitly setting a MAC-address.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add check_mode and implement review comments

- check_mode added
- some documentation updates
- when MTU is set, check if the model is virtio, else fail
- trunks can now be provided as list of ints instead of vlanid[;vlanid...]

* Make returns on update_nic and delete_nic more readable

Co-authored-by: Felix Fontein <felix@fontein.de>

* Increase readability on update_nic and delete_nic

* Implement check in get_vmid

- get_vmid will now fail when multiple vmid's are returned as proxmox
  doesn't guarantee uniqueness
- remove an unused import
- fix a typo in an error message

* Add some error checking to get_vmid

- get_vmid will now return the error message when proxmoxer fails
- get_vmid will return the vmid directly instead of a list of one
- Some minor documentation updates

* Warn instead of fail when setting mtu on unsupported nic

- When setting the MTU on an unsupported NIC model (virtio is the only
  supported model) this module will now print a warning instead of
  failing.
- Some minor documentation updates.

* Take advantage of proxmox_auth_argument_spec

Make use of proxmox_auth_argument_spec from plugins/module_utils/proxmox.py
This provides some extra environment fallbacks.

* Add blank line to conform with pep8

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 23dda56913)

Co-authored-by: Kogelvis <github@ar-ix.net>
2021-05-14 09:47:39 +02:00
patchback[bot]
f5594aefd5 influxdb_retention_policy - add state argument to module spec (#2383) (#2385) (#2497)
* influxdb_retention_policy: add state option to module argument spec

* influxdb_retention_policy: simplify duration parsing logic (suggested in #2284)

* add changelog

* fix documentation and changelog

* add constants for duration and sgduration validations

* restyle ansible module spec

Co-authored-by: Felix Fontein <felix@fontein.de>

* improve changelog

Co-authored-by: Felix Fontein <felix@fontein.de>

* set changed result in check mode for state absent

* remove required flag in optional module arguments

* influxdb_retention_policy: improve examples readability

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 83a0c32269)

Co-authored-by: Xabier Napal <xabiernapal@pm.me>
2021-05-12 18:19:03 +02:00
patchback[bot]
ab5b379b30 linode - docs/validation changes + minor refactorings (#2410) (#2498)
* multiple changes:

- documentation fixes
- minor refactorings

* added param deprecation note to the documentation

* added changelog fragment

* Update changelogs/fragments/2410-linode-improvements.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/2410-linode-improvements.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/linode/linode.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 265d034e31)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-12 18:17:40 +02:00
patchback[bot]
1c5e44c649 nmcli: Remove dead code, 'options' never contains keys from 'param_alias' (#2417) (#2494)
* nmcli: Remove dead code, 'options' never contains keys from 'param_alias'

* Update changelogs/fragments/2417-nmcli_remove_dead_code.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit b9fa9116c1)

Co-authored-by: spike77453 <spike77453@users.noreply.github.com>
2021-05-11 20:22:13 +02:00
patchback[bot]
23da67cc72 Add dependent lookup plugin (#2164) (#2490)
* Add dependent lookup plugin.

* Use correct YAML booleans.

* Began complete rewrite.

* Only match start of error msg.

* Improve tests.

* Work around old Jinja2 versions.

* Fix metadata.

* Fix filter name.

(cherry picked from commit eea4f45965)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-11 20:14:30 +02:00
patchback[bot]
4032dd6b08 discord.py: Add new module for discord notifications (#2398) (#2493)
* first push: add discord module and test for notifications

* fix the yaml docs and edit the result output

* add link

* fix link

* fix docs and remove required=False in argument spec

* add elements specd and more info about embeds

* called str...

* elements for embeds oc.

* fix typo's in description and set checkmode to false

* edit docs and module return

* support checkmode with get method

* fix unit test

* handle exception and add new example for embeds

* quote line

* fix typos

* fix yaml

(cherry picked from commit 0912e8cc7a)

Co-authored-by: CWollinger <CWollinger@web.de>
2021-05-11 20:05:59 +02:00
patchback[bot]
4cb6f39a80 module_helper.py Breakdown (#2393) (#2492)
* break down of module_helper into smaller pieces, keeping compatibility

* removed abc.ABC (py3 only) from code + fixed reference to vars.py

* multiple changes:

- mh.base - moved more functionalities to ModuleHelperBase
- mh.mixins.(cmd, state) - CmdMixin no longer inherits from ModuleHelperBase
- mh.mixins.deps - DependencyMixin now overrides run() method to test dependency
- mh.mixins.vars - created class VarsMixin
- mh.module_helper - moved functions to base class, added VarsMixin
- module_helper - importing AnsibleModule as well, for backward compatibility in test

* removed unnecessary __all__

* make pylint happy

* PR adjustments + bot config + changelog frag

* Update plugins/module_utils/mh/module_helper.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/module_utils/mh/module_helper.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit d22dd5056e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-11 20:05:42 +02:00
patchback[bot]
3539957bac modified redfish_config and idrac_redfish_config to skip incorrect attributes (#2334) (#2491)
* modified redfish_config and idrac_redfish_config to skip incorrect attributes

Signed-off-by: Trevor Squillario Trevor_Squillario@Dell.com

* modified redfish_utils.py and idrac_redfish_config.py to return empty warning message

* modified redfish_config.py and idrac_redfish_config.py to use module.warn()

* updated changelog fragment for pr 2334

(cherry picked from commit 9d46ccf1b2)

Co-authored-by: TrevorSquillario <72882537+TrevorSquillario@users.noreply.github.com>
2021-05-11 20:05:24 +02:00
Felix Fontein
e05769d4bf Deprecate vendored ipaddress copy. (#2459) 2021-05-11 19:27:46 +02:00
Felix Fontein
19c03cff96 Revert "Revert "spotinst_aws_elastigroup - fixed elements for many lists (#2355) (#2363)"" (#2428)
This reverts commit 5b15e4089a.
2021-05-11 19:27:36 +02:00
patchback[bot]
703660c81d Run unit tests also with Python 3.10. (#2486) (#2488)
ci_complete

(cherry picked from commit 624eb7171e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-11 08:27:35 +02:00
Felix Fontein
fd32af1ac3 Next expected release will be 3.1.0. 2021-05-11 08:15:11 +02:00
Felix Fontein
80fbcf2f98 Release 3.0.2. 2021-05-11 07:08:11 +02:00
patchback[bot]
a722e038cc Avoid incorrectly marking zfs tasks as changed (#2454) (#2484)
* Avoid incorrectly marking zfs tasks as changed

The zfs module will incorrectly mark certain tasks as having been
changed. For example, if a dataset has a quota of "1G" and the user
changes it to "1024M", the actual quota value has not changed, but since
the module is doing a simple string comparison between "1G" and "1024M",
it marks the step as "changed".

Instead of trying to handle all the corner cases of zfs (another example
is when the zpool "altroot" property has been set), this change simply
compares the output of "zfs get" from before and after "zfs set" is
called.
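The false-positive the commit message describes can be shown in a few lines of Python. This is a minimal sketch of the idea, not the zfs module's actual code; `parse_size()` is an invented helper for illustration:

```python
# Why a plain string comparison misreports ZFS size properties as changed,
# and why normalizing (or snapshotting the property before/after "zfs set",
# as the fix does) avoids it.

UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_size(value):
    """Normalize a ZFS size string like '1G' or '1024M' to bytes."""
    if value[-1] in UNITS:
        return int(float(value[:-1]) * UNITS[value[-1]])
    return int(value)

current, requested = "1G", "1024M"

# Naive approach: string comparison reports a change that never happened.
naive_changed = current != requested                     # True

# Comparing the normalized values (equivalently, comparing the property
# output before and after "zfs set") reports no change.
changed = parse_size(current) != parse_size(requested)   # False
```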

* update changelog format

* Update changelogs/fragments/2454-detect_zfs_changed.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* add note about check_mode

* Update plugins/modules/storage/zfs/zfs.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/storage/zfs/zfs.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* clarify check mode qualifications

* rephrase to avoid hypothetical

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8e7aff00b5)

Co-authored-by: sam-lunt <samuel.j.lunt@gmail.com>
2021-05-10 18:17:03 +02:00
Felix Fontein
19c8d2164d Prepare 3.0.2 release. 2021-05-10 18:00:08 +02:00
patchback[bot]
d4656ffca2 Clarify Windows (non-)support. (#2476) (#2482)
(cherry picked from commit 2e58dfe52a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-10 17:14:34 +02:00
patchback[bot]
b49607f12d fix stackpath_compute validate_config (#2448) (#2475)
* fix stackpath_compute validate_config

get the length of the client_id / client_secret to validate the inventory configuration

* Add changelog fragment.

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 4cdff8654a)

Co-authored-by: vbarba <victor.barba@gmail.com>
2021-05-09 22:46:17 +02:00
patchback[bot]
af0ce4284f Small Documentation Example Of Cask Leveraging (#2462) (#2470)
* Small Documentation Example Of Cask Leveraging

- Just a lil' demo showing that we can utilize homebrew/cask/foo syntax
for a given package name to grab the associated cask package

Resolves: patch/sml-doc-example-update

* Slight Documentation Example Edit

- adjusting documentation example to provide better info surrounding installing
a given formula from brew via cask

Resolves: patch/sml-doc-example-update

* Small Edits To Make PEP8 Happy

- format code with autopep8 in vs code

Resolves: patch/sml-doc-example-update

* Only Making Small PEP8 Change

- reverting previous mass PEP8 format, focus on trimming whitespace on
doc example entry

Resolves: patch/sml-doc-example-update

* Remove Trailing Whitespace PEP8

- removed trailing whitespace on doc example chunk

Resolves: patch/sml-doc-example-update
(cherry picked from commit 7386326258)

Co-authored-by: Mike Russell <michael.j.russell.email@gmail.com>
2021-05-08 12:18:28 +02:00
patchback[bot]
f5f862617a Add more plugin authors to BOTMETA. (#2451) (#2453)
(cherry picked from commit 188a4eeb0c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-05 07:52:09 +02:00
Felix Fontein
a1a4ba4337 Next expected release is 3.0.2 next week. 2021-05-04 13:21:05 +02:00
108 changed files with 4078 additions and 1397 deletions


@@ -124,6 +124,7 @@ stages:
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- stage: Units_2_11
displayName: Units 2.11
dependsOn: []

.github/BOTMETA.yml

@@ -88,6 +88,8 @@ files:
maintainers: $team_linode
labels: cloud linode
keywords: linode dynamic inventory script
$inventories/lxd.py:
maintainers: conloos
$inventories/proxmox.py:
maintainers: $team_virt ilijamt
$inventories/scaleway.py:
@@ -140,6 +142,9 @@ files:
$module_utils/memset.py:
maintainers: glitchcrab
labels: cloud memset
$module_utils/mh/:
maintainers: russoz
labels: module_helper
$module_utils/module_helper.py:
maintainers: russoz
labels: module_helper
@@ -373,6 +378,8 @@ files:
maintainers: $team_keycloak
$modules/identity/keycloak/keycloak_group.py:
maintainers: adamgoossens
$modules/identity/keycloak/keycloak_realm.py:
maintainers: kris2kris
$modules/identity/onepassword_info.py:
maintainers: Rylon
$modules/identity/opendj/opendj_backendprop.py:


@@ -6,6 +6,109 @@ Community General Release Notes
This changelog describes changes after version 2.0.0.
v3.1.0
======
Release Summary
---------------
Regular feature and bugfix release.
Minor Changes
-------------
- ModuleHelper module utils - improved mechanism for customizing the calculation of ``changed`` (https://github.com/ansible-collections/community.general/pull/2514).
- chroot connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- cmd (Module Helper) module utils - ``CmdMixin`` now pulls the value for ``run_command()`` params from ``self.vars``, as opposed to previously retrieving those from ``self.module.params`` (https://github.com/ansible-collections/community.general/pull/2517).
- filesystem - cleanup and revamp module, tests and doc. Pass all commands to ``module.run_command()`` as lists. Move the device-vs-mountpoint logic to ``grow()`` method. Give to all ``get_fs_size()`` the same logic and error handling. (https://github.com/ansible-collections/community.general/pull/2472).
- funcd connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- gitlab_user - add ``expires_at`` option (https://github.com/ansible-collections/community.general/issues/2325).
- idrac_redfish_config - modified set_manager_attributes function to skip invalid attribute instead of returning. Added skipped attributes to output. Modified module exit to add warning variable (https://github.com/ansible-collections/community.general/issues/1995).
- influxdb_retention_policy - add ``state`` parameter with allowed values ``present`` and ``absent`` to support deletion of existing retention policies (https://github.com/ansible-collections/community.general/issues/2383).
- influxdb_retention_policy - simplify duration logic parsing (https://github.com/ansible-collections/community.general/pull/2385).
- iocage connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- jail connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- java_keystore - added ``ssl_backend`` parameter for using the cryptography library instead of the OpenSSL binary (https://github.com/ansible-collections/community.general/pull/2485).
- java_keystore - replace envvar by stdin to pass secret to ``keytool`` (https://github.com/ansible-collections/community.general/pull/2526).
- linode - added proper traceback when failing due to exceptions (https://github.com/ansible-collections/community.general/pull/2410).
- linode - parameter ``additional_disks`` is now validated as a list of dictionaries (https://github.com/ansible-collections/community.general/pull/2410).
- lxc connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- module_helper module utils - break down of the long file into smaller pieces (https://github.com/ansible-collections/community.general/pull/2393).
- nmcli - remove dead code, ``options`` never contains keys from ``param_alias`` (https://github.com/ansible-collections/community.general/pull/2417).
- pacman - add ``executable`` option to use an alternative pacman binary (https://github.com/ansible-collections/community.general/issues/2524).
- passwordstore lookup - add option ``missing`` to choose what to do if the password file is missing (https://github.com/ansible-collections/community.general/pull/2500).
- qubes connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- redfish_config - modified module exit to add warning variable (https://github.com/ansible-collections/community.general/issues/1995).
- redfish_utils module utils - modified set_bios_attributes function to skip invalid attribute instead of returning. Added skipped attributes to output (https://github.com/ansible-collections/community.general/issues/1995).
- saltstack connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- spotinst_aws_elastigroup - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2355).
- zfs_delegate_admin - drop choices from permissions, allowing any permission supported by the underlying zfs commands (https://github.com/ansible-collections/community.general/pull/2540).
- zone connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
Deprecated Features
-------------------
- The nios, nios_next_ip, nios_next_network lookup plugins, the nios documentation fragment, and the nios_host_record, nios_ptr_record, nios_mx_record, nios_fixed_address, nios_zone, nios_member, nios_a_record, nios_aaaa_record, nios_network, nios_dns_view, nios_txt_record, nios_naptr_record, nios_srv_record, nios_cname_record, nios_nsgroup, and nios_network_view modules have been deprecated and will be removed from community.general 5.0.0. Please install the `infoblox.nios_modules <https://galaxy.ansible.com/infoblox/nios_modules>`_ collection instead and use its plugins and modules (https://github.com/ansible-collections/community.general/pull/2458).
- The vendored copy of ``ipaddress`` will be removed in community.general 4.0.0. Please switch to ``ipaddress`` from the Python 3 standard library, or `from pypi <https://pypi.org/project/ipaddress/>`_, if your code relies on the vendored version of ``ipaddress`` (https://github.com/ansible-collections/community.general/pull/2459).
- linode - parameter ``backupsenabled`` is deprecated and will be removed in community.general 5.0.0 (https://github.com/ansible-collections/community.general/pull/2410).
- lxd inventory plugin - the plugin will require ``ipaddress`` installed when used with Python 2 from community.general 4.0.0 on. ``ipaddress`` is part of the Python 3 standard library, but can be installed for Python 2 from pypi (https://github.com/ansible-collections/community.general/pull/2459).
- scaleway_security_group_rule - the module will require ``ipaddress`` installed when used with Python 2 from community.general 4.0.0 on. ``ipaddress`` is part of the Python 3 standard library, but can be installed for Python 2 from pypi (https://github.com/ansible-collections/community.general/pull/2459).
Bugfixes
--------
- consul_acl - update the hcl allowlist to include all supported options (https://github.com/ansible-collections/community.general/pull/2495).
- filesystem - repair ``reiserfs`` fstype support after adding it to integration tests (https://github.com/ansible-collections/community.general/pull/2472).
- influxdb_user - allow creation of admin users when InfluxDB authentication is enabled but no other user exists on the database. In this scenario, InfluxDB 1.x allows only ``CREATE USER`` queries and rejects any other query (https://github.com/ansible-collections/community.general/issues/2364).
- influxdb_user - fix bug where an influxdb user has no privileges for 2 or more databases (https://github.com/ansible-collections/community.general/pull/2499).
- iptables_state - fix a 'FutureWarning' in a regex and do some basic code clean up (https://github.com/ansible-collections/community.general/pull/2525).
- iptables_state - fix initialization of iptables from null state when addressing more than one table (https://github.com/ansible-collections/community.general/issues/2523).
- nmap inventory plugin - fix local variable error when cache is disabled (https://github.com/ansible-collections/community.general/issues/2512).
New Plugins
-----------
Filter
~~~~~~
- groupby_as_dict - Transform a sequence of dictionaries to a dictionary where the dictionaries are indexed by an attribute
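The behaviour of the new filter can be sketched in plain Python. This is an illustration of the semantics described above, not the plugin's source; the function name merely mirrors the filter:

```python
def groupby_as_dict(sequence, attribute):
    """Index a sequence of dictionaries by one of their attributes.

    Duplicate keys are rejected, since the result could not represent
    both entries (the real filter also errors out in that case).
    """
    result = {}
    for entry in sequence:
        key = entry[attribute]
        if key in result:
            raise ValueError("duplicate key: %r" % key)
        result[key] = entry
    return result

users = [
    {"name": "alice", "uid": 1001},
    {"name": "bob", "uid": 1002},
]
by_name = groupby_as_dict(users, "name")
# by_name["bob"]["uid"] == 1002
```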
Lookup
~~~~~~
- dependent - Composes a list with nested elements of other lists or dicts which can depend on previous loop variables
- random_pet - Generates random pet names
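The core idea of the dependent lookup, where later lists are evaluated in the context of earlier loop variables, can be sketched in plain Python. The inputs and the `dependent()` helper below are invented for illustration and are not the plugin's implementation:

```python
def dependent(terms):
    """Compose nested loops where each term's value list may be a
    function of the items chosen so far. Yields one dict of all loop
    variables per combination."""
    def expand(index, context):
        if index == len(terms):
            yield dict(context)
            return
        name, source = terms[index]
        values = source(context) if callable(source) else source
        for value in values:
            context[name] = value
            yield from expand(index + 1, context)
        context.pop(name, None)
    yield from expand(0, {})

combos = list(dependent([
    ("host", ["web1", "web2"]),
    # the second list depends on the previous loop variable
    ("port", lambda ctx: [80] if ctx["host"] == "web1" else [80, 443]),
]))
# combos == [{'host': 'web1', 'port': 80},
#            {'host': 'web2', 'port': 80},
#            {'host': 'web2', 'port': 443}]
```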
New Modules
-----------
Cloud
~~~~~
misc
^^^^
- proxmox_nic - Management of a NIC of a Qemu(KVM) VM in a Proxmox VE cluster.
Notification
~~~~~~~~~~~~
- discord - Send Discord messages
v3.0.2
======
Release Summary
---------------
Bugfix release for the first Ansible 4.0.0 release candidate.
Bugfixes
--------
- stackpath_compute inventory script - fix broken validation checks for client ID and client secret (https://github.com/ansible-collections/community.general/pull/2448).
- zfs - certain ZFS properties, especially sizes, would lead to a task being falsely marked as "changed" even when no actual change was made (https://github.com/ansible-collections/community.general/issues/975, https://github.com/ansible-collections/community.general/pull/2454).
v3.0.1
======


@@ -7,6 +7,8 @@ This repo contains the `community.general` Ansible Collection. The collection in
You can find [documentation for this collection on the Ansible docs site](https://docs.ansible.com/ansible/latest/collections/community/general/).
Please note that this collection does **not** support Windows targets. Only connection plugins included in this collection might support Windows targets, and will explicitly mention that in their documentation if they do so.
## Tested with Ansible
Tested with the current Ansible 2.9, ansible-base 2.10 and ansible-core 2.11 releases and the current development version of ansible-core. Ansible versions before 2.9.10 are not supported.


@@ -1010,3 +1010,149 @@ releases:
- 2435-one_vm-fix_missing_keys.yml
- 3.0.1.yml
release_date: '2021-05-04'
3.0.2:
changes:
bugfixes:
- stackpath_compute inventory script - fix broken validation checks for client
ID and client secret (https://github.com/ansible-collections/community.general/pull/2448).
- zfs - certain ZFS properties, especially sizes, would lead to a task being
falsely marked as "changed" even when no actual change was made (https://github.com/ansible-collections/community.general/issues/975,
https://github.com/ansible-collections/community.general/pull/2454).
release_summary: Bugfix release for the first Ansible 4.0.0 release candidate.
fragments:
- 2448-stackpath_compute-fix.yml
- 2454-detect_zfs_changed.yml
- 3.0.2.yml
release_date: '2021-05-11'
3.1.0:
changes:
bugfixes:
- consul_acl - update the hcl allowlist to include all supported options (https://github.com/ansible-collections/community.general/pull/2495).
- filesystem - repair ``reiserfs`` fstype support after adding it to integration
tests (https://github.com/ansible-collections/community.general/pull/2472).
- influxdb_user - allow creation of admin users when InfluxDB authentication
is enabled but no other user exists on the database. In this scenario, InfluxDB
1.x allows only ``CREATE USER`` queries and rejects any other query (https://github.com/ansible-collections/community.general/issues/2364).
- influxdb_user - fix bug where an influxdb user has no privileges for 2 or
more databases (https://github.com/ansible-collections/community.general/pull/2499).
- iptables_state - fix a 'FutureWarning' in a regex and do some basic code clean
up (https://github.com/ansible-collections/community.general/pull/2525).
- iptables_state - fix initialization of iptables from null state when addressing
more than one table (https://github.com/ansible-collections/community.general/issues/2523).
- nmap inventory plugin - fix local variable error when cache is disabled (https://github.com/ansible-collections/community.general/issues/2512).
deprecated_features:
- The nios, nios_next_ip, nios_next_network lookup plugins, the nios documentation
fragment, and the nios_host_record, nios_ptr_record, nios_mx_record, nios_fixed_address,
nios_zone, nios_member, nios_a_record, nios_aaaa_record, nios_network, nios_dns_view,
nios_txt_record, nios_naptr_record, nios_srv_record, nios_cname_record, nios_nsgroup,
and nios_network_view module have been deprecated and will be removed from
community.general 5.0.0. Please install the `infoblox.nios_modules <https://galaxy.ansible.com/infoblox/nios_modules>`_
collection instead and use its plugins and modules (https://github.com/ansible-collections/community.general/pull/2458).
- The vendored copy of ``ipaddress`` will be removed in community.general 4.0.0.
Please switch to ``ipaddress`` from the Python 3 standard library, or `from
pypi <https://pypi.org/project/ipaddress/>`_, if your code relies on the vendored
version of ``ipaddress`` (https://github.com/ansible-collections/community.general/pull/2459).
- linode - parameter ``backupsenabled`` is deprecated and will be removed in
community.general 5.0.0 (https://github.com/ansible-collections/community.general/pull/2410).
- lxd inventory plugin - the plugin will require ``ipaddress`` installed when
used with Python 2 from community.general 4.0.0 on. ``ipaddress`` is part
of the Python 3 standard library, but can be installed for Python 2 from pypi
(https://github.com/ansible-collections/community.general/pull/2459).
- scaleway_security_group_rule - the module will require ``ipaddress`` installed
when used with Python 2 from community.general 4.0.0 on. ``ipaddress`` is
part of the Python 3 standard library, but can be installed for Python 2 from
pypi (https://github.com/ansible-collections/community.general/pull/2459).
minor_changes:
- ModuleHelper module utils - improved mechanism for customizing the calculation
of ``changed`` (https://github.com/ansible-collections/community.general/pull/2514).
- chroot connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- cmd (Module Helper) module utils - ``CmdMixin`` now pulls the value for ``run_command()``
params from ``self.vars``, as opposed to previously retrieving those from
``self.module.params`` (https://github.com/ansible-collections/community.general/pull/2517).
- filesystem - cleanup and revamp module, tests and doc. Pass all commands to
``module.run_command()`` as lists. Move the device-vs-mountpoint logic to
``grow()`` method. Give to all ``get_fs_size()`` the same logic and error
handling. (https://github.com/ansible-collections/community.general/pull/2472).
- funcd connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- gitlab_user - add ``expires_at`` option (https://github.com/ansible-collections/community.general/issues/2325).
- idrac_redfish_config - modified set_manager_attributes function to skip invalid
attribute instead of returning. Added skipped attributes to output. Modified
module exit to add warning variable (https://github.com/ansible-collections/community.general/issues/1995).
- influxdb_retention_policy - add ``state`` parameter with allowed values ``present``
and ``absent`` to support deletion of existing retention policies (https://github.com/ansible-collections/community.general/issues/2383).
- influxdb_retention_policy - simplify duration logic parsing (https://github.com/ansible-collections/community.general/pull/2385).
- iocage connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- jail connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- java_keystore - added ``ssl_backend`` parameter for using the cryptography
library instead of the OpenSSL binary (https://github.com/ansible-collections/community.general/pull/2485).
- java_keystore - replace envvar by stdin to pass secret to ``keytool`` (https://github.com/ansible-collections/community.general/pull/2526).
- linode - added proper traceback when failing due to exceptions (https://github.com/ansible-collections/community.general/pull/2410).
- linode - parameter ``additional_disks`` is now validated as a list of dictionaries
(https://github.com/ansible-collections/community.general/pull/2410).
- lxc connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- module_helper module utils - break down of the long file into smaller pieces
(https://github.com/ansible-collections/community.general/pull/2393).
- nmcli - remove dead code, ``options`` never contains keys from ``param_alias``
(https://github.com/ansible-collections/community.general/pull/2417).
- pacman - add ``executable`` option to use an alternative pacman binary (https://github.com/ansible-collections/community.general/issues/2524).
- passwordstore lookup - add option ``missing`` to choose what to do if the
password file is missing (https://github.com/ansible-collections/community.general/pull/2500).
- qubes connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- redfish_config - modified module exit to add warning variable (https://github.com/ansible-collections/community.general/issues/1995).
- redfish_utils module utils - modified set_bios_attributes function to skip
invalid attribute instead of returning. Added skipped attributes to output
(https://github.com/ansible-collections/community.general/issues/1995).
- saltstack connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
- spotinst_aws_elastigroup - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2355).
- zfs_delegate_admin - drop choices from permissions, allowing any permission
supported by the underlying zfs commands (https://github.com/ansible-collections/community.general/pull/2540).
- zone connection - minor refactor to make lints and IDEs happy (https://github.com/ansible-collections/community.general/pull/2520).
release_summary: Regular feature and bugfix release.
fragments:
- 1085-consul-acl-hcl-whitelist-update.yml
- 2323-groupby_as_dict-filter.yml
- 2334-redfish_config-skip-incorrect-attributes.yml
- 2355-spotinst_aws_elastigroup-list-elements.yml
- 2364-influxdb_user-first_user.yml
- 2383-influxdb_retention_policy-add-state-option.yml
- 2393-module_helper-breakdown.yml
- 2410-linode-improvements.yml
- 2417-nmcli_remove_dead_code.yml
- 2450-gitlab_user-add_expires_at_option.yaml
- 2472_filesystem_module_revamp.yml
- 2485-java_keystore-ssl_backend-parameter.yml
- 2499-influxdb_user-fix-multiple-no-privileges.yml
- 2500-passwordstore-add_option_ignore_missing.yml
- 2514-mh-improved-changed.yml
- 2517-cmd-params-from-vars.yml
- 2518-nmap-fix-cache-disabled.yml
- 2520-connection-refactors.yml
- 2524-pacman_add_bin_option.yml
- 2525-iptables_state-fix-initialization-command.yml
- 2526-java_keystore-password-via-stdin.yml
- 2540-zfs-delegate-choices.yml
- 3.1.0.yml
- deprecate-ipaddress.yml
- nios-deprecation.yml
modules:
- description: Send Discord messages
name: discord
namespace: notification
- description: Management of a NIC of a Qemu(KVM) VM in a Proxmox VE cluster.
name: proxmox_nic
namespace: cloud.misc
plugins:
filter:
- description: Transform a sequence of dictionaries to a dictionary where the
dictionaries are indexed by an attribute
name: groupby_as_dict
namespace: null
lookup:
- description: Composes a list with nested elements of other lists or dicts
which can depend on previous loop variables
name: dependent
namespace: null
- description: Generates random pet names
name: random_pet
namespace: null
release_date: '2021-05-18'


@@ -1,6 +1,6 @@
namespace: community
name: general
version: 3.0.1
version: 3.1.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -37,6 +37,18 @@ plugin_routing:
redirect: community.google.gcp_storage_file
hashi_vault:
redirect: community.hashi_vault.hashi_vault
nios:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios lookup plugin has been deprecated. Please use infoblox.nios_modules.nios_lookup instead.
nios_next_ip:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_next_ip lookup plugin has been deprecated. Please use infoblox.nios_modules.nios_next_ip instead.
nios_next_network:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_next_network lookup plugin has been deprecated. Please use infoblox.nios_modules.nios_next_network instead.
modules:
ali_instance_facts:
tombstone:
@@ -283,6 +295,70 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use netapp.ontap.na_ontap_info instead.
nios_a_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_a_record module has been deprecated. Please use infoblox.nios_modules.nios_a_record instead.
nios_aaaa_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_aaaa_record module has been deprecated. Please use infoblox.nios_modules.nios_aaaa_record instead.
nios_cname_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_cname_record module has been deprecated. Please use infoblox.nios_modules.nios_cname_record instead.
nios_dns_view:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_dns_view module has been deprecated. Please use infoblox.nios_modules.nios_dns_view instead.
nios_fixed_address:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_fixed_address module has been deprecated. Please use infoblox.nios_modules.nios_fixed_address instead.
nios_host_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_host_record module has been deprecated. Please use infoblox.nios_modules.nios_host_record instead.
nios_member:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_member module has been deprecated. Please use infoblox.nios_modules.nios_member instead.
nios_mx_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_mx_record module has been deprecated. Please use infoblox.nios_modules.nios_mx_record instead.
nios_naptr_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_naptr_record module has been deprecated. Please use infoblox.nios_modules.nios_naptr_record instead.
nios_network:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_network module has been deprecated. Please use infoblox.nios_modules.nios_network instead.
nios_network_view:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_network_view module has been deprecated. Please use infoblox.nios_modules.nios_network_view instead.
nios_nsgroup:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_nsgroup module has been deprecated. Please use infoblox.nios_modules.nios_nsgroup instead.
nios_ptr_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_ptr_record module has been deprecated. Please use infoblox.nios_modules.nios_ptr_record instead.
nios_srv_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_srv_record module has been deprecated. Please use infoblox.nios_modules.nios_srv_record instead.
nios_txt_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_txt_record module has been deprecated. Please use infoblox.nios_modules.nios_txt_record instead.
nios_zone:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_zone module has been deprecated. Please use infoblox.nios_modules.nios_zone instead.
nginx_status_facts:
tombstone:
removal_version: 3.0.0
@@ -568,11 +644,13 @@ plugin_routing:
redirect: community.kubevirt.kubevirt_common_options
kubevirt_vm_options:
redirect: community.kubevirt.kubevirt_vm_options
nios:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios document fragment has been deprecated. Please use infoblox.nios_modules.nios instead.
postgresql:
redirect: community.postgresql.postgresql
module_utils:
remote_management.dellemc.dellemc_idrac:
redirect: dellemc.openmanage.dellemc_idrac
docker.common:
redirect: community.docker.common
docker.swarm:
@@ -587,6 +665,12 @@ plugin_routing:
redirect: community.hrobot.robot
kubevirt:
redirect: community.kubevirt.kubevirt
net_tools.nios.api:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.net_tools.nios.api module_utils has been deprecated. Please use infoblox.nios_modules.api instead.
remote_management.dellemc.dellemc_idrac:
redirect: dellemc.openmanage.dellemc_idrac
remote_management.dellemc.ome:
redirect: dellemc.openmanage.ome
postgresql:


@@ -7,7 +7,7 @@ __metaclass__ = type
import time
from ansible.plugins.action import ActionBase
from ansible.errors import AnsibleError, AnsibleActionFail, AnsibleConnectionFailure
from ansible.errors import AnsibleActionFail, AnsibleConnectionFailure
from ansible.utils.vars import merge_hash
from ansible.utils.display import Display
@@ -46,7 +46,7 @@ class ActionModule(ActionBase):
the async wrapper results (those with the ansible_job_id key).
'''
# At least one iteration is required, even if timeout is 0.
for i in range(max(1, timeout)):
for dummy in range(max(1, timeout)):
async_result = self._execute_module(
module_name='ansible.builtin.async_status',
module_args=module_args,
@@ -76,7 +76,6 @@ class ActionModule(ActionBase):
task_async = self._task.async_val
check_mode = self._play_context.check_mode
max_timeout = self._connection._play_context.timeout
module_name = self._task.action
module_args = self._task.args
if module_args.get('state', None) == 'restored':
@@ -133,7 +132,7 @@ class ActionModule(ActionBase):
# The module is aware to not process the main iptables-restore
# command before finding (and deleting) the 'starter' cookie on
# the host, so the previous query will not reach ssh timeout.
garbage = self._low_level_execute_command(starter_cmd, sudoable=self.DEFAULT_SUDOABLE)
dummy = self._low_level_execute_command(starter_cmd, sudoable=self.DEFAULT_SUDOABLE)
# As the main command is not yet executed on the target, here
# 'finished' means 'failed before main command be executed'.
@@ -143,7 +142,7 @@ class ActionModule(ActionBase):
except AttributeError:
pass
for x in range(max_timeout):
for dummy in range(max_timeout):
time.sleep(1)
remaining_time -= 1
# - AnsibleConnectionFailure covers rejected requests (i.e.
@@ -151,7 +150,7 @@ class ActionModule(ActionBase):
# - ansible_timeout is able to cover dropped requests (due
# to a rule or policy DROP) if not lower than async_val.
try:
garbage = self._low_level_execute_command(confirm_cmd, sudoable=self.DEFAULT_SUDOABLE)
dummy = self._low_level_execute_command(confirm_cmd, sudoable=self.DEFAULT_SUDOABLE)
break
except AnsibleConnectionFailure:
continue
@@ -164,12 +163,12 @@ class ActionModule(ActionBase):
del result[key]
if result.get('invocation', {}).get('module_args'):
if '_timeout' in result['invocation']['module_args']:
del result['invocation']['module_args']['_back']
del result['invocation']['module_args']['_timeout']
for key in ('_back', '_timeout', '_async_dir', 'jid'):
if result['invocation']['module_args'].get(key):
del result['invocation']['module_args'][key]
async_status_args['mode'] = 'cleanup'
garbage = self._execute_module(
dummy = self._execute_module(
module_name='ansible.builtin.async_status',
module_args=async_status_args,
task_vars=task_vars,


@@ -62,7 +62,7 @@ display = Display()
class Connection(ConnectionBase):
''' Local chroot based connections '''
""" Local chroot based connections """
transport = 'community.general.chroot'
has_pipelining = True
@@ -95,7 +95,7 @@ class Connection(ConnectionBase):
raise AnsibleError("%s does not look like a chrootable dir (/bin/sh missing)" % self.chroot)
def _connect(self):
''' connect to the chroot '''
""" connect to the chroot """
if os.path.isabs(self.get_option('chroot_exe')):
self.chroot_cmd = self.get_option('chroot_exe')
else:
@@ -110,17 +110,17 @@ class Connection(ConnectionBase):
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
''' run a command on the chroot. This is only needed for implementing
""" run a command on the chroot. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
'''
"""
executable = self.get_option('executable')
local_cmd = [self.chroot_cmd, self.chroot, executable, '-c', cmd]
display.vvv("EXEC %s" % (local_cmd), host=self.chroot)
display.vvv("EXEC %s" % local_cmd, host=self.chroot)
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
@@ -128,16 +128,17 @@ class Connection(ConnectionBase):
return p
def exec_command(self, cmd, in_data=None, sudoable=False):
''' run a command on the chroot '''
""" run a command on the chroot """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
stdout, stderr = p.communicate(in_data)
return (p.returncode, stdout, stderr)
return p.returncode, stdout, stderr
def _prefix_login_path(self, remote_path):
''' Make sure that we put files into a standard path
@staticmethod
def _prefix_login_path(remote_path):
""" Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
@@ -145,13 +146,13 @@ class Connection(ConnectionBase):
This also happens to be the former default.
Can revisit using $HOME instead if it's a problem
'''
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
''' transfer a file from local to chroot '''
""" transfer a file from local to chroot """
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.chroot)
@@ -177,7 +178,7 @@ class Connection(ConnectionBase):
raise AnsibleError("file or module does not exist at: %s" % in_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from chroot to local '''
""" fetch a file from chroot to local """
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.chroot)
@@ -201,6 +202,6 @@ class Connection(ConnectionBase):
raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
def close(self):
''' terminate the connection; nothing to do here '''
""" terminate the connection; nothing to do here """
super(Connection, self).close()
self._connected = False


@@ -44,7 +44,7 @@ display = Display()
class Connection(ConnectionBase):
''' Func-based connections '''
""" Func-based connections """
has_pipelining = False
@@ -53,6 +53,7 @@ class Connection(ConnectionBase):
self.host = host
# port is unused, this go on func
self.port = port
self.client = None
def connect(self, port=None):
if not HAVE_FUNC:
@@ -62,31 +63,32 @@ class Connection(ConnectionBase):
return self
def exec_command(self, cmd, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the remote minion '''
""" run a command on the remote minion """
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
# totally ignores privilege escalation
display.vvv("EXEC %s" % (cmd), host=self.host)
display.vvv("EXEC %s" % cmd, host=self.host)
p = self.client.command.run(cmd)[self.host]
return (p[0], p[1], p[2])
return p[0], p[1], p[2]
def _normalize_path(self, path, prefix):
@staticmethod
def _normalize_path(path, prefix):
if not path.startswith(os.path.sep):
path = os.path.join(os.path.sep, path)
normpath = os.path.normpath(path)
return os.path.join(prefix, normpath[1:])
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
""" transfer a file from local to remote """
out_path = self._normalize_path(out_path, '/')
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.host)
self.client.local.copyfile.send(in_path, out_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
""" fetch a file from remote to local """
in_path = self._normalize_path(in_path, '/')
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.host)
@@ -99,5 +101,5 @@ class Connection(ConnectionBase):
shutil.rmtree(tmpdir)
def close(self):
''' terminate the connection; nothing to do here '''
""" terminate the connection; nothing to do here """
pass


@@ -40,7 +40,7 @@ display = Display()
class Connection(Jail):
''' Local iocage based connections '''
""" Local iocage based connections """
transport = 'community.general.iocage'


@@ -35,7 +35,6 @@ import os
import os.path
import subprocess
import traceback
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.module_utils.six.moves import shlex_quote
@@ -47,7 +46,7 @@ display = Display()
class Connection(ConnectionBase):
''' Local BSD Jail based connections '''
""" Local BSD Jail based connections """
modified_jailname_key = 'conn_jail_name'
@@ -90,20 +89,20 @@ class Connection(ConnectionBase):
return to_text(stdout, errors='surrogate_or_strict').split()
def _connect(self):
''' connect to the jail; nothing to do here '''
""" connect to the jail; nothing to do here """
super(Connection, self)._connect()
if not self._connected:
display.vvv(u"ESTABLISH JAIL CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self.jail)
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
''' run a command on the jail. This is only needed for implementing
""" run a command on the jail. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
'''
"""
local_cmd = [self.jexec_cmd]
set_env = ''
@@ -123,16 +122,17 @@ class Connection(ConnectionBase):
return p
def exec_command(self, cmd, in_data=None, sudoable=False):
''' run a command on the jail '''
""" run a command on the jail """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
stdout, stderr = p.communicate(in_data)
return (p.returncode, stdout, stderr)
return p.returncode, stdout, stderr
def _prefix_login_path(self, remote_path):
''' Make sure that we put files into a standard path
@staticmethod
def _prefix_login_path(remote_path):
""" Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
@@ -140,13 +140,13 @@ class Connection(ConnectionBase):
This also happens to be the former default.
Can revisit using $HOME instead if it's a problem
'''
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
''' transfer a file from local to jail '''
""" transfer a file from local to jail """
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.jail)
@@ -172,7 +172,7 @@ class Connection(ConnectionBase):
raise AnsibleError("file or module does not exist at: %s" % in_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from jail to local '''
""" fetch a file from jail to local """
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.jail)
@@ -196,6 +196,6 @@ class Connection(ConnectionBase):
raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, to_native(stdout), to_native(stderr)))
def close(self):
''' terminate the connection; nothing to do here '''
""" terminate the connection; nothing to do here """
super(Connection, self).close()
self._connected = False


@@ -42,14 +42,13 @@ try:
except ImportError:
pass
from ansible import constants as C
from ansible import errors
from ansible.module_utils._text import to_bytes, to_native
from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
''' Local lxc based connections '''
""" Local lxc based connections """
transport = 'community.general.lxc'
has_pipelining = True
@@ -62,7 +61,7 @@ class Connection(ConnectionBase):
self.container = None
def _connect(self):
''' connect to the lxc; nothing to do here '''
""" connect to the lxc; nothing to do here """
super(Connection, self)._connect()
if not HAS_LIBLXC:
@@ -77,7 +76,8 @@ class Connection(ConnectionBase):
if self.container.state == "STOPPED":
raise errors.AnsibleError("%s is not running" % self.container_name)
def _communicate(self, pid, in_data, stdin, stdout, stderr):
@staticmethod
def _communicate(pid, in_data, stdin, stdout, stderr):
buf = {stdout: [], stderr: []}
read_fds = [stdout, stderr]
if in_data:
@@ -111,7 +111,7 @@ class Connection(ConnectionBase):
return fd
def exec_command(self, cmd, in_data=None, sudoable=False):
''' run a command on the chroot '''
""" run a command on the chroot """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
# python2-lxc needs bytes. python3-lxc needs text.


@@ -37,15 +37,9 @@ DOCUMENTATION = '''
# - name: hosts
'''
import shlex
import shutil
import os
import base64
import subprocess
import ansible.constants as C
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils._text import to_bytes
from ansible.plugins.connection import ConnectionBase, ensure_connect
from ansible.errors import AnsibleConnectionFailure
from ansible.utils.display import Display


@@ -16,14 +16,11 @@ DOCUMENTATION = '''
- This allows you to use existing Saltstack infrastructure to connect to targets.
'''
import re
import os
import pty
import codecs
import subprocess
import base64
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six.moves import cPickle
from ansible import errors
from ansible.plugins.connection import ConnectionBase
HAVE_SALTSTACK = False
try:
@@ -32,13 +29,9 @@ try:
except ImportError:
pass
import os
from ansible import errors
from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
''' Salt-based connections '''
""" Salt-based connections """
has_pipelining = False
# while the name of the product is salt, naming that module salt cause
@@ -58,29 +51,30 @@ class Connection(ConnectionBase):
return self
def exec_command(self, cmd, sudoable=False, in_data=None):
''' run a command on the remote minion '''
""" run a command on the remote minion """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
self._display.vvv("EXEC %s" % (cmd), host=self.host)
self._display.vvv("EXEC %s" % cmd, host=self.host)
# need to add 'true;' to work around https://github.com/saltstack/salt/issues/28077
res = self.client.cmd(self.host, 'cmd.exec_code_all', ['bash', 'true;' + cmd])
if self.host not in res:
raise errors.AnsibleError("Minion %s didn't answer, check if salt-minion is running and the name is correct" % self.host)
p = res[self.host]
return (p['retcode'], p['stdout'], p['stderr'])
return p['retcode'], p['stdout'], p['stderr']
def _normalize_path(self, path, prefix):
@staticmethod
def _normalize_path(path, prefix):
if not path.startswith(os.path.sep):
path = os.path.join(os.path.sep, path)
normpath = os.path.normpath(path)
return os.path.join(prefix, normpath[1:])
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
""" transfer a file from local to remote """
super(Connection, self).put_file(in_path, out_path)
@@ -88,11 +82,11 @@ class Connection(ConnectionBase):
self._display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.host)
with open(in_path, 'rb') as in_fh:
content = in_fh.read()
self.client.cmd(self.host, 'hashutil.base64_decodefile', [codecs.encode(content, 'base64'), out_path])
self.client.cmd(self.host, 'hashutil.base64_decodefile', [base64.b64encode(content), out_path])
# TODO test it
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
""" fetch a file from remote to local """
super(Connection, self).fetch_file(in_path, out_path)
@@ -102,5 +96,5 @@ class Connection(ConnectionBase):
open(out_path, 'wb').write(content)
def close(self):
''' terminate the connection; nothing to do here '''
""" terminate the connection; nothing to do here """
pass


@@ -31,7 +31,6 @@ import os.path
import subprocess
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes
@@ -42,7 +41,7 @@ display = Display()
class Connection(ConnectionBase):
''' Local zone based connections '''
""" Local zone based connections """
transport = 'community.general.zone'
has_pipelining = True
@@ -75,9 +74,9 @@ class Connection(ConnectionBase):
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
zones = []
for l in process.stdout.readlines():
for line in process.stdout.readlines():
# 1:work:running:/zones/work:3126dc59-9a07-4829-cde9-a816e4c5040e:native:shared
s = l.split(':')
s = line.split(':')
if s[1] != 'global':
zones.append(s[1])
@@ -95,20 +94,20 @@ class Connection(ConnectionBase):
return path + '/root'
def _connect(self):
''' connect to the zone; nothing to do here '''
""" connect to the zone; nothing to do here """
super(Connection, self)._connect()
if not self._connected:
display.vvv("THIS IS A LOCAL ZONE DIR", host=self.zone)
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
''' run a command on the zone. This is only needed for implementing
""" run a command on the zone. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
'''
"""
# NOTE: zlogin invokes a shell (just like ssh does) so we do not pass
# this through /bin/sh -c here. Instead it goes through the shell
# that zlogin selects.
@@ -122,16 +121,16 @@ class Connection(ConnectionBase):
return p
def exec_command(self, cmd, in_data=None, sudoable=False):
''' run a command on the zone '''
""" run a command on the zone """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
stdout, stderr = p.communicate(in_data)
return (p.returncode, stdout, stderr)
return p.returncode, stdout, stderr
def _prefix_login_path(self, remote_path):
''' Make sure that we put files into a standard path
""" Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
@@ -139,13 +138,13 @@ class Connection(ConnectionBase):
This also happens to be the former default.
Can revisit using $HOME instead if it's a problem
'''
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
''' transfer a file from local to zone '''
""" transfer a file from local to zone """
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.zone)
@@ -171,7 +170,7 @@ class Connection(ConnectionBase):
raise AnsibleError("file or module does not exist at: %s" % in_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from zone to local '''
""" fetch a file from zone to local """
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.zone)
@@ -195,6 +194,6 @@ class Connection(ConnectionBase):
raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
def close(self):
''' terminate the connection; nothing to do here '''
""" terminate the connection; nothing to do here """
super(Connection, self).close()
self._connected = False

plugins/filter/groupby.py Normal file

@@ -0,0 +1,42 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2021, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.errors import AnsibleFilterError
from ansible.module_utils.common._collections_compat import Mapping, Sequence
def groupby_as_dict(sequence, attribute):
'''
Given a sequence of dictionaries and an attribute name, returns a dictionary mapping
the value of this attribute to the dictionary.
If multiple dictionaries in the sequence have the same value for this attribute,
the filter will fail.
'''
if not isinstance(sequence, Sequence):
raise AnsibleFilterError('Input is not a sequence')
result = dict()
for list_index, element in enumerate(sequence):
if not isinstance(element, Mapping):
raise AnsibleFilterError('Sequence element #{0} is not a mapping'.format(list_index))
if attribute not in element:
raise AnsibleFilterError('Attribute not contained in element #{0} of sequence'.format(list_index))
result_index = element[attribute]
if result_index in result:
raise AnsibleFilterError('Multiple sequence entries have attribute value {0!r}'.format(result_index))
result[result_index] = element
return result
class FilterModule(object):
''' Ansible list filters '''
def filters(self):
return {
'groupby_as_dict': groupby_as_dict,
}
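Outside Ansible, the grouping behaviour of the new filter can be sketched without the AnsibleFilterError plumbing (a hypothetical standalone version raising ValueError instead):

```python
def groupby_as_dict(sequence, attribute):
    # Map each element's attribute value to the element itself;
    # duplicate attribute values are an error, as in the filter above.
    result = {}
    for element in sequence:
        key = element[attribute]
        if key in result:
            raise ValueError('duplicate attribute value: {0!r}'.format(key))
        result[key] = element
    return result


users = [{'name': 'alice', 'uid': 1}, {'name': 'bob', 'uid': 2}]
by_name = groupby_as_dict(users, 'name')
```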


@@ -130,7 +130,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
# This occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
cache_needs_update = True
if cache_needs_update:
if not user_cache_setting or cache_needs_update:
# setup command
cmd = [self._nmap]
if not self._options['ports']:
@@ -207,6 +207,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
except Exception as e:
raise AnsibleParserError("failed to parse %s: %s " % (to_native(path), to_native(e)))
if cache_needs_update:
self._cache[cache_key] = results
self._populate(results)
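The corrected control flow can be pictured as a small decision helper (hypothetical name, not part of the plugin): the scan must run both when caching is disabled and when the cached entry is missing or stale.

```python
def decide_cache(user_cache_setting, cache, cache_key):
    # Returns (cached_results, cache_needs_update, run_scan).
    cache_needs_update = False
    results = None
    if user_cache_setting:
        try:
            results = cache[cache_key]
        except KeyError:
            # Missing or expired entry: the cache needs refreshing.
            cache_needs_update = True
    # The fix: also run the scan when caching is disabled entirely.
    run_scan = not user_cache_setting or cache_needs_update
    return results, cache_needs_update, run_scan
```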


@@ -102,13 +102,13 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
raise AnsibleError("plugin doesn't match this plugin")
try:
client_id = config['client_id']
if client_id != 32:
if len(client_id) != 32:
raise AnsibleError("client_id must be 32 characters long")
except KeyError:
raise AnsibleError("config missing client_id, a required option")
try:
client_secret = config['client_secret']
if client_secret != 64:
if len(client_secret) != 64:
raise AnsibleError("client_secret must be 64 characters long")
except KeyError:
raise AnsibleError("config missing client_secret, a required option")

plugins/lookup/dependent.py Normal file

@@ -0,0 +1,208 @@
# (c) 2015-2021, Felix Fontein <felix@fontein.de>
# (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: dependent
short_description: Composes a list with nested elements of other lists or dicts which can depend on previous loop variables
version_added: 3.1.0
description:
- "Takes the input lists and returns a list with elements that are lists, dictionaries,
or template expressions which evaluate to lists or dicts, composed of the elements of
the input evaluated lists and dictionaries."
options:
_raw:
description:
- A list where the elements are one-element dictionaries, mapping a name to a string, list, or dictionary.
The name is the index that is used in the result object. The value is iterated over as described below.
- If the value is a list, it is simply iterated over.
- If the value is a dictionary, it is iterated over and returned as if it had been processed by the
R(ansible.builtin.dict2items filter,ansible_collections.ansible.builtin.dict2items_filter).
- If the value is a string, it is evaluated as Jinja2 expressions which can access the previously chosen
elements with C(item.<index_name>). The result must be a list or a dictionary.
type: list
elements: dict
required: true
"""
EXAMPLES = """
- name: Install/remove public keys for active admin users
ansible.posix.authorized_key:
user: "{{ item.admin.key }}"
key: "{{ lookup('file', item.key.public_key) }}"
state: "{{ 'present' if item.key.active else 'absent' }}"
when: item.admin.value.active
with_community.general.dependent:
- admin: admin_user_data
- key: admin_ssh_keys[item.admin.key]
loop_control:
# Makes the output readable, so that it doesn't contain the whole subdictionaries and lists
label: "{{ [item.admin.key, 'active' if item.key.active else 'inactive', item.key.public_key] }}"
vars:
admin_user_data:
admin1:
name: Alice
active: true
admin2:
name: Bob
active: true
admin_ssh_keys:
admin1:
- private_key: keys/private_key_admin1.pem
public_key: keys/private_key_admin1.pub
active: true
admin2:
- private_key: keys/private_key_admin2.pem
public_key: keys/private_key_admin2.pub
active: true
- private_key: keys/private_key_admin2-old.pem
public_key: keys/private_key_admin2-old.pub
active: false
- name: Update DNS records
community.aws.route53:
zone: "{{ item.zone.key }}"
record: "{{ item.prefix.key ~ '.' if item.prefix.key else '' }}{{ item.zone.key }}"
type: "{{ item.entry.key }}"
ttl: "{{ item.entry.value.ttl | default(3600) }}"
value: "{{ item.entry.value.value }}"
state: "{{ 'absent' if (item.entry.value.absent | default(False)) else 'present' }}"
overwrite: true
loop_control:
# Makes the output readable, so that it doesn't contain the whole subdictionaries and lists
label: |-
{{ [item.zone.key, item.prefix.key, item.entry.key,
item.entry.value.ttl | default(3600),
item.entry.value.absent | default(False), item.entry.value.value] }}
with_community.general.dependent:
- zone: dns_setup
- prefix: item.zone.value
- entry: item.prefix.value
vars:
dns_setup:
example.com:
'':
A:
value:
- 1.2.3.4
AAAA:
value:
- "2a01:1:2:3::1"
'test._domainkey':
TXT:
ttl: 300
value:
- '"k=rsa; t=s; p=MIGfMA..."'
example.org:
'www':
A:
value:
- 1.2.3.4
- 5.6.7.8
"""
RETURN = """
_list:
description:
- A list composed of dictionaries whose keys are the variable names from the input list.
type: list
elements: dict
sample:
- key1: a
key2: test
- key1: a
key2: foo
- key1: b
key2: bar
"""
from ansible.errors import AnsibleLookupError
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import string_types
from ansible.plugins.lookup import LookupBase
from ansible.template import Templar
class LookupModule(LookupBase):
def __evaluate(self, expression, templar, variables):
"""Evaluate expression with templar.
``expression`` is the expression to evaluate.
``variables`` are the variables to use.
"""
templar.available_variables = variables or {}
return templar.template("{0}{1}{2}".format("{{", expression, "}}"), cache=False)
def __process(self, result, terms, index, current, templar, variables):
"""Fills ``result`` list with evaluated items.
``result`` is a list where the resulting items are placed.
``terms`` is the parsed list of terms
``index`` is the current index to be processed in the list.
``current`` is a dictionary where the first ``index`` values are filled in.
``variables`` are the variables currently available.
"""
# If we are done, add to result list:
if index == len(terms):
result.append(current.copy())
return
key, expression, values = terms[index]
if expression is not None:
# Evaluate expression in current context
vars = variables.copy()
vars['item'] = current.copy()
try:
values = self.__evaluate(expression, templar, variables=vars)
except Exception as e:
raise AnsibleLookupError(
'Caught "{error}" while evaluating {key!r} with item == {item!r}'.format(
error=e, key=key, item=current))
if isinstance(values, Mapping):
for idx, val in sorted(values.items()):
current[key] = dict([('key', idx), ('value', val)])
self.__process(result, terms, index + 1, current, templar, variables)
elif isinstance(values, Sequence):
for elt in values:
current[key] = elt
self.__process(result, terms, index + 1, current, templar, variables)
else:
raise AnsibleLookupError(
'Did not obtain dictionary or list while evaluating {key!r} with item == {item!r}, but {type}'.format(
key=key, item=current, type=type(values)))
def run(self, terms, variables=None, **kwargs):
"""Generate list."""
result = []
if len(terms) > 0:
templar = Templar(loader=self._templar._loader)
data = []
vars_so_far = set()
for index, term in enumerate(terms):
if not isinstance(term, Mapping):
raise AnsibleLookupError(
'Parameter {index} must be a dictionary, got {type}'.format(
index=index, type=type(term)))
if len(term) != 1:
raise AnsibleLookupError(
'Parameter {index} must be a one-element dictionary, got {count} elements'.format(
index=index, count=len(term)))
k, v = list(term.items())[0]
if k in vars_so_far:
raise AnsibleLookupError(
'The variable {key!r} appears more than once'.format(key=k))
vars_so_far.add(k)
if isinstance(v, string_types):
data.append((k, v, None))
elif isinstance(v, (Sequence, Mapping)):
data.append((k, None, v))
else:
raise AnsibleLookupError(
'Parameter {key!r} (index {index}) must have a value of type string, dictionary or list, got type {type}'.format(
index=index, key=k, type=type(v)))
self.__process(result, data, 0, {}, templar, variables)
return result
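The recursive expansion in `__process` is a depth-first walk over the term list; a minimal standalone sketch (no Jinja2 evaluation — a plain callable stands in for the template-expression case):

```python
def expand(terms):
    # terms: list of (name, values) pairs; values is a list, or a callable
    # that receives the partially-built item (the template-expression case).
    result = []

    def walk(index, current):
        if index == len(terms):
            result.append(dict(current))
            return
        name, values = terms[index]
        if callable(values):
            values = values(current)
        for value in values:
            current[name] = value
            walk(index + 1, current)

    walk(0, {})
    return result


items = expand([
    ('zone', ['example.com', 'example.org']),
    ('record', lambda cur: ['www.' + cur['zone']]),
])
```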


@@ -25,6 +25,10 @@ DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
name: nios
short_description: Query Infoblox NIOS objects
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding lookup from it.
alternative: infoblox.nios_modules.nios_lookup
removed_in: 5.0.0
description:
- Uses the Infoblox WAPI API to fetch NIOS specified objects. This lookup
supports adding additional keywords to filter the return data and specify


@@ -25,6 +25,10 @@ DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
name: nios_next_ip
short_description: Return the next available IP address for a network
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding lookup from it.
alternative: infoblox.nios_modules.nios_next_ip
removed_in: 5.0.0
description:
- Uses the Infoblox WAPI API to return the next available IP addresses
for a given network CIDR


@@ -25,6 +25,10 @@ DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
name: nios_next_network
short_description: Return the next available network range for a network-container
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding lookup from it.
alternative: infoblox.nios_modules.nios_next_network
removed_in: 5.0.0
description:
- Uses the Infoblox WAPI API to return the next available network addresses for
a given network CIDR


@@ -25,9 +25,9 @@ DOCUMENTATION = '''
env:
- name: PASSWORD_STORE_DIR
create:
description: Create the password if it does not already exist.
description: Create the password if it does not already exist. Takes precedence over C(missing).
type: bool
default: 'no'
default: false
overwrite:
description: Overwrite the password if it does already exist.
type: bool
@@ -60,6 +60,22 @@ DOCUMENTATION = '''
description: use alphanumeric characters.
type: bool
default: 'no'
missing:
description:
- Preference about what to do if the password file is missing.
- If I(create=true), the value for this option is ignored and assumed to be C(create).
- If set to C(error), the lookup will error out if the passname does not exist.
- If set to C(create), the passname will be created with the provided length I(length) if it does not exist.
- If set to C(empty) or C(warn), will return C(none) in case the passname does not exist.
When using C(lookup) and not C(query), this will be translated to an empty string.
version_added: 3.1.0
type: str
default: error
choices:
- error
- warn
- empty
- create
'''
EXAMPLES = """
# Debug is used for examples, BAD IDEA to show passwords on screen
@@ -67,12 +83,28 @@ EXAMPLES = """
ansible.builtin.debug:
msg: "{{ lookup('community.general.passwordstore', 'example/test')}}"
- name: Basic lookup. Warns if example/test does not exist and returns empty string
ansible.builtin.debug:
msg: "{{ lookup('community.general.passwordstore', 'example/test missing=warn')}}"
- name: Create pass with random 16 character password. If password exists just give the password
ansible.builtin.debug:
var: mypassword
vars:
mypassword: "{{ lookup('community.general.passwordstore', 'example/test create=true')}}"
- name: Create pass with random 16 character password. If password exists just give the password
ansible.builtin.debug:
var: mypassword
vars:
mypassword: "{{ lookup('community.general.passwordstore', 'example/test missing=create')}}"
- name: Prints 'abc' if example/test does not exist, just give the password otherwise
ansible.builtin.debug:
var: mypassword
vars:
mypassword: "{{ lookup('community.general.passwordstore', 'example/test missing=empty') | default('abc', true) }}"
- name: Different size password
ansible.builtin.debug:
msg: "{{ lookup('community.general.passwordstore', 'example/test create=true length=42')}}"
@@ -111,10 +143,13 @@ import yaml
from distutils import util
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils.display import Display
from ansible.utils.encrypt import random_password
from ansible.plugins.lookup import LookupBase
from ansible import constants as C
display = Display()
# backhacked check_output with input for python 2.7
# http://stackoverflow.com/questions/10103551/passing-data-to-subprocess-check-output
@@ -178,12 +213,17 @@ class LookupModule(LookupBase):
self.paramvals[key] = util.strtobool(self.paramvals[key])
except (ValueError, AssertionError) as e:
raise AnsibleError(e)
if self.paramvals['missing'] not in ['error', 'warn', 'create', 'empty']:
raise AnsibleError("{0} is not a valid option for missing".format(self.paramvals['missing']))
if not isinstance(self.paramvals['length'], int):
if self.paramvals['length'].isdigit():
self.paramvals['length'] = int(self.paramvals['length'])
else:
raise AnsibleError("{0} is not a correct value for length".format(self.paramvals['length']))
if self.paramvals['create']:
self.paramvals['missing'] = 'create'
# Collect pass environment variables from the plugin's parameters.
self.env = os.environ.copy()
@@ -224,9 +264,11 @@ class LookupModule(LookupBase):
if e.returncode != 0 and 'not in the password store' in e.output:
# if pass returns 1 and return string contains 'is not in the password store.'
# We need to determine if this is valid or Error.
if not self.paramvals['create']:
raise AnsibleError('passname: {0} not found, use create=True'.format(self.passname))
if self.paramvals['missing'] == 'error':
raise AnsibleError('passwordstore: passname {0} not found and missing=error is set'.format(self.passname))
else:
if self.paramvals['missing'] == 'warn':
display.warning('passwordstore: passname {0} not found'.format(self.passname))
return False
else:
raise AnsibleError(e)
@@ -294,6 +336,7 @@ class LookupModule(LookupBase):
'userpass': '',
'length': 16,
'backup': False,
'missing': 'error',
}
for term in terms:
@@ -304,6 +347,9 @@ class LookupModule(LookupBase):
else:
result.append(self.get_passresult())
else: # password does not exist
if self.paramvals['create']:
if self.paramvals['missing'] == 'create':
result.append(self.generate_password())
else:
result.append(None)
return result


@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Abhijeet Kasurde <akasurde@redhat.com>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
name: random_pet
author:
- Abhijeet Kasurde (@Akasurde)
short_description: Generates random pet names
version_added: '3.1.0'
requirements:
- petname U(https://github.com/dustinkirkland/python-petname)
description:
- Generates random pet names that can be used as unique identifiers for the resources.
options:
words:
description:
- The number of words in the pet name.
default: 2
type: int
length:
description:
- The maximal length of every component of the pet name.
- Values below 3 will be set to 3 by petname.
default: 6
type: int
prefix:
description: A string to prefix the name with.
type: str
separator:
description: The character to separate words in the pet name.
default: "-"
type: str
'''
EXAMPLES = r'''
- name: Generate pet name
ansible.builtin.debug:
var: lookup('community.general.random_pet')
# Example result: 'loving-raptor'
- name: Generate pet name with 3 words
ansible.builtin.debug:
var: lookup('community.general.random_pet', words=3)
# Example result: 'fully-fresh-macaw'
- name: Generate pet name with separator
ansible.builtin.debug:
var: lookup('community.general.random_pet', separator="_")
# Example result: 'causal_snipe'
- name: Generate pet name with length
ansible.builtin.debug:
var: lookup('community.general.random_pet', length=7)
# Example result: 'natural-peacock'
'''
RETURN = r'''
_raw:
description: A one-element list containing a random pet name
type: list
elements: str
'''
try:
import petname
HAS_PETNAME = True
except ImportError:
HAS_PETNAME = False
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
if not HAS_PETNAME:
raise AnsibleError('Python petname library is required. '
'Please install using "pip install petname"')
self.set_options(var_options=variables, direct=kwargs)
words = self.get_option('words')
length = self.get_option('length')
prefix = self.get_option('prefix')
separator = self.get_option('separator')
values = petname.Generate(words=words, separator=separator, letters=length)
if prefix:
values = "%s%s%s" % (prefix, separator, values)
return [values]
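The word generation itself is delegated to petname; only the prefix handling is local, and can be sketched as (hypothetical helper):

```python
def format_pet_name(name, prefix=None, separator='-'):
    # Mirrors the plugin's prefix handling: prepend prefix + separator
    # when a prefix is given, otherwise return the generated name as-is.
    if prefix:
        return '%s%s%s' % (prefix, separator, name)
    return name
```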


@@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.mh.exceptions import ModuleHelperException as _MHE
from ansible_collections.community.general.plugins.module_utils.mh.deco import module_fails_on_exception
class ModuleHelperBase(object):
module = None
ModuleHelperException = _MHE
def __init__(self, module=None):
self._changed = False
if module:
self.module = module
if not isinstance(self.module, AnsibleModule):
self.module = AnsibleModule(**self.module)
def __init_module__(self):
pass
def __run__(self):
raise NotImplementedError()
def __quit_module__(self):
pass
def __changed__(self):
raise NotImplementedError()
@property
def changed(self):
try:
return self.__changed__()
except NotImplementedError:
return self._changed
@changed.setter
def changed(self, value):
self._changed = value
def has_changed(self):
raise NotImplementedError()
@property
def output(self):
raise NotImplementedError()
@module_fails_on_exception
def run(self):
self.__init_module__()
self.__run__()
self.__quit_module__()
self.module.exit_json(changed=self.has_changed(), **self.output)


@@ -0,0 +1,54 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
from functools import wraps
from ansible_collections.community.general.plugins.module_utils.mh.exceptions import ModuleHelperException
def cause_changes(on_success=None, on_failure=None):
def deco(func):
if on_success is None and on_failure is None:
return func
@wraps(func)
def wrapper(*args, **kwargs):
try:
self = args[0]
func(*args, **kwargs)
if on_success is not None:
self.changed = on_success
except Exception:
if on_failure is not None:
self.changed = on_failure
raise
return wrapper
return deco
def module_fails_on_exception(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
try:
func(self, *args, **kwargs)
except SystemExit:
raise
except ModuleHelperException as e:
if e.update_output:
self.update_output(e.update_output)
self.module.fail_json(msg=e.msg, exception=traceback.format_exc(),
output=self.output, vars=self.vars.output(), **self.output)
except Exception as e:
msg = "Module failed with exception: {0}".format(str(e).strip())
self.module.fail_json(msg=msg, exception=traceback.format_exc(),
output=self.output, vars=self.vars.output(), **self.output)
return wrapper
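The effect of cause_changes can be demonstrated with a bare helper object; a trimmed sketch of the decorator (explicit self instead of args[0]):

```python
from functools import wraps


def cause_changes(on_success=None, on_failure=None):
    # Set self.changed depending on whether the wrapped method raised.
    def deco(func):
        if on_success is None and on_failure is None:
            return func

        @wraps(func)
        def wrapper(self, *args, **kwargs):
            try:
                func(self, *args, **kwargs)
                if on_success is not None:
                    self.changed = on_success
            except Exception:
                if on_failure is not None:
                    self.changed = on_failure
                raise
        return wrapper
    return deco


class Helper(object):
    changed = False

    @cause_changes(on_success=True)
    def do_work(self):
        pass


helper = Helper()
helper.do_work()
```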


@@ -0,0 +1,22 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class ModuleHelperException(Exception):
@staticmethod
def _get_remove(key, kwargs):
if key in kwargs:
result = kwargs[key]
del kwargs[key]
return result
return None
def __init__(self, *args, **kwargs):
self.msg = self._get_remove('msg', kwargs) or "Module failed with exception: {0}".format(self)
self.update_output = self._get_remove('update_output', kwargs) or {}
super(ModuleHelperException, self).__init__(*args)
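_get_remove is effectively dict.pop with a None default; a quick equivalence sketch (hypothetical free-function form):

```python
def get_remove(key, kwargs):
    # Remove and return kwargs[key], or None when the key is absent.
    return kwargs.pop(key, None)


opts = {'msg': 'boom', 'update_output': {'rc': 1}}
msg = get_remove('msg', opts)
```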


@@ -0,0 +1,167 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from functools import partial
class ArgFormat(object):
"""
Argument formatter for use as a command line parameter. Used in CmdMixin.
"""
BOOLEAN = 0
PRINTF = 1
FORMAT = 2
@staticmethod
def stars_deco(num):
if num == 1:
def deco(f):
return lambda v: f(*v)
return deco
elif num == 2:
def deco(f):
return lambda v: f(**v)
return deco
return lambda f: f
def __init__(self, name, fmt=None, style=FORMAT, stars=0):
"""
Creates a CLI-formatter for one specific argument. The argument may be a module parameter or just a named parameter for
the CLI command execution.
:param name: Name of the argument to be formatted
:param fmt: Either a str to be formatted (using or not printf-style) or a callable that does that
:param style: Whether arg_format (as str) should use printf-style formatting.
Ignored if arg_format is None or not a str (should be callable).
:param stars: An int with value 0, 1 or 2, indicating whether to format the value as: value, *value or **value
"""
def printf_fmt(_fmt, v):
try:
return [_fmt % v]
except TypeError as e:
if e.args[0] != 'not all arguments converted during string formatting':
raise
return [_fmt]
_fmts = {
ArgFormat.BOOLEAN: lambda _fmt, v: ([_fmt] if bool(v) else []),
ArgFormat.PRINTF: printf_fmt,
ArgFormat.FORMAT: lambda _fmt, v: [_fmt.format(v)],
}
self.name = name
self.stars = stars
if fmt is None:
fmt = "{0}"
style = ArgFormat.FORMAT
if isinstance(fmt, str):
func = _fmts[style]
self.arg_format = partial(func, fmt)
elif isinstance(fmt, list) or isinstance(fmt, tuple):
self.arg_format = lambda v: [_fmts[style](f, v)[0] for f in fmt]
elif hasattr(fmt, '__call__'):
self.arg_format = fmt
else:
raise TypeError('Parameter fmt must be either: a string, a list/tuple of '
'strings or a function: type={0}, value={1}'.format(type(fmt), fmt))
if stars:
self.arg_format = (self.stars_deco(stars))(self.arg_format)
def to_text(self, value):
if value is None:
return []
func = self.arg_format
return [str(p) for p in func(value)]
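The three formatting styles can be condensed into one hypothetical free function for illustration (the real class also supports callables, lists of formats, and star-unpacking):

```python
def format_arg(value, fmt=None, style='format'):
    # 'boolean': emit fmt only for truthy values;
    # 'printf':  old-style % interpolation;
    # 'format':  str.format (the default, used when fmt is None).
    if value is None:
        return []
    if fmt is None:
        fmt, style = '{0}', 'format'
    if style == 'boolean':
        return [fmt] if value else []
    if style == 'printf':
        return [fmt % value]
    return [fmt.format(value)]
```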
class CmdMixin(object):
"""
Mixin for mapping module options to running a CLI command with its arguments.
"""
command = None
command_args_formats = {}
run_command_fixed_options = {}
check_rc = False
force_lang = "C"
@property
def module_formats(self):
result = {}
for param in self.module.params.keys():
result[param] = ArgFormat(param)
return result
@property
def custom_formats(self):
result = {}
for param, fmt_spec in self.command_args_formats.items():
result[param] = ArgFormat(param, **fmt_spec)
return result
def _calculate_args(self, extra_params=None, params=None):
def add_arg_formatted_param(_cmd_args, arg_format, _value):
args = list(arg_format.to_text(_value))
return _cmd_args + args
def find_format(_param):
return self.custom_formats.get(_param, self.module_formats.get(_param))
extra_params = extra_params or dict()
cmd_args = list([self.command]) if isinstance(self.command, str) else list(self.command)
try:
cmd_args[0] = self.module.get_bin_path(cmd_args[0], required=True)
except ValueError:
pass
param_list = params if params else self.vars.keys()
for param in param_list:
if isinstance(param, dict):
if len(param) != 1:
raise self.ModuleHelperException("run_command parameter as a dict must "
"contain only one key: {0}".format(param))
_param = list(param.keys())[0]
fmt = find_format(_param)
value = param[_param]
elif isinstance(param, str):
if param in self.vars.keys():
fmt = find_format(param)
value = self.vars[param]
elif param in extra_params:
fmt = find_format(param)
value = extra_params[param]
else:
self.module.deprecate("Cannot determine value for parameter: {0}. "
"From version 4.0.0 onwards this will generate an exception".format(param),
version="4.0.0", collection_name="community.general")
continue
else:
raise self.ModuleHelperException("run_command parameter must be either a str or a dict: {0}".format(param))
cmd_args = add_arg_formatted_param(cmd_args, fmt, value)
return cmd_args
def process_command_output(self, rc, out, err):
return rc, out, err
def run_command(self, extra_params=None, params=None, *args, **kwargs):
self.vars.cmd_args = self._calculate_args(extra_params, params)
options = dict(self.run_command_fixed_options)
env_update = dict(options.get('environ_update', {}))
options['check_rc'] = options.get('check_rc', self.check_rc)
if self.force_lang:
env_update.update({'LANGUAGE': self.force_lang})
self.update_output(force_lang=self.force_lang)
options['environ_update'] = env_update
options.update(kwargs)
rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)
self.update_output(rc=rc, stdout=out, stderr=err)
return self.process_command_output(rc, out, err)


@@ -0,0 +1,58 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
from ansible_collections.community.general.plugins.module_utils.mh.base import ModuleHelperBase
from ansible_collections.community.general.plugins.module_utils.mh.deco import module_fails_on_exception
class DependencyCtxMgr(object):
def __init__(self, name, msg=None):
self.name = name
self.msg = msg
self.has_it = False
self.exc_type = None
self.exc_val = None
self.exc_tb = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.has_it = exc_type is None
self.exc_type = exc_type
self.exc_val = exc_val
self.exc_tb = exc_tb
return not self.has_it
@property
def text(self):
return self.msg or str(self.exc_val)
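Typical usage of the context manager is to wrap an import and inspect the result later; a trimmed, runnable sketch (module name deliberately nonexistent):

```python
class DependencyCtxMgr(object):
    # Capture an ImportError instead of letting it propagate.
    def __init__(self, name, msg=None):
        self.name = name
        self.msg = msg
        self.has_it = False
        self.exc_val = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.has_it = exc_type is None
        self.exc_val = exc_val
        return not self.has_it  # swallow the exception on failure

    @property
    def text(self):
        return self.msg or str(self.exc_val)


dep = DependencyCtxMgr('shiny_lib', 'Please install shiny_lib first')
with dep:
    import a_module_that_does_not_exist  # noqa: F401
```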
class DependencyMixin(ModuleHelperBase):
_dependencies = []
@classmethod
def dependency(cls, name, msg):
cls._dependencies.append(DependencyCtxMgr(name, msg))
return cls._dependencies[-1]
def fail_on_missing_deps(self):
for d in self._dependencies:
if not d.has_it:
self.module.fail_json(changed=False,
exception="\n".join(traceback.format_exception(d.exc_type, d.exc_val, d.exc_tb)),
msg=d.text,
**self.output)
@module_fails_on_exception
def run(self):
self.fail_on_missing_deps()
super(DependencyMixin, self).run()


@@ -0,0 +1,39 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class StateMixin(object):
state_param = 'state'
default_state = None
def _state(self):
state = self.module.params.get(self.state_param)
return self.default_state if state is None else state
def _method(self, state):
return "{0}_{1}".format(self.state_param, state)
def __run__(self):
state = self._state()
self.vars.state = state
# resolve aliases
if state not in self.module.params:
aliased = [name for name, param in self.module.argument_spec.items() if state in param.get('aliases', [])]
if aliased:
state = aliased[0]
self.vars.effective_state = state
method = self._method(state)
if not hasattr(self, method):
return self.__state_fallback__()
func = getattr(self, method)
return func()
def __state_fallback__(self):
raise ValueError("Cannot find method: {0}".format(self._method(self._state())))
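StateMixin dispatches on the value of the state option to a method named `state_<value>`. A condensed sketch of that dispatch, outside the Ansible machinery (the helper class and option values here are hypothetical):

```python
# Condensed sketch of StateMixin's state -> method dispatch.
class StateDispatcher(object):
    state_param = 'state'
    default_state = None

    def __init__(self, params):
        self.params = params

    def _state(self):
        state = self.params.get(self.state_param)
        return self.default_state if state is None else state

    def _method(self, state):
        return "{0}_{1}".format(self.state_param, state)

    def run(self):
        method = self._method(self._state())
        if not hasattr(self, method):
            raise ValueError("Cannot find method: {0}".format(method))
        return getattr(self, method)()


class PkgHelper(StateDispatcher):
    default_state = 'present'

    def state_present(self):
        return 'installed'

    def state_absent(self):
        return 'removed'


print(PkgHelper({'state': 'absent'}).run())  # removed
print(PkgHelper({}).run())                   # installed (falls back to default_state)
```

The real mixin additionally resolves state aliases against the module's argument_spec and routes unknown states through `__state_fallback__`.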

View File

@@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class VarMeta(object):
NOTHING = object()
def __init__(self, diff=False, output=True, change=None, fact=False):
self.init = False
self.initial_value = None
self.value = None
self.diff = diff
self.change = diff if change is None else change
self.output = output
self.fact = fact
def set(self, diff=None, output=None, change=None, fact=None, initial_value=NOTHING):
if diff is not None:
self.diff = diff
if output is not None:
self.output = output
if change is not None:
self.change = change
if fact is not None:
self.fact = fact
if initial_value is not self.NOTHING:
self.initial_value = initial_value
def set_value(self, value):
if not self.init:
self.initial_value = value
self.init = True
self.value = value
return self
@property
def has_changed(self):
return self.change and (self.initial_value != self.value)
@property
def diff_result(self):
return None if not (self.diff and self.has_changed) else {
'before': self.initial_value,
'after': self.value,
}
def __str__(self):
return "<VarMeta: value={0}, initial={1}, diff={2}, output={3}, change={4}>".format(
self.value, self.initial_value, self.diff, self.output, self.change
)
class VarDict(object):
def __init__(self):
self._data = dict()
self._meta = dict()
def __getitem__(self, item):
return self._data[item]
def __setitem__(self, key, value):
self.set(key, value)
def __getattr__(self, item):
try:
return self._data[item]
except KeyError:
return getattr(self._data, item)
def __setattr__(self, key, value):
if key in ('_data', '_meta'):
super(VarDict, self).__setattr__(key, value)
else:
self.set(key, value)
def meta(self, name):
return self._meta[name]
def set_meta(self, name, **kwargs):
self.meta(name).set(**kwargs)
def set(self, name, value, **kwargs):
if name in ('_data', '_meta'):
raise ValueError("Names _data and _meta are reserved for use by ModuleHelper")
self._data[name] = value
if name in self._meta:
meta = self.meta(name)
else:
meta = VarMeta(**kwargs)
meta.set_value(value)
self._meta[name] = meta
def output(self):
return dict((k, v) for k, v in self._data.items() if self.meta(k).output)
def diff(self):
diff_results = [(k, self.meta(k).diff_result) for k in self._data]
diff_results = [dr for dr in diff_results if dr[1] is not None]
if diff_results:
before = dict((dr[0], dr[1]['before']) for dr in diff_results)
after = dict((dr[0], dr[1]['after']) for dr in diff_results)
return {'before': before, 'after': after}
return None
def facts(self):
facts_result = dict((k, v) for k, v in self._data.items() if self._meta[k].fact)
return facts_result if facts_result else None
def change_vars(self):
return [v for v in self._data if self.meta(v).change]
def has_changed(self, v):
return self._meta[v].has_changed
class VarsMixin(object):
def __init__(self, module=None):
self.vars = VarDict()
super(VarsMixin, self).__init__(module)
def update_vars(self, meta=None, **kwargs):
if meta is None:
meta = {}
for k, v in kwargs.items():
self.vars.set(k, v, **meta)
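The change tracking VarMeta implements can be demonstrated in isolation: the first set_value() records the initial value, later calls only move `.value`, and has_changed/diff_result compare the two. A trimmed copy (output/fact flags omitted):

```python
# Trimmed VarMeta: first assignment is remembered as the 'before' value.
class VarMeta(object):
    def __init__(self, diff=False, change=None):
        self.init = False
        self.initial_value = None
        self.value = None
        self.diff = diff
        self.change = diff if change is None else change  # change defaults to diff

    def set_value(self, value):
        if not self.init:
            self.initial_value = value
            self.init = True
        self.value = value
        return self

    @property
    def has_changed(self):
        return self.change and (self.initial_value != self.value)

    @property
    def diff_result(self):
        return None if not (self.diff and self.has_changed) else {
            'before': self.initial_value,
            'after': self.value,
        }


meta = VarMeta(diff=True)
meta.set_value('off')    # records initial_value = 'off'
meta.set_value('on')     # only updates .value
print(meta.has_changed)  # True
print(meta.diff_result)  # {'before': 'off', 'after': 'on'}
```

VarDict aggregates these per-variable diff_result values into the module-level `--diff` output.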

View File

@@ -0,0 +1,79 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.common.dict_transformations import dict_merge
from ansible_collections.community.general.plugins.module_utils.mh.base import ModuleHelperBase, AnsibleModule
from ansible_collections.community.general.plugins.module_utils.mh.mixins.cmd import CmdMixin
from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin
from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyMixin
from ansible_collections.community.general.plugins.module_utils.mh.mixins.vars import VarsMixin, VarDict as _VD
class ModuleHelper(VarsMixin, DependencyMixin, ModuleHelperBase):
_output_conflict_list = ('msg', 'exception', 'output', 'vars', 'changed')
facts_name = None
output_params = ()
diff_params = ()
change_params = ()
facts_params = ()
VarDict = _VD # for backward compatibility, will be deprecated at some point
def __init__(self, module=None):
super(ModuleHelper, self).__init__(module)
for name, value in self.module.params.items():
self.vars.set(
name, value,
diff=name in self.diff_params,
output=name in self.output_params,
change=None if not self.change_params else name in self.change_params,
fact=name in self.facts_params,
)
def update_output(self, **kwargs):
self.update_vars(meta={"output": True}, **kwargs)
def update_facts(self, **kwargs):
self.update_vars(meta={"fact": True}, **kwargs)
def _vars_changed(self):
return any(self.vars.has_changed(v) for v in self.vars.change_vars())
def has_changed(self):
return self.changed or self._vars_changed()
@property
def output(self):
result = dict(self.vars.output())
if self.facts_name:
facts = self.vars.facts()
if facts is not None:
result['ansible_facts'] = {self.facts_name: facts}
if self.module._diff:
diff = result.get('diff', {})
vars_diff = self.vars.diff() or {}
result['diff'] = dict_merge(dict(diff), vars_diff)
for varname in list(result):  # iterate over a copy: keys are deleted below
if varname in self._output_conflict_list:
result["_" + varname] = result[varname]
del result[varname]
return result
class StateModuleHelper(StateMixin, ModuleHelper):
pass
class CmdModuleHelper(CmdMixin, ModuleHelper):
pass
class CmdStateModuleHelper(CmdMixin, StateMixin, ModuleHelper):
pass
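The `output` property renames any result key that collides with a reserved Ansible result field. The renaming step on its own looks like this (a standalone sketch):

```python
# Sketch of the _output_conflict_list renaming in ModuleHelper.output:
# clashing variable names get a '_' prefix in the returned result.
CONFLICTS = ('msg', 'exception', 'output', 'vars', 'changed')


def rename_conflicts(result):
    for varname in list(result):  # iterate over a copy, the dict is mutated
        if varname in CONFLICTS:
            result['_' + varname] = result.pop(varname)
    return result


print(rename_conflicts({'msg': 'hello', 'port': 80}))
# {'port': 80, '_msg': 'hello'}
```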

View File

@@ -6,506 +6,13 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from functools import partial, wraps
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.dict_transformations import dict_merge
class ModuleHelperException(Exception):
@staticmethod
def _get_remove(key, kwargs):
if key in kwargs:
result = kwargs[key]
del kwargs[key]
return result
return None
def __init__(self, *args, **kwargs):
self.msg = self._get_remove('msg', kwargs) or "Module failed with exception: {0}".format(self)
self.update_output = self._get_remove('update_output', kwargs) or {}
super(ModuleHelperException, self).__init__(*args)
class ArgFormat(object):
"""
Formats a single module argument into its command line representation. Used in CmdMixin.
"""
BOOLEAN = 0
PRINTF = 1
FORMAT = 2
@staticmethod
def stars_deco(num):
if num == 1:
def deco(f):
return lambda v: f(*v)
return deco
elif num == 2:
def deco(f):
return lambda v: f(**v)
return deco
return lambda f: f
def __init__(self, name, fmt=None, style=FORMAT, stars=0):
"""
Creates a CLI-formatter for one specific argument. The argument may be a module parameter or just a named parameter for
the CLI command execution.
:param name: Name of the argument to be formatted
:param fmt: Either a str to be formatted (printf-style or not) or a callable that performs the formatting
:param style: Whether fmt (as a str) should use printf-style formatting.
Ignored if fmt is None or not a str.
:param stars: An int (0, 1 or 2) indicating whether to format the value as value, *value or **value
"""
def printf_fmt(_fmt, v):
try:
return [_fmt % v]
except TypeError as e:
if e.args[0] != 'not all arguments converted during string formatting':
raise
return [_fmt]
_fmts = {
ArgFormat.BOOLEAN: lambda _fmt, v: ([_fmt] if bool(v) else []),
ArgFormat.PRINTF: printf_fmt,
ArgFormat.FORMAT: lambda _fmt, v: [_fmt.format(v)],
}
self.name = name
self.stars = stars
if fmt is None:
fmt = "{0}"
style = ArgFormat.FORMAT
if isinstance(fmt, str):
func = _fmts[style]
self.arg_format = partial(func, fmt)
elif isinstance(fmt, list) or isinstance(fmt, tuple):
self.arg_format = lambda v: [_fmts[style](f, v)[0] for f in fmt]
elif hasattr(fmt, '__call__'):
self.arg_format = fmt
else:
raise TypeError('Parameter fmt must be either: a string, a list/tuple of '
'strings or a function: type={0}, value={1}'.format(type(fmt), fmt))
if stars:
self.arg_format = (self.stars_deco(stars))(self.arg_format)
def to_text(self, value):
if value is None:
return []
func = self.arg_format
return [str(p) for p in func(value)]
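The three ArgFormat styles reduce to simple value-to-argv rules. A condensed sketch (the flag names are made up):

```python
# Condensed sketch of the three ArgFormat styles.
def boolean(flag):
    # BOOLEAN: emit the flag only when the value is truthy
    return lambda v: [flag] if bool(v) else []


def printf(fmt):
    # PRINTF: old-style '%' interpolation
    return lambda v: [fmt % v]


def fmt(template):
    # FORMAT: str.format() interpolation (the default style)
    return lambda v: [template.format(v)]


print(boolean('--force')(True))   # ['--force']
print(boolean('--force')(False))  # []
print(printf('--retries=%d')(3))  # ['--retries=3']
print(fmt('--name={0}')('web'))   # ['--name=web']
```

The real class additionally handles lists of templates, callables as formatters, and the `stars` argument-unpacking decorator.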
def cause_changes(on_success=None, on_failure=None):
def deco(func):
if on_success is None and on_failure is None:
return func
@wraps(func)
def wrapper(*args, **kwargs):
try:
self = args[0]
func(*args, **kwargs)
if on_success is not None:
self.changed = on_success
except Exception:
if on_failure is not None:
self.changed = on_failure
raise
return wrapper
return deco
def module_fails_on_exception(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
try:
func(self, *args, **kwargs)
except SystemExit:
raise
except ModuleHelperException as e:
if e.update_output:
self.update_output(e.update_output)
self.module.fail_json(msg=e.msg, exception=traceback.format_exc(),
output=self.output, vars=self.vars.output(), **self.output)
except Exception as e:
msg = "Module failed with exception: {0}".format(str(e).strip())
self.module.fail_json(msg=msg, exception=traceback.format_exc(),
output=self.output, vars=self.vars.output(), **self.output)
return wrapper
class DependencyCtxMgr(object):
def __init__(self, name, msg=None):
self.name = name
self.msg = msg
self.has_it = False
self.exc_type = None
self.exc_val = None
self.exc_tb = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.has_it = exc_type is None
self.exc_type = exc_type
self.exc_val = exc_val
self.exc_tb = exc_tb
return not self.has_it
@property
def text(self):
return self.msg or str(self.exc_val)
class VarMeta(object):
NOTHING = object()
def __init__(self, diff=False, output=True, change=None, fact=False):
self.init = False
self.initial_value = None
self.value = None
self.diff = diff
self.change = diff if change is None else change
self.output = output
self.fact = fact
def set(self, diff=None, output=None, change=None, fact=None, initial_value=NOTHING):
if diff is not None:
self.diff = diff
if output is not None:
self.output = output
if change is not None:
self.change = change
if fact is not None:
self.fact = fact
if initial_value is not self.NOTHING:
self.initial_value = initial_value
def set_value(self, value):
if not self.init:
self.initial_value = value
self.init = True
self.value = value
return self
@property
def has_changed(self):
return self.change and (self.initial_value != self.value)
@property
def diff_result(self):
return None if not (self.diff and self.has_changed) else {
'before': self.initial_value,
'after': self.value,
}
def __str__(self):
return "<VarMeta: value={0}, initial={1}, diff={2}, output={3}, change={4}>".format(
self.value, self.initial_value, self.diff, self.output, self.change
)
class ModuleHelper(object):
_output_conflict_list = ('msg', 'exception', 'output', 'vars', 'changed')
_dependencies = []
module = None
facts_name = None
output_params = ()
diff_params = ()
change_params = ()
facts_params = ()
class VarDict(object):
def __init__(self):
self._data = dict()
self._meta = dict()
def __getitem__(self, item):
return self._data[item]
def __setitem__(self, key, value):
self.set(key, value)
def __getattr__(self, item):
try:
return self._data[item]
except KeyError:
return getattr(self._data, item)
def __setattr__(self, key, value):
if key in ('_data', '_meta'):
super(ModuleHelper.VarDict, self).__setattr__(key, value)
else:
self.set(key, value)
def meta(self, name):
return self._meta[name]
def set_meta(self, name, **kwargs):
self.meta(name).set(**kwargs)
def set(self, name, value, **kwargs):
if name in ('_data', '_meta'):
raise ValueError("Names _data and _meta are reserved for use by ModuleHelper")
self._data[name] = value
if name in self._meta:
meta = self.meta(name)
else:
meta = VarMeta(**kwargs)
meta.set_value(value)
self._meta[name] = meta
def output(self):
return dict((k, v) for k, v in self._data.items() if self.meta(k).output)
def diff(self):
diff_results = [(k, self.meta(k).diff_result) for k in self._data]
diff_results = [dr for dr in diff_results if dr[1] is not None]
if diff_results:
before = dict((dr[0], dr[1]['before']) for dr in diff_results)
after = dict((dr[0], dr[1]['after']) for dr in diff_results)
return {'before': before, 'after': after}
return None
def facts(self):
facts_result = dict((k, v) for k, v in self._data.items() if self._meta[k].fact)
return facts_result if facts_result else None
def change_vars(self):
return [v for v in self._data if self.meta(v).change]
def has_changed(self, v):
return self._meta[v].has_changed
def __init__(self, module=None):
self.vars = ModuleHelper.VarDict()
self._changed = False
if module:
self.module = module
if not isinstance(self.module, AnsibleModule):
self.module = AnsibleModule(**self.module)
for name, value in self.module.params.items():
self.vars.set(
name, value,
diff=name in self.diff_params,
output=name in self.output_params,
change=None if not self.change_params else name in self.change_params,
fact=name in self.facts_params,
)
def update_vars(self, meta=None, **kwargs):
if meta is None:
meta = {}
for k, v in kwargs.items():
self.vars.set(k, v, **meta)
def update_output(self, **kwargs):
self.update_vars(meta={"output": True}, **kwargs)
def update_facts(self, **kwargs):
self.update_vars(meta={"fact": True}, **kwargs)
def __init_module__(self):
pass
def __run__(self):
raise NotImplementedError()
def __quit_module__(self):
pass
def _vars_changed(self):
return any(self.vars.has_changed(v) for v in self.vars.change_vars())
@property
def changed(self):
return self._changed
@changed.setter
def changed(self, value):
self._changed = value
def has_changed(self):
return self.changed or self._vars_changed()
@property
def output(self):
result = dict(self.vars.output())
if self.facts_name:
facts = self.vars.facts()
if facts is not None:
result['ansible_facts'] = {self.facts_name: facts}
if self.module._diff:
diff = result.get('diff', {})
vars_diff = self.vars.diff() or {}
result['diff'] = dict_merge(dict(diff), vars_diff)
for varname in list(result):  # iterate over a copy: keys are deleted below
if varname in self._output_conflict_list:
result["_" + varname] = result[varname]
del result[varname]
return result
@module_fails_on_exception
def run(self):
self.fail_on_missing_deps()
self.__init_module__()
self.__run__()
self.__quit_module__()
self.module.exit_json(changed=self.has_changed(), **self.output)
@classmethod
def dependency(cls, name, msg):
cls._dependencies.append(DependencyCtxMgr(name, msg))
return cls._dependencies[-1]
def fail_on_missing_deps(self):
for d in self._dependencies:
if not d.has_it:
self.module.fail_json(changed=False,
exception="\n".join(traceback.format_exception(d.exc_type, d.exc_val, d.exc_tb)),
msg=d.text,
**self.output)
class StateMixin(object):
state_param = 'state'
default_state = None
def _state(self):
state = self.module.params.get(self.state_param)
return self.default_state if state is None else state
def _method(self, state):
return "{0}_{1}".format(self.state_param, state)
def __run__(self):
state = self._state()
self.vars.state = state
# resolve aliases
if state not in self.module.params:
aliased = [name for name, param in self.module.argument_spec.items() if state in param.get('aliases', [])]
if aliased:
state = aliased[0]
self.vars.effective_state = state
method = self._method(state)
if not hasattr(self, method):
return self.__state_fallback__()
func = getattr(self, method)
return func()
def __state_fallback__(self):
raise ValueError("Cannot find method: {0}".format(self._method(self._state())))
class CmdMixin(object):
"""
Mixin for mapping module options to running a CLI command with its arguments.
"""
command = None
command_args_formats = {}
run_command_fixed_options = {}
check_rc = False
force_lang = "C"
@property
def module_formats(self):
result = {}
for param in self.module.params.keys():
result[param] = ArgFormat(param)
return result
@property
def custom_formats(self):
result = {}
for param, fmt_spec in self.command_args_formats.items():
result[param] = ArgFormat(param, **fmt_spec)
return result
def _calculate_args(self, extra_params=None, params=None):
def add_arg_formatted_param(_cmd_args, arg_format, _value):
args = list(arg_format.to_text(_value))
return _cmd_args + args
def find_format(_param):
return self.custom_formats.get(_param, self.module_formats.get(_param))
extra_params = extra_params or dict()
cmd_args = list([self.command]) if isinstance(self.command, str) else list(self.command)
try:
cmd_args[0] = self.module.get_bin_path(cmd_args[0], required=True)
except ValueError:
pass
param_list = params if params else self.module.params.keys()
for param in param_list:
if isinstance(param, dict):
if len(param) != 1:
raise ModuleHelperException("run_command parameter as a dict must "
"contain only one key: {0}".format(param))
_param = list(param.keys())[0]
fmt = find_format(_param)
value = param[_param]
elif isinstance(param, str):
if param in self.module.argument_spec:
fmt = find_format(param)
value = self.module.params[param]
elif param in extra_params:
fmt = find_format(param)
value = extra_params[param]
else:
self.module.deprecate("Cannot determine value for parameter: {0}. "
"From version 4.0.0 onwards this will generate an exception".format(param),
version="4.0.0", collection_name="community.general")
continue
else:
raise ModuleHelperException("run_command parameter must be either a str or a dict: {0}".format(param))
cmd_args = add_arg_formatted_param(cmd_args, fmt, value)
return cmd_args
def process_command_output(self, rc, out, err):
return rc, out, err
def run_command(self, extra_params=None, params=None, *args, **kwargs):
self.vars.cmd_args = self._calculate_args(extra_params, params)
options = dict(self.run_command_fixed_options)
env_update = dict(options.get('environ_update', {}))
options['check_rc'] = options.get('check_rc', self.check_rc)
if self.force_lang:
env_update.update({'LANGUAGE': self.force_lang})
self.update_output(force_lang=self.force_lang)
options['environ_update'] = env_update
options.update(kwargs)
rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)
self.update_output(rc=rc, stdout=out, stderr=err)
return self.process_command_output(rc, out, err)
class StateModuleHelper(StateMixin, ModuleHelper):
pass
class CmdModuleHelper(CmdMixin, ModuleHelper):
pass
class CmdStateModuleHelper(CmdMixin, StateMixin, ModuleHelper):
pass
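The params handling in CmdMixin's `_calculate_args()` boils down to: each entry is either a module option name (str) or a one-key dict supplying an ad-hoc value, and both are rendered through the matching formatter. A simplified standalone sketch (the tool name, options, and formats are illustrative, not from a real module):

```python
# Simplified sketch of CmdMixin._calculate_args(): build an argv list from a
# mix of module option names and ad-hoc {name: value} pairs.
def calculate_args(command, params, formats, module_params):
    cmd = [command]
    for p in params:
        if isinstance(p, dict):      # ad-hoc {name: value} pair
            (name, value), = p.items()
        else:                        # plain module option name
            name, value = p, module_params[p]
        cmd += formats[name](value)  # each formatter returns a list of args
    return cmd


formats = {
    'force': lambda v: ['--force'] if v else [],
    'name': lambda v: ['--name={0}'.format(v)],
}
print(calculate_args('mytool', ['force', {'name': 'web'}], formats, {'force': True}))
# ['mytool', '--force', '--name=web']
```

run_command() then passes the resulting list to AnsibleModule.run_command(), optionally forcing LANGUAGE via environ_update.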
from ansible_collections.community.general.plugins.module_utils.mh.module_helper import (
ModuleHelper, StateModuleHelper, CmdModuleHelper, CmdStateModuleHelper, AnsibleModule
)
from ansible_collections.community.general.plugins.module_utils.mh.mixins.cmd import CmdMixin, ArgFormat
from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin
from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyCtxMgr
from ansible_collections.community.general.plugins.module_utils.mh.exceptions import ModuleHelperException
from ansible_collections.community.general.plugins.module_utils.mh.deco import cause_changes, module_fails_on_exception
from ansible_collections.community.general.plugins.module_utils.mh.mixins.vars import VarMeta, VarDict

View File

@@ -1671,19 +1671,31 @@ class RedfishUtils(object):
# Make a copy of the attributes dict
attrs_to_patch = dict(attributes)
# List to hold attributes not found
attrs_bad = {}
# Check the attributes
for attr in attributes:
if attr not in data[u'Attributes']:
return {'ret': False, 'msg': "BIOS attribute %s not found" % attr}
for attr_name, attr_value in attributes.items():
# Check if attribute exists
if attr_name not in data[u'Attributes']:
# Remove and proceed to next attribute if this isn't valid
attrs_bad.update({attr_name: attr_value})
del attrs_to_patch[attr_name]
continue
# If already set to requested value, remove it from PATCH payload
if data[u'Attributes'][attr] == attributes[attr]:
del attrs_to_patch[attr]
if data[u'Attributes'][attr_name] == attributes[attr_name]:
del attrs_to_patch[attr_name]
warning = ""
if attrs_bad:
warning = "Incorrect attributes %s" % (attrs_bad)
# Return success w/ changed=False if no attrs need to be changed
if not attrs_to_patch:
return {'ret': True, 'changed': False,
'msg': "BIOS attributes already set"}
'msg': "BIOS attributes already set",
'warning': warning}
# Get the SettingsObject URI
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
@@ -1693,7 +1705,9 @@ class RedfishUtils(object):
response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified BIOS attribute"}
return {'ret': True, 'changed': True,
'msg': "Modified BIOS attributes %s" % (attrs_to_patch),
'warning': warning}
def set_boot_order(self, boot_list):
if not boot_list:
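The attribute filtering in the hunk above can be sketched on its own: requested BIOS attributes are split into invalid names (reported as a warning) and attributes already at the requested value (dropped from the PATCH payload). Attribute names here are illustrative:

```python
# Sketch of the set_bios_attributes filtering: return (attrs to PATCH,
# attrs that do not exist on this system).
def split_attrs(current, requested):
    to_patch = dict(requested)
    bad = {}
    for name, value in requested.items():
        if name not in current:
            bad[name] = value          # unknown attribute -> warning
            del to_patch[name]
            continue
        if current[name] == value:
            del to_patch[name]         # already set -> nothing to do
    return to_patch, bad


cur = {'BootMode': 'Uefi', 'SriovGlobalEnable': 'Disabled'}
req = {'BootMode': 'Uefi', 'SriovGlobalEnable': 'Enabled', 'NoSuchAttr': 1}
print(split_attrs(cur, req))
# ({'SriovGlobalEnable': 'Enabled'}, {'NoSuchAttr': 1})
```

An empty `to_patch` is what triggers the early `changed=False` return above.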

View File

@@ -21,8 +21,10 @@ options:
type: str
api_key:
description:
- Linode API key
- Linode API key.
- C(LINODE_API_KEY) env variable can be used instead.
type: str
required: yes
name:
description:
- Name to give the instance (alphanumeric, dashes, underscore).
@@ -46,6 +48,7 @@ options:
- List of dictionaries for creating additional disks that are added to the Linode configuration settings.
- Dictionary takes Size, Label, Type. Size is in MB.
type: list
elements: dict
alert_bwin_enabled:
description:
- Set status of bandwidth in alerts.
@@ -86,9 +89,18 @@ options:
description:
- Set threshold for average IO ops/sec over 2 hour period.
type: int
backupsenabled:
description:
- Deprecated parameter, it will be removed in community.general C(5.0.0).
- To enable backups pass values to either I(backupweeklyday) or I(backupwindow).
type: int
backupweeklyday:
description:
- Integer value for what day of the week to store weekly backups.
- Day of the week to take backups.
type: int
backupwindow:
description:
- The time window in which backups will be taken.
type: int
plan:
description:
@@ -153,7 +165,6 @@ author:
notes:
- Please note, linode-python does not have python 3 support.
- This module uses the now deprecated v3 of the Linode API.
- C(LINODE_API_KEY) env variable can be used instead.
- Please review U(https://www.linode.com/api/linode) for determining the required parameters.
'''
@@ -262,7 +273,6 @@ EXAMPLES = '''
delegate_to: localhost
'''
import os
import time
import traceback
@@ -274,7 +284,7 @@ except ImportError:
LINODE_IMP_ERR = traceback.format_exc()
HAS_LINODE = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.basic import AnsibleModule, missing_required_lib, env_fallback
def randompass():
@@ -358,7 +368,7 @@ def linodeServers(module, api, state, name,
if not servers:
for arg in (name, plan, distribution, datacenter):
if not arg:
module.fail_json(msg='%s is required for %s state' % (arg, state)) # @TODO use required_if instead
module.fail_json(msg='%s is required for %s state' % (arg, state))
# Create linode entity
new_server = True
@@ -383,7 +393,7 @@ def linodeServers(module, api, state, name,
try:
res = api.linode_ip_addprivate(LinodeID=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
if not disks:
for arg in (name, linode_id, distribution):
@@ -428,7 +438,7 @@ def linodeServers(module, api, state, name,
jobs.append(res['JobID'])
except Exception as e:
# TODO: destroy linode ?
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
if not configs:
for arg in (name, linode_id, distribution):
@@ -471,7 +481,7 @@ def linodeServers(module, api, state, name,
Disklist=disks_list, Label='%s config' % name)
configs = api.linode_config_list(LinodeId=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
# Start / Ensure servers are running
for server in servers:
@@ -517,10 +527,7 @@ def linodeServers(module, api, state, name,
instance['password'] = password
instances.append(instance)
elif state in ('stopped'):
if not linode_id:
module.fail_json(msg='linode_id is required for stopped state')
elif state in ('stopped',):
if not servers:
module.fail_json(msg='Server (lid: %s) not found' % (linode_id))
@@ -530,17 +537,14 @@ def linodeServers(module, api, state, name,
try:
res = api.linode_shutdown(LinodeId=linode_id)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
instance['status'] = 'Stopping'
changed = True
else:
instance['status'] = 'Stopped'
instances.append(instance)
elif state in ('restarted'):
if not linode_id:
module.fail_json(msg='linode_id is required for restarted state')
elif state in ('restarted',):
if not servers:
module.fail_json(msg='Server (lid: %s) not found' % (linode_id))
@@ -549,7 +553,7 @@ def linodeServers(module, api, state, name,
try:
res = api.linode_reboot(LinodeId=server['LINODEID'])
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
instance['status'] = 'Restarting'
changed = True
instances.append(instance)
@@ -560,7 +564,7 @@ def linodeServers(module, api, state, name,
try:
api.linode_delete(LinodeId=server['LINODEID'], skipChecks=True)
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
instance['status'] = 'Deleting'
changed = True
instances.append(instance)
@@ -577,7 +581,7 @@ def main():
argument_spec=dict(
state=dict(type='str', default='present',
choices=['absent', 'active', 'deleted', 'present', 'restarted', 'started', 'stopped']),
api_key=dict(type='str', no_log=True),
api_key=dict(type='str', no_log=True, required=True, fallback=(env_fallback, ['LINODE_API_KEY'])),
name=dict(type='str', required=True),
alert_bwin_enabled=dict(type='bool'),
alert_bwin_threshold=dict(type='int'),
@@ -589,12 +593,12 @@ def main():
alert_cpu_threshold=dict(type='int'),
alert_diskio_enabled=dict(type='bool'),
alert_diskio_threshold=dict(type='int'),
backupsenabled=dict(type='int'),
backupsenabled=dict(type='int', removed_in_version='5.0.0', removed_from_collection='community.general'),
backupweeklyday=dict(type='int'),
backupwindow=dict(type='int'),
displaygroup=dict(type='str', default=''),
plan=dict(type='int'),
additional_disks=dict(type='list'),
additional_disks=dict(type='list', elements='dict'),
distribution=dict(type='int'),
datacenter=dict(type='int'),
kernel_id=dict(type='int'),
@@ -608,6 +612,10 @@ def main():
wait_timeout=dict(type='int', default=300),
watchdog=dict(type='bool', default=True),
),
required_if=[
('state', 'restarted', ['linode_id']),
('state', 'stopped', ['linode_id']),
]
)
if not HAS_LINODE:
@@ -626,7 +634,6 @@ def main():
alert_cpu_threshold = module.params.get('alert_cpu_threshold')
alert_diskio_enabled = module.params.get('alert_diskio_enabled')
alert_diskio_threshold = module.params.get('alert_diskio_threshold')
backupsenabled = module.params.get('backupsenabled')
backupweeklyday = module.params.get('backupweeklyday')
backupwindow = module.params.get('backupwindow')
displaygroup = module.params.get('displaygroup')
@@ -642,10 +649,9 @@ def main():
ssh_pub_key = module.params.get('ssh_pub_key')
swap = module.params.get('swap')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
wait_timeout = module.params.get('wait_timeout')
watchdog = int(module.params.get('watchdog'))
kwargs = dict()
check_items = dict(
alert_bwin_enabled=alert_bwin_enabled,
alert_bwin_threshold=alert_bwin_threshold,
@@ -661,23 +667,14 @@ def main():
backupwindow=backupwindow,
)
for key, value in check_items.items():
if value is not None:
kwargs[key] = value
# Setup the api_key
if not api_key:
try:
api_key = os.environ['LINODE_API_KEY']
except KeyError as e:
module.fail_json(msg='Unable to load %s' % e.message)
kwargs = dict((k, v) for k, v in check_items.items() if v is not None)
# setup the auth
try:
api = linode_api.Api(api_key)
api.test_echo()
except Exception as e:
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'])
module.fail_json(msg='%s' % e.value[0]['ERRORMESSAGE'], exception=traceback.format_exc())
linodeServers(module, api, state, name,
displaygroup, plan,

View File

@@ -0,0 +1,348 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Lammert Hellinga (@Kogelvis) <lammert@hellinga.it>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: proxmox_nic
short_description: Management of a NIC of a Qemu(KVM) VM in a Proxmox VE cluster.
version_added: 3.1.0
description:
- Allows you to create/update/delete a NIC on Qemu(KVM) Virtual Machines in a Proxmox VE cluster.
author: "Lammert Hellinga (@Kogelvis) <lammert@hellinga.it>"
options:
bridge:
description:
- Add this interface to the specified bridge device. The Proxmox VE default bridge is called C(vmbr0).
type: str
firewall:
description:
- Whether this interface should be protected by the firewall.
type: bool
default: false
interface:
description:
- Name of the interface, should be C(net[n]) where C(1 ≤ n ≤ 31).
type: str
required: true
link_down:
description:
- Whether this interface should be disconnected (like pulling the plug).
type: bool
default: false
mac:
description:
- C(XX:XX:XX:XX:XX:XX) should be a unique MAC address. This is automatically generated if not specified.
- When not specified this module will keep the MAC address the same when changing an existing interface.
type: str
model:
description:
- The NIC emulator model.
type: str
choices: ['e1000', 'e1000-82540em', 'e1000-82544gc', 'e1000-82545em', 'i82551', 'i82557b', 'i82559er', 'ne2k_isa', 'ne2k_pci', 'pcnet',
'rtl8139', 'virtio', 'vmxnet3']
default: virtio
mtu:
description:
- Force MTU, for C(virtio) model only, setting will be ignored otherwise.
- Set to C(1) to use the bridge MTU.
- Value should be C(1 ≤ n ≤ 65520).
type: int
name:
description:
- Specifies the VM name. Only used on the configuration web interface.
- Required only for I(state=present).
type: str
queues:
description:
- Number of packet queues to be used on the device.
- Value should be C(0 ≤ n ≤ 16).
type: int
rate:
description:
- Rate limit in MBps (MegaBytes per second) as a floating point number.
type: float
state:
description:
- Indicates desired state of the NIC.
type: str
choices: ['present', 'absent']
default: present
tag:
description:
- VLAN tag to apply to packets on this interface.
- Value should be C(1 ≤ n ≤ 4094).
type: int
trunks:
description:
- List of VLAN trunks to pass through this interface.
type: list
elements: int
vmid:
description:
- Specifies the instance ID.
type: int
extends_documentation_fragment:
- community.general.proxmox.documentation
'''
EXAMPLES = '''
- name: Create NIC net0 targeting the vm by name
community.general.proxmox_nic:
api_user: root@pam
api_password: secret
api_host: proxmoxhost
name: my_vm
interface: net0
bridge: vmbr0
tag: 3
- name: Create NIC net0 targeting the vm by id
community.general.proxmox_nic:
api_user: root@pam
api_password: secret
api_host: proxmoxhost
vmid: 103
interface: net0
bridge: vmbr0
mac: "12:34:56:C0:FF:EE"
firewall: true
- name: Delete NIC net0 targeting the vm by name
community.general.proxmox_nic:
api_user: root@pam
api_password: secret
api_host: proxmoxhost
name: my_vm
interface: net0
state: absent
'''
RETURN = '''
vmid:
description: The VM vmid.
returned: success
type: int
sample: 115
msg:
description: A short message.
returned: always
type: str
sample: "Nic net0 unchanged on VM with vmid 103"
'''
try:
from proxmoxer import ProxmoxAPI
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
from ansible.module_utils.basic import AnsibleModule, env_fallback
from ansible_collections.community.general.plugins.module_utils.proxmox import proxmox_auth_argument_spec
def get_vmid(module, proxmox, name):
try:
vms = [vm['vmid'] for vm in proxmox.cluster.resources.get(type='vm') if vm.get('name') == name]
except Exception as e:
module.fail_json(msg='Error: %s occurred while retrieving VM with name = %s' % (e, name))
if not vms:
module.fail_json(msg='No VM found with name: %s' % name)
elif len(vms) > 1:
module.fail_json(msg='Multiple VMs found with name: %s, provide vmid instead' % name)
return vms[0]
def get_vm(proxmox, vmid):
return [vm for vm in proxmox.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid)]
def update_nic(module, proxmox, vmid, interface, model, **kwargs):
vm = get_vm(proxmox, vmid)
try:
vminfo = proxmox.nodes(vm[0]['node']).qemu(vmid).config.get()
except Exception as e:
module.fail_json(msg='Getting information for VM with vmid = %s failed with exception: %s' % (vmid, e))
if interface in vminfo:
# Convert the current config to a dictionary
config = vminfo[interface].split(',')
config.sort()
config_current = {}
for i in config:
kv = i.split('=')
try:
config_current[kv[0]] = kv[1]
except IndexError:
config_current[kv[0]] = ''
# determine the current model nic and mac-address
models = ['e1000', 'e1000-82540em', 'e1000-82544gc', 'e1000-82545em', 'i82551', 'i82557b',
'i82559er', 'ne2k_isa', 'ne2k_pci', 'pcnet', 'rtl8139', 'virtio', 'vmxnet3']
current_model = set(models) & set(config_current.keys())
current_model = current_model.pop()
current_mac = config_current[current_model]
# build nic config string
config_provided = "{0}={1}".format(model, current_mac)
else:
config_provided = model
if kwargs['mac']:
config_provided = "{0}={1}".format(model, kwargs['mac'])
if kwargs['bridge']:
config_provided += ",bridge={0}".format(kwargs['bridge'])
if kwargs['firewall']:
config_provided += ",firewall=1"
if kwargs['link_down']:
config_provided += ',link_down=1'
if kwargs['mtu']:
config_provided += ",mtu={0}".format(kwargs['mtu'])
if model != 'virtio':
module.warn(
'Ignoring MTU for nic {0} on VM with vmid {1}, '
'model should be set to \'virtio\'.'.format(interface, vmid))
if kwargs['queues']:
config_provided += ",queues={0}".format(kwargs['queues'])
if kwargs['rate']:
config_provided += ",rate={0}".format(kwargs['rate'])
if kwargs['tag']:
config_provided += ",tag={0}".format(kwargs['tag'])
if kwargs['trunks']:
config_provided += ",trunks={0}".format(';'.join(str(x) for x in kwargs['trunks']))
net = {interface: config_provided}
vm = get_vm(proxmox, vmid)
if ((interface not in vminfo) or (vminfo[interface] != config_provided)):
if not module.check_mode:
proxmox.nodes(vm[0]['node']).qemu(vmid).config.set(**net)
return True
return False
def delete_nic(module, proxmox, vmid, interface):
vm = get_vm(proxmox, vmid)
vminfo = proxmox.nodes(vm[0]['node']).qemu(vmid).config.get()
if interface in vminfo:
if not module.check_mode:
proxmox.nodes(vm[0]['node']).qemu(vmid).config.set(vmid=vmid, delete=interface)
return True
return False
def main():
module_args = proxmox_auth_argument_spec()
nic_args = dict(
bridge=dict(type='str'),
firewall=dict(type='bool', default=False),
interface=dict(type='str', required=True),
link_down=dict(type='bool', default=False),
mac=dict(type='str'),
model=dict(choices=['e1000', 'e1000-82540em', 'e1000-82544gc', 'e1000-82545em',
'i82551', 'i82557b', 'i82559er', 'ne2k_isa', 'ne2k_pci', 'pcnet',
'rtl8139', 'virtio', 'vmxnet3'], default='virtio'),
mtu=dict(type='int'),
name=dict(type='str'),
queues=dict(type='int'),
rate=dict(type='float'),
state=dict(default='present', choices=['present', 'absent']),
tag=dict(type='int'),
trunks=dict(type='list', elements='int'),
vmid=dict(type='int'),
)
module_args.update(nic_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[('api_token_id', 'api_token_secret')],
required_one_of=[('name', 'vmid'), ('api_password', 'api_token_id')],
supports_check_mode=True,
)
if not HAS_PROXMOXER:
module.fail_json(msg='proxmoxer required for this module')
api_host = module.params['api_host']
api_password = module.params['api_password']
api_token_id = module.params['api_token_id']
api_token_secret = module.params['api_token_secret']
api_user = module.params['api_user']
interface = module.params['interface']
model = module.params['model']
name = module.params['name']
state = module.params['state']
validate_certs = module.params['validate_certs']
vmid = module.params['vmid']
auth_args = {'user': api_user}
if not (api_token_id and api_token_secret):
auth_args['password'] = api_password
else:
auth_args['token_name'] = api_token_id
auth_args['token_value'] = api_token_secret
try:
proxmox = ProxmoxAPI(api_host, verify_ssl=validate_certs, **auth_args)
except Exception as e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
# If vmid is not defined then retrieve its value from the vm name.
if not vmid:
vmid = get_vmid(module, proxmox, name)
# Ensure VM id exists
if not get_vm(proxmox, vmid):
module.fail_json(vmid=vmid, msg='VM with vmid = %s does not exist in cluster' % vmid)
if state == 'present':
try:
if update_nic(module, proxmox, vmid, interface, model,
bridge=module.params['bridge'],
firewall=module.params['firewall'],
link_down=module.params['link_down'],
mac=module.params['mac'],
mtu=module.params['mtu'],
queues=module.params['queues'],
rate=module.params['rate'],
tag=module.params['tag'],
trunks=module.params['trunks']):
module.exit_json(changed=True, vmid=vmid, msg="Nic {0} updated on VM with vmid {1}".format(interface, vmid))
else:
module.exit_json(vmid=vmid, msg="Nic {0} unchanged on VM with vmid {1}".format(interface, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to change nic {0} on VM with vmid {1}: '.format(interface, vmid) + str(e))
elif state == 'absent':
try:
if delete_nic(module, proxmox, vmid, interface):
module.exit_json(changed=True, vmid=vmid, msg="Nic {0} deleted on VM with vmid {1}".format(interface, vmid))
else:
module.exit_json(vmid=vmid, msg="Nic {0} does not exist on VM with vmid {1}".format(interface, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to delete nic {0} on VM with vmid {1}: '.format(interface, vmid) + str(e))
if __name__ == '__main__':
main()
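The NIC config handling above serializes options into Proxmox's comma-separated `key=value` string format, keyed on the model name for the MAC address. A minimal standalone sketch of the same round-trip (the helper names here are hypothetical, not part of the module; the module appends options in a fixed sequence, while this sketch sorts them for determinism):

```python
def parse_net_config(value):
    # Split "virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1" into a dict,
    # mirroring how update_nic() reads the current interface config.
    config = {}
    for item in value.split(','):
        key, _, val = item.partition('=')
        config[key] = val
    return config

def build_net_config(model, mac, **options):
    # Build the provided-config string the way update_nic() does:
    # "<model>=<mac>" followed by comma-separated key=value options.
    parts = ["{0}={1}".format(model, mac)]
    for key, val in sorted(options.items()):
        parts.append("{0}={1}".format(key, val))
    return ','.join(parts)

current = parse_net_config("virtio=12:34:56:C0:FF:EE,bridge=vmbr0,firewall=1")
print(current['virtio'])  # 12:34:56:C0:FF:EE — the MAC is keyed by the model name
print(build_net_config('virtio', current['virtio'], bridge='vmbr0', tag=3))
```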

View File

@@ -23,26 +23,26 @@ options:
credentials_path:
description:
- (Path) Optional parameter that allows to set a non-default credentials path.
- Optional parameter that allows to set a non-default credentials path.
default: ~/.spotinst/credentials
type: path
account_id:
description:
- (String) Optional parameter that allows to set an account-id inside the module configuration
By default this is retrieved from the credentials path
- Optional parameter that allows to set an account-id inside the module configuration.
By default this is retrieved from the credentials path.
type: str
availability_vs_cost:
description:
- (String) The strategy orientation.
- The strategy orientation.
- "The choices available are: C(availabilityOriented), C(costOriented), C(balanced)."
required: true
type: str
availability_zones:
description:
- (List of Objects) a list of hash/dictionaries of Availability Zones that are configured in the elastigroup;
- A list of hash/dictionaries of Availability Zones that are configured in the elastigroup;
'[{"key":"value", "key":"value"}]';
keys allowed are
name (String),
@@ -50,10 +50,11 @@ options:
placement_group_name (String),
required: true
type: list
elements: dict
block_device_mappings:
description:
- (List of Objects) a list of hash/dictionaries of Block Device Mappings for elastigroup instances;
- A list of hash/dictionaries of Block Device Mappings for elastigroup instances;
You can specify virtual devices and EBS volumes.;
'[{"key":"value", "key":"value"}]';
keys allowed are
@@ -68,10 +69,11 @@ options:
volume_type(String),
volume_size(Integer))
type: list
elements: dict
chef:
description:
- (Object) The Chef integration configuration.;
- The Chef integration configuration.;
Expects the following keys - chef_server (String),
organization (String),
user (String),
@@ -81,92 +83,94 @@ options:
draining_timeout:
description:
- (Integer) Time for instance to be drained from incoming requests and deregistered from ELB before termination.
- Time for instance to be drained from incoming requests and deregistered from ELB before termination.
type: int
ebs_optimized:
description:
- (Boolean) Enable EBS optimization for supported instances which are not enabled by default.;
- Enable EBS optimization for supported instances which are not enabled by default.;
Note - additional charges will be applied.
type: bool
ebs_volume_pool:
description:
- (List of Objects) a list of hash/dictionaries of EBS devices to reattach to the elastigroup when available;
- A list of hash/dictionaries of EBS devices to reattach to the elastigroup when available;
'[{"key":"value", "key":"value"}]';
keys allowed are -
volume_ids (List of Strings),
device_name (String)
type: list
elements: dict
ecs:
description:
- (Object) The ECS integration configuration.;
- The ECS integration configuration.;
Expects the following key -
cluster_name (String)
type: dict
elastic_ips:
description:
- (List of Strings) List of ElasticIps Allocation Ids (Example C(eipalloc-9d4e16f8)) to associate to the group instances
- List of ElasticIps Allocation Ids (Example C(eipalloc-9d4e16f8)) to associate to the group instances
type: list
elements: str
fallback_to_od:
description:
- (Boolean) In case of no spots available, Elastigroup will launch an On-demand instance instead
- In case of no spots available, Elastigroup will launch an On-demand instance instead
type: bool
health_check_grace_period:
description:
- (Integer) The amount of time, in seconds, after the instance has launched to start and check its health.
- The amount of time, in seconds, after the instance has launched to start and check its health.
- If not specified, it defaults to C(300).
type: int
health_check_unhealthy_duration_before_replacement:
description:
- (Integer) Minimal mount of time instance should be unhealthy for us to consider it unhealthy.
- Minimal amount of time instance should be unhealthy for us to consider it unhealthy.
type: int
health_check_type:
description:
- (String) The service to use for the health check.
- The service to use for the health check.
- "The choices available are: C(ELB), C(HCS), C(TARGET_GROUP), C(MLB), C(EC2)."
type: str
iam_role_name:
description:
- (String) The instance profile iamRole name
- The instance profile iamRole name
- Only use iam_role_arn, or iam_role_name
type: str
iam_role_arn:
description:
- (String) The instance profile iamRole arn
- The instance profile iamRole arn
- Only use iam_role_arn, or iam_role_name
type: str
id:
description:
- (String) The group id if it already exists and you want to update, or delete it.
- The group id if it already exists and you want to update, or delete it.
This will not work unless the uniqueness_by field is set to id.
When this is set, and the uniqueness_by field is set, the group will either be updated or deleted, but not created.
type: str
image_id:
description:
- (String) The image Id used to launch the instance.;
- The image Id used to launch the instance.;
In case of conflict between Instance type and image type, an error will be returned
required: true
type: str
key_pair:
description:
- (String) Specify a Key Pair to attach to the instances
- Specify a Key Pair to attach to the instances
type: str
kubernetes:
description:
- (Object) The Kubernetes integration configuration.
- The Kubernetes integration configuration.
Expects the following keys -
api_server (String),
token (String)
@@ -174,47 +178,48 @@ options:
lifetime_period:
description:
- (Integer) lifetime period
- Lifetime period
type: int
load_balancers:
description:
- (List of Strings) List of classic ELB names
- List of classic ELB names
type: list
elements: str
max_size:
description:
- (Integer) The upper limit number of instances that you can scale up to
- The upper limit number of instances that you can scale up to
required: true
type: int
mesosphere:
description:
- (Object) The Mesosphere integration configuration.
- The Mesosphere integration configuration.
Expects the following key -
api_server (String)
type: dict
min_size:
description:
- (Integer) The lower limit number of instances that you can scale down to
- The lower limit number of instances that you can scale down to
required: true
type: int
monitoring:
description:
- (String) Describes whether instance Enhanced Monitoring is enabled
- Describes whether instance Enhanced Monitoring is enabled
type: str
name:
description:
- (String) Unique name for elastigroup to be created, updated or deleted
- Unique name for elastigroup to be created, updated or deleted
required: true
type: str
network_interfaces:
description:
- (List of Objects) a list of hash/dictionaries of network interfaces to add to the elastigroup;
- A list of hash/dictionaries of network interfaces to add to the elastigroup;
'[{"key":"value", "key":"value"}]';
keys allowed are -
description (String),
@@ -229,29 +234,30 @@ options:
associate_ipv6_address (Boolean),
private_ip_addresses (List of Objects, Keys are privateIpAddress (String, required) and primary (Boolean))
type: list
elements: dict
on_demand_count:
description:
- (Integer) Required if risk is not set
- Required if risk is not set
- Number of on demand instances to launch. All other instances will be spot instances.;
Either set this parameter or the risk parameter
type: int
on_demand_instance_type:
description:
- (String) On-demand instance type that will be provisioned
- On-demand instance type that will be provisioned
type: str
opsworks:
description:
- (Object) The elastigroup OpsWorks integration configration.;
- The elastigroup OpsWorks integration configuration.;
Expects the following key -
layer_id (String)
type: dict
persistence:
description:
- (Object) The Stateful elastigroup configration.;
- The Stateful elastigroup configuration.;
Accepts the following keys -
should_persist_root_device (Boolean),
should_persist_block_devices (Boolean),
@@ -260,14 +266,14 @@ options:
product:
description:
- (String) Operation system type.
- Operating system type.
- "Available choices are: C(Linux/UNIX), C(SUSE Linux), C(Windows), C(Linux/UNIX (Amazon VPC)), C(SUSE Linux (Amazon VPC))."
required: true
type: str
rancher:
description:
- (Object) The Rancher integration configuration.;
- The Rancher integration configuration.;
Expects the following keys -
version (String),
access_key (String),
@@ -277,7 +283,7 @@ options:
right_scale:
description:
- (Object) The Rightscale integration configuration.;
- The Rightscale integration configuration.;
Expects the following keys -
account_id (String),
refresh_token (String)
@@ -285,12 +291,12 @@ options:
risk:
description:
- (Integer) required if on demand is not set. The percentage of Spot instances to launch (0 - 100).
- Required if on demand is not set. The percentage of Spot instances to launch (0 - 100).
type: int
roll_config:
description:
- (Object) Roll configuration.;
- Roll configuration.;
If you would like the group to roll after updating, please use this feature.
Accepts the following keys -
batch_size_percentage(Integer, Required),
@@ -300,7 +306,7 @@ options:
scheduled_tasks:
description:
- (List of Objects) a list of hash/dictionaries of scheduled tasks to configure in the elastigroup;
- A list of hash/dictionaries of scheduled tasks to configure in the elastigroup;
'[{"key":"value", "key":"value"}]';
keys allowed are -
adjustment (Integer),
@@ -315,84 +321,90 @@ options:
task_type (String, required),
is_enabled (Boolean)
type: list
elements: dict
security_group_ids:
description:
- (List of Strings) One or more security group IDs. ;
- One or more security group IDs. ;
In case of update it will override the existing Security Group with the new given array
required: true
type: list
elements: str
shutdown_script:
description:
- (String) The Base64-encoded shutdown script that executes prior to instance termination.
- The Base64-encoded shutdown script that executes prior to instance termination.
Encode before setting.
type: str
signals:
description:
- (List of Objects) a list of hash/dictionaries of signals to configure in the elastigroup;
- A list of hash/dictionaries of signals to configure in the elastigroup;
keys allowed are -
name (String, required),
timeout (Integer)
type: list
elements: dict
spin_up_time:
description:
- (Integer) spin up time, in seconds, for the instance
- Spin up time, in seconds, for the instance
type: int
spot_instance_types:
description:
- (List of Strings) Spot instance type that will be provisioned.
- Spot instance types that will be provisioned.
required: true
type: list
elements: str
state:
choices:
- present
- absent
description:
- (String) create or delete the elastigroup
- Create or delete the elastigroup
default: present
type: str
tags:
description:
- (List of tagKey:tagValue pairs) a list of tags to configure in the elastigroup. Please specify list of keys and values (key colon value);
- A list of tags to configure in the elastigroup. Please specify list of keys and values (key colon value);
type: list
elements: dict
target:
description:
- (Integer) The number of instances to launch
- The number of instances to launch
required: true
type: int
target_group_arns:
description:
- (List of Strings) List of target group arns instances should be registered to
- List of target group arns instances should be registered to
type: list
elements: str
tenancy:
description:
- (String) dedicated vs shared tenancy.
- Dedicated vs shared tenancy.
- "The available choices are: C(default), C(dedicated)."
type: str
terminate_at_end_of_billing_hour:
description:
- (Boolean) terminate at the end of billing hour
- Terminate at the end of billing hour
type: bool
unit:
description:
- (String) The capacity unit to launch instances by.
- The capacity unit to launch instances by.
- "The available choices are: C(instance), C(weight)."
type: str
up_scaling_policies:
description:
- (List of Objects) a list of hash/dictionaries of scaling policies to configure in the elastigroup;
- A list of hash/dictionaries of scaling policies to configure in the elastigroup;
'[{"key":"value", "key":"value"}]';
keys allowed are -
policy_name (String, required),
@@ -413,10 +425,11 @@ options:
maximum (String),
minimum (String)
type: list
elements: dict
down_scaling_policies:
description:
- (List of Objects) a list of hash/dictionaries of scaling policies to configure in the elastigroup;
- A list of hash/dictionaries of scaling policies to configure in the elastigroup;
'[{"key":"value", "key":"value"}]';
keys allowed are -
policy_name (String, required),
@@ -437,10 +450,11 @@ options:
maximum (String),
minimum (String)
type: list
elements: dict
target_tracking_policies:
description:
- (List of Objects) a list of hash/dictionaries of target tracking policies to configure in the elastigroup;
- A list of hash/dictionaries of target tracking policies to configure in the elastigroup;
'[{"key":"value", "key":"value"}]';
keys allowed are -
policy_name (String, required),
@@ -452,37 +466,38 @@ options:
cooldown (String, required),
target (String, required)
type: list
elements: dict
uniqueness_by:
choices:
- id
- name
description:
- (String) If your group names are not unique, you may use this feature to update or delete a specific group.
- If your group names are not unique, you may use this feature to update or delete a specific group.
Whenever this property is set, you must set a group_id in order to update or delete a group, otherwise a group will be created.
default: name
type: str
user_data:
description:
- (String) Base64-encoded MIME user data. Encode before setting the value.
- Base64-encoded MIME user data. Encode before setting the value.
type: str
utilize_reserved_instances:
description:
- (Boolean) In case of any available Reserved Instances,
- In case of any available Reserved Instances,
Elastigroup will utilize your reservations before purchasing Spot instances.
type: bool
wait_for_instances:
description:
- (Boolean) Whether or not the elastigroup creation / update actions should wait for the instances to spin
- Whether or not the elastigroup creation / update actions should wait for the instances to spin up
type: bool
default: false
wait_timeout:
description:
- (Integer) How long the module should wait for instances before failing the action.;
- How long the module should wait for instances before failing the action.;
Only works if wait_for_instances is True.
type: int
@@ -1428,18 +1443,18 @@ def main():
fields = dict(
account_id=dict(type='str'),
availability_vs_cost=dict(type='str', required=True),
availability_zones=dict(type='list', required=True),
block_device_mappings=dict(type='list'),
availability_zones=dict(type='list', elements='dict', required=True),
block_device_mappings=dict(type='list', elements='dict'),
chef=dict(type='dict'),
credentials_path=dict(type='path', default="~/.spotinst/credentials"),
do_not_update=dict(default=[], type='list'),
down_scaling_policies=dict(type='list'),
down_scaling_policies=dict(type='list', elements='dict'),
draining_timeout=dict(type='int'),
ebs_optimized=dict(type='bool'),
ebs_volume_pool=dict(type='list'),
ebs_volume_pool=dict(type='list', elements='dict'),
ecs=dict(type='dict'),
elastic_beanstalk=dict(type='dict'),
elastic_ips=dict(type='list'),
elastic_ips=dict(type='list', elements='str'),
fallback_to_od=dict(type='bool'),
id=dict(type='str'),
health_check_grace_period=dict(type='int'),
@@ -1451,7 +1466,7 @@ def main():
key_pair=dict(type='str', no_log=False),
kubernetes=dict(type='dict'),
lifetime_period=dict(type='int'),
load_balancers=dict(type='list'),
load_balancers=dict(type='list', elements='str'),
max_size=dict(type='int', required=True),
mesosphere=dict(type='dict'),
min_size=dict(type='int', required=True),
@@ -1459,7 +1474,7 @@ def main():
multai_load_balancers=dict(type='list'),
multai_token=dict(type='str', no_log=True),
name=dict(type='str', required=True),
network_interfaces=dict(type='list'),
network_interfaces=dict(type='list', elements='dict'),
on_demand_count=dict(type='int'),
on_demand_instance_type=dict(type='str'),
opsworks=dict(type='dict'),
@@ -1469,16 +1484,16 @@ def main():
right_scale=dict(type='dict'),
risk=dict(type='int'),
roll_config=dict(type='dict'),
scheduled_tasks=dict(type='list'),
security_group_ids=dict(type='list', required=True),
scheduled_tasks=dict(type='list', elements='dict'),
security_group_ids=dict(type='list', elements='str', required=True),
shutdown_script=dict(type='str'),
signals=dict(type='list'),
signals=dict(type='list', elements='dict'),
spin_up_time=dict(type='int'),
spot_instance_types=dict(type='list', required=True),
spot_instance_types=dict(type='list', elements='str', required=True),
state=dict(default='present', choices=['present', 'absent']),
tags=dict(type='list'),
tags=dict(type='list', elements='dict'),
target=dict(type='int', required=True),
target_group_arns=dict(type='list'),
target_group_arns=dict(type='list', elements='str'),
tenancy=dict(type='str'),
terminate_at_end_of_billing_hour=dict(type='bool'),
token=dict(type='str', no_log=True),
@@ -1486,8 +1501,8 @@ def main():
user_data=dict(type='str'),
utilize_reserved_instances=dict(type='bool'),
uniqueness_by=dict(default='name', choices=['name', 'id']),
up_scaling_policies=dict(type='list'),
target_tracking_policies=dict(type='list'),
up_scaling_policies=dict(type='list', elements='dict'),
target_tracking_policies=dict(type='list', elements='dict'),
wait_for_instances=dict(type='bool', default=False),
wait_timeout=dict(type='int')
)

View File

@@ -189,7 +189,24 @@ from collections import defaultdict
from ansible.module_utils.basic import to_text, AnsibleModule
RULE_SCOPES = ["agent", "event", "key", "keyring", "node", "operator", "query", "service", "session"]
RULE_SCOPES = [
"agent",
"agent_prefix",
"event",
"event_prefix",
"key",
"key_prefix",
"keyring",
"node",
"node_prefix",
"operator",
"query",
"query_prefix",
"service",
"service_prefix",
"session",
"session_prefix",
]
MANAGEMENT_PARAMETER_NAME = "mgmt_token"
HOST_PARAMETER_NAME = "host"
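The expanded RULE_SCOPES list above adds a `_prefix` variant for every rule type that supports prefix matching. As a quick sanity check (a standalone sketch, not part of the module), the full list can be reconstructed from the prefix-capable base scopes plus the two scopes that have no prefix form:

```python
# Scopes that accept both exact-match and prefix-match ACL rules,
# plus the two that have no prefix variant (keyring, operator).
PREFIXABLE = ["agent", "event", "key", "node", "query", "service", "session"]
NON_PREFIXABLE = ["keyring", "operator"]

rule_scopes = sorted(PREFIXABLE
                     + [s + "_prefix" for s in PREFIXABLE]
                     + NON_PREFIXABLE)
print(rule_scopes)  # 16 scopes, matching the module's RULE_SCOPES list
```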

View File

@@ -29,17 +29,24 @@ options:
- Name of the retention policy.
required: true
type: str
state:
description:
- State of the retention policy.
choices: [ absent, present ]
default: present
type: str
version_added: 3.1.0
duration:
description:
- Determines how long InfluxDB should keep the data. If specified, it
should be C(INF) or at least one hour. If not specified, C(INF) is
assumed. Supports complex duration expressions with multiple units.
required: true
- Required only if I(state) is set to C(present).
type: str
replication:
description:
- Determines how many independent copies of each point are stored in the cluster.
required: true
- Required only if I(state) is set to C(present).
type: int
default:
description:
@@ -63,53 +70,65 @@ EXAMPLES = r'''
# Example influxdb_retention_policy command from Ansible Playbooks
- name: Create 1 hour retention policy
community.general.influxdb_retention_policy:
hostname: "{{influxdb_ip_address}}"
database_name: "{{influxdb_database_name}}"
hostname: "{{ influxdb_ip_address }}"
database_name: "{{ influxdb_database_name }}"
policy_name: test
duration: 1h
replication: 1
ssl: yes
validate_certs: yes
state: present
- name: Create 1 day retention policy with 1 hour shard group duration
community.general.influxdb_retention_policy:
hostname: "{{influxdb_ip_address}}"
database_name: "{{influxdb_database_name}}"
hostname: "{{ influxdb_ip_address }}"
database_name: "{{ influxdb_database_name }}"
policy_name: test
duration: 1d
replication: 1
shard_group_duration: 1h
state: present
- name: Create 1 week retention policy with 1 day shard group duration
community.general.influxdb_retention_policy:
hostname: "{{influxdb_ip_address}}"
database_name: "{{influxdb_database_name}}"
hostname: "{{ influxdb_ip_address }}"
database_name: "{{ influxdb_database_name }}"
policy_name: test
duration: 1w
replication: 1
shard_group_duration: 1d
state: present
- name: Create infinite retention policy with 1 week of shard group duration
community.general.influxdb_retention_policy:
hostname: "{{influxdb_ip_address}}"
database_name: "{{influxdb_database_name}}"
hostname: "{{ influxdb_ip_address }}"
database_name: "{{ influxdb_database_name }}"
policy_name: test
duration: INF
replication: 1
ssl: no
validate_certs: no
shard_group_duration: 1w
state: present
- name: Create retention policy with complex durations
community.general.influxdb_retention_policy:
hostname: "{{influxdb_ip_address}}"
database_name: "{{influxdb_database_name}}"
hostname: "{{ influxdb_ip_address }}"
database_name: "{{ influxdb_database_name }}"
policy_name: test
duration: 5d1h30m
replication: 1
ssl: no
validate_certs: no
shard_group_duration: 1d10h30m
state: present
- name: Drop retention policy
community.general.influxdb_retention_policy:
hostname: "{{ influxdb_ip_address }}"
database_name: "{{ influxdb_database_name }}"
policy_name: test
state: absent
'''
RETURN = r'''
@@ -134,6 +153,21 @@ VALID_DURATION_REGEX = re.compile(r'^(INF|(\d+(ns|u|µ|ms|s|m|h|d|w)))+$')
DURATION_REGEX = re.compile(r'(\d+)(ns|u|µ|ms|s|m|h|d|w)')
EXTENDED_DURATION_REGEX = re.compile(r'(?:(\d+)(ns|u|µ|ms|m|h|d|w)|(\d+(?:\.\d+)?)(s))')
DURATION_UNIT_NANOSECS = {
'ns': 1,
'u': 1000,
'µ': 1000,
'ms': 1000 * 1000,
's': 1000 * 1000 * 1000,
'm': 1000 * 1000 * 1000 * 60,
'h': 1000 * 1000 * 1000 * 60 * 60,
'd': 1000 * 1000 * 1000 * 60 * 60 * 24,
'w': 1000 * 1000 * 1000 * 60 * 60 * 24 * 7,
}
MINIMUM_VALID_DURATION = 1 * DURATION_UNIT_NANOSECS['h']
MINIMUM_VALID_SHARD_GROUP_DURATION = 1 * DURATION_UNIT_NANOSECS['h']
def check_duration_literal(value):
return VALID_DURATION_REGEX.search(value) is not None
@@ -148,28 +182,9 @@ def parse_duration_literal(value, extended=False):
lookup = (EXTENDED_DURATION_REGEX if extended else DURATION_REGEX).findall(value)
for duration_literal in lookup:
if extended and duration_literal[3] == 's':
duration_val = float(duration_literal[2])
duration += duration_val * 1000 * 1000 * 1000
else:
duration_val = int(duration_literal[0])
if duration_literal[1] == 'ns':
duration += duration_val
elif duration_literal[1] == 'u' or duration_literal[1] == 'µ':
duration += duration_val * 1000
elif duration_literal[1] == 'ms':
duration += duration_val * 1000 * 1000
elif duration_literal[1] == 's':
duration += duration_val * 1000 * 1000 * 1000
elif duration_literal[1] == 'm':
duration += duration_val * 1000 * 1000 * 1000 * 60
elif duration_literal[1] == 'h':
duration += duration_val * 1000 * 1000 * 1000 * 60 * 60
elif duration_literal[1] == 'd':
duration += duration_val * 1000 * 1000 * 1000 * 60 * 60 * 24
elif duration_literal[1] == 'w':
duration += duration_val * 1000 * 1000 * 1000 * 60 * 60 * 24 * 7
filtered_literal = list(filter(None, duration_literal))
duration_val = float(filtered_literal[0])
duration += duration_val * DURATION_UNIT_NANOSECS[filtered_literal[1]]
return duration
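The refactor above replaces the per-unit if/elif chain with a single lookup into DURATION_UNIT_NANOSECS. A self-contained sketch of the same table-driven approach, using the module's own regex and unit table (simplified to the non-extended path, so integer counts only):

```python
import re

# Matches each <number><unit> pair in an InfluxDB duration literal.
DURATION_REGEX = re.compile(r'(\d+)(ns|u|µ|ms|s|m|h|d|w)')

# Nanoseconds per unit — one table replaces the old if/elif chain.
DURATION_UNIT_NANOSECS = {
    'ns': 1, 'u': 1000, 'µ': 1000, 'ms': 10**6, 's': 10**9,
    'm': 60 * 10**9, 'h': 3600 * 10**9,
    'd': 24 * 3600 * 10**9, 'w': 7 * 24 * 3600 * 10**9,
}

def parse_duration_literal(value):
    # Sum every <number><unit> pair, converting each to nanoseconds
    # via the lookup table.
    duration = 0
    for number, unit in DURATION_REGEX.findall(value):
        duration += int(number) * DURATION_UNIT_NANOSECS[unit]
    return duration

print(parse_duration_literal('1h30m'))  # 5400000000000 ns
```

This also makes the minimum-duration checks readable: `MINIMUM_VALID_DURATION` is simply `1 * DURATION_UNIT_NANOSECS['h']` rather than the magic number 3600000000000.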
@@ -208,7 +223,7 @@ def create_retention_policy(module, client):
module.fail_json(msg="Failed to parse value of duration")
influxdb_duration_format = parse_duration_literal(duration)
if influxdb_duration_format != 0 and influxdb_duration_format < 3600000000000:
if influxdb_duration_format != 0 and influxdb_duration_format < MINIMUM_VALID_DURATION:
module.fail_json(msg="duration value must be at least 1h")
if shard_group_duration is not None:
@@ -216,7 +231,7 @@ def create_retention_policy(module, client):
module.fail_json(msg="Failed to parse value of shard_group_duration")
influxdb_shard_group_duration_format = parse_duration_literal(shard_group_duration)
if influxdb_shard_group_duration_format < 3600000000000:
if influxdb_shard_group_duration_format < MINIMUM_VALID_SHARD_GROUP_DURATION:
module.fail_json(msg="shard_group_duration value must be finite and at least 1h")
if not module.check_mode:
@@ -245,7 +260,7 @@ def alter_retention_policy(module, client, retention_policy):
module.fail_json(msg="Failed to parse value of duration")
influxdb_duration_format = parse_duration_literal(duration)
if influxdb_duration_format != 0 and influxdb_duration_format < 3600000000000:
if influxdb_duration_format != 0 and influxdb_duration_format < MINIMUM_VALID_DURATION:
module.fail_json(msg="duration value must be at least 1h")
if shard_group_duration is None:
@@ -255,7 +270,7 @@ def alter_retention_policy(module, client, retention_policy):
module.fail_json(msg="Failed to parse value of shard_group_duration")
influxdb_shard_group_duration_format = parse_duration_literal(shard_group_duration)
if influxdb_shard_group_duration_format < 3600000000000:
if influxdb_shard_group_duration_format < MINIMUM_VALID_SHARD_GROUP_DURATION:
module.fail_json(msg="shard_group_duration value must be finite and at least 1h")
if (retention_policy['duration'] != influxdb_duration_format or
@@ -272,30 +287,55 @@ def alter_retention_policy(module, client, retention_policy):
module.exit_json(changed=changed)
def drop_retention_policy(module, client):
database_name = module.params['database_name']
policy_name = module.params['policy_name']
if not module.check_mode:
try:
client.drop_retention_policy(policy_name, database_name)
except exceptions.InfluxDBClientError as e:
module.fail_json(msg=e.content)
module.exit_json(changed=True)
def main():
argument_spec = InfluxDb.influxdb_argument_spec()
argument_spec.update(
state=dict(default='present', type='str', choices=['present', 'absent']),
database_name=dict(required=True, type='str'),
policy_name=dict(required=True, type='str'),
duration=dict(required=True, type='str'),
replication=dict(required=True, type='int'),
duration=dict(type='str'),
replication=dict(type='int'),
default=dict(default=False, type='bool'),
shard_group_duration=dict(required=False, type='str'),
shard_group_duration=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True
supports_check_mode=True,
required_if=(
('state', 'present', ['duration', 'replication']),
),
)
state = module.params['state']
influxdb = InfluxDb(module)
client = influxdb.connect_to_influxdb()
retention_policy = find_retention_policy(module, client)
if retention_policy:
alter_retention_policy(module, client, retention_policy)
else:
create_retention_policy(module, client)
if state == 'present':
if retention_policy:
alter_retention_policy(module, client, retention_policy)
else:
create_retention_policy(module, client)
if state == 'absent':
if retention_policy:
drop_retention_policy(module, client)
else:
module.exit_json(changed=False)
if __name__ == '__main__':

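The retention-policy checks above compare durations in nanoseconds against a one-hour minimum. A minimal sketch of that validation, using a hypothetical simplified parser (`parse_simple_duration` stands in for the module's actual `parse_duration_literal` helper):

```python
# Minimal sketch: convert an InfluxDB duration literal to nanoseconds and
# validate it against the one-hour minimum the module enforces.
# parse_simple_duration is a hypothetical, simplified stand-in for the
# module's parse_duration_literal helper.
import re

MINIMUM_VALID_DURATION = 3600000000000  # 1h in nanoseconds

UNITS_NS = {
    'w': 7 * 24 * 3600 * 10**9,
    'd': 24 * 3600 * 10**9,
    'h': 3600 * 10**9,
    'm': 60 * 10**9,
    's': 10**9,
}

def parse_simple_duration(literal):
    """Parse literals such as '1h', '30m' or 'INF' into nanoseconds."""
    if literal in ('INF', '0'):
        return 0
    match = re.match(r'^(\d+)([wdhms])$', literal)
    if not match:
        raise ValueError("Failed to parse value of duration")
    value, unit = match.groups()
    return int(value) * UNITS_NS[unit]

def duration_is_valid(literal):
    ns = parse_simple_duration(literal)
    # 0 means "infinite retention" and is always allowed.
    return ns == 0 or ns >= MINIMUM_VALID_DURATION

print(duration_is_valid('1h'))   # True
print(duration_is_valid('30m'))  # False
print(duration_is_valid('INF'))  # True
```

This mirrors why the magic number `3600000000000` was replaced by the named constants `MINIMUM_VALID_DURATION` and `MINIMUM_VALID_SHARD_GROUP_DURATION` in the diff.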

@@ -100,6 +100,8 @@ RETURN = r'''
#only defaults
'''
import json
from ansible.module_utils.urls import ConnectionError
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
@@ -115,7 +117,7 @@ def find_user(module, client, user_name):
if user['user'] == user_name:
user_result = user
break
except (ConnectionError, influx.exceptions.InfluxDBClientError) as e:
except ConnectionError as e:
module.fail_json(msg=to_native(e))
return user_result
@@ -166,16 +168,16 @@ def set_user_grants(module, client, user_name, grants):
try:
current_grants = client.get_list_privileges(user_name)
parsed_grants = []
# Fix privileges wording
for i, v in enumerate(current_grants):
if v['privilege'] == 'ALL PRIVILEGES':
v['privilege'] = 'ALL'
current_grants[i] = v
elif v['privilege'] == 'NO PRIVILEGES':
del(current_grants[i])
if v['privilege'] != 'NO PRIVILEGES':
if v['privilege'] == 'ALL PRIVILEGES':
v['privilege'] = 'ALL'
parsed_grants.append(v)
# check if the current grants are included in the desired ones
for current_grant in current_grants:
for current_grant in parsed_grants:
if current_grant not in grants:
if not module.check_mode:
client.revoke_privilege(current_grant['privilege'],
@@ -185,7 +187,7 @@ def set_user_grants(module, client, user_name, grants):
# check if the desired grants are included in the current ones
for grant in grants:
if grant not in current_grants:
if grant not in parsed_grants:
if not module.check_mode:
client.grant_privilege(grant['privilege'],
grant['database'],
@@ -198,6 +200,9 @@ def set_user_grants(module, client, user_name, grants):
return changed
INFLUX_AUTH_FIRST_USER_REQUIRED = "error authorizing query: create admin user first or disable authentication"
def main():
argument_spec = influx.InfluxDb.influxdb_argument_spec()
argument_spec.update(
@@ -219,7 +224,23 @@ def main():
grants = module.params['grants']
influxdb = influx.InfluxDb(module)
client = influxdb.connect_to_influxdb()
user = find_user(module, client, user_name)
user = None
try:
user = find_user(module, client, user_name)
except influx.exceptions.InfluxDBClientError as e:
if e.code == 403:
reason = None
try:
msg = json.loads(e.content)
reason = msg["error"]
except (KeyError, ValueError):
module.fail_json(msg=to_native(e))
if reason != INFLUX_AUTH_FIRST_USER_REQUIRED:
module.fail_json(msg=to_native(e))
else:
module.fail_json(msg=to_native(e))
changed = False

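The grant-normalization change above builds a `parsed_grants` list instead of mutating `current_grants` while iterating over it. A standalone sketch of that step (sample grant dicts are illustrative):

```python
# Minimal sketch of the privilege normalization above: translate InfluxDB's
# verbose privilege names and drop "NO PRIVILEGES" entries, so the current
# grants can be compared against the desired ones without mutating the list
# being iterated.
def normalize_grants(current_grants):
    parsed_grants = []
    for grant in current_grants:
        if grant['privilege'] != 'NO PRIVILEGES':
            if grant['privilege'] == 'ALL PRIVILEGES':
                grant = dict(grant, privilege='ALL')
            parsed_grants.append(grant)
    return parsed_grants

current = [
    {'database': 'db1', 'privilege': 'ALL PRIVILEGES'},
    {'database': 'db2', 'privilege': 'NO PRIVILEGES'},
    {'database': 'db3', 'privilege': 'READ'},
]
print(normalize_grants(current))
# [{'database': 'db1', 'privilege': 'ALL'}, {'database': 'db3', 'privilege': 'READ'}]
```

Deleting entries from `current_grants` while enumerating it (the old `del(current_grants[i])`) could skip elements, which is the bug this refactoring avoids.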
plugins/modules/discord.py Symbolic link

@@ -0,0 +1 @@
./notification/discord.py


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_a_record
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS A records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_a_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of A record objects from
Infoblox NIOS servers. This module manages NIOS C(record:a) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_aaaa_record
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS AAAA records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_aaaa_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of AAAA record objects from
Infoblox NIOS servers. This module manages NIOS C(record:aaaa) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_cname_record
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS CNAME records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_cname_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of CNAME record objects from
Infoblox NIOS servers. This module manages NIOS C(record:cname) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_dns_view
author: "Peter Sprygada (@privateip)"
short_description: Configure Infoblox NIOS DNS views
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_dns_view
removed_in: 5.0.0
description:
- Adds and/or removes instances of DNS view objects from
Infoblox NIOS servers. This module manages NIOS C(view) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_fixed_address
author: "Sumit Jaiswal (@sjaiswal)"
short_description: Configure Infoblox NIOS DHCP Fixed Address
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_fixed_address
removed_in: 5.0.0
description:
- A fixed address is a specific IP address that a DHCP server
always assigns when a lease request comes from a particular


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_host_record
author: "Peter Sprygada (@privateip)"
short_description: Configure Infoblox NIOS host records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_host_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of host record objects from
Infoblox NIOS servers. This module manages NIOS C(record:host) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_member
author: "Krishna Vasudevan (@krisvasudevan)"
short_description: Configure Infoblox NIOS members
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_member
removed_in: 5.0.0
description:
- Adds and/or removes Infoblox NIOS servers. This module manages NIOS C(member) objects using the Infoblox WAPI interface over REST.
requirements:


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_mx_record
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS MX records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_mx_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of MX record objects from
Infoblox NIOS servers. This module manages NIOS C(record:mx) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_naptr_record
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS NAPTR records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_naptr_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of NAPTR record objects from
Infoblox NIOS servers. This module manages NIOS C(record:naptr) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_network
author: "Peter Sprygada (@privateip)"
short_description: Configure Infoblox NIOS network object
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_network
removed_in: 5.0.0
description:
- Adds and/or removes instances of network objects from
Infoblox NIOS servers. This module manages NIOS C(network) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_network_view
author: "Peter Sprygada (@privateip)"
short_description: Configure Infoblox NIOS network views
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_network_view
removed_in: 5.0.0
description:
- Adds and/or removes instances of network view objects from
Infoblox NIOS servers. This module manages NIOS C(networkview) objects


@@ -11,6 +11,10 @@ DOCUMENTATION = '''
---
module: nios_nsgroup
short_description: Configure InfoBlox DNS Nameserver Groups
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_nsgroup
removed_in: 5.0.0
extends_documentation_fragment:
- community.general.nios


@@ -11,6 +11,10 @@ DOCUMENTATION = '''
module: nios_ptr_record
author: "Trebuchet Clement (@clementtrebuchet)"
short_description: Configure Infoblox NIOS PTR records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_ptr_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of PTR record objects from
Infoblox NIOS servers. This module manages NIOS C(record:ptr) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_srv_record
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS SRV records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_srv_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of SRV record objects from
Infoblox NIOS servers. This module manages NIOS C(record:srv) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_txt_record
author: "Corey Wanless (@coreywan)"
short_description: Configure Infoblox NIOS txt records
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_txt_record
removed_in: 5.0.0
description:
- Adds and/or removes instances of txt record objects from
Infoblox NIOS servers. This module manages NIOS C(record:txt) objects


@@ -10,6 +10,10 @@ DOCUMENTATION = '''
module: nios_zone
author: "Peter Sprygada (@privateip)"
short_description: Configure Infoblox NIOS DNS zones
deprecated:
why: Please install the infoblox.nios_modules collection and use the corresponding module from it.
alternative: infoblox.nios_modules.nios_zone
removed_in: 5.0.0
description:
- Adds and/or removes instances of DNS zone objects from
Infoblox NIOS servers. This module manages NIOS C(zone_auth) objects


@@ -1036,17 +1036,6 @@ class Nmcli(object):
return conn_info
def _compare_conn_params(self, conn_info, options):
# See nmcli(1) for details
param_alias = {
'type': 'connection.type',
'con-name': 'connection.id',
'autoconnect': 'connection.autoconnect',
'ifname': 'connection.interface-name',
'master': 'connection.master',
'slave-type': 'connection.slave-type',
'zone': 'connection.zone',
}
changed = False
diff_before = dict()
diff_after = dict()
@@ -1070,13 +1059,6 @@ class Nmcli(object):
value = value.upper()
# ensure current_value is also converted to uppercase in case nmcli changes behaviour
current_value = current_value.upper()
elif key in param_alias:
real_key = param_alias[key]
if real_key in conn_info:
current_value = conn_info[real_key]
else:
# alias parameter does not exist
current_value = None
else:
# parameter does not exist
current_value = None


@@ -0,0 +1,215 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Christian Wollinger <cwollinger@web.de>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: discord
short_description: Send Discord messages
version_added: 3.1.0
description:
- Sends a message to a Discord channel using the Discord webhook API.
author: Christian Wollinger (@cwollinger)
seealso:
- name: API documentation
description: Documentation for Discord API
link: https://discord.com/developers/docs/resources/webhook#execute-webhook
options:
webhook_id:
description:
- The webhook ID.
- "Format from Discord webhook URL: C(/webhooks/{webhook.id}/{webhook.token})."
required: yes
type: str
webhook_token:
description:
- The webhook token.
- "Format from Discord webhook URL: C(/webhooks/{webhook.id}/{webhook.token})."
required: yes
type: str
content:
description:
- Content of the message to the Discord channel.
- At least one of I(content) and I(embeds) must be specified.
type: str
username:
description:
- Overrides the default username of the webhook.
type: str
avatar_url:
description:
- Overrides the default avatar of the webhook.
type: str
tts:
description:
- Set this to C(true) if this is a TTS (Text to Speech) message.
type: bool
default: false
embeds:
description:
- Send messages as Embeds to the Discord channel.
- Embeds can have a colored border, embedded images, text fields and more.
- "Allowed parameters are described in the Discord Docs: U(https://discord.com/developers/docs/resources/channel#embed-object)"
- At least one of I(content) and I(embeds) must be specified.
type: list
elements: dict
'''
EXAMPLES = """
- name: Send a message to the Discord channel
community.general.discord:
webhook_id: "00000"
webhook_token: "XXXYYY"
content: "This is a message from ansible"
- name: Send a message to the Discord channel with specific username and avatar
community.general.discord:
webhook_id: "00000"
webhook_token: "XXXYYY"
content: "This is a message from ansible"
username: Ansible
avatar_url: "https://docs.ansible.com/ansible/latest/_static/images/logo_invert.png"
- name: Send an embedded message to the Discord channel
community.general.discord:
webhook_id: "00000"
webhook_token: "XXXYYY"
embeds:
- title: "Embedded message"
description: "This is an embedded message"
footer:
text: "Author: Ansible"
image:
url: "https://docs.ansible.com/ansible/latest/_static/images/logo_invert.png"
- name: Send two embedded messages
community.general.discord:
webhook_id: "00000"
webhook_token: "XXXYYY"
embeds:
- title: "First message"
description: "This is my first embedded message"
footer:
text: "Author: Ansible"
image:
url: "https://docs.ansible.com/ansible/latest/_static/images/logo_invert.png"
- title: "Second message"
description: "This is my second embedded message"
footer:
text: "Author: Ansible"
icon_url: "https://docs.ansible.com/ansible/latest/_static/images/logo_invert.png"
fields:
- name: "Field 1"
value: "Value of my first field"
- name: "Field 2"
value: "Value of my second field"
timestamp: "{{ ansible_date_time.iso8601 }}"
"""
RETURN = """
http_code:
description:
- Response Code returned by Discord API.
returned: always
type: int
sample: 204
"""
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.basic import AnsibleModule
def discord_check_mode(module):
webhook_id = module.params['webhook_id']
webhook_token = module.params['webhook_token']
headers = {
'content-type': 'application/json'
}
url = "https://discord.com/api/webhooks/%s/%s" % (
webhook_id, webhook_token)
response, info = fetch_url(module, url, method='GET', headers=headers)
return response, info
def discord_text_msg(module):
webhook_id = module.params['webhook_id']
webhook_token = module.params['webhook_token']
content = module.params['content']
user = module.params['username']
avatar_url = module.params['avatar_url']
tts = module.params['tts']
embeds = module.params['embeds']
headers = {
'content-type': 'application/json'
}
url = "https://discord.com/api/webhooks/%s/%s" % (
webhook_id, webhook_token)
payload = {
'content': content,
'username': user,
'avatar_url': avatar_url,
'tts': tts,
'embeds': embeds,
}
payload = module.jsonify(payload)
response, info = fetch_url(module, url, data=payload, headers=headers, method='POST')
return response, info
def main():
module = AnsibleModule(
argument_spec=dict(
webhook_id=dict(type='str', required=True),
webhook_token=dict(type='str', required=True, no_log=True),
content=dict(type='str'),
username=dict(type='str'),
avatar_url=dict(type='str'),
tts=dict(type='bool', default=False),
embeds=dict(type='list', elements='dict'),
),
required_one_of=[['content', 'embeds']],
supports_check_mode=True
)
result = dict(
changed=False,
http_code='',
)
if module.check_mode:
response, info = discord_check_mode(module)
if info['status'] != 200:
try:
module.fail_json(http_code=info['status'], msg=info['msg'], response=module.from_json(info['body']), info=info)
except Exception:
module.fail_json(http_code=info['status'], msg=info['msg'], info=info)
else:
module.exit_json(msg=info['msg'], changed=False, http_code=info['status'], response=module.from_json(response.read()))
else:
response, info = discord_text_msg(module)
if info['status'] != 204:
try:
module.fail_json(http_code=info['status'], msg=info['msg'], response=module.from_json(info['body']), info=info)
except Exception:
module.fail_json(http_code=info['status'], msg=info['msg'], info=info)
else:
module.exit_json(msg=info['msg'], changed=True, http_code=info['status'])
if __name__ == "__main__":
main()

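The new `discord` module above posts a JSON payload to the Discord webhook endpoint. A network-free sketch of what `discord_text_msg()` assembles (the helper name and sample values are illustrative):

```python
# Sketch of the request discord_text_msg() sends: build the webhook URL and
# JSON body without performing the actual HTTP POST.
import json

def build_webhook_request(webhook_id, webhook_token, content,
                          username=None, avatar_url=None, tts=False, embeds=None):
    url = "https://discord.com/api/webhooks/%s/%s" % (webhook_id, webhook_token)
    payload = {
        'content': content,
        'username': username,
        'avatar_url': avatar_url,
        'tts': tts,
        'embeds': embeds,
    }
    return url, json.dumps(payload)

url, body = build_webhook_request("00000", "XXXYYY", "This is a message from Ansible")
print(url)  # https://discord.com/api/webhooks/00000/XXXYYY
```

In the real module, `fetch_url()` performs the POST and a `204` status signals success; check mode instead issues a GET against the same URL and expects `200`.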

@@ -127,6 +127,11 @@ EXAMPLES = '''
state: present
install_options: with-baz,enable-debug
- name: Install formula foo with 'brew' from cask
community.general.homebrew:
name: homebrew/cask/foo
state: present
- name: Use ignored-pinned option while upgrading all
community.general.homebrew:
upgrade_all: yes


@@ -44,6 +44,14 @@ options:
default: no
type: bool
executable:
description:
- Name of binary to use. This can either be C(pacman) or a pacman compatible AUR helper.
- Beware that AUR helpers might behave unexpectedly and are therefore not recommended.
default: pacman
type: str
version_added: 3.1.0
extra_args:
description:
- Additional option to pass to pacman when enforcing C(state).
@@ -79,8 +87,10 @@ options:
type: str
notes:
- When used with a `loop:` each package will be processed individually,
it is much more efficient to pass the list directly to the `name` option.
- When used with a C(loop:) each package will be processed individually,
it is much more efficient to pass the list directly to the I(name) option.
- To use an AUR helper (I(executable) option), a few extra setup steps might be required beforehand.
For example, a dedicated build user with permissions to install packages could be necessary.
'''
RETURN = '''
@@ -109,6 +119,13 @@ EXAMPLES = '''
- ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Install package from AUR using a Pacman compatible AUR helper
community.general.pacman:
name: foo
state: present
executable: yay
extra_args: --builddir /var/cache/yay
- name: Upgrade package foo
community.general.pacman:
name: foo
@@ -419,6 +436,7 @@ def main():
name=dict(type='list', elements='str', aliases=['pkg', 'package']),
state=dict(type='str', default='present', choices=['present', 'installed', 'latest', 'absent', 'removed']),
force=dict(type='bool', default=False),
executable=dict(type='str', default='pacman'),
extra_args=dict(type='str', default=''),
upgrade=dict(type='bool', default=False),
upgrade_extra_args=dict(type='str', default=''),
@@ -432,11 +450,13 @@ def main():
supports_check_mode=True,
)
pacman_path = module.get_bin_path('pacman', True)
module.run_command_environ_update = dict(LC_ALL='C')
p = module.params
# find pacman binary
pacman_path = module.get_bin_path(p['executable'], True)
# normalize the state parameter
if p['state'] in ['present', 'installed']:
p['state'] = 'present'

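Besides the new `executable` lookup, the hunk above shows the module's state normalization, where the `installed` and `removed` aliases collapse onto `present` and `absent`. A minimal standalone sketch:

```python
# Sketch of the pacman module's state normalization: 'installed' and
# 'removed' are accepted aliases that collapse to 'present' and 'absent'.
def normalize_state(state):
    if state in ('present', 'installed'):
        return 'present'
    if state in ('absent', 'removed'):
        return 'absent'
    return state  # e.g. 'latest' passes through unchanged

print(normalize_state('installed'))  # present
print(normalize_state('removed'))    # absent
print(normalize_state('latest'))     # latest
```

The binary itself is then resolved with `module.get_bin_path(p['executable'], True)`, so an AUR helper such as `yay` is looked up on `$PATH` exactly like `pacman` would be.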

@@ -0,0 +1 @@
cloud/misc/proxmox_nic.py


@@ -179,6 +179,7 @@ class IdracRedfishUtils(RedfishUtils):
attrs_to_patch = {}
attrs_skipped = {}
attrs_bad = {} # Store attrs which were not found in the system
# Search for key entry and extract URI from it
response = self.get_request(self.root_uri + manager_uri + "/" + key)
@@ -189,13 +190,15 @@ class IdracRedfishUtils(RedfishUtils):
if key not in data:
return {'ret': False,
'msg': "%s: Key %s not found" % (command, key)}
'msg': "%s: Key %s not found" % (command, key),
'warning': ""}
for attr_name, attr_value in attributes.items():
# Check if attribute exists
if attr_name not in data[u'Attributes']:
return {'ret': False,
'msg': "%s: Manager attribute %s not found" % (command, attr_name)}
# Skip and proceed to next attribute if this isn't valid
attrs_bad.update({attr_name: attr_value})
continue
# Find out if value is already set to what we want. If yes, exclude
# those attributes
@@ -204,16 +207,23 @@ class IdracRedfishUtils(RedfishUtils):
else:
attrs_to_patch.update({attr_name: attr_value})
warning = ""
if attrs_bad:
warning = "Incorrect attributes %s" % (attrs_bad)
if not attrs_to_patch:
return {'ret': True, 'changed': False,
'msg': "Manager attributes already set"}
'msg': "No changes made. Manager attributes already set.",
'warning': warning}
payload = {"Attributes": attrs_to_patch}
response = self.patch_request(self.root_uri + manager_uri + "/" + key, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "%s: Modified Manager attributes %s" % (command, attrs_to_patch)}
'msg': "%s: Modified Manager attributes %s" % (command, attrs_to_patch),
'warning': warning}
CATEGORY_COMMANDS_ALL = {
@@ -221,6 +231,7 @@ CATEGORY_COMMANDS_ALL = {
"SetSystemAttributes"]
}
# list of mutually exclusive commands for a category
CATEGORY_COMMANDS_MUTUALLY_EXCLUSIVE = {
"Manager": [["SetManagerAttributes", "SetLifecycleControllerAttributes",
@@ -308,6 +319,9 @@ def main():
# Return data back or fail with proper message
if result['ret'] is True:
if result.get('warning'):
module.warn(to_native(result['warning']))
module.exit_json(changed=result['changed'], msg=to_native(result['msg']))
else:
module.fail_json(msg=to_native(result['msg']))

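The iDRAC change above stops failing outright on unknown attributes and instead collects them for a warning. A standalone sketch of that triage (function name and sample attributes are illustrative):

```python
# Sketch of the attribute triage above: split requested attributes into ones
# to PATCH, ones already at the desired value, and unknown ones that only
# produce a warning instead of failing the whole task.
def triage_attributes(requested, current):
    attrs_to_patch, attrs_skipped, attrs_bad = {}, {}, {}
    for name, value in requested.items():
        if name not in current:
            attrs_bad[name] = value      # unknown attribute -> warn later
        elif current[name] == value:
            attrs_skipped[name] = value  # already set, nothing to do
        else:
            attrs_to_patch[name] = value
    return attrs_to_patch, attrs_skipped, attrs_bad

current = {'SNMP.1.AgentEnable': 'Enabled', 'Time.1.Timezone': 'UTC'}
requested = {'SNMP.1.AgentEnable': 'Enabled',
             'Time.1.Timezone': 'CST6CDT',
             'Bogus.1.Attr': 'x'}
print(triage_attributes(requested, current))
```

If `attrs_bad` is non-empty, the module surfaces it via `module.warn()`; an empty `attrs_to_patch` short-circuits with `changed=False`.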

@@ -321,6 +321,9 @@ def main():
# Return data back or fail with proper message
if result['ret'] is True:
if result.get('warning'):
module.warn(to_native(result['warning']))
module.exit_json(changed=result['changed'], msg=to_native(result['msg']))
else:
module.fail_json(msg=to_native(result['msg']))


@@ -57,16 +57,22 @@ options:
type: str
sshkey_name:
description:
- The name of the sshkey
- The name of the SSH public key.
type: str
sshkey_file:
description:
- The ssh key itself.
- The SSH public key itself.
type: str
sshkey_expires_at:
description:
- The expiration date of the SSH public key in ISO 8601 format C(YYYY-MM-DDTHH:MM:SSZ).
- This is only used when adding new SSH public keys.
type: str
version_added: 3.1.0
group:
description:
- Id or Full path of parent group in the form of group/name.
- Add user as an member to this group.
- Add user as a member to this group.
type: str
access_level:
description:
@@ -254,7 +260,8 @@ class GitLabUser(object):
if options['sshkey_name'] and options['sshkey_file']:
key_changed = self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
'file': options['sshkey_file'],
'expires_at': options['sshkey_expires_at']})
changed = changed or key_changed
# Assign group
@@ -295,7 +302,7 @@ class GitLabUser(object):
'''
@param user User object
@param sshkey Dict containing sshkey infos {"name": "", "file": ""}
@param sshkey Dict containing sshkey infos {"name": "", "file": "", "expires_at": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
@@ -303,9 +310,13 @@ class GitLabUser(object):
return True
try:
user.keys.create({
parameter = {
'title': sshkey['name'],
'key': sshkey['file']})
'key': sshkey['file'],
}
if sshkey['expires_at'] is not None:
parameter['expires_at'] = sshkey['expires_at']
user.keys.create(parameter)
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
@@ -471,6 +482,7 @@ def main():
email=dict(type='str'),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str', no_log=False),
sshkey_expires_at=dict(type='str', no_log=False),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
@@ -503,6 +515,7 @@ def main():
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
user_sshkey_expires_at = module.params['sshkey_expires_at']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
@@ -549,6 +562,7 @@ def main():
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"sshkey_expires_at": user_sshkey_expires_at,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,

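The gitlab_user change above only includes the new `expires_at` field in the key-creation call when the user actually supplied it. A standalone sketch of that parameter assembly (helper name is illustrative):

```python
# Sketch of how the SSH key creation parameters are assembled above: the new
# expires_at field is only added when a value was supplied, so the API call
# is unchanged for playbooks that do not use sshkey_expires_at.
def build_key_parameters(sshkey):
    parameter = {
        'title': sshkey['name'],
        'key': sshkey['file'],
    }
    if sshkey.get('expires_at') is not None:
        parameter['expires_at'] = sshkey['expires_at']
    return parameter

print(build_key_parameters(
    {'name': 'laptop', 'file': 'ssh-rsa AAAA...', 'expires_at': None}))
print(build_key_parameters(
    {'name': 'laptop', 'file': 'ssh-rsa AAAA...',
     'expires_at': '2021-12-31T00:00:00Z'}))
```

The resulting dict is what `user.keys.create(parameter)` receives in the module.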

@@ -37,6 +37,12 @@ options:
- A dictionary of zfs properties to be set.
- See the zfs(8) man page for more information.
type: dict
notes:
- C(check_mode) is supported, but in certain situations it may report a task
as changed that will not be reported as changed when C(check_mode) is disabled.
For example, this might occur when the zpool C(altroot) option is set or when
a size is written using human-readable notation, such as C(1M) or C(1024K),
instead of as an unqualified byte count, such as C(1048576).
author:
- Johan Wiren (@johanwiren)
'''
@@ -184,9 +190,7 @@ class Zfs(object):
return
cmd = [self.zfs_cmd, 'set', prop + '=' + str(value), self.name]
(rc, out, err) = self.module.run_command(cmd)
if rc == 0:
self.changed = True
else:
if rc != 0:
self.module.fail_json(msg=err)
def set_properties_if_changed(self):
@@ -194,15 +198,25 @@ class Zfs(object):
for prop, value in self.properties.items():
if current_properties.get(prop, None) != value:
self.set_property(prop, value)
if self.module.check_mode:
return
updated_properties = self.get_current_properties()
for prop in self.properties:
value = updated_properties.get(prop, None)
if value is None:
self.module.fail_json(msg="zfsprop was not present after being successfully set: %s" % prop)
if current_properties.get(prop, None) != value:
self.changed = True
def get_current_properties(self):
cmd = [self.zfs_cmd, 'get', '-H']
cmd = [self.zfs_cmd, 'get', '-H', '-p', '-o', "property,value,source"]
if self.enhanced_sharing:
cmd += ['-e']
cmd += ['all', self.name]
rc, out, err = self.module.run_command(" ".join(cmd))
properties = dict()
for prop, value, source in [l.split('\t')[1:4] for l in out.splitlines()]:
for line in out.splitlines():
prop, value, source = line.split('\t')
# include source '-' so that creation-only properties are not removed
# to avoid errors when the dataset already exists and the property is not changed
# this scenario is most likely when the same playbook is run more than once

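The zfs change above switches to `zfs get -H -p -o property,value,source`, which emits one tab-separated line per property. A standalone sketch of parsing that output (the sample lines are illustrative, and the source filter matches the intent described in the diff's comments):

```python
# Sketch of parsing `zfs get -H -p -o property,value,source` output: each
# line is tab-separated, and entries with source '-' are kept so that
# creation-only properties survive repeated playbook runs.
sample_output = (
    "volsize\t1073741824\tlocal\n"
    "compression\toff\tdefault\n"
    "creation\t1621324800\t-\n"
)

properties = {}
for line in sample_output.splitlines():
    prop, value, source = line.split('\t')
    if source in ('local', 'received', '-'):
        properties[prop] = value

print(properties['volsize'])  # 1073741824
```

The `-p` flag makes values machine-parsable (raw byte counts instead of `1M`), which is why the module's check-mode note warns about human-readable sizes like `1M` or `1024K`.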

@@ -51,8 +51,9 @@ options:
permissions:
description:
- The list of permission(s) to delegate (required if C(state) is C(present)).
- Supported permissions depend on the ZFS version in use. See for example
U(https://openzfs.github.io/openzfs-docs/man/8/zfs-allow.8.html) for OpenZFS.
type: list
choices: [ allow, clone, create, destroy, diff, hold, mount, promote, readonly, receive, release, rename, rollback, send, share, snapshot, unallow ]
elements: str
local:
description:
@@ -248,10 +249,7 @@ def main():
users=dict(type='list', elements='str'),
groups=dict(type='list', elements='str'),
everyone=dict(type='bool', default=False),
permissions=dict(type='list', elements='str',
choices=['allow', 'clone', 'create', 'destroy', 'diff', 'hold', 'mount', 'promote',
'readonly', 'receive', 'release', 'rename', 'rollback', 'send', 'share',
'snapshot', 'unallow']),
permissions=dict(type='list', elements='str'),
local=dict(type='bool'),
descendents=dict(type='bool'),
recursive=dict(type='bool', default=False),


@@ -7,10 +7,11 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
author:
- Alexander Bulimov (@abulimov)
- Alexander Bulimov (@abulimov)
module: filesystem
short_description: Makes a filesystem
description:
@@ -18,13 +19,12 @@ description:
options:
state:
description:
- If C(state=present), the filesystem is created if it doesn't already
exist, that is the default behaviour if I(state) is omitted.
- If C(state=absent), filesystem signatures on I(dev) are wiped if it
contains a filesystem (as known by C(blkid)).
- When C(state=absent), all other options but I(dev) are ignored, and the
module doesn't fail if the device I(dev) doesn't actually exist.
- C(state=absent) is not supported and will fail on FreeBSD systems.
- If C(state=present), the filesystem is created if it doesn't already
exist, that is the default behaviour if I(state) is omitted.
- If C(state=absent), filesystem signatures on I(dev) are wiped if it
contains a filesystem (as known by C(blkid)).
- When C(state=absent), all other options but I(dev) are ignored, and the
module doesn't fail if the device I(dev) doesn't actually exist.
type: str
choices: [ present, absent ]
default: present
@@ -32,48 +32,56 @@ options:
fstype:
choices: [ btrfs, ext2, ext3, ext4, ext4dev, f2fs, lvm, ocfs2, reiserfs, xfs, vfat, swap ]
description:
- Filesystem type to be created. This option is required with
C(state=present) (or if I(state) is omitted).
- reiserfs support was added in 2.2.
- lvm support was added in 2.5.
- since 2.5, I(dev) can be an image file.
- vfat support was added in 2.5
- ocfs2 support was added in 2.6
- f2fs support was added in 2.7
- swap support was added in 2.8
- Filesystem type to be created. This option is required with
C(state=present) (or if I(state) is omitted).
- reiserfs support was added in 2.2.
- lvm support was added in 2.5.
- since 2.5, I(dev) can be an image file.
- vfat support was added in 2.5
- ocfs2 support was added in 2.6
- f2fs support was added in 2.7
- swap support was added in 2.8
type: str
aliases: [type]
dev:
description:
- Target path to device or image file.
- Target path to block device or regular file.
- On systems not using block devices but character devices instead (as
FreeBSD), this module only works when applying to regular files, aka
disk images.
type: path
required: yes
aliases: [device]
force:
description:
- If C(yes), allows to create new filesystem on devices that already has filesystem.
- If C(yes), allows to create new filesystem on devices that already has filesystem.
type: bool
default: 'no'
resizefs:
description:
- If C(yes), if the block device and filesystem size differ, grow the filesystem into the space.
- Supported for C(ext2), C(ext3), C(ext4), C(ext4dev), C(f2fs), C(lvm), C(xfs) and C(vfat) filesystems.
Attempts to resize other filesystem types will fail.
- XFS will only grow if mounted. Currently, the module is based on commands
from the C(util-linux) package to perform operations, so resizing of XFS is
not supported on FreeBSD systems.
- vFAT resizing will likely fail if C(fatresize) < 1.04.
type: bool
default: 'no'
opts:
description:
- List of options to be passed to the mkfs command.
type: str
requirements:
- Uses tools related to the I(fstype) (C(mkfs)) and the C(blkid) command.
- When I(resizefs) is enabled, C(blockdev) command is required too.
notes:
- Potential filesystems on I(dev) are checked using C(blkid). In case C(blkid)
isn't able to detect an existing filesystem, this filesystem is overwritten
even if I(force) is C(no).
- On FreeBSD systems, either C(e2fsprogs) or C(util-linux) packages provide
a C(blkid) command that is compatible with this module, when applied to
regular files.
- This module supports I(check_mode).
'''
@@ -102,6 +110,7 @@ import re
import stat
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
class Device(object):
@@ -114,13 +123,15 @@ class Device(object):
statinfo = os.stat(self.path)
if stat.S_ISBLK(statinfo.st_mode):
blockdev_cmd = self.module.get_bin_path("blockdev", required=True)
dummy, out, dummy = self.module.run_command([blockdev_cmd, "--getsize64", self.path], check_rc=True)
devsize_in_bytes = int(out)
elif os.path.isfile(self.path):
devsize_in_bytes = os.path.getsize(self.path)
else:
self.module.fail_json(changed=False, msg="Target device not supported: %s" % self)
return devsize_in_bytes
def get_mountpoint(self):
"""Return (first) mountpoint of device. Returns None when not mounted."""
cmd_findmnt = self.module.get_bin_path("findmnt", required=True)
@@ -141,9 +152,12 @@ class Device(object):
class Filesystem(object):
MKFS = None
MKFS_FORCE_FLAGS = []
INFO = None
GROW = None
GROW_MAX_SPACE_FLAGS = []
GROW_MOUNTPOINT_ONLY = False
LANG_ENV = {'LANG': 'C', 'LC_ALL': 'C', 'LC_MESSAGES': 'C'}
@@ -155,7 +169,11 @@ class Filesystem(object):
return type(self).__name__
def get_fs_size(self, dev):
""" Return size in bytes of filesystem on device. Returns int """
"""Return size in bytes of filesystem on device (integer).
Should query the info with a per-fstype command that can access the
device whenever it is mounted or not, and parse the command output.
Parser must ensure to return an integer, or raise a ValueError.
"""
raise NotImplementedError()
def create(self, opts, dev):
@@ -163,31 +181,27 @@ class Filesystem(object):
return
mkfs = self.module.get_bin_path(self.MKFS, required=True)
cmd = [mkfs] + self.MKFS_FORCE_FLAGS + opts + [str(dev)]
self.module.run_command(cmd, check_rc=True)
def wipefs(self, dev):
if platform.system() == 'FreeBSD':
msg = "module param state=absent is currently not supported on this OS (FreeBSD)."
self.module.fail_json(msg=msg)
if self.module.check_mode:
return
# wipefs comes with util-linux package (as 'blockdev' & 'findmnt' above)
# that is ported to FreeBSD. The use of dd as a portable fallback is
# not doable here if it needs get_mountpoint() (to prevent corruption of
# a mounted filesystem), since 'findmnt' is not available on FreeBSD,
# even in util-linux port for this OS.
wipefs = self.module.get_bin_path('wipefs', required=True)
cmd = [wipefs, "--all", str(dev)]
self.module.run_command(cmd, check_rc=True)
def grow_cmd(self, target):
"""Build and return the resizefs commandline as list."""
cmdline = [self.module.get_bin_path(self.GROW, required=True)]
cmdline += self.GROW_MAX_SPACE_FLAGS + [target]
return cmdline
def grow(self, dev):
"""Get dev and fs size and compare. Returns stdout of used command."""
@@ -196,31 +210,50 @@ class Filesystem(object):
try:
fssize_in_bytes = self.get_fs_size(dev)
except NotImplementedError:
self.module.fail_json(msg="module does not support resizing %s filesystem yet" % self.fstype)
except ValueError as err:
self.module.warn("unable to process %s output '%s'" % (self.INFO, to_native(err)))
self.module.fail_json(msg="unable to process %s output for %s" % (self.INFO, dev))
if not fssize_in_bytes < devsize_in_bytes:
self.module.exit_json(changed=False, msg="%s filesystem is using the whole device %s" % (self.fstype, dev))
elif self.module.check_mode:
self.module.exit_json(changed=True, msg="resizing filesystem %s on device %s" % (self.fstype, dev))
if self.GROW_MOUNTPOINT_ONLY:
mountpoint = dev.get_mountpoint()
if not mountpoint:
self.module.fail_json(msg="%s needs to be mounted for %s operations" % (dev, self.fstype))
grow_target = mountpoint
else:
grow_target = str(dev)
dummy, out, dummy = self.module.run_command(self.grow_cmd(grow_target), check_rc=True)
return out
class Ext(Filesystem):
MKFS_FORCE_FLAGS = ['-F']
INFO = 'tune2fs'
GROW = 'resize2fs'
def get_fs_size(self, dev):
"""Get Block count and Block size and return their product."""
cmd = self.module.get_bin_path(self.INFO, required=True)
dummy, out, dummy = self.module.run_command([cmd, '-l', str(dev)], check_rc=True, environ_update=self.LANG_ENV)
block_count = block_size = None
for line in out.splitlines():
if 'Block count:' in line:
block_count = int(line.split(':')[1].strip())
elif 'Block size:' in line:
block_size = int(line.split(':')[1].strip())
if None not in (block_size, block_count):
break
else:
raise ValueError(out)
return block_size * block_count
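The parsing contract above (return an integer, or raise C(ValueError) for the caller to report) can be restated as a self-contained sketch; the C(tune2fs -l) output fragment below is illustrative, not captured from a real device:

```python
def parse_tune2fs_output(out):
    """Return ext2/3/4 filesystem size in bytes from `tune2fs -l` output.

    Same contract as the module's get_fs_size(): return an int, or raise
    ValueError when the expected fields cannot be found.
    """
    block_count = block_size = None
    for line in out.splitlines():
        if 'Block count:' in line:
            block_count = int(line.split(':')[1].strip())
        elif 'Block size:' in line:
            block_size = int(line.split(':')[1].strip())
        if None not in (block_size, block_count):
            break
    else:
        # loop exhausted without finding both fields
        raise ValueError(out)
    return block_size * block_count


# Illustrative output fragment (field spacing mimics tune2fs)
sample = "Block count:              26214400\nBlock size:               4096"
print(parse_tune2fs_output(sample))  # 107374182400
```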
class Ext2(Ext):
@@ -237,52 +270,46 @@ class Ext4(Ext):
class XFS(Filesystem):
MKFS = 'mkfs.xfs'
MKFS_FORCE_FLAGS = ['-f']
INFO = 'xfs_info'
GROW = 'xfs_growfs'
GROW_MOUNTPOINT_ONLY = True
def get_fs_size(self, dev):
"""Get bsize and blocks and return their product."""
cmdline = [self.module.get_bin_path(self.INFO, required=True)]
# Depending on the versions, xfs_info is able to get info from the
# device, whenever it is mounted or not, or only if unmounted, or
# only if mounted, or not at all. For any version until now, it is
# able to query info from the mountpoint. So try it first, and use
# device as the last resort: it may or may not work.
mountpoint = dev.get_mountpoint()
if mountpoint:
cmdline += [mountpoint]
else:
cmdline += [str(dev)]
dummy, out, dummy = self.module.run_command(cmdline, check_rc=True, environ_update=self.LANG_ENV)
block_size = block_count = None
for line in out.splitlines():
col = line.split('=')
if col[0].strip() == 'data':
if col[1].strip() == 'bsize':
block_size = int(col[2].split()[0])
if col[2].split()[1] == 'blocks':
block_count = int(col[3].split(',')[0])
if None not in (block_size, block_count):
break
else:
raise ValueError(out)
return block_size * block_count
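The same for/else pattern drives the C(xfs_info) parser; a minimal standalone sketch, where the C(data) line is a hand-written example of C(xfs_info) output rather than a real capture:

```python
def parse_xfs_info(out):
    """Return XFS filesystem size in bytes from `xfs_info` output.

    Looks for the 'data' section line, e.g.:
        data     =   bsize=4096   blocks=262144, imaxpct=25
    Returns an int, or raises ValueError if bsize/blocks are not found.
    """
    block_size = block_count = None
    for line in out.splitlines():
        col = line.split('=')
        if col[0].strip() == 'data':
            if col[1].strip() == 'bsize':
                block_size = int(col[2].split()[0])
            if col[2].split()[1] == 'blocks':
                block_count = int(col[3].split(',')[0])
        if None not in (block_size, block_count):
            break
    else:
        raise ValueError(out)
    return block_size * block_count


sample = "data     =                       bsize=4096   blocks=262144, imaxpct=25"
print(parse_xfs_info(sample))  # 1073741824
```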
class Reiserfs(Filesystem):
MKFS = 'mkfs.reiserfs'
MKFS_FORCE_FLAGS = ['-q']
class Btrfs(Filesystem):
@@ -290,7 +317,8 @@ class Btrfs(Filesystem):
def __init__(self, module):
super(Btrfs, self).__init__(module)
mkfs = self.module.get_bin_path(self.MKFS, required=True)
dummy, stdout, stderr = self.module.run_command([mkfs, '--version'], check_rc=True)
match = re.search(r" v([0-9.]+)", stdout)
if not match:
# v0.20-rc1 use stderr
@@ -298,29 +326,27 @@ class Btrfs(Filesystem):
if match:
# v0.20-rc1 doesn't have --force parameter added in following version v3.12
if LooseVersion(match.group(1)) >= LooseVersion('3.12'):
self.MKFS_FORCE_FLAGS = ['-f']
else:
# assume version is greater or equal to 3.12
self.MKFS_FORCE_FLAGS = ['-f']
self.module.warn('Unable to identify mkfs.btrfs version (%r, %r)' % (stdout, stderr))
class Ocfs2(Filesystem):
MKFS = 'mkfs.ocfs2'
MKFS_FORCE_FLAGS = ['-Fx']
class F2fs(Filesystem):
MKFS = 'mkfs.f2fs'
INFO = 'dump.f2fs'
GROW = 'resize.f2fs'
def __init__(self, module):
super(F2fs, self).__init__(module)
mkfs = self.module.get_bin_path(self.MKFS, required=True)
dummy, out, dummy = self.module.run_command([mkfs, os.devnull], check_rc=False, environ_update=self.LANG_ENV)
# Looking for " F2FS-tools: mkfs.f2fs Ver: 1.10.0 (2018-01-30)"
# mkfs.f2fs displays version since v1.2.0
match = re.search(r"F2FS-tools: mkfs.f2fs Ver: ([0-9.]+) \(", out)
@@ -328,69 +354,73 @@ class F2fs(Filesystem):
# Since 1.9.0, mkfs.f2fs check overwrite before make filesystem
# before that version -f switch wasn't used
if LooseVersion(match.group(1)) >= LooseVersion('1.9.0'):
self.MKFS_FORCE_FLAGS = ['-f']
def get_fs_size(self, dev):
"""Get sector size and total FS sectors and return their product."""
cmd = self.module.get_bin_path(self.INFO, required=True)
dummy, out, dummy = self.module.run_command([cmd, str(dev)], check_rc=True, environ_update=self.LANG_ENV)
sector_size = sector_count = None
for line in out.splitlines():
if 'Info: sector size = ' in line:
# expected: 'Info: sector size = 512'
sector_size = int(line.split()[4])
elif 'Info: total FS sectors = ' in line:
# expected: 'Info: total FS sectors = 102400 (50 MB)'
sector_count = int(line.split()[5])
if None not in (sector_size, sector_count):
break
else:
raise ValueError(out)
return sector_size * sector_count
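As with the other filesystems, the C(dump.f2fs) parser multiplies two reported fields; a standalone sketch with an illustrative (made-up) output sample:

```python
def parse_dump_f2fs(out):
    """Return F2FS filesystem size in bytes from `dump.f2fs` output.

    Multiplies 'sector size' by 'total FS sectors'; raises ValueError
    when either field is missing (same contract as get_fs_size()).
    """
    sector_size = sector_count = None
    for line in out.splitlines():
        if 'Info: sector size = ' in line:
            # expected: 'Info: sector size = 512'
            sector_size = int(line.split()[4])
        elif 'Info: total FS sectors = ' in line:
            # expected: 'Info: total FS sectors = 102400 (50 MB)'
            sector_count = int(line.split()[5])
        if None not in (sector_size, sector_count):
            break
    else:
        raise ValueError(out)
    return sector_size * sector_count


sample = "Info: sector size = 512\nInfo: total FS sectors = 102400 (50 MB)"
print(parse_dump_f2fs(sample))  # 52428800
```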
class VFAT(Filesystem):
INFO = 'fatresize'
GROW = 'fatresize'
GROW_MAX_SPACE_FLAGS = ['-s', 'max']
def __init__(self, module):
super(VFAT, self).__init__(module)
if platform.system() == 'FreeBSD':
self.MKFS = 'newfs_msdos'
else:
self.MKFS = 'mkfs.vfat'
def get_fs_size(self, dev):
"""Get and return size of filesystem, in bytes."""
cmd = self.module.get_bin_path(self.INFO, required=True)
dummy, out, dummy = self.module.run_command([cmd, '--info', str(dev)], check_rc=True, environ_update=self.LANG_ENV)
fssize = None
for line in out.splitlines()[1:]:
param, value = line.split(':', 1)
if param.strip() == 'Size':
fssize = int(value.strip())
break
else:
raise ValueError(out)
return fssize
class LVM(Filesystem):
MKFS = 'pvcreate'
MKFS_FORCE_FLAGS = ['-f']
INFO = 'pvs'
GROW = 'pvresize'
def get_fs_size(self, dev):
"""Get and return PV size, in bytes."""
cmd = self.module.get_bin_path(self.INFO, required=True)
dummy, size, dummy = self.module.run_command([cmd, '--noheadings', '-o', 'pv_size', '--units', 'b', '--nosuffix', str(dev)], check_rc=True)
pv_size = int(size)
return pv_size
class Swap(Filesystem):
MKFS = 'mkswap'
MKFS_FORCE_FLAGS = ['-f']
FILESYSTEMS = {
@@ -439,6 +469,10 @@ def main():
force = module.params['force']
resizefs = module.params['resizefs']
mkfs_opts = []
if opts is not None:
mkfs_opts = opts.split()
changed = False
if not os.path.exists(dev):
@@ -451,7 +485,7 @@ def main():
dev = Device(module, dev)
cmd = module.get_bin_path('blkid', required=True)
rc, raw_fs, err = module.run_command([cmd, '-c', os.devnull, '-o', 'value', '-s', 'TYPE', str(dev)])
# If blkid isn't able to identify an existing filesystem, the device is considered empty,
# and that existing filesystem will then be overwritten even if force isn't enabled.
fs = raw_fs.strip()
@@ -481,7 +515,7 @@ def main():
module.fail_json(msg="'%s' is already used as %s, use force=yes to overwrite" % (dev, fs), rc=rc, err=err)
# create fs
filesystem.create(mkfs_opts, dev)
changed = True
elif fs:


@@ -232,7 +232,7 @@ import filecmp
import shutil
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
IPTABLES = dict(
@@ -262,7 +262,7 @@ def read_state(b_path):
lines = text.splitlines()
while '' in lines:
lines.remove('')
return lines
def write_state(b_path, lines, changed):
@@ -282,9 +282,9 @@ def write_state(b_path, lines, changed):
if b_destdir and not os.path.exists(b_destdir) and not module.check_mode:
try:
os.makedirs(b_destdir)
except Exception as err:
module.fail_json(
msg='Error creating %s: %s' % (destdir, to_native(err)),
initial_state=lines)
changed = True
@@ -295,10 +295,10 @@ def write_state(b_path, lines, changed):
if changed and not module.check_mode:
try:
shutil.copyfile(tmpfile, b_path)
except Exception as err:
path = to_native(b_path, errors='surrogate_or_strict')
module.fail_json(
msg='Error saving state into %s: %s' % (path, to_native(err)),
initial_state=lines)
return changed
@@ -313,14 +313,11 @@ def initialize_from_null_state(initializer, initcommand, table):
if table is None:
table = 'filter'
commandline = list(initializer)
commandline += ['-t', table]
(rc, out, err) = module.run_command(commandline, check_rc=True)
(rc, out, err) = module.run_command(initcommand, check_rc=True)
return rc, out, err
def filter_and_format_state(string):
@@ -328,13 +325,13 @@ def filter_and_format_state(string):
Remove timestamps to ensure idempotence between runs. Also remove counters
by default. And return the result as a list.
'''
string = re.sub(r'((^|\n)# (Generated|Completed)[^\n]*) on [^\n]*', r'\1', string)
if not module.params['counters']:
string = re.sub(r'\[[0-9]+:[0-9]+\]', r'[0:0]', string)
lines = string.splitlines()
while '' in lines:
lines.remove('')
return lines
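The two substitutions above can be exercised in isolation. A sketch of the same idempotence filter, fed a hand-written C(iptables-save) sample (not real output):

```python
import re


def normalize_saved_state(string, keep_counters=False):
    """Strip iptables-save timestamps and optionally zero packet/byte
    counters, returning the remaining non-empty lines as a list."""
    string = re.sub(r'((^|\n)# (Generated|Completed)[^\n]*) on [^\n]*', r'\1', string)
    if not keep_counters:
        string = re.sub(r'\[[0-9]+:[0-9]+\]', r'[0:0]', string)
    return [line for line in string.splitlines() if line]


sample = ("# Generated by iptables-save v1.8.4 on Mon May 17 2021\n"
          "*filter\n"
          ":INPUT ACCEPT [123:4567]\n"
          "COMMIT\n")
print(normalize_saved_state(sample))
```

Running the filter twice over its own output yields the same lines, which is what makes the module's state comparison idempotent between runs.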
def per_table_state(command, state):
@@ -347,14 +344,14 @@ def per_table_state(command, state):
COMMAND = list(command)
if '*%s' % t in state.splitlines():
COMMAND.extend(['--table', t])
dummy, out, dummy = module.run_command(COMMAND, check_rc=True)
out = re.sub(r'(^|\n)(# Generated|# Completed|[*]%s|COMMIT)[^\n]*' % t, r'', out)
out = re.sub(r' *\[[0-9]+:[0-9]+\] *', r'', out)
table = out.splitlines()
while '' in table:
table.remove('')
tables[t] = table
return tables
def main():
@@ -402,7 +399,7 @@ def main():
changed = False
COMMANDARGS = []
INITCOMMAND = [bin_iptables_save]
INITIALIZER = [bin_iptables, '-L', '-n']
TESTCOMMAND = [bin_iptables_restore, '--test']
if counters:
@@ -502,7 +499,7 @@ def main():
if _back is not None:
b_back = to_bytes(_back, errors='surrogate_or_strict')
dummy = write_state(b_back, initref_state, changed)
BACKCOMMAND = list(MAINCOMMAND)
BACKCOMMAND.append(_back)
@@ -559,9 +556,7 @@ def main():
if os.path.exists(b_starter):
os.remove(b_starter)
break
time.sleep(0.01)
(rc, stdout, stderr) = module.run_command(MAINCOMMAND)
if 'Another app is currently holding the xtables lock' in stderr:
@@ -579,7 +574,7 @@ def main():
(rc, stdout, stderr) = module.run_command(SAVECOMMAND, check_rc=True)
restored_state = filter_and_format_state(stdout)
if restored_state not in (initref_state, initial_state):
if module.check_mode:
changed = True
else:
@@ -609,7 +604,7 @@ def main():
# timeout
# * task attribute 'poll' equals 0
#
for dummy in range(_timeout):
if os.path.exists(b_back):
time.sleep(1)
continue


@@ -88,9 +88,19 @@ options:
description:
- Mode the file should be.
required: false
ssl_backend:
description:
- Backend for loading private keys and certificates.
type: str
default: openssl
choices:
- openssl
- cryptography
version_added: 3.1.0
requirements:
- openssl in PATH (when I(ssl_backend=openssl))
- keytool in PATH
- cryptography >= 3.0 (when I(ssl_backend=cryptography))
author:
- Guillaume Grossetie (@Mogztter)
- quidame (@quidame)
@@ -164,55 +174,281 @@ import os
import re
import tempfile
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
try:
from cryptography.hazmat.primitives.serialization.pkcs12 import serialize_key_and_certificates
from cryptography.hazmat.primitives.serialization import (
BestAvailableEncryption,
NoEncryption,
load_pem_private_key,
load_der_private_key,
)
from cryptography.x509 import (
load_pem_x509_certificate,
load_der_x509_certificate,
)
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import UnsupportedAlgorithm
from cryptography.hazmat.backends.openssl import backend
HAS_CRYPTOGRAPHY_PKCS12 = True
except ImportError:
HAS_CRYPTOGRAPHY_PKCS12 = False
class JavaKeystore:
def __init__(self, module):
self.module = module
self.keytool_bin = module.get_bin_path('keytool', True)
self.certificate = module.params['certificate']
self.keypass = module.params['private_key_passphrase']
self.keystore_path = module.params['dest']
self.name = module.params['name']
self.password = module.params['password']
self.private_key = module.params['private_key']
self.ssl_backend = module.params['ssl_backend']
if self.ssl_backend == 'openssl':
self.openssl_bin = module.get_bin_path('openssl', True)
else:
if not HAS_CRYPTOGRAPHY_PKCS12:
self.module.fail_json(msg=missing_required_lib('cryptography >= 3.0'))
if module.params['certificate_path'] is None:
self.certificate_path = create_file(self.certificate)
self.module.add_cleanup_file(self.certificate_path)
else:
self.certificate_path = module.params['certificate_path']
if module.params['private_key_path'] is None:
self.private_key_path = create_file(self.private_key)
self.module.add_cleanup_file(self.private_key_path)
else:
self.private_key_path = module.params['private_key_path']
def update_permissions(self):
try:
file_args = self.module.load_file_common_arguments(self.module.params, path=self.keystore_path)
except TypeError:
# The path argument is only supported in Ansible-base 2.10+. Fall back to
# pre-2.10 behavior for older Ansible versions.
self.module.params['path'] = self.keystore_path
file_args = self.module.load_file_common_arguments(self.module.params)
return self.module.set_fs_attributes_if_different(file_args, False)
def read_certificate_fingerprint(self, cert_format='PEM'):
if self.ssl_backend == 'cryptography':
if cert_format == 'PEM':
cert_loader = load_pem_x509_certificate
else:
cert_loader = load_der_x509_certificate
try:
with open(self.certificate_path, 'rb') as cert_file:
cert = cert_loader(
cert_file.read(),
backend=backend
)
except (OSError, ValueError) as e:
self.module.fail_json(msg="Unable to read the provided certificate: %s" % to_native(e))
fp = hex_decode(cert.fingerprint(hashes.SHA256())).upper()
fingerprint = ':'.join([fp[i:i + 2] for i in range(0, len(fp), 2)])
else:
current_certificate_fingerprint_cmd = [
self.openssl_bin, "x509", "-noout", "-in", self.certificate_path, "-fingerprint", "-sha256"
]
(rc, current_certificate_fingerprint_out, current_certificate_fingerprint_err) = self.module.run_command(
current_certificate_fingerprint_cmd,
environ_update=None,
check_rc=False
)
if rc != 0:
return self.module.fail_json(
msg=current_certificate_fingerprint_out,
err=current_certificate_fingerprint_err,
cmd=current_certificate_fingerprint_cmd,
rc=rc
)
current_certificate_match = re.search(r"=([\w:]+)", current_certificate_fingerprint_out)
if not current_certificate_match:
return self.module.fail_json(
msg="Unable to find the current certificate fingerprint in %s" % (
current_certificate_fingerprint_out
),
cmd=current_certificate_fingerprint_cmd,
rc=rc
)
fingerprint = current_certificate_match.group(1)
return fingerprint
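Both backends end up producing the same colon-separated SHA-256 fingerprint string that keytool prints (the module's C(hex_decode) helper, defined elsewhere, is assumed to hexlify the raw digest). A standalone sketch of that formatting:

```python
import hashlib


def format_sha256_fingerprint(der_bytes):
    """Return an uppercase, colon-separated SHA-256 fingerprint of the
    given certificate bytes, in keytool/openssl style."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))


print(format_sha256_fingerprint(b"abc"))  # begins 'BA:78:16:BF:'
```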
def read_stored_certificate_fingerprint(self):
stored_certificate_fingerprint_cmd = [
self.keytool_bin, "-list", "-alias", self.name,
"-keystore", self.keystore_path, "-v"
]
(rc, stored_certificate_fingerprint_out, stored_certificate_fingerprint_err) = self.module.run_command(
stored_certificate_fingerprint_cmd, data=self.password, check_rc=False)
if rc != 0:
if "keytool error: java.lang.Exception: Alias <%s> does not exist" % self.name \
in stored_certificate_fingerprint_out:
return "alias mismatch"
if re.match(
r'keytool error: java\.io\.IOException: ' +
'[Kk]eystore( was tampered with, or)? password was incorrect',
stored_certificate_fingerprint_out
):
return "password mismatch"
return self.module.fail_json(
msg=stored_certificate_fingerprint_out,
err=stored_certificate_fingerprint_err,
cmd=stored_certificate_fingerprint_cmd,
rc=rc
)
stored_certificate_match = re.search(r"SHA256: ([\w:]+)", stored_certificate_fingerprint_out)
if not stored_certificate_match:
return self.module.fail_json(
msg="Unable to find the stored certificate fingerprint in %s" % stored_certificate_fingerprint_out,
cmd=stored_certificate_fingerprint_cmd,
rc=rc
)
return stored_certificate_match.group(1)
def cert_changed(self):
current_certificate_fingerprint = self.read_certificate_fingerprint()
stored_certificate_fingerprint = self.read_stored_certificate_fingerprint()
return current_certificate_fingerprint != stored_certificate_fingerprint
def cryptography_create_pkcs12_bundle(self, keystore_p12_path, key_format='PEM', cert_format='PEM'):
if key_format == 'PEM':
key_loader = load_pem_private_key
else:
key_loader = load_der_private_key
if cert_format == 'PEM':
cert_loader = load_pem_x509_certificate
else:
cert_loader = load_der_x509_certificate
try:
with open(self.private_key_path, 'rb') as key_file:
private_key = key_loader(
key_file.read(),
password=to_bytes(self.keypass),
backend=backend
)
except TypeError:
# Re-attempt with no password to match existing behavior
try:
with open(self.private_key_path, 'rb') as key_file:
private_key = key_loader(
key_file.read(),
password=None,
backend=backend
)
except (OSError, TypeError, ValueError, UnsupportedAlgorithm) as e:
self.module.fail_json(
msg="The following error occurred while loading the provided private_key: %s" % to_native(e)
)
except (OSError, ValueError, UnsupportedAlgorithm) as e:
self.module.fail_json(
msg="The following error occurred while loading the provided private_key: %s" % to_native(e)
)
try:
with open(self.certificate_path, 'rb') as cert_file:
cert = cert_loader(
cert_file.read(),
backend=backend
)
except (OSError, ValueError, UnsupportedAlgorithm) as e:
self.module.fail_json(
msg="The following error occurred while loading the provided certificate: %s" % to_native(e)
)
if self.password:
encryption = BestAvailableEncryption(to_bytes(self.password))
else:
encryption = NoEncryption()
pkcs12_bundle = serialize_key_and_certificates(
name=to_bytes(self.name),
key=private_key,
cert=cert,
cas=None,
encryption_algorithm=encryption
)
with open(keystore_p12_path, 'wb') as p12_file:
p12_file.write(pkcs12_bundle)
def openssl_create_pkcs12_bundle(self, keystore_p12_path):
export_p12_cmd = [self.openssl_bin, "pkcs12", "-export", "-name", self.name, "-in", self.certificate_path,
"-inkey", self.private_key_path, "-out", keystore_p12_path, "-passout", "stdin"]
# when keypass is provided, add -passin
cmd_stdin = ""
if self.keypass:
export_p12_cmd.append("-passin")
export_p12_cmd.append("stdin")
cmd_stdin = "%s\n" % self.keypass
cmd_stdin += "%s\n%s" % (self.password, self.password)
(rc, export_p12_out, dummy) = self.module.run_command(
export_p12_cmd, data=cmd_stdin, environ_update=None, check_rc=False
)
if rc != 0:
self.module.fail_json(msg=export_p12_out, cmd=export_p12_cmd, rc=rc)
def create(self):
if self.module.check_mode:
return {'changed': True}
if os.path.exists(self.keystore_path):
os.remove(self.keystore_path)
keystore_p12_path = create_path()
self.module.add_cleanup_file(keystore_p12_path)
if self.ssl_backend == 'cryptography':
self.cryptography_create_pkcs12_bundle(keystore_p12_path)
else:
self.openssl_create_pkcs12_bundle(keystore_p12_path)
import_keystore_cmd = [self.keytool_bin, "-importkeystore",
"-destkeystore", self.keystore_path,
"-srckeystore", keystore_p12_path,
"-srcstoretype", "pkcs12",
"-alias", self.name,
"-noprompt"]
(rc, import_keystore_out, dummy) = self.module.run_command(
import_keystore_cmd, data='%s\n%s\n%s' % (self.password, self.password, self.password), check_rc=False
)
if rc != 0:
return self.module.fail_json(msg=import_keystore_out, cmd=import_keystore_cmd, rc=rc)
self.update_permissions()
return {
'changed': True,
'msg': import_keystore_out,
'cmd': import_keystore_cmd,
'rc': rc
}
def exists(self):
return os.path.exists(self.keystore_path)
# Utility functions
def create_path():
dummy, tmpfile = tempfile.mkstemp()
os.remove(tmpfile)
@@ -226,123 +462,11 @@ def create_file(content):
return tmpfile
return module.exit_json(changed=True,
msg=import_keystore_out,
cmd=import_keystore_cmd,
rc=rc)
finally:
if module.params['certificate_path'] is None:
os.remove(certificate_path)
if module.params['private_key_path'] is None:
os.remove(private_key_path)
os.remove(keystore_p12_path)
def update_jks_perm(module, keystore_path):
try:
file_args = module.load_file_common_arguments(module.params, path=keystore_path)
except TypeError:
# The path argument is only supported in Ansible-base 2.10+. Fall back to
# pre-2.10 behavior for older Ansible versions.
module.params['path'] = keystore_path
file_args = module.load_file_common_arguments(module.params)
module.set_fs_attributes_if_different(file_args, False)
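The `try`/`except TypeError` dance above is a general pattern for spanning two API generations of the same call. A self-contained sketch, with stub classes standing in for the real `AnsibleModule` (the stubs are hypothetical; only the two call signatures mirror ansible-base before and after 2.10):

```python
class OldStyleModule:
    """Stub mimicking ansible-base < 2.10: no 'path' keyword."""
    def __init__(self):
        self.params = {}

    def load_file_common_arguments(self, params):
        return dict(params)


class NewStyleModule(OldStyleModule):
    """Stub mimicking ansible-base >= 2.10: accepts path=..."""
    def load_file_common_arguments(self, params, path=None):
        args = dict(params)
        args['path'] = path
        return args


def load_file_args(module, path):
    """Hypothetical wrapper using the same fallback as update_jks_perm."""
    try:
        # 2.10+ signature: pass the path explicitly
        return module.load_file_common_arguments(module.params, path=path)
    except TypeError:
        # pre-2.10: the path has to be smuggled in through module.params
        module.params['path'] = path
        return module.load_file_common_arguments(module.params)
```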
def process_jks(module):
name = module.params['name']
password = module.params['password']
keypass = module.params['private_key_passphrase']
keystore_path = module.params['dest']
force = module.params['force']
openssl_bin = module.get_bin_path('openssl', True)
keytool_bin = module.get_bin_path('keytool', True)
    if os.path.exists(keystore_path):
        if force:
            create_jks(module, name, openssl_bin, keytool_bin, keystore_path, password, keypass)
        else:
            if cert_changed(module, openssl_bin, keytool_bin, keystore_path, password, name):
                create_jks(module, name, openssl_bin, keytool_bin, keystore_path, password, keypass)
            else:
                if not module.check_mode:
                    update_jks_perm(module, keystore_path)
                module.exit_json(changed=False)
    else:
        create_jks(module, name, openssl_bin, keytool_bin, keystore_path, password, keypass)


def hex_decode(s):
    if PY2:
        return s.decode('hex')
    else:
        return s.hex()
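The `hex_decode` compatibility shim exists because the Python 2 `'hex'` codec was removed from `str.decode`; on Python 3 the native spellings are `bytes.hex()` and its inverse `bytes.fromhex()`, for example:

```python
# On Python 3, bytes.hex() replaces the Python 2 str.decode('hex') codec.
fingerprint = bytes([0xde, 0xad, 0xbe, 0xef])
assert fingerprint.hex() == 'deadbeef'
# The inverse operation is the classmethod bytes.fromhex()
assert bytes.fromhex('deadbeef') == fingerprint
```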
class ArgumentSpec(object):
@@ -358,6 +482,7 @@ class ArgumentSpec(object):
private_key_path=dict(type='path', no_log=False),
private_key_passphrase=dict(type='str', no_log=True),
password=dict(type='str', required=True, no_log=True),
ssl_backend=dict(type='str', default='openssl', choices=['openssl', 'cryptography']),
force=dict(type='bool', default=False),
)
choose_between = (
@@ -379,7 +504,19 @@ def main():
add_file_common_args=spec.add_file_common_args,
)
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
process_jks(module)
result = dict()
jks = JavaKeystore(module)
if jks.exists():
if module.params['force'] or jks.cert_changed():
result = jks.create()
else:
result['changed'] = jks.update_permissions()
else:
result = jks.create()
module.exit_json(**result)
if __name__ == '__main__':

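The branching in the new `main()` condenses to a small pure function. A sketch with a hypothetical name, covering the observable decisions (the module calls `jks.create()` on both the missing-keystore and forced/changed paths):

```python
def keystore_action(exists, force, cert_changed):
    """Hypothetical helper mirroring main(): which operation runs
    for a given keystore state."""
    if not exists or force or cert_changed:
        return 'create'
    return 'update_permissions'
```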

@@ -258,7 +258,7 @@ class XFConfProperty(CmdMixin, StateMixin, ModuleHelper):
params = ['channel', 'property', {'create': True}]
if self.vars.is_array:
params.append({'is_array': True})
params.append('is_array')
params.append({'values_and_types': (self.vars.value, value_type)})
if not self.module.check_mode:


@@ -17,7 +17,9 @@ tested_filesystems:
ext2: {fssize: 10, grow: True}
xfs: {fssize: 20, grow: False} # grow requires a mounted filesystem
btrfs: {fssize: 150, grow: False} # grow not implemented
reiserfs: {fssize: 33, grow: False} # grow not implemented
vfat: {fssize: 20, grow: True}
ocfs2: {fssize: '{{ ocfs2_fssize }}', grow: False} # grow not implemented
f2fs: {fssize: '{{ f2fs_fssize|default(60) }}', grow: 'f2fs_version is version("1.10.0", ">=")'}
lvm: {fssize: 20, grow: True}
swap: {fssize: 10, grow: False} # grow not implemented


@@ -1,6 +1,9 @@
---
- name: 'Create a "disk" file'
command: 'dd if=/dev/zero of={{ image_file }} bs=1M count={{ fssize }}'
community.general.filesize:
path: '{{ image_file }}'
size: '{{ fssize }}M'
force: true
- vars:
dev: '{{ image_file }}'
@@ -8,26 +11,29 @@
- when: fstype == 'lvm'
block:
- name: 'Create a loop device for LVM'
command: 'losetup --show -f {{ dev }}'
ansible.builtin.command:
cmd: 'losetup --show -f {{ dev }}'
register: loop_device_cmd
- set_fact:
- name: 'Switch to loop device target for further tasks'
ansible.builtin.set_fact:
dev: "{{ loop_device_cmd.stdout }}"
- include_tasks: '{{ action }}.yml'
always:
- name: 'Detach loop device used for LVM'
command: 'losetup -d {{ dev }}'
args:
ansible.builtin.command:
cmd: 'losetup -d {{ dev }}'
removes: '{{ dev }}'
when: fstype == 'lvm'
- name: 'Clean correct device for LVM'
set_fact:
ansible.builtin.set_fact:
dev: '{{ image_file }}'
when: fstype == 'lvm'
- file:
- name: 'Remove disk image file'
ansible.builtin.file:
name: '{{ image_file }}'
state: absent


@@ -1,43 +1,58 @@
- name: filesystem creation
filesystem:
---
- name: "Create filesystem"
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
register: fs_result
- assert:
- name: "Assert that results are as expected"
ansible.builtin.assert:
that:
- 'fs_result is changed'
- 'fs_result is success'
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: "Get UUID of created filesystem"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid
- name: "Check that filesystem isn't created if force isn't used"
filesystem:
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
register: fs2_result
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: "Get UUID of the filesystem"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid2
- assert:
- name: "Assert that filesystem UUID is not changed"
ansible.builtin.assert:
that:
- 'not (fs2_result is changed)'
- 'fs2_result is not changed'
- 'fs2_result is success'
- 'uuid.stdout == uuid2.stdout'
- name: Check that filesystem is recreated if force is used
filesystem:
- name: "Check that filesystem is recreated if force is used"
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
force: yes
register: fs3_result
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: "Get UUID of the new filesystem"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid3
- assert:
- name: "Assert that filesystem UUID is changed"
# libblkid gets no UUID at all for this fstype on FreeBSD
when: not (ansible_system == 'FreeBSD' and fstype == 'reiserfs')
ansible.builtin.assert:
that:
- 'fs3_result is changed'
- 'fs3_result is success'
@@ -46,24 +61,31 @@
- when: 'grow|bool and (fstype != "vfat" or resize_vfat)'
block:
- name: increase fake device
shell: 'dd if=/dev/zero bs=1M count=1 >> {{ image_file }}'
- name: "Increase fake device"
community.general.filesize:
path: '{{ image_file }}'
size: '{{ fssize | int + 1 }}M'
- name: Resize loop device for LVM
command: losetup -c {{ dev }}
- name: "Resize loop device for LVM"
ansible.builtin.command:
cmd: 'losetup -c {{ dev }}'
when: fstype == 'lvm'
- name: Expand filesystem
filesystem:
- name: "Expand filesystem"
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
resizefs: yes
register: fs4_result
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: "Get UUID of the filesystem"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid4
- assert:
- name: "Assert that filesystem UUID is not changed"
ansible.builtin.assert:
that:
- 'fs4_result is changed'
- 'fs4_result is success'
@@ -74,14 +96,15 @@
(fstype == "xfs" and ansible_system == "Linux" and
ansible_distribution not in ["CentOS", "Ubuntu"])
block:
- name: Check that resizefs does nothing if device size is not changed
filesystem:
- name: "Check that resizefs does nothing if device size is not changed"
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
resizefs: yes
register: fs5_result
- assert:
- name: "Assert that the state did not change"
ansible.builtin.assert:
that:
- 'fs5_result is not changed'
- 'fs5_result is succeeded'


@@ -4,9 +4,9 @@
# and should not be used as examples of how to write Ansible roles #
####################################################################
- debug:
- ansible.builtin.debug:
msg: '{{ role_name }}'
- debug:
- ansible.builtin.debug:
msg: '{{ role_path|basename }}'
- import_tasks: setup.yml
@@ -27,29 +27,35 @@
grow: '{{ item.0.value.grow }}'
action: '{{ item.1 }}'
when:
- 'not (item.0.key == "btrfs" and ansible_system == "FreeBSD")' # btrfs not available on FreeBSD
# On Ubuntu trusty, blkid is unable to identify filesystems smaller than 256MB, see
# https://www.kernel.org/pub/linux/utils/util-linux/v2.21/v2.21-ChangeLog
# https://anonscm.debian.org/cgit/collab-maint/pkg-util-linux.git/commit/?id=04f7020eadf31efc731558df92daa0a1c336c46c
- 'not (item.0.key == "btrfs" and (ansible_distribution == "Ubuntu" and ansible_distribution_release == "trusty"))'
- 'not (item.0.key == "btrfs" and (ansible_facts.os_family == "RedHat" and ansible_facts.distribution_major_version is version("8", ">=")))'
- 'not (item.0.key == "lvm" and ansible_system == "FreeBSD")' # LVM not available on FreeBSD
- 'not (item.0.key == "lvm" and ansible_virtualization_type in ["docker", "container", "containerd"])' # Tests use losetup which can not be used inside unprivileged container
- 'not (item.0.key == "ocfs2" and ansible_os_family != "Debian")' # ocfs2 only available on Debian based distributions
- 'not (item.0.key == "f2fs" and ansible_system == "FreeBSD")'
# f2fs-tools package not available with RHEL/CentOS
- 'not (item.0.key == "f2fs" and ansible_distribution in ["CentOS", "RedHat"])'
# On Ubuntu trusty, blkid (2.20.1) is unable to identify F2FS filesystem. blkid handles F2FS since v2.23, see:
# https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.23/v2.23-ReleaseNotes
- 'not (item.0.key == "f2fs" and ansible_distribution == "Ubuntu" and ansible_distribution_version is version("14.04", "<="))'
- 'not (item.1 == "overwrite_another_fs" and ansible_system == "FreeBSD")'
# FreeBSD limited support
# Not available: btrfs, lvm, f2fs, ocfs2
# All BSD systems use swap fs, but only Linux needs mkswap
# Supported: ext2/3/4 (e2fsprogs), xfs (xfsprogs), reiserfs (progsreiserfs), vfat
- 'not (ansible_system == "FreeBSD" and item.0.key in ["btrfs", "f2fs", "swap", "lvm", "ocfs2"])'
# Available on FreeBSD but not on testbed (util-linux conflicts with e2fsprogs): wipefs, mkfs.minix
- 'not (ansible_system == "FreeBSD" and item.1 in ["overwrite_another_fs", "remove_fs"])'
# Other limitations and corner cases
# f2fs-tools and reiserfs-utils packages not available with RHEL/CentOS on CI
- 'not (ansible_distribution in ["CentOS", "RedHat"] and item.0.key in ["f2fs", "reiserfs"])'
- 'not (ansible_os_family == "RedHat" and ansible_distribution_major_version is version("8", ">=") and
item.0.key == "btrfs")'
# ocfs2 only available on Debian based distributions
- 'not (item.0.key == "ocfs2" and ansible_os_family != "Debian")'
# Tests use losetup which can not be used inside unprivileged container
- 'not (item.0.key == "lvm" and ansible_virtualization_type in ["docker", "container", "containerd"])'
- 'not (item.1 == "remove_fs" and ansible_system == "FreeBSD")' # util-linux not available on FreeBSD
# On CentOS 6 shippable containers, wipefs seems unable to remove vfat signatures
- 'not (item.1 == "remove_fs" and item.0.key == "vfat" and ansible_distribution == "CentOS" and
ansible_distribution_version is version("7.0", "<"))'
- 'not (ansible_distribution == "CentOS" and ansible_distribution_version is version("7.0", "<") and
item.1 == "remove_fs" and item.0.key == "vfat")'
# On the same systems, mkfs.minix (unhandled by the module) can't find the device/file
- 'not (ansible_distribution == "CentOS" and ansible_distribution_version is version("7.0", "<") and
item.1 == "overwrite_another_fs")'
# The xfsprogs package on newer versions of OpenSUSE (15+) require Python 3, we skip this on our Python 2 container
# OpenSUSE 42.3 Python2 and the other py3 containers are not affected so we will continue to run that
- 'not (item.0.key == "xfs" and ansible_os_family == "Suse" and ansible_python.version.major == 2 and ansible_distribution_major_version|int != 42)'
- 'not (ansible_os_family == "Suse" and ansible_distribution_major_version|int != 42 and
item.0.key == "xfs" and ansible_python.version.major == 2)'
loop: "{{ query('dict', tested_filesystems)|product(['create_fs', 'overwrite_another_fs', 'remove_fs'])|list }}"


@@ -1,40 +1,55 @@
---
- name: 'Recreate "disk" file'
command: 'dd if=/dev/zero of={{ image_file }} bs=1M count={{ fssize }}'
community.general.filesize:
path: '{{ image_file }}'
size: '{{ fssize }}M'
force: true
- name: 'Create a swap filesystem'
command: 'mkswap {{ dev }}'
- name: 'Create a minix filesystem'
ansible.builtin.command:
cmd: 'mkfs.minix {{ dev }}'
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: 'Get UUID of the new filesystem'
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid
- name: "Check that an existing filesystem (not handled by this module) isn't overwritten when force isn't used"
filesystem:
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
register: fs_result
ignore_errors: True
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: 'Get UUID of the filesystem'
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid2
- assert:
- name: 'Assert that module failed and filesystem UUID is not changed'
ansible.builtin.assert:
that:
- 'fs_result is failed'
- 'uuid.stdout == uuid2.stdout'
- name: "Check that an existing filesystem (not handled by this module) is overwritten when force is used"
filesystem:
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
force: yes
register: fs_result2
- command: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
- name: 'Get UUID of the new filesystem'
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: uuid3
- assert:
- name: 'Assert that module succeeded and filesystem UUID is changed'
ansible.builtin.assert:
that:
- 'fs_result2 is successful'
- 'fs_result2 is success'
- 'fs_result2 is changed'
- 'uuid2.stdout != uuid3.stdout'


@@ -1,98 +1,98 @@
---
# We assume 'create_fs' tests have passed.
- name: filesystem creation
filesystem:
- name: "Create filesystem"
community.general.filesystem:
dev: '{{ dev }}'
fstype: '{{ fstype }}'
- name: get filesystem UUID with 'blkid'
command:
- name: "Get filesystem UUID with 'blkid'"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: blkid_ref
- name: Assert that a filesystem exists on top of the device
assert:
- name: "Assert that a filesystem exists on top of the device"
ansible.builtin.assert:
that:
- blkid_ref.stdout | length > 0
# Test check_mode first
- name: filesystem removal (check mode)
filesystem:
- name: "Remove filesystem (check mode)"
community.general.filesystem:
dev: '{{ dev }}'
state: absent
register: wipefs
check_mode: yes
- name: get filesystem UUID with 'blkid' (should remain the same)
command:
- name: "Get filesystem UUID with 'blkid' (should remain the same)"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
register: blkid
- name: Assert that the state changed but the filesystem still exists
assert:
- name: "Assert that the state changed but the filesystem still exists"
ansible.builtin.assert:
that:
- wipefs is changed
- blkid.stdout == blkid_ref.stdout
# Do it
- name: filesystem removal
filesystem:
- name: "Remove filesystem"
community.general.filesystem:
dev: '{{ dev }}'
state: absent
register: wipefs
- name: get filesystem UUID with 'blkid' (should be empty)
command:
- name: "Get filesystem UUID with 'blkid' (should be empty)"
ansible.builtin.command:
cmd: 'blkid -c /dev/null -o value -s UUID {{ dev }}'
changed_when: false
failed_when: false
register: blkid
- name: Assert that the state changed and the device has no filesystem
assert:
- name: "Assert that the state changed and the device has no filesystem"
ansible.builtin.assert:
that:
- wipefs is changed
- blkid.stdout | length == 0
- blkid.rc == 2
# Do it again
- name: filesystem removal (idempotency)
filesystem:
- name: "Remove filesystem (idempotency)"
community.general.filesystem:
dev: '{{ dev }}'
state: absent
register: wipefs
- name: Assert that the state did not change
assert:
- name: "Assert that the state did not change"
ansible.builtin.assert:
that:
- wipefs is not changed
# and again
- name: filesystem removal (idempotency, check mode)
filesystem:
- name: "Remove filesystem (idempotency, check mode)"
community.general.filesystem:
dev: '{{ dev }}'
state: absent
register: wipefs
check_mode: yes
- name: Assert that the state did not change
assert:
- name: "Assert that the state did not change"
ansible.builtin.assert:
that:
- wipefs is not changed
# By the way, test removal of a filesystem on unexistent device
- name: filesystem removal (unexistent device)
filesystem:
- name: "Remove filesystem (unexistent device)"
community.general.filesystem:
dev: '/dev/unexistent_device'
state: absent
register: wipefs
- name: Assert that the state did not change
assert:
- name: "Assert that the state did not change"
ansible.builtin.assert:
that:
- wipefs is not changed


@@ -1,6 +1,9 @@
---
- name: install filesystem tools
package:
# By installing e2fsprogs on FreeBSD, we get a usable blkid command, but this
# package conflicts with util-linux, that provides blkid too, but also wipefs
# (required for filesystem state=absent).
- name: "Install filesystem tools"
ansible.builtin.package:
name: '{{ item }}'
state: present
# xfsprogs on OpenSUSE requires Python 3, skip this for our newer Py2 OpenSUSE builds
@@ -9,86 +12,134 @@
- e2fsprogs
- xfsprogs
- block:
- name: install btrfs progs
package:
name: btrfs-progs
state: present
when:
- ansible_os_family != 'Suse'
- not (ansible_distribution == 'Ubuntu' and ansible_distribution_version is version('16.04', '<='))
- ansible_system != "FreeBSD"
- not (ansible_facts.os_family == "RedHat" and ansible_facts.distribution_major_version is version('8', '>='))
- name: "Install btrfs progs"
ansible.builtin.package:
name: btrfs-progs
state: present
when:
- ansible_os_family != 'Suse'
- not (ansible_distribution == 'Ubuntu' and ansible_distribution_version is version('16.04', '<='))
- ansible_system != "FreeBSD"
- not (ansible_facts.os_family == "RedHat" and ansible_facts.distribution_major_version is version('8', '>='))
- name: install btrfs progs (Ubuntu <= 16.04)
package:
name: btrfs-tools
state: present
when: ansible_distribution == 'Ubuntu' and ansible_distribution_version is version('16.04', '<=')
- name: "Install btrfs tools (Ubuntu <= 16.04)"
ansible.builtin.package:
name: btrfs-tools
state: present
when:
- ansible_distribution == 'Ubuntu'
- ansible_distribution_version is version('16.04', '<=')
- name: install btrfs progs (OpenSuse)
package:
name: '{{ item }}'
state: present
when: ansible_os_family == 'Suse'
with_items:
- python{{ ansible_python.version.major }}-xml
- btrfsprogs
- name: "Install btrfs progs (OpenSuse)"
ansible.builtin.package:
name: '{{ item }}'
state: present
when: ansible_os_family == 'Suse'
with_items:
- python{{ ansible_python.version.major }}-xml
- btrfsprogs
- name: install ocfs2 (Debian)
package:
name: ocfs2-tools
state: present
when: ansible_os_family == 'Debian'
- name: "Install reiserfs utils (Fedora)"
ansible.builtin.package:
name: reiserfs-utils
state: present
when:
- ansible_distribution == 'Fedora'
- when:
- ansible_os_family != 'RedHat' or ansible_distribution == 'Fedora'
- ansible_distribution != 'Ubuntu' or ansible_distribution_version is version('16.04', '>=')
- ansible_system != "FreeBSD"
block:
- name: install f2fs
package:
name: f2fs-tools
state: present
- name: "Install reiserfs (OpenSuse)"
ansible.builtin.package:
name: reiserfs
state: present
when:
- ansible_os_family == 'Suse'
- name: fetch f2fs version
command: mkfs.f2fs /dev/null
ignore_errors: yes
register: mkfs_f2fs
- name: "Install reiserfs progs (Debian and more)"
ansible.builtin.package:
name: reiserfsprogs
state: present
when:
- ansible_system == 'Linux'
- ansible_os_family not in ['Suse', 'RedHat']
- set_fact:
f2fs_version: '{{ mkfs_f2fs.stdout | regex_search("F2FS-tools: mkfs.f2fs Ver:.*") | regex_replace("F2FS-tools: mkfs.f2fs Ver: ([0-9.]+) .*", "\1") }}'
- name: "Install reiserfs progs (FreeBSD)"
ansible.builtin.package:
name: progsreiserfs
state: present
when:
- ansible_system == 'FreeBSD'
- name: install dosfstools and lvm2 (Linux)
package:
name: '{{ item }}'
with_items:
- dosfstools
- lvm2
when: ansible_system == 'Linux'
- name: "Install ocfs2 (Debian)"
ansible.builtin.package:
name: ocfs2-tools
state: present
when: ansible_os_family == 'Debian'
- block:
- name: install fatresize
package:
name: fatresize
state: present
- command: fatresize --help
register: fatresize
- set_fact:
fatresize_version: '{{ fatresize.stdout_lines[0] | regex_search("[0-9]+\.[0-9]+\.[0-9]+") }}'
- name: "Install f2fs tools and get version"
when:
- ansible_os_family != 'RedHat' or ansible_distribution == 'Fedora'
- ansible_distribution != 'Ubuntu' or ansible_distribution_version is version('16.04', '>=')
- ansible_system != "FreeBSD"
block:
- name: "Install f2fs tools"
ansible.builtin.package:
name: f2fs-tools
state: present
- name: "Fetch f2fs version"
ansible.builtin.command:
cmd: mkfs.f2fs /dev/null
changed_when: false
ignore_errors: true
register: mkfs_f2fs
- name: "Record f2fs_version"
ansible.builtin.set_fact:
f2fs_version: '{{ mkfs_f2fs.stdout
| regex_search("F2FS-tools: mkfs.f2fs Ver:.*")
| regex_replace("F2FS-tools: mkfs.f2fs Ver: ([0-9.]+) .*", "\1") }}'
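The `regex_search`/`regex_replace` pair above maps directly onto Python's `re` module. A sketch with a hypothetical sample banner (the real `mkfs.f2fs` output may differ); the two patterns are taken verbatim from the task:

```python
import re

# Hypothetical mkfs.f2fs banner line; the patterns mirror the task above.
sample = "F2FS-tools: mkfs.f2fs Ver: 1.14.0 (2020-08-24)"

# regex_search: keep only the version line
match = re.search(r"F2FS-tools: mkfs.f2fs Ver:.*", sample)

# regex_replace: reduce that line to the captured version number
version = re.sub(r"F2FS-tools: mkfs.f2fs Ver: ([0-9.]+) .*", r"\1", match.group(0))
```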
- name: "Install dosfstools and lvm2 (Linux)"
ansible.builtin.package:
name: '{{ item }}'
with_items:
- dosfstools
- lvm2
when: ansible_system == 'Linux'
- name: "Install fatresize and get version"
when:
- ansible_system == 'Linux'
- ansible_os_family != 'Suse'
- ansible_os_family != 'RedHat' or (ansible_distribution == 'CentOS' and ansible_distribution_version is version('7.0', '=='))
block:
- name: "Install fatresize"
ansible.builtin.package:
name: fatresize
state: present
- command: mke2fs -V
- name: "Fetch fatresize version"
ansible.builtin.command:
cmd: fatresize --help
changed_when: false
register: fatresize
- name: "Record fatresize_version"
ansible.builtin.set_fact:
fatresize_version: '{{ fatresize.stdout_lines[0] | regex_search("[0-9]+\.[0-9]+\.[0-9]+") }}'
- name: "Fetch e2fsprogs version"
ansible.builtin.command:
cmd: mke2fs -V
changed_when: false
register: mke2fs
- set_fact:
- name: "Record e2fsprogs_version"
ansible.builtin.set_fact:
# mke2fs 1.43.6 (29-Aug-2017)
e2fsprogs_version: '{{ mke2fs.stderr_lines[0] | regex_search("[0-9]{1,2}\.[0-9]{1,2}(\.[0-9]{1,2})?") }}'
- set_fact:
- name: "Set version-related facts to skip further tasks"
ansible.builtin.set_fact:
# http://e2fsprogs.sourceforge.net/e2fsprogs-release.html#1.43
# Mke2fs no longer complains if the user tries to create a file system
# using the entire block device.


@@ -0,0 +1,2 @@
shippable/posix/group4
skip/python2.6 # filters are controller only, and we no longer support Python 2.6 on the controller


@@ -0,0 +1,45 @@
---
- name: Test functionality
assert:
that:
- list1 | community.general.groupby_as_dict('name') == dict1
- name: 'Test error: not a list'
set_fact:
test: "{{ list_no_list | community.general.groupby_as_dict('name') }}"
ignore_errors: true
register: result
- assert:
that:
- result.msg == 'Input is not a sequence'
- name: 'Test error: list element not a mapping'
set_fact:
test: "{{ list_no_dict | community.general.groupby_as_dict('name') }}"
ignore_errors: true
register: result
- assert:
that:
- "result.msg == 'Sequence element #0 is not a mapping'"
- name: 'Test error: list element does not have attribute'
set_fact:
test: "{{ list_no_attribute | community.general.groupby_as_dict('name') }}"
ignore_errors: true
register: result
- assert:
that:
- "result.msg == 'Attribute not contained in element #1 of sequence'"
- name: 'Test error: attribute collision'
set_fact:
test: "{{ list_collision | community.general.groupby_as_dict('name') }}"
ignore_errors: true
register: result
- assert:
that:
- result.msg == "Multiple sequence entries have attribute value 'a'"


@@ -0,0 +1,31 @@
---
list1:
- name: a
x: y
- name: b
z: 1
dict1:
a:
name: a
x: y
b:
name: b
z: 1
list_no_list:
a:
name: a
list_no_dict:
- []
- 1
list_no_attribute:
- name: a
foo: baz
- foo: bar
list_collision:
- name: a
- name: a
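Combined with the error-message assertions in the test tasks, these fixtures pin down the filter's behavior well enough to sketch it in plain Python. The implementation below is reconstructed from the expectations above, not taken from the plugin source, so treat it as illustrative:

```python
from collections.abc import Mapping, Sequence


def groupby_as_dict(seq, attribute):
    """Plain-Python sketch of the community.general.groupby_as_dict filter,
    reconstructed from the test fixtures and error-message assertions."""
    if isinstance(seq, str) or not isinstance(seq, Sequence):
        raise ValueError('Input is not a sequence')
    result = {}
    for index, element in enumerate(seq):
        if not isinstance(element, Mapping):
            raise ValueError('Sequence element #%d is not a mapping' % index)
        if attribute not in element:
            raise ValueError('Attribute not contained in element #%d of sequence' % index)
        key = element[attribute]
        if key in result:
            raise ValueError("Multiple sequence entries have attribute value %r" % key)
        result[key] = element
    return result
```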


@@ -1,3 +1,6 @@
gitlab_user: ansible_test_user
gitlab_user_pass: Secr3tPassw00rd
gitlab_user_email: root@localhost
gitlab_sshkey_name: ansibletest
gitlab_sshkey_file: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI8GIMlrirf+zsvBpxnF0daykP6YEJ5wytZXhDGD2dZXg9Tln0KUSDgreT3FDgoabjlOmG1L/nhu6ML76WCsmc/wnVMlXlDlQpVJSQ2PCxGNs9WRW7Y/Pk6t9KtV/VSYr0LaPgLEU8VkffSUBJezbKa1cssjb4CmRRqcePRNYpgCXdK05TEgFvmXl9qIM8Domf1ak1PlbyMmi/MytzHmnVFzxgUKv5c0Mr+vguCi131gPdh3QSf5AHPLEoO9LcMfu2IO1zvl61wYfsJ0Wn2Fncw+tJQfUin0ffTFgUIsGqki04/YjXyWynjSwQf5Jym4BYM0i2zlDUyRxs4/Tfp4yvJFik42ambzjLK6poq+iCpQReeYih9WZUaZwUQe7zYWhTOuoV7ydsk8+kDRMPidF9K5zWkQnglGrOzdbTqnhxNpwHCg2eSRJ49kPYLOH76g8P7IQvl+zluG0o8Nndir1WcYil4D4CCBskM8WbmrElZH1CRyP/NQMNIf4hFMItTjk= ansible@ansible
gitlab_sshkey_expires_at: 2030-01-01T00:00:00.000Z


@@ -56,7 +56,7 @@
- gitlab_user_state_again.user.is_admin == False
- name: Update User Test => Make User Admin
gitlab_user:
api_url: "{{ gitlab_host }}"
email: "{{ gitlab_user_email }}"
@@ -189,8 +189,8 @@
api_url: "{{ gitlab_host }}"
validate_certs: False
# note: the only way to check if a password really is what it is expected
# to be is to use it for login, so we use it here instead of the
# default token assuming that a user can always change its own password
api_username: "{{ gitlab_user }}"
api_password: "{{ gitlab_user_pass }}"
@@ -205,8 +205,8 @@
- name: Check PW setting return state
assert:
that:
# note: there is no way to determine if a password has changed or
# not, so it can only be always yellow or always green, we
# decided for always green for now
- gitlab_user_state is not changed
@@ -248,3 +248,5 @@
assert:
that:
- gitlab_user_state is not changed
- include_tasks: sshkey.yml


@@ -0,0 +1,134 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Create gitlab user with sshkey credentials
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
password: "{{ gitlab_user_pass }}"
validate_certs: false
sshkey_name: "{{ gitlab_sshkey_name }}"
sshkey_file: "{{ gitlab_sshkey_file }}"
state: present
register: gitlab_user_sshkey
- name: Check user has been created correctly
assert:
that:
- gitlab_user_sshkey is changed
- name: Create gitlab user again
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
password: "{{ gitlab_user_pass }}"
validate_certs: false
sshkey_name: "{{ gitlab_sshkey_name }}"
sshkey_file: "{{ gitlab_sshkey_file }}"
state: present
register: gitlab_user_sshkey_again
- name: Check state is not changed
assert:
that:
- gitlab_user_sshkey_again is not changed
- name: Add expires_at to an already created gitlab user with ssh key
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
password: "{{ gitlab_user_pass }}"
validate_certs: false
sshkey_name: "{{ gitlab_sshkey_name }}"
sshkey_file: "{{ gitlab_sshkey_file }}"
sshkey_expires_at: "{{ gitlab_sshkey_expires_at }}"
state: present
register: gitlab_user_created_user_sshkey_expires_at
- name: Check expires_at will not be added to a present ssh key
assert:
that:
- gitlab_user_created_user_sshkey_expires_at is not changed
- name: Remove created gitlab user
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
validate_certs: false
state: absent
register: gitlab_user_sshkey_remove
- name: Check user has been removed correctly
assert:
that:
- gitlab_user_sshkey_remove is changed
- name: Create gitlab user with sshkey and expires_at
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
password: "{{ gitlab_user_pass }}"
validate_certs: false
sshkey_name: "{{ gitlab_sshkey_name }}"
sshkey_file: "{{ gitlab_sshkey_file }}"
sshkey_expires_at: "{{ gitlab_sshkey_expires_at }}"
state: present
register: gitlab_user_sshkey_expires_at
- name: Check user has been created correctly
assert:
that:
- gitlab_user_sshkey_expires_at is changed
- name: Create gitlab user with sshkey and expires_at again
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
password: "{{ gitlab_user_pass }}"
validate_certs: false
sshkey_name: "{{ gitlab_sshkey_name }}"
sshkey_file: "{{ gitlab_sshkey_file }}"
sshkey_expires_at: "{{ gitlab_sshkey_expires_at }}"
state: present
register: gitlab_user_sshkey_expires_at_again
- name: Check state is not changed
assert:
that:
- gitlab_user_sshkey_expires_at_again is not changed
- name: Remove created gitlab user
gitlab_user:
api_url: "{{ gitlab_host }}"
api_token: "{{ gitlab_login_token }}"
email: "{{ gitlab_user_email }}"
name: "{{ gitlab_user }}"
username: "{{ gitlab_user }}"
validate_certs: false
state: absent
register: gitlab_user_sshkey_expires_at_remove
- name: Check user has been removed correctly
assert:
that:
- gitlab_user_sshkey_expires_at_remove is changed
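
The create / re-apply / remove sequence above pins down the module's idempotency contract: the first run reports `changed`, a second run with identical parameters does not, and removal reports `changed` again. A minimal in-memory sketch of that contract (the `ensure_user` helper and its `store` are hypothetical stand-ins, not the `gitlab_user` module's real code):

```python
# Hypothetical in-memory model of the changed/unchanged contract that the
# gitlab_user tasks above assert; NOT the module's real implementation.
def ensure_user(store, username, state="present", **params):
    if state == "absent":
        existed = store.pop(username, None) is not None
        return {"changed": existed}          # absent + missing user: no change
    if store.get(username) == params:
        return {"changed": False}            # re-run with same params: no change
    store[username] = params
    return {"changed": True}                 # created or updated
```

This is exactly what the assert tasks check through `is changed` / `is not changed`.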


@@ -9,12 +9,22 @@
- name: Include tasks to create ssl materials on the controller
include_tasks: prepare.yml
- set_fact:
ssl_backends: ['openssl']
- set_fact:
ssl_backends: "{{ ssl_backends + ['cryptography'] }}"
when: cryptography_version.stdout is version('3.0', '>=')
- when: has_java_keytool
block:
- name: Include tasks to play with 'certificate' and 'private_key' contents
include_tasks: tests.yml
vars:
remote_cert: false
loop: "{{ ssl_backends }}"
loop_control:
loop_var: ssl_backend
- name: Include tasks to create ssl materials on the remote host
include_tasks: prepare.yml
@@ -23,3 +33,6 @@
include_tasks: tests.yml
vars:
remote_cert: true
loop: "{{ ssl_backends }}"
loop_control:
loop_var: ssl_backend
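
The two `set_fact` tasks above build the backend list incrementally: `openssl` is always tested, and `cryptography` is appended only when the installed cryptography release is 3.0 or newer. The same selection can be sketched in Python (helper name is illustrative; assumes a `major.minor[.patch]` version string):

```python
# Sketch of the backend selection above; mirrors the version('3.0', '>=') test.
def pick_ssl_backends(cryptography_version):
    backends = ["openssl"]
    major_minor = tuple(int(x) for x in cryptography_version.split(".")[:2])
    if major_minor >= (3, 0):
        backends.append("cryptography")
    return backends
```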


@@ -23,6 +23,7 @@
private_key_path: "{{ omit if not remote_cert else output_dir ~ '/' ~ (item.keyname | d(item.name)) ~ '.key' }}"
private_key_passphrase: "{{ item.passphrase | d(omit) }}"
password: changeit
ssl_backend: "{{ ssl_backend }}"
loop: "{{ java_keystore_certs }}"
check_mode: yes
register: result_check


@@ -0,0 +1,2 @@
shippable/posix/group2
skip/python2.6 # lookups are controller only, and we no longer support Python 2.6 on the controller


@@ -0,0 +1,179 @@
---
- name: Test 1
set_fact:
loop_result: >-
{{
query('community.general.dependent',
dict(key1=[1, 2]),
dict(key2='[item.key1 + 3, item.key1 + 6]'),
dict(key3='[item.key1 + item.key2 * 10]'))
}}
- name: Check result of Test 1
assert:
that:
- loop_result == expected_result
vars:
expected_result:
- key1: 1
key2: 4
key3: 41
- key1: 1
key2: 7
key3: 71
- key1: 2
key2: 5
key3: 52
- key1: 2
key2: 8
key3: 82
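
Each later term passed to the `dependent` lookup is a Jinja2 expression evaluated against the items already chosen, so Test 1 is equivalent to the following nested loops (a plain-Python sketch of the expansion, not the plugin's implementation):

```python
# Test 1 expands to nested loops where every later list is computed from
# the values selected so far.
results = []
for key1 in [1, 2]:
    for key2 in [key1 + 3, key1 + 6]:
        for key3 in [key1 + key2 * 10]:
            results.append({"key1": key1, "key2": key2, "key3": key3})
```

Running this reproduces the `expected_result` list above, in the same order.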
- name: Test 2
set_fact:
loop_result: >-
{{ query('community.general.dependent',
dict([['a', [1, 2, 3]]]),
dict([['b', '[1, 2, 3, 4] if item.a == 1 else [2, 3, 4] if item.a == 2 else [3, 4]']])) }}
# The last expression could have been `range(item.a, 5)`, but that's not supported by all Jinja2 versions used in CI
- name: Check result of Test 2
assert:
that:
- loop_result == expected_result
vars:
expected_result:
- a: 1
b: 1
- a: 1
b: 2
- a: 1
b: 3
- a: 1
b: 4
- a: 2
b: 2
- a: 2
b: 3
- a: 2
b: 4
- a: 3
b: 3
- a: 3
b: 4
- name: Test 3
debug:
var: item
with_community.general.dependent:
- var1:
a:
- 1
- 2
b:
- 3
- 4
- var2: 'item.var1.value'
- var3: 'dependent_lookup_test[item.var1.key ~ "_" ~ item.var2]'
loop_control:
label: "{{ [item.var1.key, item.var2, item.var3] }}"
register: dependent
vars:
dependent_lookup_test:
a_1:
- A
- B
a_2:
- C
b_3:
- D
b_4:
- E
- F
- G
- name: Check result of Test 3
assert:
that:
- (dependent.results | length) == 7
- dependent.results[0].item.var1.key == "a"
- dependent.results[0].item.var2 == 1
- dependent.results[0].item.var3 == "A"
- dependent.results[1].item.var1.key == "a"
- dependent.results[1].item.var2 == 1
- dependent.results[1].item.var3 == "B"
- dependent.results[2].item.var1.key == "a"
- dependent.results[2].item.var2 == 2
- dependent.results[2].item.var3 == "C"
- dependent.results[3].item.var1.key == "b"
- dependent.results[3].item.var2 == 3
- dependent.results[3].item.var3 == "D"
- dependent.results[4].item.var1.key == "b"
- dependent.results[4].item.var2 == 4
- dependent.results[4].item.var3 == "E"
- dependent.results[5].item.var1.key == "b"
- dependent.results[5].item.var2 == 4
- dependent.results[5].item.var3 == "F"
- dependent.results[6].item.var1.key == "b"
- dependent.results[6].item.var2 == 4
- dependent.results[6].item.var3 == "G"
- name: "Test 4: template failure"
debug:
msg: "{{ item }}"
with_community.general.dependent:
- a:
- 1
- 2
- b: "[item.a + foo]"
ignore_errors: true
register: eval_error
- name: Check result of Test 4
assert:
that:
- eval_error is failed
- eval_error.msg.startswith("Caught \"'foo' is undefined\" while evaluating ")
- name: "Test 5: same variable name reused"
debug:
msg: "{{ item }}"
with_community.general.dependent:
- a: x
- b: x
ignore_errors: true
register: eval_error
- name: Check result of Test 5
assert:
that:
- eval_error is failed
- eval_error.msg.startswith("Caught \"'x' is undefined\" while evaluating ")
- name: "Test 6: multi-value dict"
debug:
msg: "{{ item }}"
with_community.general.dependent:
- a: x
b: x
ignore_errors: true
register: eval_error
- name: Check result of Test 6
assert:
that:
- eval_error is failed
- eval_error.msg == 'Parameter 0 must be a one-element dictionary, got 2 elements'
- name: "Test 7: empty dict"
debug:
msg: "{{ item }}"
with_community.general.dependent:
- {}
ignore_errors: true
register: eval_error
- name: Check result of Test 7
assert:
that:
- eval_error is failed
- eval_error.msg == 'Parameter 0 must be a one-element dictionary, got 0 elements'


@@ -61,6 +61,37 @@
that:
- readpass == newpass
- name: Create a password using missing=create
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-missing-create missing=create length=8') }}"
- name: Fetch password from an existing file
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-missing-create') }}"
- name: Verify password
assert:
that:
- readpass == newpass
- name: Fetch password from existing file using missing=empty
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-missing-create missing=empty') }}"
- name: Verify password
assert:
that:
- readpass == newpass
- name: Fetch password from non-existing file using missing=empty
set_fact:
readpass: "{{ query('community.general.passwordstore', 'test-missing-pass missing=empty') }}"
- name: Verify password
assert:
that:
- readpass == [ none ]
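
The tasks above cover the three `missing=` behaviours: `create` generates and stores a new password, a later plain lookup returns that stored value, and `empty` returns `None` for a missing entry (so `query()` yields `[None]`) instead of failing. A simplified model of those semantics (hypothetical helper and in-memory store, not the plugin's real code):

```python
import secrets

# Simplified model of the passwordstore lookup's missing= option, as the
# tasks above exercise it; NOT the plugin's actual implementation.
def passwordstore(store, name, missing="error", length=8):
    if name in store:
        return store[name]
    if missing == "empty":
        return None                                # query() yields [None]
    if missing == "create":
        store[name] = secrets.token_hex(length)[:length]
        return store[name]
    raise KeyError(f"passname {name} not found")   # default missing=error
```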
# As inserting multiline passwords on the command line would require something
# like expect, simply create one by running the default gpg on a file with the
# correct structure.


@@ -0,0 +1,3 @@
shippable/posix/group2
skip/aix
skip/python2.6 # lookups are controller only, and we no longer support Python 2.6 on the controller


@@ -0,0 +1,6 @@
---
- hosts: localhost
tasks:
- name: Install Petname Python package
pip:
name: petname


@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
set -eux
ANSIBLE_ROLES_PATH=../ \
ansible-playbook dependencies.yml -v "$@"
ANSIBLE_ROLES_PATH=../ \
ansible-playbook test.yml -v "$@"


@@ -0,0 +1,25 @@
- hosts: localhost
gather_facts: no
tasks:
- name: Call plugin
set_fact:
result1: "{{ query('community.general.random_pet', words=3) }}"
result2: "{{ query('community.general.random_pet', length=3) }}"
result3: "{{ query('community.general.random_pet', prefix='kubernetes') }}"
result4: "{{ query('community.general.random_pet', separator='_') }}"
result5: "{{ query('community.general.random_pet', words=2, length=6, prefix='kubernetes', separator='_') }}"
- name: Check results
assert:
that:
- result1 | length == 1
- result1[0].split('-') | length == 3
- result2 | length == 1
- result2[0].split('-')[0] | length <= 3
- result3 | length == 1
- result3[0].split('-')[0] == 'kubernetes'
- result4 | length == 1
- result4[0].split('_') | length == 2
- result5 | length == 1
- result5[0].split('_') | length == 3
- result5[0].split('_')[0] == 'kubernetes'
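
The assertions decode what each parameter controls: `words` sets the number of generated words, `length` caps each word's length, `prefix` prepends an extra segment, and `separator` joins the segments. A rough stand-in sketch of that behaviour (the real plugin draws words from the `petname` library; the word lists here are placeholders):

```python
import random

# Placeholder word lists; the real plugin uses the 'petname' library.
ADJECTIVES = ["brave", "calm", "eager", "quiet"]
ANIMALS = ["otter", "lynx", "heron", "finch"]

def random_pet(words=2, length=6, prefix=None, separator="-"):
    parts = [random.choice(ADJECTIVES)[:length] for _ in range(words - 1)]
    parts.append(random.choice(ANIMALS)[:length])  # 'length' caps each word
    if prefix:
        parts.insert(0, prefix)                    # prefix adds one segment
    return separator.join(parts)
```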


@@ -48,7 +48,7 @@
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
register: results
- assert:
that:
- results is not changed
@@ -226,6 +226,92 @@
- results_action_current.vmid == {{ vmid }}
- results_action_current.msg == "VM test-instance with vmid = {{ vmid }} is running"
- name: VM add/change/delete NIC
tags: [ 'nic' ]
block:
- name: Add NIC to test VM
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: present
interface: net5
bridge: vmbr0
tag: 42
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 updated on VM with vmid {{ vmid }}"
- name: Update NIC no changes
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: present
interface: net5
bridge: vmbr0
tag: 42
register: results
- assert:
that:
- results is not changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 unchanged on VM with vmid {{ vmid }}"
- name: Update NIC with changes
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: present
interface: net5
bridge: vmbr0
tag: 24
firewall: True
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 updated on VM with vmid {{ vmid }}"
- name: Delete NIC
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: absent
interface: net5
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 deleted on VM with vmid {{ vmid }}"
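
The four NIC tasks walk the full lifecycle the module must honour: creating `net5` reports changed, re-applying the identical config does not, changing `tag`/`firewall` reports changed again, and `state: absent` removes it. A compact model of that contract (hypothetical helper over an in-memory dict, not `proxmox_nic`'s real code):

```python
# Hypothetical model of the add/unchanged/update/delete cycle asserted above.
def apply_nic(nics, interface, state="present", **config):
    if state == "absent":
        existed = nics.pop(interface, None) is not None
        word = "deleted" if existed else "absent"
        return {"changed": existed, "msg": f"Nic {interface} {word}"}
    if nics.get(interface) == config:
        return {"changed": False, "msg": f"Nic {interface} unchanged"}
    nics[interface] = config
    return {"changed": True, "msg": f"Nic {interface} updated"}
```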
- name: VM stop
tags: [ 'stop' ]
block:


@@ -23,7 +23,8 @@
state: present
register: lock_all_packages
- name: Update all packages
# The missing -y is on purpose: the command should fail if it needs user interaction.
- name: Update all packages (not really)
command: yum update --setopt=obsoletes=0
register: update_all_locked_packages
changed_when:
@@ -59,4 +60,4 @@
state: absent
when: yum_versionlock_install is changed
when: (ansible_distribution in ['CentOS', 'RedHat'] and ansible_distribution_major_version is version('7', '>=')) or
(ansible_distribution == 'Fedora')
(ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('33', '<='))


@@ -3,9 +3,6 @@ plugins/module_utils/compat/ipaddress.py no-assert
plugins/module_utils/compat/ipaddress.py no-unicode-literals
plugins/module_utils/_mount.py future-import-boilerplate
plugins/module_utils/_mount.py metaclass-boilerplate
plugins/modules/cloud/linode/linode.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:undocumented-parameter
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice


@@ -2,9 +2,6 @@ plugins/module_utils/compat/ipaddress.py no-assert
plugins/module_utils/compat/ipaddress.py no-unicode-literals
plugins/module_utils/_mount.py future-import-boilerplate
plugins/module_utils/_mount.py metaclass-boilerplate
plugins/modules/cloud/linode/linode.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:undocumented-parameter
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice

Some files were not shown because too many files have changed in this diff Show More