Compare commits


35 Commits
5.3.0 ... 5.4.0

Author SHA1 Message Date
Felix Fontein
6b3c797bf6 Release 5.4.0. 2022-08-02 12:20:29 +02:00
patchback[bot]
a81e94ddc7 WDC Redfish firmware update support for update image creds (#5056) (#5057)
Allows user to specify Basic Auth credentials for firmware update image.

(cherry picked from commit 4eb3540c8e)

Co-authored-by: Mike <michael.moerk@wdc.com>
2022-08-02 10:20:15 +02:00
patchback[bot]
e56dafde94 Set CARGO_NET_GIT_FETCH_WITH_CLI=true for cargo on Alpine. (#5053) (#5055)
(cherry picked from commit b5eae69e36)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-01 23:20:33 +02:00
patchback[bot]
767a296b60 New lookup plug-in: bitwarden (#5012) (#5049)
* Basic support for Bitwarden lookups

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>

* Update plugins/lookup/bitwarden.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/bitwarden.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/bitwarden.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: jonathan lung <lungj@heresjono.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
(cherry picked from commit ab0cd83bb1)

Co-authored-by: Jonathan Lung <lungj@users.noreply.github.com>
2022-08-01 11:52:05 +02:00
patchback[bot]
963bbaccb7 xfconf: add command output to results (#5037) (#5051)
* xfconf: add command output to results

* add changelog fragment

* add docs for return value cmd

* Update plugins/modules/system/xfconf.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 5933d28dc4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-08-01 11:27:57 +02:00
patchback[bot]
9358640ed9 Fix: Add user-agent header to allow request through WAF with bot protection (#5024) (#5046)
* Fix: Add user agent header to allow request through CDN/WAF with bot protection

* update doc-fragment

* move http_agent variable assignment

* set http_agent param for all Keycloak API Requests

* Update plugins/doc_fragments/keycloak.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/5023-http-agent-param-keycloak.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix formatting

* Update plugins/doc_fragments/keycloak.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 88a3daf2ec)

Co-authored-by: Dishant Pandya <drpdishant@gmail.com>
2022-08-01 09:59:49 +02:00
patchback[bot]
2846242e95 lastpass lookup: use config manager, improve documentation (#5022) (#5047)
* LastPass lookup: use config manager, improve documentation

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/5022-lastpass-lookup-cleanup.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: jonathan lung <lungj@heresjono.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e8e6b9bbd7)

Co-authored-by: Jonathan Lung <lungj@users.noreply.github.com>
2022-08-01 09:59:36 +02:00
patchback[bot]
ce934aa49b Slack: Add support for (some) groups (#5019) (#5044)
* Slack: Add support for (some) groups

Some of the older private channels in the workspace I'm working in have channel IDs starting with `G0` and `GF`, and this resulted in false-positive `channel_not_found` errors.
I've added these prefixes to the list to maintain as much backwards compatibility as possible.

Ideally the auto-prefixing of the channel name with `#` would be dropped entirely, given that channel IDs have become more dominant in the Slack API over the past years.

* Add changelog fragment for slack channel prefix fix

* Update changelogs/fragments/5019-slack-support-more-groups.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3fe9592cf1)

Co-authored-by: Richard Tuin <richardtuin@gmail.com>
2022-07-31 23:52:47 +02:00
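The fix in this Slack commit amounts to a prefix check. A minimal sketch (the prefix list and function name below are illustrative, not the module's exact code):

```python
# Values that already carry a prefix or look like Slack conversation IDs,
# now including the older private-channel prefixes 'G0' and 'GF', are passed
# through unchanged; anything else gets the legacy '#' prefix.
ID_PREFIXES = ('#', '@', 'C0', 'G0', 'GF')  # assumption: illustrative list

def format_channel(channel):
    if channel.startswith(ID_PREFIXES):
        return channel
    return '#' + channel
```

With this check, an older private-channel ID such as `G0...` is no longer rewritten into a nonexistent `#G0...` channel name.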
Felix Fontein
083bd49976 Prepare 5.4.0 release. 2022-07-31 22:57:11 +02:00
patchback[bot]
2cc72c2213 Pacman: Add support for install reason (#4956) (#5040)
* Pacman: Add support for setting install reason

* Improved description

* Fix documentation

* Add changelog fragment

* Use source for installation

* Get all reasons at once

* Removed default for reason

* Added version info to documentation

* Fix NameError

* Moved determination of reason to _build_inventory

* Fix duplication and sanity errors

* adjust tests for changed inventory

* Documentation: remove empty default for reason

* mention packages with changed reason in exit params/info

* Added integration tests for reason and reason_for

Inspired by the integration tests for url packages

* Correct indentation

* Fix indentation

* Also sort changed packages in normal mode

* Also sort result in unit test

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 9f3841703f)

Co-authored-by: Minei3oat <Minei3oat@users.noreply.github.com>
2022-07-31 22:55:21 +02:00
patchback[bot]
1f4a98c8cc Module listen ports facts extend output (#4953) (#5041)
* Initial Rework of netstat and ss to include additional information.
State, foreign address, process.

* Fixed sanity tests. Python 2 compatible code. pylint errors resolved.

* Sanity tests. ss_parse fix minor error I created before.

* Rename variable for clarity

* Python2 rsplit takes no keyword argument. -> remove keyword argument

* Generic improvements for split_pid_name. Added changelog

* Sanity Test (no type hints for python2.7)

* add include_non_listening param. Add param to test. Add documentation. Only return state and foreign_address when include_non_listening

* Update changelogs/fragments/4953-listen-ports-facts-extend-output.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add info to changelog fragment. Clarify documentation.

* The case where we have multiple entries in pids for udp, e.g. users:(("rpcbind",pid=733,fd=5),("systemd",pid=1,fd=30)), is not in the tests. So roll back to the previous approach where this is covered. Fix wrong if condition for include_non_listening.

* Rewrite documentation and formatting.

* Last small documentation adjustments.

* Update parameters to match description.

* Added test cases to check that include_non_listening is set to no by default, and that ports and foreign_address exist if set to yes

* undo rename from address to local_address -> breaking change

* Replace choice with bool, as it is the correct fit here

* netstat distinguishes between tcp6 and tcp; output should always be tcp

* Minor adjustments in the docs (no -> false, is set to yes -> true)

Co-authored-by: Paul-Kehnel <paul.kehnel@ocean.ibm.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c273498a03)

Co-authored-by: PKehnel <ga65coy@mytum.de>
2022-07-31 22:55:13 +02:00
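The tricky case called out in the listen_ports_facts commit above is an ss `users:` field listing several processes for one socket. A sketch of parsing it (assumed helper, not the module's actual code):

```python
import re

# Extract (process_name, pid) pairs from an ss 'users:(...)' field such as
# users:(("rpcbind",pid=733,fd=5),("systemd",pid=1,fd=30)), where a single
# socket can be shared by multiple processes.
USERS_ENTRY = re.compile(r'\("(?P<name>[^"]+)",pid=(?P<pid>\d+),fd=\d+\)')

def parse_ss_users(users_field):
    """Return (process_name, pid) tuples found in the field."""
    return [(m.group('name'), int(m.group('pid')))
            for m in USERS_ENTRY.finditer(users_field)]
```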
patchback[bot]
53b2d69bd7 passwordstore: Add some real gopass integration tests (#5030) (#5042)
* passwordstore: Add some real go tests

This is work in progress.

* passwordstore: Fix gopass init

* Init gopass store in explicit path in integration test

* passwordstore: Show versions of tools in integration test

* passwordstore: Install gopass from different location on Debian

Part of integration tests

* passwordstore: Add changelog fragment for #5030

* passwordstore: Address review feedback

(cherry picked from commit 74f2e1d28b)

Co-authored-by: grembo <freebsd@grem.de>
2022-07-31 22:55:05 +02:00
patchback[bot]
981c7849ce consul: add support for session TTL (#4996) (#5038)
Signed-off-by: Wilfried Roset <wilfriedroset@users.noreply.github.com>
(cherry picked from commit d214f49be7)

Co-authored-by: wilfriedroset <wilfriedroset@users.noreply.github.com>
2022-07-31 22:12:13 +02:00
patchback[bot]
258471b267 mh base: add verbosity() property (#5035) (#5036)
* mh base: add verbosity property

* add changelog fragment

(cherry picked from commit aba089369e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-07-31 11:39:33 +02:00
patchback[bot]
3020b305bb Fix returnall for gopass (#5027) (#5029)
* Fix returnall for gopass

Gopass was always given the --password flag, despite there being no need for this.

* Add changelog fragment

Co-authored-by: Sylvia van Os <sylvia.van.os@politie.nl>
(cherry picked from commit 3eb29eb4b6)

Co-authored-by: Sylvia van Os <sylvia@hackerchick.me>
2022-07-29 14:33:45 +02:00
Felix Fontein
66cbd926f2 Fix changelog fragment.
(cherry picked from commit c64dd16f1c)
2022-07-29 12:10:40 +02:00
patchback[bot]
37fb2137b3 vmadm: add comment to ignore file (#5025) (#5026)
(cherry picked from commit 618fab5f9c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-07-29 11:46:27 +02:00
patchback[bot]
f083a0f4e7 xfconf: add unit test for bool value (#5014) (#5018)
(cherry picked from commit 1c167ab894)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-07-28 21:14:50 +02:00
patchback[bot]
9dc82793c4 xfconf: fix setting of boolean values (#5007) (#5013)
(cherry picked from commit 9290381bea)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-27 08:37:43 +02:00
patchback[bot]
aab93949e1 Pacman: Improve url integrity test (#4968) (#5011)
* Fix typo

* Host url package

* Delete cached files

* Add cases for cached url package

* Rename file_pkg for clarification

* Change port to 8080, as 80 is already used in pipeline

* Added fragment

* Change port to 8000, as 8080 is already used in pipeline

* Fixed changelog fragment

* Change port to 53280, as 8000 is already used in pipeline

* Change port to 27617 (copied from get_url), as 53280 is already used in pipeline

* Also download the signature of url package

Co-authored-by: Jean Raby <jean@raby.sh>

* Fix duplication errors

Co-authored-by: Jean Raby <jean@raby.sh>

* Copied waiting from get_url; applied output redirection from jraby

* Fix signature filename

* Use correct cache dir

* Add missing assertions for uninstall_1c

* Fix typo

* Delete changelog fragment

* Make python server true async with 90 sec timeout

Copied from ansible.builtin.get_url

Co-authored-by: Jean Raby <jean@raby.sh>
(cherry picked from commit 76b235c6b3)

Co-authored-by: Minei3oat <Minei3oat@users.noreply.github.com>
2022-07-27 07:41:40 +02:00
patchback[bot]
c8d6181f64 fixing minor documentation flaws (#5000) (#5004)
Co-authored-by: Thomas Blaesing <thomas.blaesing@erwinhymergroup.com>
(cherry picked from commit 037c75db4f)

Co-authored-by: Thomas <3999809+tehtbl@users.noreply.github.com>
2022-07-26 12:26:56 +02:00
patchback[bot]
c286758248 Apk: add support for a custom world file (#4976) (#5005)
* Apk: add support for a custom world file

* Apk: Add changelog fragment for custom world file

(cherry picked from commit be0e47bfdc)

Co-authored-by: CactiChameleon9 <51231053+CactiChameleon9@users.noreply.github.com>
2022-07-26 12:26:38 +02:00
patchback[bot]
6e685e740e xfconf and xfconf_info: use do_raise (#4975) (#4993)
* remove redundant XfConfException class

* adjusted indentation in the documentation blocks

* add changelog fragment

(cherry picked from commit 31ef6c914b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-07-24 13:19:59 +02:00
patchback[bot]
695599e7d5 Introduce dig lookup argument fail_on_error (#4973) (#4992)
with default False for backwards compatibility.

Allows fail-fast behavior on lookup failures instead of returning strings and continuing.

(cherry picked from commit 2662bc881f)

Co-authored-by: Benjamin <1982589+tumbl3w33d@users.noreply.github.com>
2022-07-24 13:18:38 +02:00
patchback[bot]
29e7fae303 Fix keyring_info when using keyring library (#4964) (#4991)
* Fix keyring_info when using keyring library

This line used to always clobber the passphrase retrieved via the `keyring` library, making it useless on everything except gnome-keyring. After this change, it'll only use the alternate method if the default one didn't work.

* delete whitespace

* add changelog fragment

* Update changelogs/fragments/4964-fix-keyring-info.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e2426707e2)

Co-authored-by: Sargun Vohra <sargun.vohra@gmail.com>
2022-07-24 13:18:29 +02:00
patchback[bot]
303000c1a1 Expose unredirected_headers on maven_artifact (#4812) (#4994)
* Expose unredirected_headers to module

In some cases, when the initial request returns a redirect and we want
to follow it to get the artifact, we might not want to include certain
headers in the redirection request. Specially headers like
Authorization and Cookies.
Or perhaps the redirect server returns a 400 because it included some
unexpected headers.
Fetch url already supports this feature, but it was being shadowed by
maven_artifact. In here we just expose it.

* Fix Linting errors

* Applied Comments

 - Specified version added
 - Changed description of unredirected_headers

* Check for ansible version

If it's 2.11 or older, we ignore unredirected_headers, otherwise we use
it, as fetch_url has them

* Applied comments

- Removed duplicated code in the call of fetch_url. Used kwargs instead
- Added check if unredirected_params is not empty and the fetch_url
  function does not support it
- Changed function that checks for ansible version
- Removed unused import

* Remove 2.11 breaking change

Made default only for ansible-core version 2.12 and above, but for keep
it empty for ansible-core version 2.11 and below.
Also include the following changes:
  - change doc to use C() on the function description
  - changed doc to use ansible-core instead of Ansible

* Changes in description for readability

* Add changelog fragment

* Change description changelog fragment

(cherry picked from commit a2677fd051)

Co-authored-by: Raul Gabriel Verdi <95469166+raul-verdi@users.noreply.github.com>
2022-07-24 12:29:11 +02:00
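The security rationale in the maven_artifact commit above (don't forward `Authorization` or `Cookie` to a redirect target) can be sketched as a header filter; the function and parameter names here are illustrative, not the module's API:

```python
# When following a redirect, drop any header named in unredirected_headers
# so credentials are not leaked to the redirect target.
def headers_for_redirect(headers, unredirected_headers=('Authorization', 'Cookie')):
    blocked = {h.lower() for h in unredirected_headers}
    return {k: v for k, v in headers.items() if k.lower() not in blocked}
```

Per the changelog, this filtering default applies with ansible-core 2.12 and above; on 2.11 all headers are still forwarded.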
patchback[bot]
371ffaeabe Update to new Github account for notifications (#4986) (#4989)
* Update to new Github account for notifications

* Update to new Github account for notifications

(cherry picked from commit 3204905e5c)

Co-authored-by: Florian <100365291+florianpaulhoberg@users.noreply.github.com>
2022-07-23 14:23:29 +02:00
patchback[bot]
42854887eb python-daemon 2.3.1 requires Python 3+. (#4977) (#4981)
(cherry picked from commit e1cfa13a1b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-23 12:58:03 +02:00
patchback[bot]
5386f7d8cd Temporarily disable the yum_versionlock tests. (#4978) (#4985)
(cherry picked from commit 8f5a8cf4ba)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-23 12:22:41 +02:00
patchback[bot]
e86fcf76fc Pacman: Fix name of URL packages (#4959) (#4971)
* Strip downloading... of unseen URLs

* Added changelog fragment

* Added integration tests for reason and reason_for

Inspired by the integration tests for url packages

* Revert "Added integration tests for reason and reason_for"

This reverts commit f60d92f0d7.

Accidentally committed to the wrong branch.

(cherry picked from commit 788cfb624a)

Co-authored-by: Minei3oat <Minei3oat@users.noreply.github.com>
2022-07-21 20:16:28 +02:00
patchback[bot]
4d2895676f proxmox module_utils: fix get_vm int parse handling (#4945) (#4967)
* add int parse handling

* Revert "add int parse handling"

This reverts commit db2aac4254.

* fix: vmid check if state is absent

* add changelogs fragments

* Update changelogs/fragments/4945-fix-get_vm-int-parse-handling.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c57204f9a9)

Co-authored-by: miyuk <enough7531@gmail.com>
2022-07-21 08:14:42 +02:00
patchback[bot]
97b3ad6843 Fix path detection for gopass (#4955) (#4965)
* Fix path detection for gopass

As per fc8c9a2286/docs/features.md (initializing-a-password-store), gopass defaults to ~/.local/share/gopass/stores/root for its password store root location.

However, the user can also override this, and this will be stored in the gopass config file (ed7451678c/docs/config.md (configuration-options)).

This patch ensures that the config setting in gopass is respected, falling back to the default gopass path. pass' behaviour remains unchanged.

* Formatting improvements

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add changelog fragment

* Formatting improvement

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit c31e6413f2)

Co-authored-by: Sylvia van Os <sylvia@hackerchick.me>
2022-07-21 08:14:30 +02:00
patchback[bot]
d7ecd40118 Redfish modules for Western Digital UltraStar Data102 storage enclosures (#4885) (#4958)
* WDC Redfish Info / Command modules for Western Digital Ultrastar Data102 storage enclosures.
Initial commands include:
* FWActivate
* UpdateAndActivate
* SimpleUpdateStatus

* delete unnecessary __init__.py modules

* PR Feedback

Notes list not guaranteed to be sorted
Use EXAMPLES to show specifying ioms/baseuri
Import missing_required_lib

* Apply suggestions from code review

Suggestions that could be auto-committed.

Co-authored-by: Felix Fontein <felix@fontein.de>

* Remove DNSCacheBypass

It is now the caller's responsibility to deal with stale IP addresses.

* Remove dnspython dependency.

Fix bug that this uncovered.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* PR Feedback

* Documentation, simple update status output format, unit tests.

Add docs showing how to use SimpleUpdateStatus
Change the SimpleUpdateStatus output format, put the results in a sub-object.
Fix unit tests whose asserts weren't actually running.

* PR Feedback

register: result on the 2nd example

* Final adjustments for merging for 5.4.0

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit be70d18e3f)

Co-authored-by: Mike <mike@moerk.org>
2022-07-17 11:03:55 +02:00
patchback[bot]
fbf3b85d6b Adjust to b1dd2af4ca. (#4949) (#4952)
(cherry picked from commit ade54bceb8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-12 17:18:37 +02:00
Felix Fontein
1bb1e882df Next expected release is 5.4.0. 2022-07-12 11:39:02 +02:00
57 changed files with 3311 additions and 407 deletions

.github/BOTMETA.yml

@@ -197,6 +197,8 @@ files:
$inventories/virtualbox.py: {}
$lookups/:
labels: lookups
$lookups/bitwarden.py:
maintainers: lungj
$lookups/cartesian.py: {}
$lookups/chef_databag.py: {}
$lookups/collection_version.py:
@@ -304,6 +306,9 @@ files:
$module_utils/utm_utils.py:
maintainers: $team_e_spirit
labels: utm_utils
$module_utils/wdc_redfish_utils.py:
maintainers: $team_wdc
labels: wdc_redfish_utils
$module_utils/xenserver.py:
maintainers: bvitnik
labels: xenserver
@@ -923,7 +928,7 @@ files:
$modules/packaging/os/xbps.py:
maintainers: dinoocch the-maldridge
$modules/packaging/os/yum_versionlock.py:
maintainers: florianpaulhoberg aminvakil
maintainers: gyptazy aminvakil
$modules/packaging/os/zypper.py:
maintainers: $team_suse
labels: zypper
@@ -968,6 +973,10 @@ files:
$modules/remote_management/redfish/:
maintainers: $team_redfish
ignore: jose-delarosa
$modules/remote_management/redfish/wdc_redfish_command.py:
maintainers: $team_wdc
$modules/remote_management/redfish/wdc_redfish_info.py:
maintainers: $team_wdc
$modules/remote_management/stacki/stacki_host.py:
maintainers: bsanders bbyhuy
labels: stacki_host
@@ -1298,3 +1307,4 @@ macros:
team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l
team_suse: commel evrardjp lrupp toabctl AnderEnder alxgu andytom sealor
team_virt: joshainglis karmab tleguern Thulium-Drake Ajpantuso
team_wdc: mikemoerk


@@ -6,6 +6,62 @@ Community General Release Notes
This changelog describes changes after version 4.0.0.
v5.4.0
======
Release Summary
---------------
Regular bugfix and feature release.
Minor Changes
-------------
- ModuleHelper module utils - added property ``verbosity`` to base class (https://github.com/ansible-collections/community.general/pull/5035).
- apk - add ``world`` parameter for supporting a custom world file (https://github.com/ansible-collections/community.general/pull/4976).
- consul - adds ``ttl`` parameter for session (https://github.com/ansible-collections/community.general/pull/4996).
- dig lookup plugin - add option ``fail_on_error`` to allow stopping execution on lookup failures (https://github.com/ansible-collections/community.general/pull/4973).
- keycloak_* modules - add ``http_agent`` parameter with default value ``Ansible`` (https://github.com/ansible-collections/community.general/issues/5023).
- lastpass - use config manager for handling plugin options (https://github.com/ansible-collections/community.general/pull/5022).
- listen_ports_facts - add new ``include_non_listening`` option which adds ``-a`` option to ``netstat`` and ``ss``. This shows both listening and non-listening (for TCP this means established connections) sockets, and returns ``state`` and ``foreign_address`` (https://github.com/ansible-collections/community.general/issues/4762, https://github.com/ansible-collections/community.general/pull/4953).
- maven_artifact - add a new ``unredirected_headers`` option that can be used with ansible-core 2.12 and above. The default value is to not use ``Authorization`` and ``Cookie`` headers on redirects for security reasons. With ansible-core 2.11, all headers are still passed on for redirects (https://github.com/ansible-collections/community.general/pull/4812).
- pacman - added parameters ``reason`` and ``reason_for`` to set/change the install reason of packages (https://github.com/ansible-collections/community.general/pull/4956).
- xfconf - add ``stdout``, ``stderr`` and ``cmd`` to the module results (https://github.com/ansible-collections/community.general/pull/5037).
- xfconf - use ``do_raise()`` instead of defining custom exception class (https://github.com/ansible-collections/community.general/pull/4975).
- xfconf_info - use ``do_raise()`` instead of defining custom exception class (https://github.com/ansible-collections/community.general/pull/4975).
Bugfixes
--------
- keyring_info - fix the result from the keyring library never getting returned (https://github.com/ansible-collections/community.general/pull/4964).
- pacman - fixed name resolution of URL packages (https://github.com/ansible-collections/community.general/pull/4959).
- passwordstore lookup plugin - fix ``returnall`` for gopass (https://github.com/ansible-collections/community.general/pull/5027).
- passwordstore lookup plugin - fix password store path detection for gopass (https://github.com/ansible-collections/community.general/pull/4955).
- proxmox - fix error handling when getting VM by name when ``state=absent`` (https://github.com/ansible-collections/community.general/pull/4945).
- proxmox_kvm - fix error handling when getting VM by name when ``state=absent`` (https://github.com/ansible-collections/community.general/pull/4945).
- slack - fix incorrect channel prefix ``#`` caused by incomplete pattern detection by adding ``G0`` and ``GF`` as channel ID patterns (https://github.com/ansible-collections/community.general/pull/5019).
- xfconf - fix setting of boolean values (https://github.com/ansible-collections/community.general/issues/4999, https://github.com/ansible-collections/community.general/pull/5007).
New Plugins
-----------
Lookup
~~~~~~
- bitwarden - Retrieve secrets from Bitwarden
New Modules
-----------
Remote Management
~~~~~~~~~~~~~~~~~
redfish
^^^^^^^
- wdc_redfish_command - Manages WDC UltraStar Data102 Out-Of-Band controllers using Redfish APIs
- wdc_redfish_info - Manages WDC UltraStar Data102 Out-Of-Band controllers using Redfish APIs
v5.3.0
======


@@ -947,3 +947,80 @@ releases:
- 4933-fix-rax-clb-nodes.yaml
- 5.3.0.yml
release_date: '2022-07-12'
5.4.0:
changes:
bugfixes:
- keyring_info - fix the result from the keyring library never getting returned
(https://github.com/ansible-collections/community.general/pull/4964).
- pacman - fixed name resolution of URL packages (https://github.com/ansible-collections/community.general/pull/4959).
- passwordstore lookup plugin - fix ``returnall`` for gopass (https://github.com/ansible-collections/community.general/pull/5027).
- passwordstore lookup plugin - fix password store path detection for gopass
(https://github.com/ansible-collections/community.general/pull/4955).
- proxmox - fix error handling when getting VM by name when ``state=absent``
(https://github.com/ansible-collections/community.general/pull/4945).
- proxmox_kvm - fix error handling when getting VM by name when ``state=absent``
(https://github.com/ansible-collections/community.general/pull/4945).
- slack - fix incorrect channel prefix ``#`` caused by incomplete pattern detection
by adding ``G0`` and ``GF`` as channel ID patterns (https://github.com/ansible-collections/community.general/pull/5019).
- xfconf - fix setting of boolean values (https://github.com/ansible-collections/community.general/issues/4999,
https://github.com/ansible-collections/community.general/pull/5007).
minor_changes:
- ModuleHelper module utils - added property ``verbosity`` to base class (https://github.com/ansible-collections/community.general/pull/5035).
- apk - add ``world`` parameter for supporting a custom world file (https://github.com/ansible-collections/community.general/pull/4976).
- consul - adds ``ttl`` parameter for session (https://github.com/ansible-collections/community.general/pull/4996).
- dig lookup plugin - add option ``fail_on_error`` to allow stopping execution
on lookup failures (https://github.com/ansible-collections/community.general/pull/4973).
- keycloak_* modules - add ``http_agent`` parameter with default value ``Ansible``
(https://github.com/ansible-collections/community.general/issues/5023).
- lastpass - use config manager for handling plugin options (https://github.com/ansible-collections/community.general/pull/5022).
- listen_ports_facts - add new ``include_non_listening`` option which adds ``-a``
option to ``netstat`` and ``ss``. This shows both listening and non-listening
(for TCP this means established connections) sockets, and returns ``state``
and ``foreign_address`` (https://github.com/ansible-collections/community.general/issues/4762,
https://github.com/ansible-collections/community.general/pull/4953).
- maven_artifact - add a new ``unredirected_headers`` option that can be used
with ansible-core 2.12 and above. The default value is to not use ``Authorization``
and ``Cookie`` headers on redirects for security reasons. With ansible-core
2.11, all headers are still passed on for redirects (https://github.com/ansible-collections/community.general/pull/4812).
- pacman - added parameters ``reason`` and ``reason_for`` to set/change the
install reason of packages (https://github.com/ansible-collections/community.general/pull/4956).
- xfconf - add ``stdout``, ``stderr`` and ``cmd`` to the module results (https://github.com/ansible-collections/community.general/pull/5037).
- xfconf - use ``do_raise()`` instead of defining custom exception class (https://github.com/ansible-collections/community.general/pull/4975).
- xfconf_info - use ``do_raise()`` instead of defining custom exception class
(https://github.com/ansible-collections/community.general/pull/4975).
release_summary: Regular bugfix and feature release.
fragments:
- 4812-expose-unredirected-headers.yml
- 4945-fix-get_vm-int-parse-handling.yaml
- 4953-listen-ports-facts-extend-output.yaml
- 4955-fix-path-detection-for-gopass.yaml
- 4956-pacman-install-reason.yaml
- 4959-pacman-fix-url-packages-name.yaml
- 4964-fix-keyring-info.yml
- 4973-introduce-dig-lookup-argument.yaml
- 4975-xfconf-use-do-raise.yaml
- 4976-apk-add-support-for-a-custom-world-file.yaml
- 4996-consul-session-ttl.yml
- 4999-xfconf-bool.yml
- 5.4.0.yml
- 5019-slack-support-more-groups.yml
- 5022-lastpass-lookup-cleanup.yml
- 5023-http-agent-param-keycloak.yml
- 5027-fix-returnall-for-gopass.yaml
- 5035-mh-base-verbosity.yaml
- 5037-xfconf-add-cmd-output.yaml
modules:
- description: Manages WDC UltraStar Data102 Out-Of-Band controllers using Redfish
APIs
name: wdc_redfish_command
namespace: remote_management.redfish
- description: Manages WDC UltraStar Data102 Out-Of-Band controllers using Redfish
APIs
name: wdc_redfish_info
namespace: remote_management.redfish
plugins:
lookup:
- description: Retrieve secrets from Bitwarden
name: bitwarden
namespace: null
release_date: '2022-08-02'


@@ -1,6 +1,6 @@
namespace: community
name: general
version: 5.3.0
version: 5.4.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -1605,6 +1605,10 @@ plugin_routing:
redirect: community.general.cloud.smartos.vmadm
wakeonlan:
redirect: community.general.remote_management.wakeonlan
wdc_redfish_command:
redirect: community.general.remote_management.redfish.wdc_redfish_command
wdc_redfish_info:
redirect: community.general.remote_management.redfish.wdc_redfish_info
webfaction_app:
redirect: community.general.cloud.webfaction.webfaction_app
webfaction_db:


@@ -8,9 +8,9 @@ DOCUMENTATION = """
name: sudosu
short_description: Run tasks using sudo su -
description:
- This become plugins allows your remote/login user to execute commands as another user via the C(sudo) and C(su) utilities combined.
- This become plugin allows your remote/login user to execute commands as another user via the C(sudo) and C(su) utilities combined.
author:
- Dag Wieers (@dagwieers)
- Dag Wieers (@dagwieers)
version_added: 2.4.0
options:
become_user:


@@ -68,4 +68,10 @@ options:
type: int
default: 10
version_added: 4.5.0
http_agent:
description:
- Configures the HTTP User-Agent header.
type: str
default: Ansible
version_added: 5.4.0
'''

plugins/lookup/bitwarden.py (new file)

@@ -0,0 +1,118 @@
# -*- coding: utf-8 -*-
# (c) 2022, Jonathan Lung <lungj@heresjono.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: bitwarden
author:
- Jonathan Lung (@lungj) <lungj@heresjono.com>
requirements:
- bw (command line utility)
- be logged into bitwarden
short_description: Retrieve secrets from Bitwarden
version_added: 5.4.0
description:
- Retrieve secrets from Bitwarden.
options:
_terms:
description: Key(s) to fetch values for from login info.
required: true
type: list
elements: str
field:
description: Field to fetch; leave unset to fetch whole response.
type: str
"""
EXAMPLES = """
- name: "Get 'password' from Bitwarden record named 'a_test'"
ansible.builtin.debug:
msg: >-
{{ lookup('community.general.bitwarden', 'a_test', field='password') }}
- name: "Get full Bitwarden record named 'a_test'"
ansible.builtin.debug:
msg: >-
{{ lookup('community.general.bitwarden', 'a_test') }}
"""
RETURN = """
_raw:
description: List of requested field or JSON object of list of matches.
type: list
elements: raw
"""
from subprocess import Popen, PIPE
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.parsing.ajson import AnsibleJSONDecoder
from ansible.plugins.lookup import LookupBase
class BitwardenException(AnsibleError):
pass
class Bitwarden(object):
def __init__(self, path='bw'):
self._cli_path = path
@property
def cli_path(self):
return self._cli_path
@property
def logged_in(self):
out, err = self._run(['status'], stdin="")
decoded = AnsibleJSONDecoder().raw_decode(out)[0]
return decoded['status'] == 'unlocked'
def _run(self, args, stdin=None, expected_rc=0):
p = Popen([self.cli_path] + args, stdout=PIPE, stderr=PIPE, stdin=PIPE)
out, err = p.communicate(to_bytes(stdin))
rc = p.wait()
if rc != expected_rc:
raise BitwardenException(err)
return to_text(out, errors='surrogate_or_strict'), to_text(err, errors='surrogate_or_strict')
def _get_matches(self, search_value, search_field="name"):
"""Return matching records whose search_field is equal to key.
"""
out, err = self._run(['list', 'items', '--search', search_value])
# This includes things that matched in different fields.
initial_matches = AnsibleJSONDecoder().raw_decode(out)[0]
# Filter to only include results from the right field.
return [item for item in initial_matches if item[search_field] == search_value]
def get_field(self, field, search_value, search_field="name"):
"""Return a list of the specified field for records whose search_field match search_value.
If field is None, return the whole record for each match.
"""
matches = self._get_matches(search_value)
if field:
return [match['login'][field] for match in matches]
return matches
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
self.set_options(var_options=variables, direct=kwargs)
field = self.get_option('field')
if not _bitwarden.logged_in:
raise AnsibleError("Not logged into Bitwarden. Run 'bw login'.")
return [_bitwarden.get_field(field, term) for term in terms]
_bitwarden = Bitwarden()
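The `_get_matches` helper above relies on a detail of the `bw` CLI: `bw list items --search` also returns fuzzy matches from other fields, so the plugin narrows the results client-side to exact matches on the chosen field. A minimal sketch of that filtering step, with hypothetical `bw` output (the record names and password values below are made up for illustration):

```python
import json

def filter_matches(bw_output, search_value, search_field="name"):
    """Keep only records whose search_field exactly equals search_value.

    Mirrors the client-side narrowing done in Bitwarden._get_matches;
    bw_output stands in for the JSON printed by `bw list items --search`.
    """
    items = json.loads(bw_output)
    return [item for item in items if item.get(search_field) == search_value]

# Hypothetical CLI output: one exact match, one fuzzy match.
raw = json.dumps([
    {"name": "a_test", "login": {"password": "s3cret"}},
    {"name": "not_a_test", "login": {"password": "other"}},
])
print([m["login"]["password"] for m in filter_matches(raw, "a_test")])  # ['s3cret']
```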


@@ -42,6 +42,15 @@ DOCUMENTATION = '''
default: false
type: bool
version_added: 3.6.0
fail_on_error:
description:
- Abort execution on lookup errors.
- The default for this option will likely change to C(true) in the future.
The current default, C(false), is used for backwards compatibility, and will result in empty strings
or the string C(NXDOMAIN) in the result in case of errors.
default: false
type: bool
version_added: 5.4.0
notes:
- ALL is not a record per se, merely the listed fields are available for any record results you retrieve in the form of a dictionary.
- While the 'dig' lookup plugin supports anything which dnspython supports out of the box, only a subset can be converted into a dictionary.
@@ -279,6 +288,7 @@ class LookupModule(LookupBase):
domain = None
qtype = 'A'
flat = True
fail_on_error = False
rdclass = dns.rdataclass.from_text('IN')
for t in terms:
@@ -317,6 +327,8 @@ class LookupModule(LookupBase):
raise AnsibleError("dns lookup illegal CLASS: %s" % to_native(e))
elif opt == 'retry_servfail':
myres.retry_servfail = bool(arg)
elif opt == 'fail_on_error':
fail_on_error = bool(arg)
continue
@@ -364,16 +376,24 @@ class LookupModule(LookupBase):
rd['class'] = dns.rdataclass.to_text(rdata.rdclass)
ret.append(rd)
except Exception as e:
ret.append(str(e))
except Exception as err:
if fail_on_error:
raise AnsibleError("Lookup failed: %s" % str(err))
ret.append(str(err))
except dns.resolver.NXDOMAIN:
except dns.resolver.NXDOMAIN as err:
if fail_on_error:
raise AnsibleError("Lookup failed: %s" % str(err))
ret.append('NXDOMAIN')
except dns.resolver.NoAnswer:
except dns.resolver.NoAnswer as err:
if fail_on_error:
raise AnsibleError("Lookup failed: %s" % str(err))
ret.append("")
except dns.resolver.Timeout:
except dns.resolver.Timeout as err:
if fail_on_error:
raise AnsibleError("Lookup failed: %s" % str(err))
ret.append('')
except dns.exception.DNSException as e:
raise AnsibleError("dns.resolver unhandled exception %s" % to_native(e))
except dns.exception.DNSException as err:
raise AnsibleError("dns.resolver unhandled exception %s" % to_native(err))
return ret
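The `fail_on_error` hunks above all follow one pattern: on a resolver error, either raise immediately or record a sentinel value (`'NXDOMAIN'` or an empty string) and continue. A generic sketch of that pattern, with `resolve` as a stand-in for the real dnspython query (not part of the plugin):

```python
def lookup_all(names, resolve, fail_on_error=False, sentinel="NXDOMAIN"):
    """Raise-or-sentinel error handling, as added to the dig lookup."""
    results = []
    for name in names:
        try:
            results.append(resolve(name))
        except LookupError as err:  # dnspython raises its own exception types
            if fail_on_error:
                raise RuntimeError("Lookup failed: %s" % err)
            results.append(sentinel)
    return results

def fake_resolve(name):
    # Toy resolver for illustration; the address below is example data.
    table = {"example.com": "93.184.216.34"}
    if name not in table:
        raise LookupError(name)
    return table[name]

print(lookup_all(["example.com", "missing.invalid"], fake_resolve))
# ['93.184.216.34', 'NXDOMAIN']
```

With `fail_on_error=True` the first failing name aborts the whole lookup instead of leaving a sentinel in the result list, which is the backwards-incompatible behavior the option documentation warns about.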


@@ -11,21 +11,24 @@ DOCUMENTATION = '''
- Andrew Zenk (!UNKNOWN) <azenk@umn.edu>
requirements:
- lpass (command line utility)
- must have already logged into lastpass
short_description: fetch data from lastpass
- must have already logged into LastPass
short_description: fetch data from LastPass
description:
- use the lpass command line utility to fetch specific fields from lastpass
- Use the lpass command line utility to fetch specific fields from LastPass.
options:
_terms:
description: key from which you want to retrieve the field
required: True
description: Key from which you want to retrieve the field.
required: true
type: list
elements: str
field:
description: field to return from lastpass
description: Field to return from LastPass.
default: 'password'
type: str
'''
EXAMPLES = """
- name: get 'custom_field' from lastpass entry 'entry-name'
- name: get 'custom_field' from LastPass entry 'entry-name'
ansible.builtin.debug:
msg: "{{ lookup('community.general.lastpass', 'entry-name', field='custom_field') }}"
"""
@@ -88,12 +91,14 @@ class LPass(object):
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
self.set_options(var_options=variables, direct=kwargs)
field = self.get_option('field')
lp = LPass()
if not lp.logged_in:
raise AnsibleError("Not logged into lastpass: please run 'lpass login' first")
raise AnsibleError("Not logged into LastPass: please run 'lpass login' first")
field = kwargs.get('field', 'password')
values = []
for term in terms:
values.append(lp.get_field(term, field))


@@ -21,8 +21,14 @@ DOCUMENTATION = '''
description: query key.
required: True
passwordstore:
description: location of the password store.
default: '~/.password-store'
description:
- Location of the password store.
- 'The value is decided by checking the following in order:'
- If set, this value is used.
- If C(directory) is set, that value will be used.
- If I(backend=pass), then C(~/.password-store) is used.
- If I(backend=gopass), then the C(path) field in C(~/.config/gopass/config.yml) is used,
falling back to C(~/.local/share/gopass/stores/root) if not defined.
directory:
description: The directory of the password store.
env:
@@ -255,11 +261,11 @@ class LookupModule(LookupBase):
def is_real_pass(self):
if self.realpass is None:
try:
self.passoutput = to_text(
passoutput = to_text(
check_output2([self.pass_cmd, "--version"], env=self.env),
errors='surrogate_or_strict'
)
self.realpass = 'pass: the standard unix password manager' in self.passoutput
self.realpass = 'pass: the standard unix password manager' in passoutput
except (subprocess.CalledProcessError) as e:
raise AnsibleError(e)
@@ -325,7 +331,6 @@ class LookupModule(LookupBase):
try:
self.passoutput = to_text(
check_output2([self.pass_cmd, 'show'] +
(['--password'] if self.backend == 'gopass' else []) +
[self.passname], env=self.env),
errors='surrogate_or_strict'
).splitlines()
@@ -428,11 +433,22 @@ class LookupModule(LookupBase):
raise AnsibleError("{0} is not a correct value for locktimeout".format(timeout))
unit_to_seconds = {"s": 1, "m": 60, "h": 3600}
self.lock_timeout = int(timeout[:-1]) * unit_to_seconds[timeout[-1]]
directory = variables.get('passwordstore', os.environ.get('PASSWORD_STORE_DIR', None))
if directory is None:
if self.backend == 'gopass':
try:
with open(os.path.expanduser('~/.config/gopass/config.yml')) as f:
directory = yaml.safe_load(f)['path']
except (FileNotFoundError, KeyError, yaml.YAMLError):
directory = os.path.expanduser('~/.local/share/gopass/stores/root')
else:
directory = os.path.expanduser('~/.password-store')
self.paramvals = {
'subkey': 'password',
'directory': variables.get('passwordstore', os.environ.get(
'PASSWORD_STORE_DIR',
os.path.expanduser('~/.password-store'))),
'directory': directory,
'create': False,
'returnall': False,
'overwrite': False,
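The directory resolution added above follows a fixed precedence: explicit `passwordstore` value, then `PASSWORD_STORE_DIR`, then a backend-specific default. A self-contained sketch of that order, with `gopass_config` standing in for the parsed `~/.config/gopass/config.yml` (the real plugin reads it with `yaml.safe_load`):

```python
import os

def resolve_store_dir(explicit=None, env_dir=None, backend="pass",
                      gopass_config=None):
    """Lookup order: explicit value, PASSWORD_STORE_DIR, backend default."""
    directory = explicit if explicit is not None else env_dir
    if directory is None:
        if backend == "gopass":
            try:
                # gopass_config models the parsed YAML config file.
                directory = (gopass_config or {})["path"]
            except KeyError:
                directory = os.path.expanduser("~/.local/share/gopass/stores/root")
        else:
            directory = os.path.expanduser("~/.password-store")
    return directory

# Hypothetical gopass config pointing at /srv/gopass:
print(resolve_store_dir(backend="gopass", gopass_config={"path": "/srv/gopass"}))
```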


@@ -104,6 +104,7 @@ def keycloak_argument_spec():
validate_certs=dict(type='bool', default=True),
connection_timeout=dict(type='int', default=10),
token=dict(type='str', no_log=True),
http_agent=dict(type='str', default='Ansible'),
)
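The new `http_agent` parameter is threaded through every `open_url` call below; what it ultimately controls is the `User-Agent` request header, which is what lets requests pass WAFs that block requests with no recognizable agent string. A stdlib-only sketch of the same idea (the Keycloak URL is a hypothetical placeholder):

```python
from urllib.request import Request

def build_request(url, http_agent="Ansible", headers=None):
    """Attach a User-Agent header, as open_url(http_agent=...) does."""
    req = Request(url, headers=dict(headers or {}))
    req.add_header("User-Agent", http_agent)
    return req

req = build_request("https://keycloak.example.com/realms/master",
                    http_agent="Ansible")
# urllib normalizes header keys to capitalized form:
print(req.get_header("User-agent"))  # Ansible
```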
@@ -123,6 +124,7 @@ def get_token(module_params):
"""
token = module_params.get('token')
base_url = module_params.get('auth_keycloak_url')
http_agent = module_params.get('http_agent')
if not base_url.lower().startswith(('http', 'https')):
raise KeycloakError("auth_url '%s' should either start with 'http' or 'https'." % base_url)
@@ -149,7 +151,7 @@ def get_token(module_params):
(k, v) for k, v in temp_payload.items() if v is not None)
try:
r = json.loads(to_native(open_url(auth_url, method='POST',
validate_certs=validate_certs, timeout=connection_timeout,
validate_certs=validate_certs, http_agent=http_agent, timeout=connection_timeout,
data=urlencode(payload)).read()))
except ValueError as e:
raise KeycloakError(
@@ -233,6 +235,7 @@ class KeycloakAPI(object):
self.validate_certs = self.module.params.get('validate_certs')
self.connection_timeout = self.module.params.get('connection_timeout')
self.restheaders = connection_header
self.http_agent = self.module.params.get('http_agent')
def get_realm_info_by_id(self, realm='master'):
""" Obtain realm public info by id
@@ -243,7 +246,8 @@ class KeycloakAPI(object):
realm_info_url = URL_REALM_INFO.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(realm_info_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(realm_info_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -268,7 +272,7 @@ class KeycloakAPI(object):
realm_url = URL_REALM.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(realm_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(realm_url, method='GET', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -293,7 +297,7 @@ class KeycloakAPI(object):
realm_url = URL_REALM.format(url=self.baseurl, realm=realm)
try:
return open_url(realm_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(realm_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(realmrep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update realm %s: %s' % (realm, str(e)),
@@ -307,7 +311,7 @@ class KeycloakAPI(object):
realm_url = URL_REALMS.format(url=self.baseurl)
try:
return open_url(realm_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(realm_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(realmrep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create realm %s: %s' % (realmrep['id'], str(e)),
@@ -322,7 +326,7 @@ class KeycloakAPI(object):
realm_url = URL_REALM.format(url=self.baseurl, realm=realm)
try:
return open_url(realm_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(realm_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not delete realm %s: %s' % (realm, str(e)),
@@ -340,7 +344,8 @@ class KeycloakAPI(object):
clientlist_url += '?clientId=%s' % filter
try:
return json.loads(to_native(open_url(clientlist_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(clientlist_url, http_agent=self.http_agent, method='GET', headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of clients for realm %s: %s'
@@ -371,7 +376,8 @@ class KeycloakAPI(object):
client_url = URL_CLIENT.format(url=self.baseurl, realm=realm, id=id)
try:
return json.loads(to_native(open_url(client_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(client_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -410,7 +416,7 @@ class KeycloakAPI(object):
client_url = URL_CLIENT.format(url=self.baseurl, realm=realm, id=id)
try:
return open_url(client_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(client_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(clientrep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update client %s in realm %s: %s'
@@ -425,7 +431,7 @@ class KeycloakAPI(object):
client_url = URL_CLIENTS.format(url=self.baseurl, realm=realm)
try:
return open_url(client_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(client_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(clientrep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create client %s in realm %s: %s'
@@ -441,7 +447,7 @@ class KeycloakAPI(object):
client_url = URL_CLIENT.format(url=self.baseurl, realm=realm, id=id)
try:
return open_url(client_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(client_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not delete client %s in realm %s: %s'
@@ -456,7 +462,8 @@ class KeycloakAPI(object):
"""
client_roles_url = URL_CLIENT_ROLES.format(url=self.baseurl, realm=realm, id=cid)
try:
return json.loads(to_native(open_url(client_roles_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(client_roles_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg="Could not fetch rolemappings for client %s in realm %s: %s"
@@ -488,7 +495,8 @@ class KeycloakAPI(object):
"""
rolemappings_url = URL_CLIENT_ROLEMAPPINGS.format(url=self.baseurl, realm=realm, id=gid, client=cid)
try:
rolemappings = json.loads(to_native(open_url(rolemappings_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
rolemappings = json.loads(to_native(open_url(rolemappings_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
for role in rolemappings:
if rid == role['id']:
@@ -508,7 +516,8 @@ class KeycloakAPI(object):
"""
available_rolemappings_url = URL_CLIENT_ROLEMAPPINGS_AVAILABLE.format(url=self.baseurl, realm=realm, id=gid, client=cid)
try:
return json.loads(to_native(open_url(available_rolemappings_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(available_rolemappings_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg="Could not fetch available rolemappings for client %s in group %s, realm %s: %s"
@@ -524,7 +533,8 @@ class KeycloakAPI(object):
"""
available_rolemappings_url = URL_CLIENT_ROLEMAPPINGS_COMPOSITE.format(url=self.baseurl, realm=realm, id=gid, client=cid)
try:
return json.loads(to_native(open_url(available_rolemappings_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(available_rolemappings_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg="Could not fetch available rolemappings for client %s in group %s, realm %s: %s"
@@ -541,7 +551,7 @@ class KeycloakAPI(object):
"""
available_rolemappings_url = URL_CLIENT_ROLEMAPPINGS.format(url=self.baseurl, realm=realm, id=gid, client=cid)
try:
open_url(available_rolemappings_url, method="POST", headers=self.restheaders, data=json.dumps(role_rep),
open_url(available_rolemappings_url, method="POST", http_agent=self.http_agent, headers=self.restheaders, data=json.dumps(role_rep),
validate_certs=self.validate_certs, timeout=self.connection_timeout)
except Exception as e:
self.module.fail_json(msg="Could not fetch available rolemappings for client %s in group %s, realm %s: %s"
@@ -558,7 +568,7 @@ class KeycloakAPI(object):
"""
available_rolemappings_url = URL_CLIENT_ROLEMAPPINGS.format(url=self.baseurl, realm=realm, id=gid, client=cid)
try:
open_url(available_rolemappings_url, method="DELETE", headers=self.restheaders,
open_url(available_rolemappings_url, method="DELETE", http_agent=self.http_agent, headers=self.restheaders,
validate_certs=self.validate_certs, timeout=self.connection_timeout)
except Exception as e:
self.module.fail_json(msg="Could not delete available rolemappings for client %s in group %s, realm %s: %s"
@@ -573,7 +583,7 @@ class KeycloakAPI(object):
url = URL_CLIENTTEMPLATES.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(url, method='GET', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of client templates for realm %s: %s'
@@ -592,7 +602,7 @@ class KeycloakAPI(object):
url = URL_CLIENTTEMPLATE.format(url=self.baseurl, id=id, realm=realm)
try:
return json.loads(to_native(open_url(url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(url, method='GET', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain client templates %s for realm %s: %s'
@@ -638,7 +648,7 @@ class KeycloakAPI(object):
url = URL_CLIENTTEMPLATE.format(url=self.baseurl, realm=realm, id=id)
try:
return open_url(url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(clienttrep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update client template %s in realm %s: %s'
@@ -653,7 +663,7 @@ class KeycloakAPI(object):
url = URL_CLIENTTEMPLATES.format(url=self.baseurl, realm=realm)
try:
return open_url(url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(clienttrep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create client template %s in realm %s: %s'
@@ -669,7 +679,7 @@ class KeycloakAPI(object):
url = URL_CLIENTTEMPLATE.format(url=self.baseurl, realm=realm, id=id)
try:
return open_url(url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not delete client template %s in realm %s: %s'
@@ -686,7 +696,8 @@ class KeycloakAPI(object):
"""
clientscopes_url = URL_CLIENTSCOPES.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(clientscopes_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(clientscopes_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg="Could not fetch list of clientscopes in realm %s: %s"
@@ -703,7 +714,8 @@ class KeycloakAPI(object):
"""
clientscope_url = URL_CLIENTSCOPE.format(url=self.baseurl, realm=realm, id=cid)
try:
return json.loads(to_native(open_url(clientscope_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(clientscope_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -748,7 +760,7 @@ class KeycloakAPI(object):
"""
clientscopes_url = URL_CLIENTSCOPES.format(url=self.baseurl, realm=realm)
try:
return open_url(clientscopes_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(clientscopes_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(clientscoperep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Could not create clientscope %s in realm %s: %s"
@@ -763,7 +775,7 @@ class KeycloakAPI(object):
clientscope_url = URL_CLIENTSCOPE.format(url=self.baseurl, realm=realm, id=clientscoperep['id'])
try:
return open_url(clientscope_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(clientscope_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(clientscoperep), validate_certs=self.validate_certs)
except Exception as e:
@@ -801,7 +813,7 @@ class KeycloakAPI(object):
# should have a good cid by here.
clientscope_url = URL_CLIENTSCOPE.format(realm=realm, id=cid, url=self.baseurl)
try:
return open_url(clientscope_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(clientscope_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
@@ -819,7 +831,8 @@ class KeycloakAPI(object):
"""
protocolmappers_url = URL_CLIENTSCOPE_PROTOCOLMAPPERS.format(id=cid, url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(protocolmappers_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(protocolmappers_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg="Could not fetch list of protocolmappers in realm %s: %s"
@@ -838,7 +851,8 @@ class KeycloakAPI(object):
"""
protocolmapper_url = URL_CLIENTSCOPE_PROTOCOLMAPPER.format(url=self.baseurl, realm=realm, id=cid, mapper_id=pid)
try:
return json.loads(to_native(open_url(protocolmapper_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(protocolmapper_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -885,7 +899,7 @@ class KeycloakAPI(object):
"""
protocolmappers_url = URL_CLIENTSCOPE_PROTOCOLMAPPERS.format(url=self.baseurl, id=cid, realm=realm)
try:
return open_url(protocolmappers_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(protocolmappers_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(mapper_rep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Could not create protocolmapper %s in realm %s: %s"
@@ -901,7 +915,7 @@ class KeycloakAPI(object):
protocolmapper_url = URL_CLIENTSCOPE_PROTOCOLMAPPER.format(url=self.baseurl, realm=realm, id=cid, mapper_id=mapper_rep['id'])
try:
return open_url(protocolmapper_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(protocolmapper_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(mapper_rep), validate_certs=self.validate_certs)
except Exception as e:
@@ -918,7 +932,8 @@ class KeycloakAPI(object):
"""
groups_url = URL_GROUPS.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(groups_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(groups_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg="Could not fetch list of groups in realm %s: %s"
@@ -935,7 +950,8 @@ class KeycloakAPI(object):
"""
groups_url = URL_GROUP.format(url=self.baseurl, realm=realm, groupid=gid)
try:
return json.loads(to_native(open_url(groups_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(groups_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
@@ -981,7 +997,7 @@ class KeycloakAPI(object):
"""
groups_url = URL_GROUPS.format(url=self.baseurl, realm=realm)
try:
return open_url(groups_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(groups_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(grouprep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Could not create group %s in realm %s: %s"
@@ -996,7 +1012,7 @@ class KeycloakAPI(object):
group_url = URL_GROUP.format(url=self.baseurl, realm=realm, groupid=grouprep['id'])
try:
return open_url(group_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(group_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(grouprep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update group %s in realm %s: %s'
@@ -1033,7 +1049,7 @@ class KeycloakAPI(object):
# should have a good groupid by here.
group_url = URL_GROUP.format(realm=realm, groupid=groupid, url=self.baseurl)
try:
return open_url(group_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(group_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Unable to delete group %s: %s" % (groupid, str(e)))
@@ -1046,7 +1062,8 @@ class KeycloakAPI(object):
"""
rolelist_url = URL_REALM_ROLES.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(rolelist_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(rolelist_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of roles for realm %s: %s'
@@ -1064,7 +1081,7 @@ class KeycloakAPI(object):
"""
role_url = URL_REALM_ROLE.format(url=self.baseurl, realm=realm, name=quote(name))
try:
return json.loads(to_native(open_url(role_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(role_url, method="GET", http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
if e.code == 404:
@@ -1084,7 +1101,7 @@ class KeycloakAPI(object):
"""
roles_url = URL_REALM_ROLES.format(url=self.baseurl, realm=realm)
try:
return open_url(roles_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(roles_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(rolerep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create role %s in realm %s: %s'
@@ -1098,7 +1115,7 @@ class KeycloakAPI(object):
"""
role_url = URL_REALM_ROLE.format(url=self.baseurl, realm=realm, name=quote(rolerep['name']))
try:
return open_url(role_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(role_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(rolerep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update role %s in realm %s: %s'
@@ -1112,7 +1129,7 @@ class KeycloakAPI(object):
"""
role_url = URL_REALM_ROLE.format(url=self.baseurl, realm=realm, name=quote(name))
try:
return open_url(role_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(role_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Unable to delete role %s in realm %s: %s'
@@ -1131,7 +1148,8 @@ class KeycloakAPI(object):
% (clientid, realm))
rolelist_url = URL_CLIENT_ROLES.format(url=self.baseurl, realm=realm, id=cid)
try:
return json.loads(to_native(open_url(rolelist_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(rolelist_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of roles for client %s in realm %s: %s'
@@ -1155,7 +1173,7 @@ class KeycloakAPI(object):
% (clientid, realm))
role_url = URL_CLIENT_ROLE.format(url=self.baseurl, realm=realm, id=cid, name=quote(name))
try:
return json.loads(to_native(open_url(role_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(role_url, method="GET", http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
if e.code == 404:
@@ -1181,7 +1199,7 @@ class KeycloakAPI(object):
% (clientid, realm))
roles_url = URL_CLIENT_ROLES.format(url=self.baseurl, realm=realm, id=cid)
try:
return open_url(roles_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(roles_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(rolerep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create role %s for client %s in realm %s: %s'
@@ -1201,7 +1219,7 @@ class KeycloakAPI(object):
% (clientid, realm))
role_url = URL_CLIENT_ROLE.format(url=self.baseurl, realm=realm, id=cid, name=quote(rolerep['name']))
try:
return open_url(role_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(role_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(rolerep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update role %s for client %s in realm %s: %s'
@@ -1220,7 +1238,7 @@ class KeycloakAPI(object):
% (clientid, realm))
role_url = URL_CLIENT_ROLE.format(url=self.baseurl, realm=realm, id=cid, name=quote(name))
try:
return open_url(role_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(role_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Unable to delete role %s for client %s in realm %s: %s'
@@ -1237,7 +1255,8 @@ class KeycloakAPI(object):
authentication_flow = {}
# Check if the authentication flow exists on the Keycloak server
authentications = json.load(open_url(URL_AUTHENTICATION_FLOWS.format(url=self.baseurl, realm=realm), method='GET',
headers=self.restheaders, timeout=self.connection_timeout, validate_certs=self.validate_certs))
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout, validate_certs=self.validate_certs))
for authentication in authentications:
if authentication["alias"] == alias:
authentication_flow = authentication
@@ -1256,7 +1275,7 @@ class KeycloakAPI(object):
flow_url = URL_AUTHENTICATION_FLOW.format(url=self.baseurl, realm=realm, id=id)
try:
return open_url(flow_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(flow_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not delete authentication flow %s in realm %s: %s'
@@ -1279,7 +1298,7 @@ class KeycloakAPI(object):
realm=realm,
copyfrom=quote(config["copyFrom"])),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(new_name),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
@@ -1288,7 +1307,7 @@ class KeycloakAPI(object):
URL_AUTHENTICATION_FLOWS.format(url=self.baseurl,
realm=realm),
method='GET',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for flow in flow_list:
@@ -1318,7 +1337,7 @@ class KeycloakAPI(object):
url=self.baseurl,
realm=realm),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(new_flow),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
@@ -1328,7 +1347,7 @@ class KeycloakAPI(object):
url=self.baseurl,
realm=realm),
method='GET',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for flow in flow_list:
@@ -1353,7 +1372,7 @@ class KeycloakAPI(object):
realm=realm,
flowalias=quote(flowAlias)),
method='PUT',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(updatedExec),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
@@ -1374,7 +1393,7 @@ class KeycloakAPI(object):
realm=realm,
id=executionId),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(authenticationConfig),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
@@ -1399,7 +1418,7 @@ class KeycloakAPI(object):
realm=realm,
flowalias=quote(flowAlias)),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(newSubFlow),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
@@ -1423,7 +1442,7 @@ class KeycloakAPI(object):
realm=realm,
flowalias=quote(flowAlias)),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
data=json.dumps(newExec),
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
@@ -1447,7 +1466,7 @@ class KeycloakAPI(object):
realm=realm,
id=executionId),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
elif diff < 0:
@@ -1458,7 +1477,7 @@ class KeycloakAPI(object):
realm=realm,
id=executionId),
method='POST',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
@@ -1480,7 +1499,7 @@ class KeycloakAPI(object):
realm=realm,
flowalias=quote(config["alias"])),
method='GET',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for execution in executions:
@@ -1493,7 +1512,7 @@ class KeycloakAPI(object):
realm=realm,
id=execConfigId),
method='GET',
headers=self.restheaders,
http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
execution["authenticationConfig"] = execConfig
@@ -1509,7 +1528,7 @@ class KeycloakAPI(object):
"""
idps_url = URL_IDENTITY_PROVIDERS.format(url=self.baseurl, realm=realm)
try:
return json.loads(to_native(open_url(idps_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(idps_url, method='GET', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of identity providers for realm %s: %s'
@@ -1526,7 +1545,7 @@ class KeycloakAPI(object):
"""
idp_url = URL_IDENTITY_PROVIDER.format(url=self.baseurl, realm=realm, alias=alias)
try:
return json.loads(to_native(open_url(idp_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(idp_url, method="GET", http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
if e.code == 404:
@@ -1546,7 +1565,7 @@ class KeycloakAPI(object):
"""
idps_url = URL_IDENTITY_PROVIDERS.format(url=self.baseurl, realm=realm)
try:
return open_url(idps_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(idps_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(idprep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create identity provider %s in realm %s: %s'
@@ -1560,7 +1579,7 @@ class KeycloakAPI(object):
"""
idp_url = URL_IDENTITY_PROVIDER.format(url=self.baseurl, realm=realm, alias=idprep['alias'])
try:
return open_url(idp_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(idp_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(idprep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update identity provider %s in realm %s: %s'
@@ -1573,7 +1592,7 @@ class KeycloakAPI(object):
"""
idp_url = URL_IDENTITY_PROVIDER.format(url=self.baseurl, realm=realm, alias=alias)
try:
return open_url(idp_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(idp_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Unable to delete identity provider %s in realm %s: %s'
@@ -1587,7 +1606,8 @@ class KeycloakAPI(object):
"""
mappers_url = URL_IDENTITY_PROVIDER_MAPPERS.format(url=self.baseurl, realm=realm, alias=alias)
try:
return json.loads(to_native(open_url(mappers_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(mappers_url, method='GET', http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of identity provider mappers for idp %s in realm %s: %s'
@@ -1605,7 +1625,8 @@ class KeycloakAPI(object):
"""
mapper_url = URL_IDENTITY_PROVIDER_MAPPER.format(url=self.baseurl, realm=realm, alias=alias, id=mid)
try:
return json.loads(to_native(open_url(mapper_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(mapper_url, method="GET", http_agent=self.http_agent, headers=self.restheaders,
timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
if e.code == 404:
@@ -1626,7 +1647,7 @@ class KeycloakAPI(object):
"""
mappers_url = URL_IDENTITY_PROVIDER_MAPPERS.format(url=self.baseurl, realm=realm, alias=alias)
try:
return open_url(mappers_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(mappers_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(mapper), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not create identity provider mapper %s for idp %s in realm %s: %s'
@@ -1641,7 +1662,7 @@ class KeycloakAPI(object):
"""
mapper_url = URL_IDENTITY_PROVIDER_MAPPER.format(url=self.baseurl, realm=realm, alias=alias, id=mapper['id'])
try:
return open_url(mapper_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(mapper_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(mapper), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update mapper %s for identity provider %s in realm %s: %s'
@@ -1655,7 +1676,7 @@ class KeycloakAPI(object):
"""
mapper_url = URL_IDENTITY_PROVIDER_MAPPER.format(url=self.baseurl, realm=realm, alias=alias, id=mid)
try:
return open_url(mapper_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(mapper_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Unable to delete mapper %s for identity provider %s in realm %s: %s'
@@ -1672,7 +1693,7 @@ class KeycloakAPI(object):
comps_url += '?%s' % filter
try:
return json.loads(to_native(open_url(comps_url, method='GET', headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(comps_url, method='GET', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except ValueError as e:
self.module.fail_json(msg='API returned incorrect JSON when trying to obtain list of components for realm %s: %s'
@@ -1689,7 +1710,7 @@ class KeycloakAPI(object):
"""
comp_url = URL_COMPONENT.format(url=self.baseurl, realm=realm, id=cid)
try:
return json.loads(to_native(open_url(comp_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(comp_url, method="GET", http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except HTTPError as e:
if e.code == 404:
@@ -1709,13 +1730,13 @@ class KeycloakAPI(object):
"""
comps_url = URL_COMPONENTS.format(url=self.baseurl, realm=realm)
try:
resp = open_url(comps_url, method='POST', headers=self.restheaders, timeout=self.connection_timeout,
resp = open_url(comps_url, method='POST', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(comprep), validate_certs=self.validate_certs)
comp_url = resp.getheader('Location')
if comp_url is None:
self.module.fail_json(msg='Could not create component in realm %s: %s'
% (realm, 'unexpected response'))
return json.loads(to_native(open_url(comp_url, method="GET", headers=self.restheaders, timeout=self.connection_timeout,
return json.loads(to_native(open_url(comp_url, method="GET", http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs).read()))
except Exception as e:
self.module.fail_json(msg='Could not create component in realm %s: %s'
@@ -1732,7 +1753,7 @@ class KeycloakAPI(object):
self.module.fail_json(msg='Cannot update component without id')
comp_url = URL_COMPONENT.format(url=self.baseurl, realm=realm, id=cid)
try:
return open_url(comp_url, method='PUT', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(comp_url, method='PUT', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
data=json.dumps(comprep), validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Could not update component %s in realm %s: %s'
@@ -1745,7 +1766,7 @@ class KeycloakAPI(object):
"""
comp_url = URL_COMPONENT.format(url=self.baseurl, realm=realm, id=cid)
try:
return open_url(comp_url, method='DELETE', headers=self.restheaders, timeout=self.connection_timeout,
return open_url(comp_url, method='DELETE', http_agent=self.http_agent, headers=self.restheaders, timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg='Unable to delete component %s in realm %s: %s'

View File

@@ -31,6 +31,10 @@ class ModuleHelperBase(object):
def diff_mode(self):
return self.module._diff
@property
def verbosity(self):
return self.module._verbosity
def do_raise(self, *args, **kwargs):
raise _MHE(*args, **kwargs)

View File

@@ -0,0 +1,406 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2022 Western Digital Corporation
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import datetime
import re
import time
import tarfile
from ansible.module_utils.urls import fetch_file
from ansible_collections.community.general.plugins.module_utils.redfish_utils import RedfishUtils
from ansible.module_utils.six.moves.urllib.parse import urlparse, urlunparse
class WdcRedfishUtils(RedfishUtils):
"""Extension to RedfishUtils to support WDC enclosures."""
# Status codes returned by WDC FW Update Status
UPDATE_STATUS_CODE_READY_FOR_FW_UPDATE = 0
UPDATE_STATUS_CODE_FW_UPDATE_IN_PROGRESS = 1
UPDATE_STATUS_CODE_FW_UPDATE_COMPLETED_WAITING_FOR_ACTIVATION = 2
UPDATE_STATUS_CODE_FW_UPDATE_FAILED = 3
# Status messages returned by WDC FW Update Status
UPDATE_STATUS_MESSAGE_READY_FOR_FW_UDPATE = "Ready for FW update"
UDPATE_STATUS_MESSAGE_FW_UPDATE_IN_PROGRESS = "FW update in progress"
UPDATE_STATUS_MESSAGE_FW_UPDATE_COMPLETED_WAITING_FOR_ACTIVATION = "FW update completed. Waiting for activation."
UPDATE_STATUS_MESSAGE_FW_UPDATE_FAILED = "FW update failed."
def __init__(self,
creds,
root_uris,
timeout,
module,
resource_id,
data_modification):
super(WdcRedfishUtils, self).__init__(creds=creds,
root_uri=root_uris[0],
timeout=timeout,
module=module,
resource_id=resource_id,
data_modification=data_modification)
# Update the root URI if we cannot perform a Redfish GET to the first one
self._set_root_uri(root_uris)
def _set_root_uri(self, root_uris):
"""Set the root URI from a list of options.
If the current root URI is good, just keep it. Else cycle through our options until we find a good one.
A URI is considered good if we can GET uri/redfish/v1.
"""
for root_uri in root_uris:
uri = root_uri + "/redfish/v1"
response = self.get_request(uri)
if response['ret']:
self.root_uri = root_uri
break
def _find_updateservice_resource(self):
"""Find the update service resource as well as additional WDC-specific resources."""
response = super(WdcRedfishUtils, self)._find_updateservice_resource()
if not response['ret']:
return response
return self._find_updateservice_additional_uris()
def _is_enclosure_multi_tenant(self):
"""Determine if the enclosure is multi-tenant.
The serial number of a multi-tenant enclosure will end in "-A" or "-B".
:return: True/False if the enclosure is multi-tenant or not; None if unable to determine.
"""
response = self.get_request(self.root_uri + self.service_root + "Chassis/Enclosure")
if response['ret'] is False:
return None
pattern = r".*-[A,B]"
data = response['data']
return re.match(pattern, data['SerialNumber']) is not None
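The multi-tenant check above can be exercised standalone. This is a minimal sketch with made-up serial numbers; note that the character class `[A,B]` also matches a literal comma, so `[AB]` would be the stricter spelling of the same intent:

```python
import re

# Mirrors the check above: a multi-tenant enclosure's serial number ends in "-A" or "-B".
# The class [A,B] as written also accepts a literal comma after the dash.
def looks_multi_tenant(serial_number):
    return re.match(r".*-[A,B]", serial_number) is not None

print(looks_multi_tenant("ENC123-A"))  # True
print(looks_multi_tenant("ENC123"))    # False
```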
def _find_updateservice_additional_uris(self):
"""Find & set WDC-specific update service URIs"""
response = self.get_request(self.root_uri + self._update_uri())
if response['ret'] is False:
return response
data = response['data']
if 'Actions' not in data:
return {'ret': False, 'msg': 'Service does not support SimpleUpdate'}
if '#UpdateService.SimpleUpdate' not in data['Actions']:
return {'ret': False, 'msg': 'Service does not support SimpleUpdate'}
action = data['Actions']['#UpdateService.SimpleUpdate']
if 'target' not in action:
return {'ret': False, 'msg': 'Service does not support SimpleUpdate'}
self.simple_update_uri = action['target']
# Simple update status URI is not provided via GET /redfish/v1/UpdateService
# So we have to hard code it.
self.simple_update_status_uri = "{0}/Status".format(self.simple_update_uri)
# FWActivate URI
if 'Oem' not in data['Actions']:
return {'ret': False, 'msg': 'Service does not support OEM operations'}
if 'WDC' not in data['Actions']['Oem']:
return {'ret': False, 'msg': 'Service does not support WDC operations'}
if '#UpdateService.FWActivate' not in data['Actions']['Oem']['WDC']:
return {'ret': False, 'msg': 'Service does not support FWActivate'}
action = data['Actions']['Oem']['WDC']['#UpdateService.FWActivate']
if 'target' not in action:
return {'ret': False, 'msg': 'Service does not support FWActivate'}
self.firmware_activate_uri = action['target']
return {'ret': True}
def _simple_update_status_uri(self):
return self.simple_update_status_uri
def _firmware_activate_uri(self):
return self.firmware_activate_uri
def _update_uri(self):
return self.update_uri
def get_simple_update_status(self):
"""Issue Redfish HTTP GET to return the simple update status"""
result = {}
response = self.get_request(self.root_uri + self._simple_update_status_uri())
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
result['entries'] = data
return result
def firmware_activate(self, update_opts):
"""Perform FWActivate using Redfish HTTP API."""
creds = update_opts.get('update_creds')
payload = {}
if creds:
if creds.get('username'):
payload["Username"] = creds.get('username')
if creds.get('password'):
payload["Password"] = creds.get('password')
# Make sure the service supports FWActivate
response = self.get_request(self.root_uri + self._update_uri())
if response['ret'] is False:
return response
data = response['data']
if 'Actions' not in data:
return {'ret': False, 'msg': 'Service does not support FWActivate'}
response = self.post_request(self.root_uri + self._firmware_activate_uri(), payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "FWActivate requested"}
def _get_bundle_version(self,
bundle_uri):
"""Get the firmware version from a bundle file, and whether or not it is multi-tenant.
Only supports HTTP at this time. Assumes URI exists and is a tarfile.
Looks for a file oobm-[version].pkg, such as 'oobm-4.0.13.pkg'. Extracts the version number
from that filename (in the above example, the version number is "4.0.13").
To determine if the bundle is multi-tenant or not, it looks inside the .bin file within the tarfile,
and checks the appropriate byte in the file.
:param str bundle_uri: HTTP URI of the firmware bundle.
:return: Firmware version number contained in the bundle, and whether or not the bundle is multi-tenant.
Either value will be None if unable to determine.
:rtype: str or None, bool or None
"""
bundle_temp_filename = fetch_file(module=self.module,
url=bundle_uri)
if not tarfile.is_tarfile(bundle_temp_filename):
return None, None
tf = tarfile.open(bundle_temp_filename)
pattern_pkg = r"oobm-(.+)\.pkg"
pattern_bin = r"(.*\.bin)"
bundle_version = None
is_multi_tenant = None
for filename in tf.getnames():
match_pkg = re.match(pattern_pkg, filename)
if match_pkg is not None:
bundle_version = match_pkg.group(1)
match_bin = re.match(pattern_bin, filename)
if match_bin is not None:
bin_filename = match_bin.group(1)
bin_file = tf.extractfile(bin_filename)
bin_file.seek(11)
byte_11 = bin_file.read(1)
is_multi_tenant = byte_11 == b'\x80'
return bundle_version, is_multi_tenant
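The bundle parsing above can be reproduced against an in-memory tar. This sketch builds a tiny archive with the same layout the method expects (a `oobm-<version>.pkg` name and a `.bin` whose byte at offset 11 flags multi-tenancy); the filenames and payloads are invented for the demo:

```python
import io
import re
import tarfile

# Same parsing logic as _get_bundle_version above, minus the HTTP fetch.
def parse_bundle(fileobj):
    tf = tarfile.open(fileobj=fileobj)
    bundle_version = None
    is_multi_tenant = None
    for filename in tf.getnames():
        match_pkg = re.match(r"oobm-(.+)\.pkg", filename)
        if match_pkg is not None:
            bundle_version = match_pkg.group(1)
        match_bin = re.match(r"(.*\.bin)", filename)
        if match_bin is not None:
            bin_file = tf.extractfile(match_bin.group(1))
            bin_file.seek(11)
            is_multi_tenant = bin_file.read(1) == b'\x80'
    return bundle_version, is_multi_tenant

# Build a demo tar in memory: a .pkg carrying the version in its name,
# and a .bin whose 12th byte (offset 11) is 0x80 (multi-tenant marker).
def make_demo_tar():
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w') as tf:
        for name, payload in [("oobm-4.0.13.pkg", b"pkg"),
                              ("firmware.bin", b"\x00" * 11 + b"\x80")]:
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tf.addfile(info, io.BytesIO(payload))
    buf.seek(0)
    return buf

version, multi_tenant = parse_bundle(make_demo_tar())
print(version, multi_tenant)  # 4.0.13 True
```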
@staticmethod
def uri_is_http(uri):
"""Return True if the specified URI is http or https.
:param str uri: A URI.
:return: True if the URI is http or https, else False
:rtype: bool
"""
parsed_bundle_uri = urlparse(uri)
return parsed_bundle_uri.scheme.lower() in ['http', 'https']
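The same scheme check works with the stdlib parser directly; a quick sketch:

```python
from urllib.parse import urlparse

# Same check as uri_is_http above: accept only http/https URIs.
def uri_is_http(uri):
    return urlparse(uri).scheme.lower() in ('http', 'https')

print(uri_is_http("HTTPS://example.com/fw.tar"))  # True
print(uri_is_http("ftp://example.com/fw.tar"))    # False
```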
def update_and_activate(self, update_opts):
"""Update and activate the firmware in a single action.
Orchestrates the firmware update so that everything can be done in a single command.
Compares the update version with the already-installed version -- skips update if they are the same.
Performs retries, handles timeouts as needed.
"""
# Convert credentials to standard HTTP format
if update_opts.get("update_creds") is not None and "username" in update_opts["update_creds"] and "password" in update_opts["update_creds"]:
update_creds = update_opts["update_creds"]
parsed_url = urlparse(update_opts["update_image_uri"])
if update_creds:
original_netloc = parsed_url.netloc
parsed_url = parsed_url._replace(netloc="{0}:{1}@{2}".format(update_creds.get("username"),
update_creds.get("password"),
original_netloc))
update_opts["update_image_uri"] = urlunparse(parsed_url)
del update_opts["update_creds"]
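The credential conversion above folds Basic Auth credentials into the URL's netloc (`user:pass@host`). A minimal sketch with placeholder values; note that real usernames and passwords containing reserved characters would need percent-encoding first:

```python
from urllib.parse import urlparse, urlunparse

# Same netloc rewrite as above: embed credentials into the image URL.
def embed_creds(url, username, password):
    parsed = urlparse(url)
    parsed = parsed._replace(netloc="{0}:{1}@{2}".format(username, password, parsed.netloc))
    return urlunparse(parsed)

print(embed_creds("http://images.example.com/fw.tar", "fwuser", "secret"))
# http://fwuser:secret@images.example.com/fw.tar
```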
# Make sure bundle URI is HTTP(s)
bundle_uri = update_opts["update_image_uri"]
if not self.uri_is_http(bundle_uri):
return {
'ret': False,
'msg': 'Bundle URI must be HTTP or HTTPS'
}
# Make sure IOM is ready for update
result = self.get_simple_update_status()
if result['ret'] is False:
return result
update_status = result['entries']
status_code = update_status['StatusCode']
status_description = update_status['Description']
if status_code not in [
self.UPDATE_STATUS_CODE_READY_FOR_FW_UPDATE,
self.UPDATE_STATUS_CODE_FW_UPDATE_FAILED
]:
return {
'ret': False,
'msg': 'Target is not ready for FW update. Current status: {0} ({1})'.format(
status_code, status_description
)}
# Check the FW version in the bundle file, and compare it to what is already on the IOMs
# Bundle version number
bundle_firmware_version, is_bundle_multi_tenant = self._get_bundle_version(bundle_uri)
if bundle_firmware_version is None or is_bundle_multi_tenant is None:
return {
'ret': False,
'msg': 'Unable to extract bundle version or multi-tenant status from update image tarfile'
}
# Verify that the bundle is correctly multi-tenant or not
is_enclosure_multi_tenant = self._is_enclosure_multi_tenant()
if is_enclosure_multi_tenant != is_bundle_multi_tenant:
return {
'ret': False,
'msg': 'Enclosure multi-tenant is {0} but bundle multi-tenant is {1}'.format(
is_enclosure_multi_tenant,
is_bundle_multi_tenant,
)
}
# Version number installed on IOMs
firmware_inventory = self.get_firmware_inventory()
if not firmware_inventory["ret"]:
return firmware_inventory
firmware_inventory_dict = {}
for entry in firmware_inventory["entries"]:
firmware_inventory_dict[entry["Id"]] = entry
iom_a_firmware_version = firmware_inventory_dict.get("IOModuleA_OOBM", {}).get("Version")
iom_b_firmware_version = firmware_inventory_dict.get("IOModuleB_OOBM", {}).get("Version")
# If version is None, we will proceed with the update, because we cannot tell
# for sure that we have a full version match.
if is_enclosure_multi_tenant:
# For multi-tenant, only one of the IOMs will be affected by the firmware update,
# so see if that IOM already has the same firmware version as the bundle.
firmware_already_installed = bundle_firmware_version == self._get_installed_firmware_version_of_multi_tenant_system(
iom_a_firmware_version,
iom_b_firmware_version)
else:
# For single-tenant, see if both IOMs already have the same firmware version as the bundle.
firmware_already_installed = bundle_firmware_version == iom_a_firmware_version == iom_b_firmware_version
# If this FW already installed, return changed: False, and do not update the firmware.
if firmware_already_installed:
return {
'ret': True,
'changed': False,
'msg': 'Version {0} already installed'.format(bundle_firmware_version)
}
# Version numbers don't match the bundle -- proceed with update (unless we are in check mode)
if self.module.check_mode:
return {
'ret': True,
'changed': True,
'msg': 'Update not performed in check mode.'
}
update_successful = False
retry_interval_seconds = 5
max_number_of_retries = 5
retry_number = 0
while retry_number < max_number_of_retries and not update_successful:
if retry_number != 0:
time.sleep(retry_interval_seconds)
retry_number += 1
result = self.simple_update(update_opts)
if result['ret'] is not True:
# Sometimes a timeout error is returned even though the update actually was requested.
# Check the update status to see if the update is in progress.
status_result = self.get_simple_update_status()
if status_result['ret'] is False:
continue
update_status = status_result['entries']
status_code = update_status['StatusCode']
if status_code != self.UPDATE_STATUS_CODE_FW_UPDATE_IN_PROGRESS:
# Update is not in progress -- retry until max number of retries
continue
else:
update_successful = True
else:
update_successful = True
if not update_successful:
# Unable to get SimpleUpdate to work. Return the failure from the SimpleUpdate
return result
# Wait for "ready to activate"
max_wait_minutes = 30
polling_interval_seconds = 30
status_code = self.UPDATE_STATUS_CODE_READY_FOR_FW_UPDATE
start_time = datetime.datetime.now()
# For a short time, target will still say "ready for firmware update" before it transitions
# to "update in progress"
status_codes_for_update_incomplete = [
self.UPDATE_STATUS_CODE_FW_UPDATE_IN_PROGRESS,
self.UPDATE_STATUS_CODE_READY_FOR_FW_UPDATE
]
iteration = 0
while status_code in status_codes_for_update_incomplete \
and datetime.datetime.now() - start_time < datetime.timedelta(minutes=max_wait_minutes):
if iteration != 0:
time.sleep(polling_interval_seconds)
iteration += 1
result = self.get_simple_update_status()
if result['ret'] is False:
continue # We may get timeouts, just keep trying until we give up
update_status = result['entries']
status_code = update_status['StatusCode']
status_description = update_status['Description']
if status_code == self.UPDATE_STATUS_CODE_FW_UPDATE_IN_PROGRESS:
# Once it says update in progress, "ready for update" is no longer a valid status code
status_codes_for_update_incomplete = [self.UPDATE_STATUS_CODE_FW_UPDATE_IN_PROGRESS]
# Update no longer in progress -- verify that it finished
if status_code != self.UPDATE_STATUS_CODE_FW_UPDATE_COMPLETED_WAITING_FOR_ACTIVATION:
return {
'ret': False,
'msg': 'Target is not ready for FW activation after update. Current status: {0} ({1})'.format(
status_code, status_description
)}
self.firmware_activate(update_opts)
return {'ret': True, 'changed': True,
'msg': "Firmware updated and activation initiated."}
def _get_installed_firmware_version_of_multi_tenant_system(self,
iom_a_firmware_version,
iom_b_firmware_version):
"""Return the version for the active IOM on a multi-tenant system.
Only call this on a multi-tenant system.
Given the installed firmware versions for IOM A, B, this method will determine which IOM is active
for this tenant, and return that IOM's firmware version.
"""
# To determine which IOM we are on, try to GET each IOM resource
# The one we are on will return valid data.
# The other will return an error with message "IOM Module A/B cannot be read"
which_iom_is_this = None
for iom_letter in ['A', 'B']:
iom_uri = "Chassis/IOModule{0}FRU".format(iom_letter)
response = self.get_request(self.root_uri + self.service_root + iom_uri)
if response['ret'] is False:
continue
data = response['data']
if "Id" in data: # Assume if there is an "Id", it is valid
which_iom_is_this = iom_letter
break
if which_iom_is_this == 'A':
return iom_a_firmware_version
elif which_iom_is_this == 'B':
return iom_b_firmware_version
else:
return None

View File

@@ -14,7 +14,7 @@ def _values_fmt(values, value_types):
result = []
for value, value_type in zip(values, value_types):
if value_type == 'bool':
value = boolean(value)
value = 'true' if boolean(value) else 'false'
result.extend(['--type', '{0}'.format(value_type), '--set', '{0}'.format(value)])
return result
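The one-line xfconf change above exists because `xfconf-query` expects lowercase `true`/`false`, while Python's `str(True)` yields `"True"`. A self-contained sketch; `boolean()` here is a simplified stand-in for `ansible.module_utils.parsing.convert_bool.boolean`:

```python
# Simplified stand-in for ansible.module_utils.parsing.convert_bool.boolean.
def boolean(value):
    if value in (True, 1):
        return True
    return str(value).lower() in ('yes', 'on', 'true', '1')

# Mirrors the fixed _values_fmt: bool values are rendered as lowercase
# "true"/"false" so xfconf-query accepts them.
def values_fmt(values, value_types):
    result = []
    for value, value_type in zip(values, value_types):
        if value_type == 'bool':
            value = 'true' if boolean(value) else 'false'
        result.extend(['--type', '{0}'.format(value_type), '--set', '{0}'.format(value)])
    return result

print(values_fmt([True, 90], ['bool', 'int']))
# ['--type', 'bool', '--set', 'true', '--type', 'int', '--set', '90']
```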

View File

@@ -743,6 +743,8 @@ def main():
module.fail_json(msg="restarting of VM %s failed with exception: %s" % (vmid, e))
elif state == 'absent':
if not vmid:
module.exit_json(changed=False, msg='VM with hostname = %s is already absent' % hostname)
try:
vm = proxmox.get_vm(vmid, ignore_missing=True)
if not vm:

View File

@@ -1370,6 +1370,8 @@ def main():
elif state == 'absent':
status = {}
if not vmid:
module.exit_json(changed=False, msg='VM with name = %s is already absent' % name)
try:
vm = proxmox.get_vm(vmid, ignore_missing=True)
if not vm:

View File

@@ -95,6 +95,11 @@ options:
choices: [ delete, release ]
type: str
default: release
ttl:
description:
- Specifies the duration of a session in seconds (between 10 and 86400).
type: int
version_added: 5.4.0
'''
EXAMPLES = '''
@@ -121,6 +126,11 @@ EXAMPLES = '''
- name: Retrieve active sessions
community.general.consul_session:
state: list
- name: Register session with a ttl
community.general.consul_session:
name: session-with-ttl
ttl: 600 # sec
'''
try:
@@ -185,6 +195,7 @@ def update_session(module):
datacenter = module.params.get('datacenter')
node = module.params.get('node')
behavior = module.params.get('behavior')
ttl = module.params.get('ttl')
consul_client = get_consul_api(module)
@@ -192,6 +203,7 @@ def update_session(module):
session = consul_client.session.create(
name=name,
behavior=behavior,
ttl=ttl,
node=node,
lock_delay=delay,
dc=datacenter,
@@ -201,6 +213,7 @@ def update_session(module):
session_id=session,
name=name,
behavior=behavior,
ttl=ttl,
delay=delay,
checks=checks,
node=node)
@@ -241,6 +254,7 @@ def main():
checks=dict(type='list', elements='str'),
delay=dict(type='int', default='15'),
behavior=dict(type='str', default='release', choices=['release', 'delete']),
ttl=dict(type='int'),
host=dict(type='str', default='localhost'),
port=dict(type='int', default=8500),
scheme=dict(type='str', default='http'),

View File

@@ -293,7 +293,7 @@ def build_payload_for_slack(text, channel, thread_id, username, icon_url, icon_e
# With a custom color we have to set the message as attachment, and explicitly turn markdown parsing on for it.
payload = dict(attachments=[dict(text=escape_quotes(text), color=color, mrkdwn_in=["text"])])
if channel is not None:
if channel.startswith(('#', '@', 'C0')):
if channel.startswith(('#', '@', 'C0', 'GF', 'G0')):
payload['channel'] = channel
else:
payload['channel'] = '#' + channel
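The Slack change above widens the prefix allowlist so private-group IDs (beginning with `G0` or `GF`) pass through unchanged instead of being mangled with a `#`. A quick sketch of just the routing rule:

```python
# Mirrors the channel handling above: known ID prefixes pass through,
# anything else is treated as a channel name and prefixed with "#".
def normalize_channel(channel):
    if channel.startswith(('#', '@', 'C0', 'GF', 'G0')):
        return channel
    return '#' + channel

print(normalize_channel('G0ABCDEF'))  # G0ABCDEF
print(normalize_channel('general'))   # #general
```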

View File

@@ -150,6 +150,15 @@ options:
default: 'md5'
choices: ['md5', 'sha1']
version_added: 3.2.0
unredirected_headers:
type: list
elements: str
version_added: 5.2.0
description:
- A list of headers that should not be included in the redirection. These headers are passed to the C(fetch_url) function.
- On ansible-core version 2.12 or later, the default of this option is C([Authorization, Cookie]).
- Useful if the redirection URL does not need to have sensitive headers in the request.
- Requires ansible-core version 2.12 or later.
directory_mode:
type: str
description:
@@ -230,6 +239,7 @@ import tempfile
import traceback
import re
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
from ansible.module_utils.ansible_release import __version__ as ansible_version
from re import match
@@ -509,7 +519,18 @@ class MavenDownloader:
self.module.params['url_password'] = self.module.params.get('password', '')
self.module.params['http_agent'] = self.user_agent
response, info = fetch_url(self.module, url_to_use, timeout=req_timeout, headers=self.headers)
kwargs = {}
if self.module.params['unredirected_headers']:
kwargs['unredirected_headers'] = self.module.params['unredirected_headers']
response, info = fetch_url(
self.module,
url_to_use,
timeout=req_timeout,
headers=self.headers,
**kwargs
)
if info['status'] == 200:
return response
if force:
@@ -614,12 +635,20 @@ def main():
keep_name=dict(required=False, default=False, type='bool'),
verify_checksum=dict(required=False, default='download', choices=['never', 'download', 'change', 'always']),
checksum_alg=dict(required=False, default='md5', choices=['md5', 'sha1']),
unredirected_headers=dict(type='list', elements='str', required=False),
directory_mode=dict(type='str'),
),
add_file_common_args=True,
mutually_exclusive=([('version', 'version_by_spec')])
)
if LooseVersion(ansible_version) < LooseVersion("2.12") and module.params['unredirected_headers']:
module.fail_json(msg="Unredirected Headers parameter provided, but your ansible-core version does not support it. Minimum version is 2.12")
if LooseVersion(ansible_version) >= LooseVersion("2.12") and module.params['unredirected_headers'] is None:
# if the user did not supply unredirected params, we use the default, ONLY on ansible core 2.12 and above
module.params['unredirected_headers'] = ['Authorization', 'Cookie']
if not HAS_LXML_ETREE:
module.fail_json(msg=missing_required_lib('lxml'), exception=LXML_ETREE_IMP_ERR)
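The version gate above can be condensed into one helper (the function name is hypothetical; it assumes a "major.minor…" version string, like the C(ansible_version) the module compares with C(LooseVersion)):

```python
def resolve_unredirected_headers(core_version, user_value):
    # Reject the option on ansible-core < 2.12; on >= 2.12, default an
    # unset option to stripping auth material on redirects.
    supported = tuple(int(x) for x in core_version.split('.')[:2]) >= (2, 12)
    if not supported and user_value:
        raise ValueError("unredirected_headers requires ansible-core >= 2.12")
    if supported and user_value is None:
        return ['Authorization', 'Cookie']
    return user_value
```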


@@ -61,6 +61,12 @@ options:
- Upgrade all installed packages to their latest version.
type: bool
default: no
world:
description:
- Use a custom world file when checking for explicitly installed packages.
type: str
default: /etc/apk/world
version_added: 5.4.0
notes:
- 'I(name) and I(upgrade) are mutually exclusive.'
- When used with a C(loop:), each package will be processed individually; it is much more efficient to pass the list directly to the I(name) option.
@@ -134,6 +140,12 @@ EXAMPLES = '''
name: foo
state: latest
no_cache: yes
- name: Install package checking a custom world
community.general.apk:
name: foo
state: latest
world: /etc/apk/world.custom
'''
RETURN = '''
@@ -171,11 +183,11 @@ def update_package_db(module, exit):
return True
def query_toplevel(module, name):
# /etc/apk/world contains a list of top-level packages separated by ' ' or \n
def query_toplevel(module, name, world):
# world contains a list of top-level packages separated by ' ' or \n
# packages may contain repository (@) or version (=<>~) separator characters or start with negation !
regex = re.compile(r'^' + re.escape(name) + r'([@=<>~].+)?$')
with open('/etc/apk/world') as f:
with open(world) as f:
content = f.read().split()
for p in content:
if regex.search(p):
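The matching logic can be exercised against an in-memory world string (the signature is an assumption for illustration; the real function reads the world file from disk):

```python
import re

def query_toplevel(name, world_content):
    # world_content mirrors /etc/apk/world: package names separated by
    # whitespace, optionally followed by repository (@) or version (=<>~)
    # separator characters.
    regex = re.compile(r'^' + re.escape(name) + r'([@=<>~].+)?$')
    return any(regex.search(p) for p in world_content.split())
```

Note that a bare prefix does not match: the name must be followed by a separator character or the end of the entry.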
@@ -237,7 +249,7 @@ def upgrade_packages(module, available):
module.exit_json(changed=True, msg="upgraded packages", stdout=stdout, stderr=stderr, packages=packagelist)
def install_packages(module, names, state):
def install_packages(module, names, state, world):
upgrade = False
to_install = []
to_upgrade = []
@@ -250,7 +262,7 @@ def install_packages(module, names, state):
if state == 'latest' and not query_latest(module, dependency):
to_upgrade.append(dependency)
else:
if not query_toplevel(module, name):
if not query_toplevel(module, name, world):
to_install.append(name)
elif state == 'latest' and not query_latest(module, name):
to_upgrade.append(name)
@@ -313,6 +325,7 @@ def main():
update_cache=dict(default=False, type='bool'),
upgrade=dict(default=False, type='bool'),
available=dict(default=False, type='bool'),
world=dict(default='/etc/apk/world', type='str'),
),
required_one_of=[['name', 'update_cache', 'upgrade']],
mutually_exclusive=[['name', 'upgrade']],
@@ -348,7 +361,7 @@ def main():
upgrade_packages(module, p['available'])
if p['state'] in ['present', 'latest']:
install_packages(module, p['name'], p['state'])
install_packages(module, p['name'], p['state'], p['world'])
elif p['state'] == 'absent':
remove_packages(module, p['name'])


@@ -104,6 +104,22 @@ options:
default:
type: str
reason:
description:
- The install reason to set for the packages.
choices: [ dependency, explicit ]
type: str
version_added: 5.4.0
reason_for:
description:
- Set the install reason for C(all) packages or only for C(new) packages.
- In case of C(state=latest), already installed packages which will be updated to a newer version are not counted as C(new).
default: new
choices: [ all, new ]
type: str
version_added: 5.4.0
notes:
- When used with a C(loop:), each package will be processed individually;
  it is much more efficient to pass the list directly to the I(name) option.
@@ -223,6 +239,20 @@ EXAMPLES = """
name: baz
state: absent
force: yes
- name: Install foo as dependency and leave reason untouched if already installed
community.general.pacman:
name: foo
state: present
reason: dependency
reason_for: new
- name: Run the equivalent of "pacman -S --asexplicit", mark foo as explicit and install it if not present
community.general.pacman:
name: foo
state: present
reason: explicit
reason_for: all
"""
import shlex
@@ -331,7 +361,14 @@ class Pacman(object):
def install_packages(self, pkgs):
pkgs_to_install = []
pkgs_to_install_from_url = []
pkgs_to_set_reason = []
for p in pkgs:
if self.m.params["reason"] and (
p.name not in self.inventory["pkg_reasons"]
or self.m.params["reason_for"] == "all"
and self.inventory["pkg_reasons"][p.name] != self.m.params["reason"]
):
pkgs_to_set_reason.append(p.name)
if p.source_is_URL:
# URL packages bypass the latest / upgradable_pkgs test
# They go through the dry-run to let pacman decide if they will be installed
@@ -344,7 +381,7 @@ class Pacman(object):
):
pkgs_to_install.append(p)
if len(pkgs_to_install) == 0 and len(pkgs_to_install_from_url) == 0:
if len(pkgs_to_install) == 0 and len(pkgs_to_install_from_url) == 0 and len(pkgs_to_set_reason) == 0:
self.exit_params["packages"] = []
self.add_exit_infos("package(s) already installed")
return
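The reason test in the diff relies on Python's operator precedence ("a or b and c" parses as "a or (b and c)"); a self-contained version with explicit parentheses (function name hypothetical):

```python
def needs_reason_change(name, reason, reason_for, pkg_reasons):
    # A package gets its install reason (re)set when a reason was requested
    # and either the package is new to the inventory, or reason_for == "all"
    # and the recorded reason differs from the requested one.
    return bool(reason) and (
        name not in pkg_reasons
        or (reason_for == "all" and pkg_reasons.get(name) != reason)
    )
```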
@@ -377,8 +414,13 @@ class Pacman(object):
continue
name, version = p.split()
if name in self.inventory["installed_pkgs"]:
before.append("%s-%s" % (name, self.inventory["installed_pkgs"][name]))
after.append("%s-%s" % (name, version))
before.append("%s-%s-%s" % (name, self.inventory["installed_pkgs"][name], self.inventory["pkg_reasons"][name]))
if name in pkgs_to_set_reason:
after.append("%s-%s-%s" % (name, version, self.m.params["reason"]))
elif name in self.inventory["pkg_reasons"]:
after.append("%s-%s-%s" % (name, version, self.inventory["pkg_reasons"][name]))
else:
after.append("%s-%s" % (name, version))
to_be_installed.append(name)
return (to_be_installed, before, after)
@@ -398,7 +440,7 @@ class Pacman(object):
before.extend(b)
after.extend(a)
if len(installed_pkgs) == 0:
if len(installed_pkgs) == 0 and len(pkgs_to_set_reason) == 0:
# This can happen with URL packages if pacman decides there's nothing to do
self.exit_params["packages"] = []
self.add_exit_infos("package(s) already installed")
@@ -411,9 +453,11 @@ class Pacman(object):
"after": "\n".join(sorted(after)) + "\n" if after else "",
}
changed_reason_pkgs = [p for p in pkgs_to_set_reason if p not in installed_pkgs]
if self.m.check_mode:
self.add_exit_infos("Would have installed %d packages" % len(installed_pkgs))
self.exit_params["packages"] = sorted(installed_pkgs)
self.add_exit_infos("Would have installed %d packages" % (len(installed_pkgs) + len(changed_reason_pkgs)))
self.exit_params["packages"] = sorted(installed_pkgs + changed_reason_pkgs)
return
# actually do it
@@ -430,8 +474,22 @@ class Pacman(object):
if pkgs_to_install_from_url:
_install_packages_for_real("--upgrade", pkgs_to_install_from_url)
self.exit_params["packages"] = installed_pkgs
self.add_exit_infos("Installed %d package(s)" % len(installed_pkgs))
# set reason
if pkgs_to_set_reason:
cmd = [self.pacman_path, "--noconfirm", "--database"]
if self.m.params["reason"] == "dependency":
cmd.append("--asdeps")
else:
cmd.append("--asexplicit")
cmd.extend(pkgs_to_set_reason)
rc, stdout, stderr = self.m.run_command(cmd, check_rc=False)
if rc != 0:
self.fail("Failed to set install reason for package(s)", cmd=cmd, stdout=stdout, stderr=stderr)
self.add_exit_infos(stdout=stdout, stderr=stderr)
self.exit_params["packages"] = sorted(installed_pkgs + changed_reason_pkgs)
self.add_exit_infos("Installed %d package(s)" % (len(installed_pkgs) + len(changed_reason_pkgs)))
def remove_packages(self, pkgs):
# filter out pkgs that are already absent
@@ -613,8 +671,9 @@ class Pacman(object):
stderr=stderr,
rc=rc,
)
# With Pacman v6.0.1 - libalpm v13.0.1, --upgrade outputs "loading packages..." on stdout. strip that
stdout = stdout.replace("loading packages...\n", "")
# With Pacman v6.0.1 - libalpm v13.0.1, --upgrade outputs " filename_without_extension downloading..." if the URL is unseen.
# In all cases, pacman outputs "loading packages..." on stdout. strip both
stdout = stdout.splitlines()[-1]
is_URL = True
pkg_name = stdout.strip()
pkg_list.append(Package(name=pkg_name, source=pkg, source_is_URL=is_URL))
@@ -630,6 +689,7 @@ class Pacman(object):
"available_pkgs": {pkgname: version},
"available_groups": {groupname: set(pkgnames)},
"upgradable_pkgs": {pkgname: (current_version,latest_version)},
"pkg_reasons": {pkgname: reason},
}
Fails the module if a package requested for install cannot be found
@@ -722,12 +782,31 @@ class Pacman(object):
rc=rc,
)
pkg_reasons = {}
dummy, stdout, dummy = self.m.run_command([self.pacman_path, "--query", "--explicit"], check_rc=True)
# Format of a line: "pacman 6.0.1-2"
for l in stdout.splitlines():
l = l.strip()
if not l:
continue
pkg = l.split()[0]
pkg_reasons[pkg] = "explicit"
dummy, stdout, dummy = self.m.run_command([self.pacman_path, "--query", "--deps"], check_rc=True)
# Format of a line: "pacman 6.0.1-2"
for l in stdout.splitlines():
l = l.strip()
if not l:
continue
pkg = l.split()[0]
pkg_reasons[pkg] = "dependency"
return dict(
installed_pkgs=installed_pkgs,
installed_groups=installed_groups,
available_pkgs=available_pkgs,
available_groups=available_groups,
upgradable_pkgs=upgradable_pkgs,
pkg_reasons=pkg_reasons,
)
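The two query passes above can be condensed into one parser over captured output (helper name hypothetical; sample lines follow the "name version" format noted in the comments):

```python
def parse_pkg_reasons(explicit_out, deps_out):
    # Lines look like "pacman 6.0.1-2"; the first field is the package name.
    reasons = {}
    for out, reason in ((explicit_out, "explicit"), (deps_out, "dependency")):
        for line in out.splitlines():
            line = line.strip()
            if line:
                reasons[line.split()[0]] = reason
    return reasons
```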
@@ -748,6 +827,8 @@ def setup_module():
upgrade_extra_args=dict(type="str", default=""),
update_cache=dict(type="bool"),
update_cache_extra_args=dict(type="str", default=""),
reason=dict(type="str", choices=["explicit", "dependency"]),
reason_for=dict(type="str", default="new", choices=["new", "all"]),
),
required_one_of=[["name", "update_cache", "upgrade"]],
mutually_exclusive=[["name", "upgrade"]],


@@ -1,6 +1,6 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Florian Paul Hoberg <florian.hoberg@credativ.de>
# Copyright: (c) 2018, Florian Paul Azim Hoberg <florian.hoberg@credativ.de>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -35,7 +35,7 @@ requirements:
- yum
- yum-versionlock
author:
- Florian Paul Hoberg (@florianpaulhoberg)
- Florian Paul Azim Hoberg (@gyptazy)
- Amin Vakil (@aminvakil)
'''


@@ -0,0 +1,252 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2022 Western Digital Corporation
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: wdc_redfish_command
short_description: Manages WDC UltraStar Data102 Out-Of-Band controllers using Redfish APIs
version_added: 5.4.0
description:
- Builds Redfish URIs locally and sends them to remote OOB controllers to
perform an action.
- Manages OOB controller firmware. For example, Firmware Activate, Update and Activate.
options:
category:
required: true
description:
- Category to execute on OOB controller.
type: str
command:
required: true
description:
- List of commands to execute on OOB controller.
type: list
elements: str
baseuri:
description:
- Base URI of OOB controller. Must include this or I(ioms).
type: str
ioms:
description:
- List of IOM FQDNs for the enclosure. Must include this or I(baseuri).
type: list
elements: str
username:
description:
- User for authentication with OOB controller.
type: str
password:
description:
- Password for authentication with OOB controller.
type: str
auth_token:
description:
- Security token for authentication with OOB controller.
type: str
timeout:
description:
- Timeout in seconds for URL requests to OOB controller.
default: 10
type: int
update_image_uri:
required: false
description:
- The URI of the image for the update.
type: str
update_creds:
required: false
description:
- The credentials for retrieving the update image.
type: dict
suboptions:
username:
required: false
description:
- The username for retrieving the update image.
type: str
password:
required: false
description:
- The password for retrieving the update image.
type: str
requirements:
- dnspython (2.1.0 for Python 3, 1.16.0 for Python 2)
notes:
- In the inventory, you can specify baseuri or ioms. See the EXAMPLES section.
- ioms is a list of FQDNs for the enclosure's IOMs.
author: Mike Moerk (@mikemoerk)
'''
EXAMPLES = '''
- name: Firmware Activate (required after SimpleUpdate to apply the new firmware)
community.general.wdc_redfish_command:
category: Update
command: FWActivate
ioms: "{{ ioms }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Firmware Activate with individual IOMs specified
community.general.wdc_redfish_command:
category: Update
command: FWActivate
ioms:
- iom1.wdc.com
- iom2.wdc.com
username: "{{ username }}"
password: "{{ password }}"
- name: Firmware Activate with baseuri specified
community.general.wdc_redfish_command:
category: Update
command: FWActivate
baseuri: "iom1.wdc.com"
username: "{{ username }}"
password: "{{ password }}"
- name: Update and Activate (orchestrates firmware update and activation with a single command)
community.general.wdc_redfish_command:
category: Update
command: UpdateAndActivate
ioms: "{{ ioms }}"
username: "{{ username }}"
password: "{{ password }}"
update_image_uri: "{{ update_image_uri }}"
update_creds:
username: operator
password: supersecretpwd
'''
RETURN = '''
msg:
description: Message with action result or error description
returned: always
type: str
sample: "Action was successful"
'''
from ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils import WdcRedfishUtils
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
CATEGORY_COMMANDS_ALL = {
"Update": [
"FWActivate",
"UpdateAndActivate"
]
}
def main():
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
ioms=dict(type='list', elements='str'),
baseuri=dict(),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
update_creds=dict(
type='dict',
options=dict(
username=dict(),
password=dict(no_log=True)
)
),
update_image_uri=dict(),
timeout=dict(type='int', default=10)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
('baseuri', 'ioms')
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=True
)
category = module.params['category']
command_list = module.params['command']
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']
# Check that Category is valid
if category not in CATEGORY_COMMANDS_ALL:
module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, sorted(CATEGORY_COMMANDS_ALL.keys()))))
# Check that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category])))
# Build root URI(s)
if module.params.get("baseuri") is not None:
root_uris = ["https://" + module.params['baseuri']]
else:
root_uris = [
"https://" + iom for iom in module.params['ioms']
]
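The branch above reduces to a small helper (name hypothetical): one root URI when I(baseuri) is given, otherwise one per IOM FQDN.

```python
def build_root_uris(baseuri, ioms):
    # baseuri wins when set; otherwise fan out over the IOM FQDN list.
    if baseuri is not None:
        return ["https://" + baseuri]
    return ["https://" + iom for iom in ioms]
```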
rf_utils = WdcRedfishUtils(creds, root_uris, timeout, module,
resource_id=None, data_modification=True)
# Organize by Categories / Commands
if category == "Update":
# execute only if we find UpdateService resources
resource = rf_utils._find_updateservice_resource()
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
# update options
update_opts = {
'update_creds': module.params['update_creds']
}
for command in command_list:
if command == "FWActivate":
if module.check_mode:
result = {
'ret': True,
'changed': True,
'msg': 'FWActivate not performed in check mode.'
}
else:
result = rf_utils.firmware_activate(update_opts)
elif command == "UpdateAndActivate":
update_opts["update_image_uri"] = module.params['update_image_uri']
result = rf_utils.update_and_activate(update_opts)
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
else:
del result['ret']
changed = result.get('changed', True)
session = result.get('session', dict())
module.exit_json(changed=changed,
session=session,
msg='Action was successful' if not module.check_mode else result.get(
'msg', "No action performed in check mode."
))
if __name__ == '__main__':
main()


@@ -0,0 +1,214 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2022 Western Digital Corporation
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: wdc_redfish_info
short_description: Manages WDC UltraStar Data102 Out-Of-Band controllers using Redfish APIs
version_added: 5.4.0
description:
- Builds Redfish URIs locally and sends them to remote OOB controllers to
get information back.
options:
category:
required: true
description:
- Category to execute on OOB controller.
type: str
command:
required: true
description:
- List of commands to execute on OOB controller.
type: list
elements: str
baseuri:
description:
- Base URI of OOB controller. Must include this or I(ioms).
type: str
ioms:
description:
- List of IOM FQDNs for the enclosure. Must include this or I(baseuri).
type: list
elements: str
username:
description:
- User for authentication with OOB controller.
type: str
password:
description:
- Password for authentication with OOB controller.
type: str
auth_token:
description:
- Security token for authentication with OOB controller.
type: str
timeout:
description:
- Timeout in seconds for URL requests to OOB controller.
default: 10
type: int
notes:
- In the inventory, you can specify baseuri or ioms. See the EXAMPLES section.
- ioms is a list of FQDNs for the enclosure's IOMs.
author: Mike Moerk (@mikemoerk)
'''
EXAMPLES = '''
- name: Get Simple Update Status with individual IOMs specified
community.general.wdc_redfish_info:
category: Update
command: SimpleUpdateStatus
ioms:
- iom1.wdc.com
- iom2.wdc.com
username: "{{ username }}"
password: "{{ password }}"
register: result
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.simple_update_status.entries | to_nice_json }}"
- name: Get Simple Update Status with baseuri specified
community.general.wdc_redfish_info:
category: Update
command: SimpleUpdateStatus
baseuri: "iom1.wdc.com"
username: "{{ username }}"
password: "{{ password }}"
register: result
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.simple_update_status.entries | to_nice_json }}"
'''
RETURN = '''
Description:
description: Firmware update status description.
returned: always
type: str
sample:
- Ready for FW update
- FW update in progress
- FW update completed. Waiting for activation.
ErrorCode:
description: Numeric error code for firmware update status. Non-zero indicates an error condition.
returned: always
type: int
sample:
- 0
EstimatedRemainingMinutes:
description: Estimated number of minutes remaining in firmware update operation.
returned: always
type: int
sample:
- 0
- 20
StatusCode:
description: Firmware update status code.
returned: always
type: int
sample:
- 0 (Ready for FW update)
- 1 (FW update in progress)
- 2 (FW update completed. Waiting for activation.)
- 3 (FW update failed.)
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils import WdcRedfishUtils
CATEGORY_COMMANDS_ALL = {
"Update": ["SimpleUpdateStatus"]
}
def main():
result = {}
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
ioms=dict(type='list', elements='str'),
baseuri=dict(),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
timeout=dict(type='int', default=10)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
('baseuri', 'ioms')
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=True
)
category = module.params['category']
command_list = module.params['command']
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']
# Check that Category is valid
if category not in CATEGORY_COMMANDS_ALL:
module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, sorted(CATEGORY_COMMANDS_ALL.keys()))))
# Check that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category])))
# Build root URI(s)
if module.params.get("baseuri") is not None:
root_uris = ["https://" + module.params['baseuri']]
else:
root_uris = [
"https://" + iom for iom in module.params['ioms']
]
rf_utils = WdcRedfishUtils(creds, root_uris, timeout, module,
resource_id=None,
data_modification=False
)
# Organize by Categories / Commands
if category == "Update":
# execute only if we find UpdateService resources
resource = rf_utils._find_updateservice_resource()
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "SimpleUpdateStatus":
simple_update_status_result = rf_utils.get_simple_update_status()
if simple_update_status_result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
else:
del simple_update_status_result['ret']
result["simple_update_status"] = simple_update_status_result
module.exit_json(changed=False, redfish_facts=result)
if __name__ == '__main__':
main()


@@ -122,7 +122,9 @@ def run_module():
pass
except AttributeError:
pass
passphrase = _alternate_retrieval_method(module)
if passphrase is None:
passphrase = _alternate_retrieval_method(module)
if passphrase is not None:
result["msg"] = "Successfully retrieved password for %s@%s" % (


@@ -32,6 +32,13 @@ options:
- netstat
- ss
version_added: 4.1.0
include_non_listening:
description:
- Show both listening and non-listening sockets (for TCP this means established connections).
- Adds the return values C(state) and C(foreign_address) to the returned facts.
type: bool
default: false
version_added: 5.4.0
'''
EXAMPLES = r'''
@@ -59,6 +66,11 @@ EXAMPLES = r'''
- name: List all ports
ansible.builtin.debug:
msg: "{{ (ansible_facts.tcp_listen + ansible_facts.udp_listen) | map(attribute='port') | unique | sort | list }}"
- name: Gather facts on all ports and override which command to use
community.general.listen_ports_facts:
command: 'netstat'
include_non_listening: 'yes'
'''
RETURN = r'''
@@ -77,6 +89,18 @@ ansible_facts:
returned: always
type: str
sample: "0.0.0.0"
foreign_address:
description: The address of the remote end of the socket.
returned: if I(include_non_listening=true)
type: str
sample: "10.80.0.1"
version_added: 5.4.0
state:
description: The state of the socket.
returned: if I(include_non_listening=true)
type: str
sample: "ESTABLISHED"
version_added: 5.4.0
name:
description: The name of the listening process.
returned: if user permissions allow
@@ -117,6 +141,18 @@ ansible_facts:
returned: always
type: str
sample: "0.0.0.0"
foreign_address:
description: The address of the remote end of the socket.
returned: if I(include_non_listening=true)
type: str
sample: "10.80.0.1"
version_added: 5.4.0
state:
description: The state of the socket. UDP is a connectionless protocol. Shows UCONN or ESTAB.
returned: if I(include_non_listening=true)
type: str
sample: "UCONN"
version_added: 5.4.0
name:
description: The name of the listening process.
returned: if user permissions allow
@@ -155,47 +191,84 @@ from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.basic import AnsibleModule
def split_pid_name(pid_name):
"""
Split the entry PID/Program name into the PID (int) and the name (str)
:param pid_name: PID/Program string separated with a slash, e.g. 51/sshd: returns pid = 51 and name = sshd
:return: PID (int) and the program name (str)
"""
try:
pid, name = pid_name.split("/", 1)
except ValueError:
# likely unprivileged user, so add empty name & pid
return 0, ""
else:
name = name.rstrip(":")
return int(pid), name
def netStatParse(raw):
"""
The netstat result can be split into 6, 7 or 8 elements depending on the values of state, process and name.
For UDP the state is always empty. For UDP and TCP the process can be empty.
So these cases have to be checked.
:param raw: Netstat raw output String. First line explains the format, each following line contains a connection.
:return: List of dicts, each dict contains protocol, state, local address, foreign address, port, name, pid for one
connection.
"""
results = list()
for line in raw.splitlines():
listening_search = re.search('[^ ]+:[0-9]+', line)
if listening_search:
splitted = line.split()
conns = re.search('([^ ]+):([0-9]+)', splitted[3])
pidstr = ''
if 'tcp' in splitted[0]:
protocol = 'tcp'
pidstr = splitted[6]
elif 'udp' in splitted[0]:
protocol = 'udp'
pidstr = splitted[5]
pids = re.search(r'(([0-9]+)/(.*)|-)', pidstr)
if conns and pids:
address = conns.group(1)
port = conns.group(2)
if (pids.group(2)):
pid = pids.group(2)
else:
pid = 0
if (pids.group(3)):
name = pids.group(3)
else:
name = ''
result = {
'pid': int(pid),
'address': address,
'port': int(port),
'protocol': protocol,
'name': name,
}
if result not in results:
results.append(result)
if line.startswith(("tcp", "udp")):
# set variables to default state, in case they are not specified
state = ""
pid_and_name = ""
process = ""
formatted_line = line.split()
protocol, recv_q, send_q, address, foreign_address, rest = \
formatted_line[0], formatted_line[1], formatted_line[2], formatted_line[3], formatted_line[4], formatted_line[5:]
address, port = address.rsplit(":", 1)
if protocol.startswith("tcp"):
# netstat distinguishes between tcp6 and tcp
protocol = "tcp"
if len(rest) == 3:
state, pid_and_name, process = rest
if len(rest) == 2:
state, pid_and_name = rest
if protocol.startswith("udp"):
# safety measure, similar to tcp6
protocol = "udp"
if len(rest) == 2:
pid_and_name, process = rest
if len(rest) == 1:
pid_and_name = rest[0]
pid, name = split_pid_name(pid_name=pid_and_name)
result = {
'protocol': protocol,
'state': state,
'address': address,
'foreign_address': foreign_address,
'port': int(port),
'name': name,
'pid': int(pid),
}
if result not in results:
results.append(result)
else:
raise EnvironmentError('Could not get process information for the listening ports.')
return results
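A trimmed sketch of the field split used above, applied to one sample line (the sample values are made up; the field layout follows netstat's "-plunt" style output):

```python
def parse_netstat_line(line):
    # Fields: protocol, Recv-Q, Send-Q, local address:port, foreign address,
    # then (for TCP) state and PID/program name.
    fields = line.split()
    protocol, address_port, foreign_address = fields[0], fields[3], fields[4]
    address, port = address_port.rsplit(":", 1)
    state = fields[5] if protocol.startswith("tcp") and len(fields) > 5 else ""
    return {
        "protocol": "tcp" if protocol.startswith("tcp") else "udp",
        "address": address,
        "port": int(port),
        "foreign_address": foreign_address,
        "state": state,
    }
```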
def ss_parse(raw):
"""
The ss output can be split into either 6 or 7 elements depending on the process column,
e.g. due to unprivileged user.
:param raw: ss raw output String. First line explains the format, each following line contains a connection.
:return: List of dicts, each dict contains protocol, state, local address, foreign address, port, name, pid for one
connection.
"""
results = list()
regex_conns = re.compile(pattern=r'\[?(.+?)\]?:([0-9]+)$')
regex_pid = re.compile(pattern=r'"(.*?)",pid=(\d+)')
@@ -221,8 +294,8 @@ def ss_parse(raw):
except ValueError:
# unexpected stdout from ss
raise EnvironmentError(
'Expected `ss` table layout "Netid, State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port" and optionally "Process", \
but got something else: {0}'.format(line)
'Expected `ss` table layout "Netid, State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port" and \
optionally "Process", but got something else: {0}'.format(line)
)
conns = regex_conns.search(local_addr_port)
@@ -239,46 +312,44 @@ def ss_parse(raw):
port = conns.group(2)
for name, pid in pids:
result = {
'pid': int(pid),
'address': address,
'port': int(port),
'protocol': protocol,
'name': name
'state': state,
'address': address,
'foreign_address': peer_addr_port,
'port': int(port),
'name': name,
'pid': int(pid),
}
results.append(result)
return results
def main():
command_args = ['-p', '-l', '-u', '-n', '-t']
commands_map = {
'netstat': {
'args': [
'-p',
'-l',
'-u',
'-n',
'-t',
],
'args': [],
'parse_func': netStatParse
},
'ss': {
'args': [
'-p',
'-l',
'-u',
'-n',
'-t',
],
'args': [],
'parse_func': ss_parse
},
}
module = AnsibleModule(
argument_spec=dict(
command=dict(type='str', choices=list(sorted(commands_map)))
command=dict(type='str', choices=list(sorted(commands_map))),
include_non_listening=dict(default=False, type='bool'),
),
supports_check_mode=True,
)
if module.params['include_non_listening']:
command_args = ['-p', '-u', '-n', '-t', '-a']
commands_map['netstat']['args'] = command_args
commands_map['ss']['args'] = command_args
if platform.system() != 'Linux':
module.fail_json(msg='This module requires Linux.')
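The flag selection above can be sketched in one expression (flag order differs slightly from the module, which both tools tolerate; -p process, -u UDP, -n numeric, -t TCP, -l listening only, -a all sockets):

```python
def build_command_args(include_non_listening):
    # -a replaces -l so established connections are reported as well.
    return ['-p', '-u', '-n', '-t', '-a' if include_non_listening else '-l']
```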
@@ -333,13 +404,17 @@ def main():
parse_func = commands_map[command]['parse_func']
results = parse_func(stdout)
for p in results:
p['stime'] = getPidSTime(p['pid'])
p['user'] = getPidUser(p['pid'])
if p['protocol'].startswith('tcp'):
result['ansible_facts']['tcp_listen'].append(p)
elif p['protocol'].startswith('udp'):
result['ansible_facts']['udp_listen'].append(p)
for connection in results:
# only display state and foreign_address for include_non_listening.
if not module.params['include_non_listening']:
connection.pop('state', None)
connection.pop('foreign_address', None)
connection['stime'] = getPidSTime(connection['pid'])
connection['user'] = getPidUser(connection['pid'])
if connection['protocol'].startswith('tcp'):
result['ansible_facts']['tcp_listen'].append(connection)
elif connection['protocol'].startswith('udp'):
result['ansible_facts']['udp_listen'].append(connection)
except (KeyError, EnvironmentError) as e:
module.fail_json(msg=to_native(e))


@@ -143,16 +143,30 @@ RETURN = '''
returned: success
type: any
sample: '"96" or ["red", "blue", "green"]'
cmd:
description:
- A list with the resulting C(xfconf-query) command executed by the module.
returned: success
type: list
elements: str
version_added: 5.4.0
sample:
- /usr/bin/xfconf-query
- --channel
- xfce4-panel
- --property
- /plugins/plugin-19/timezone
- --create
- --type
- string
- --set
- Pacific/Auckland
'''
from ansible_collections.community.general.plugins.module_utils.module_helper import StateModuleHelper
from ansible_collections.community.general.plugins.module_utils.xfconf import xfconf_runner
class XFConfException(Exception):
pass
class XFConfProperty(StateModuleHelper):
change_params = 'value',
diff_params = 'value',
@@ -194,7 +208,7 @@ class XFConfProperty(StateModuleHelper):
if err.rstrip() == self.does_not:
return None
if rc or len(err):
raise XFConfException('xfconf-query failed with error (rc={0}): {1}'.format(rc, err))
self.do_raise('xfconf-query failed with error (rc={0}): {1}'.format(rc, err))
result = out.rstrip()
if "Value is an array with" in result:
@@ -211,6 +225,11 @@ class XFConfProperty(StateModuleHelper):
def state_absent(self):
with self.runner('channel property reset', check_mode_skip=True) as ctx:
ctx.run(reset=True)
self.vars.stdout = ctx.results_out
self.vars.stderr = ctx.results_err
self.vars.cmd = ctx.cmd
if self.verbosity >= 4:
self.vars.run_info = ctx.run_info
self.vars.value = None
def state_present(self):
@@ -227,7 +246,7 @@ class XFConfProperty(StateModuleHelper):
value_type = value_type * values_len
elif types_len != values_len:
# or complain if lists' lengths are different
raise XFConfException('Number of elements in "value" and "value_type" must be the same')
self.do_raise('Number of elements in "value" and "value_type" must be the same')
# calculates if it is an array
self.vars.is_array = \
@@ -237,6 +256,11 @@ class XFConfProperty(StateModuleHelper):
with self.runner('channel property create force_array values_and_types', check_mode_skip=True) as ctx:
ctx.run(create=True, force_array=self.vars.is_array, values_and_types=(self.vars.value, value_type))
self.vars.stdout = ctx.results_out
self.vars.stderr = ctx.results_err
self.vars.cmd = ctx.cmd
if self.verbosity >= 4:
self.vars.run_info = ctx.run_info
if not self.vars.is_array:
self.vars.value = self.vars.value[0]

View File
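The new C(cmd) return value documented above is the executed xfconf-query command as a list of argument strings. A hedged sketch of assembling such a list (build_xfconf_cmd is a hypothetical helper, not the module's API):

```python
# Hypothetical helper mirroring the documented `cmd` return value: the module
# reports the xfconf-query argv it executed as a list of strings.
def build_xfconf_cmd(channel, prop, value, value_type,
                     binary='/usr/bin/xfconf-query'):
    return [binary, '--channel', channel, '--property', prop,
            '--create', '--type', value_type, '--set', value]

cmd = build_xfconf_cmd('xfce4-panel', '/plugins/plugin-19/timezone',
                       'Pacific/Auckland', 'string')
print(cmd)
```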

@@ -9,7 +9,7 @@ __metaclass__ = type
DOCUMENTATION = '''
module: xfconf_info
author:
- "Alexei Znamensky (@russoz)"
- "Alexei Znamensky (@russoz)"
short_description: Retrieve XFCE4 configurations
version_added: 3.5.0
description:
@@ -61,8 +61,8 @@ EXAMPLES = """
RETURN = '''
channels:
description:
- List of available channels.
- Returned when the module receives no parameter at all.
- List of available channels.
- Returned when the module receives no parameter at all.
returned: success
type: list
elements: str
@@ -73,57 +73,53 @@ RETURN = '''
- xfwm4
properties:
description:
- List of available properties for a specific channel.
- Returned by passing only the I(channel) parameter to the module.
- List of available properties for a specific channel.
- Returned by passing only the I(channel) parameter to the module.
returned: success
type: list
elements: str
sample:
- /Gdk/WindowScalingFactor
- /Gtk/ButtonImages
- /Gtk/CursorThemeSize
- /Gtk/DecorationLayout
- /Gtk/FontName
- /Gtk/MenuImages
- /Gtk/MonospaceFontName
- /Net/DoubleClickTime
- /Net/IconThemeName
- /Net/ThemeName
- /Xft/Antialias
- /Xft/Hinting
- /Xft/HintStyle
- /Xft/RGBA
- /Gdk/WindowScalingFactor
- /Gtk/ButtonImages
- /Gtk/CursorThemeSize
- /Gtk/DecorationLayout
- /Gtk/FontName
- /Gtk/MenuImages
- /Gtk/MonospaceFontName
- /Net/DoubleClickTime
- /Net/IconThemeName
- /Net/ThemeName
- /Xft/Antialias
- /Xft/Hinting
- /Xft/HintStyle
- /Xft/RGBA
is_array:
description:
- Flag indicating whether the property is an array or not.
- Flag indicating whether the property is an array or not.
returned: success
type: bool
value:
description:
- The value of the property. Empty if the property is of array type.
- The value of the property. Empty if the property is of array type.
returned: success
type: str
sample: Monospace 10
value_array:
description:
- The array value of the property. Empty if the property is not of array type.
- The array value of the property. Empty if the property is not of array type.
returned: success
type: list
elements: str
sample:
- Main
- Work
- Tmp
- Main
- Work
- Tmp
'''
from ansible_collections.community.general.plugins.module_utils.module_helper import ModuleHelper
from ansible_collections.community.general.plugins.module_utils.xfconf import xfconf_runner
class XFConfException(Exception):
pass
class XFConfInfo(ModuleHelper):
module = dict(
argument_spec=dict(
@@ -170,8 +166,10 @@ class XFConfInfo(ModuleHelper):
elif self.vars.property is None:
output = 'properties'
proc = self._process_list_properties
with self.runner.context('list_arg channel property', output_process=proc) as ctx:
result = ctx.run(**self.vars)
if not self.vars.list_arg and self.vars.is_array:
output = "value_array"
self.vars.set(output, result)

View File

@@ -1,5 +1,15 @@
- import_tasks: setup.yml
- name: Set default environment
set_fact:
cargo_environment: {}
- name: Set special environment to work around cargo bugs
set_fact:
cargo_environment:
# See https://github.com/rust-lang/cargo/issues/10230#issuecomment-1201662729:
CARGO_NET_GIT_FETCH_WITH_CLI: "true"
when: has_cargo | default(false) and ansible_distribution == 'Alpine'
- block:
- import_tasks: test_general.yml
- import_tasks: test_version.yml
environment: "{{ cargo_environment }}"
when: has_cargo | default(false)

View File
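The cargo tasks above apply the CARGO_NET_GIT_FETCH_WITH_CLI workaround only when cargo is present and the host is Alpine. The conditional can be sketched as (cargo_env is a hypothetical helper, not part of the test role):

```python
# Hedged sketch of the task logic above: the special environment is only set
# on Alpine hosts that actually have cargo installed; everywhere else the
# environment stays empty.
def cargo_env(has_cargo, distribution):
    if has_cargo and distribution == 'Alpine':
        return {'CARGO_NET_GIT_FETCH_WITH_CLI': 'true'}
    return {}

print(cargo_env(True, 'Alpine'))
print(cargo_env(True, 'Debian'))
```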

@@ -6,5 +6,8 @@ set -eux
# Run connection tests with both the default and C locale.
ansible-playbook test_connection.yml -i "${INVENTORY}" "$@"
LC_ALL=C LANG=C ansible-playbook test_connection.yml -i "${INVENTORY}" "$@"
ansible-playbook test_connection.yml -i "${INVENTORY}" "$@"
if ansible --version | grep ansible | grep -E ' 2\.(9|10|11|12|13)\.'; then
LC_ALL=C LANG=C ansible-playbook test_connection.yml -i "${INVENTORY}" "$@"
fi

View File
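The version guard added to the script above uses grep -E to restrict the C-locale run to ansible-core 2.9 through 2.13. The same regex behaves as follows in Python (the sample version strings are assumptions, not real output):

```python
# The regex is copied verbatim from the shell script's grep -E guard.
import re

pattern = re.compile(r' 2\.(9|10|11|12|13)\.')
print(bool(pattern.search('ansible [core 2.13.1]')))  # True
print(bool(pattern.search('ansible [core 2.14.0]')))  # False
```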

@@ -158,3 +158,15 @@
that:
- search_deleted is skipped # each iteration is skipped
- search_deleted is not changed # and then unchanged
- name: ensure session can be created with a ttl
consul_session:
state: present
name: session-with-ttl
ttl: 180 # sec
register: result
- assert:
that:
- result is changed
- result['ttl'] == 180

View File

@@ -58,14 +58,23 @@
listen_ports_facts:
when: ansible_os_family == "RedHat" or ansible_os_family == "Debian"
- name: Gather listening ports facts explicitly via netstat
- name: check that the include_non_listening fields ('state' and 'foreign_address') are not present by default
assert:
that:
- ansible_facts.tcp_listen | selectattr('state', 'defined') | list | length == 0
- ansible_facts.tcp_listen | selectattr('foreign_address', 'defined') | list | length == 0
when: ansible_os_family == "RedHat" or ansible_os_family == "Debian"
- name: Gather listening ports facts explicitly via netstat and include_non_listening
listen_ports_facts:
command: 'netstat'
include_non_listening: 'yes'
when: (ansible_os_family == "RedHat" and ansible_distribution_major_version|int < 7) or ansible_os_family == "Debian"
- name: Gather listening ports facts explicitly via ss
- name: Gather listening ports facts explicitly via ss and include_non_listening
listen_ports_facts:
command: 'ss'
include_non_listening: 'yes'
when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int >= 7
- name: check that ansible_facts.udp_listen exists
@@ -78,6 +87,13 @@
that: ansible_facts.tcp_listen is defined
when: ansible_os_family == "RedHat" or ansible_os_family == "Debian"
- name: check that the include_non_listening fields 'state' and 'foreign_address' exist
assert:
that:
- ansible_facts.tcp_listen | selectattr('state', 'defined') | list | length > 0
- ansible_facts.tcp_listen | selectattr('foreign_address', 'defined') | list | length > 0
when: ansible_os_family == "RedHat" or ansible_os_family == "Debian"
- name: check TCP 5556 is in listening ports
assert:
that: 5556 in ansible_facts.tcp_listen | map(attribute='port') | sort | list

View File

@@ -31,6 +31,26 @@
disable_gpg_check: yes
when: ansible_facts.pkg_mgr in ['zypper', 'community.general.zypper']
# See https://github.com/gopasspw/gopass/issues/1849#issuecomment-802789285
- name: Install gopass on Debian
when: ansible_facts.os_family == 'Debian'
become: yes
block:
- name: Fetch gopass repo keyring
ansible.builtin.get_url:
url: https://packages.gopass.pw/repos/gopass/gopass-archive-keyring.gpg
dest: /usr/share/keyrings/gopass-archive-keyring.gpg
- name: Add gopass repo
ansible.builtin.apt_repository:
repo: "deb [arch=amd64,arm64,armhf \
signed-by=/usr/share/keyrings/gopass-archive-keyring.gpg] \
https://packages.gopass.pw/repos/gopass stable main"
state: present
- name: Update apt-cache and install gopass package
ansible.builtin.apt:
name: gopass
update_cache: yes
- name: Install on macOS
when: ansible_facts.distribution == 'MacOSX'
block:
@@ -48,6 +68,7 @@
name:
- gnupg2
- pass
- gopass
state: present
update_homebrew: no
become: yes

View File

@@ -0,0 +1,125 @@
- name: Create a password ({{ backend }})
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-pass length=8 create=yes', backend=backend) }}"
- name: Fetch password from an existing file ({{ backend }})
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-pass', backend=backend) }}"
- name: Verify password ({{ backend }})
assert:
that:
- readpass == newpass
- name: Create a password with equal sign ({{ backend }})
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-pass-equal userpass=SimpleSample= create=yes', backend=backend) }}"
- name: Fetch a password with equal sign ({{ backend }})
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-pass-equal', backend=backend) }}"
- name: Verify password ({{ backend }})
assert:
that:
- readpass == newpass
- name: Create a password using missing=create ({{ backend }})
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-missing-create missing=create length=8', backend=backend) }}"
- name: Fetch password from an existing file ({{ backend }})
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-missing-create', backend=backend) }}"
- name: Verify password ({{ backend }})
assert:
that:
- readpass == newpass
- name: Fetch password from existing file using missing=empty ({{ backend }})
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-missing-create missing=empty', backend=backend) }}"
- name: Verify password ({{ backend }})
assert:
that:
- readpass == newpass
- name: Fetch password from non-existing file using missing=empty ({{ backend }})
set_fact:
readpass: "{{ query('community.general.passwordstore', 'test-missing-pass missing=empty', backend=backend) }}"
- name: Verify password ({{ backend }})
assert:
that:
- readpass == [ none ]
- name: Create the YAML password ({{ backend }})
command: "{{ backend }} insert -m -f test-yaml-pass"
args:
stdin: |
testpassword
key: |
multi
line
- name: Fetch a password with YAML subkey ({{ backend }})
set_fact:
readyamlpass: "{{ lookup('community.general.passwordstore', 'test-yaml-pass subkey=key', backend=backend) }}"
- name: Read a yaml subkey ({{ backend }})
assert:
that:
- readyamlpass == 'multi\nline\n'
- name: Create a non-YAML multiline file ({{ backend }})
command: "{{ backend }} insert -m -f test-multiline-pass"
args:
stdin: |
testpassword
random additional line
- name: Fetch password from multiline file ({{ backend }})
set_fact:
readyamlpass: "{{ lookup('community.general.passwordstore', 'test-multiline-pass', backend=backend) }}"
- name: Multiline pass only returns first line ({{ backend }})
assert:
that:
- readyamlpass == 'testpassword'
- name: Fetch all from multiline file ({{ backend }})
set_fact:
readyamlpass: "{{ lookup('community.general.passwordstore', 'test-multiline-pass returnall=yes', backend=backend) }}"
- name: Multiline pass returnall returns everything in the file ({{ backend }})
assert:
that:
- readyamlpass == 'testpassword\nrandom additional line\n'
- name: Create a password in a folder ({{ backend }})
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'folder/test-pass length=8 create=yes', backend=backend) }}"
- name: Fetch password from folder ({{ backend }})
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'folder/test-pass', backend=backend) }}"
- name: Verify password from folder ({{ backend }})
assert:
that:
- readpass == newpass
- name: Try to read folder as passname ({{ backend }})
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'folder', backend=backend) }}"
ignore_errors: true
register: eval_error
- name: Make sure reading folder as passname failed ({{ backend }})
assert:
that:
- eval_error is failed
- '"passname folder not found" in eval_error.msg'
when: backend != "gopass" # Remove this line once gopass backend can handle this

View File
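The lookup terms exercised above pack options after the pass name, e.g. 'test-pass length=8 create=yes', and values may themselves contain '=' (the SimpleSample= case). A hedged sketch of such parsing (not the plugin's actual parser):

```python
# Sketch: split a passwordstore-style term into the pass name and its
# key=value options, splitting each option on the FIRST '=' only so that
# values ending in '=' survive intact.
def parse_term(term):
    parts = term.split(' ')
    name = parts[0]
    params = {}
    for part in parts[1:]:
        key, _, value = part.partition('=')
        params[key] = value
    return name, params

name, params = parse_term('test-pass-equal userpass=SimpleSample= create=yes')
print(name, params)
```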

@@ -41,12 +41,10 @@
- name: Try to find gopass in path
command: which gopass
register: result
ignore_errors: yes
- name: Store path of gopass executable
set_fact:
gopasspath: "{{ (result.rc == 0) |
ternary(result.stdout, (passpath | dirname, 'gopass') | path_join) }}"
gopasspath: "{{ result.stdout }}"
- name: Move original gopass into place if there was a leftover
command:
@@ -57,6 +55,18 @@
args:
removes: "{{ gopasspath }}.testorig"
- name: Get versions of tools
command: "{{ item }} --version"
register: versions
loop:
- "{{ gpg2_bin }}"
- pass
- gopass
- name: Output versions of tools
debug:
msg: "{{ versions.results | map(attribute='stdout_lines') }}"
# How to generate a new GPG key:
# gpg2 --batch --gen-key input # See templates/input
# gpg2 --list-secret-keys --keyid-format LONG
@@ -70,150 +80,22 @@
- name: Trust key
shell: echo "D3E1CC8934E97270CEB066023AF1BD3619AB496A:6:" | {{ gpg2_bin }} --import-ownertrust
- name: Initialise passwordstore
- name: Initialise pass passwordstore
command: pass init ansible-test
- name: Create a password
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-pass length=8 create=yes') }}"
- name: Initialise gopass passwordstore
command: gopass init --path $HOME/.gopass-store ansible-test
args:
creates: "{{ lookup('env','HOME') }}/.gopass-store"
- name: Fetch password from an existing file
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-pass') }}"
- name: Verify password
assert:
that:
- readpass == newpass
- name: Create a password with equal sign
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-pass-equal userpass=SimpleSample= create=yes') }}"
- name: Fetch a password with equal sign
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-pass-equal') }}"
- name: Verify password
assert:
that:
- readpass == newpass
- name: Create a password using missing=create
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'test-missing-create missing=create length=8') }}"
- name: Fetch password from an existing file
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-missing-create') }}"
- name: Verify password
assert:
that:
- readpass == newpass
- name: Fetch password from existing file using missing=empty
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'test-missing-create missing=empty') }}"
- name: Verify password
assert:
that:
- readpass == newpass
- name: Fetch password from non-existing file using missing=empty
set_fact:
readpass: "{{ query('community.general.passwordstore', 'test-missing-pass missing=empty') }}"
- name: Verify password
assert:
that:
- readpass == [ none ]
# As inserting multiline passwords on the commandline would require something
# like expect, simply create it by using default gpg on a file with the correct
# structure.
- name: Create the YAML password content
copy:
dest: "~/.password-store/test-yaml-pass"
content: |
testpassword
key: |
multi
line
- name: Read .gpg-id from .password-store
set_fact:
gpgid: "{{ lookup('file', '~/.password-store/.gpg-id') }}"
- name: Encrypt the file using the gpg key
command: "{{ gpg2_bin }} --batch --encrypt -r {{ gpgid }} ~/.password-store/test-yaml-pass"
- name: Fetch a password with YAML subkey
set_fact:
readyamlpass: "{{ lookup('community.general.passwordstore', 'test-yaml-pass subkey=key') }}"
- name: Read a yaml subkey
assert:
that:
- readyamlpass == 'multi\nline'
- name: Create a non-YAML multiline file
copy:
dest: "~/.password-store/test-multiline-pass"
content: |
testpassword
random additional line
- name: Read .gpg-id from .password-store
set_fact:
gpgid: "{{ lookup('file', '~/.password-store/.gpg-id') }}"
- name: Encrypt the file using the gpg key
command: "{{ gpg2_bin }} --batch --encrypt -r {{ gpgid }} ~/.password-store/test-multiline-pass"
- name: Fetch password from multiline file
set_fact:
readyamlpass: "{{ lookup('community.general.passwordstore', 'test-multiline-pass') }}"
- name: Multiline pass only returns first line
assert:
that:
- readyamlpass == 'testpassword'
- name: Fetch all from multiline file
set_fact:
readyamlpass: "{{ lookup('community.general.passwordstore', 'test-multiline-pass returnall=yes') }}"
- name: Multiline pass returnall returns everything in the file
assert:
that:
- readyamlpass == 'testpassword\nrandom additional line'
- name: Create a password in a folder
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'folder/test-pass length=8 create=yes') }}"
- name: Fetch password from folder
set_fact:
readpass: "{{ lookup('community.general.passwordstore', 'folder/test-pass') }}"
- name: Verify password from folder
assert:
that:
- readpass == newpass
- name: Try to read folder as passname
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'folder') }}"
ignore_errors: true
register: eval_error
- name: Make sure reading folder as passname failed
assert:
that:
- eval_error is failed
- '"passname folder not found" in eval_error.msg'
# these tests should apply to all backends
- name: Password tests
include_tasks: password_tests.yml
loop:
- pass
- gopass
loop_control:
loop_var: backend
- name: Change passwordstore location explicitly
set_fact:
@@ -289,11 +171,13 @@
args:
removes: "{{ passpath }}.testorig"
- name: Very basic gopass compatibility test
# These are in addition to the real gopass tests above
# and verify plugin logic
- name: gopass plugin logic tests
vars:
passwordstore_backend: "gopass"
block:
- name: check if gopass executable exists
- name: Check if gopass executable exists
stat:
path: "{{ gopasspath }}"
register: gopass_check
@@ -318,18 +202,15 @@
if [ "$1" = "--version" ]; then
exit 2
fi
if [ "$1" = "show" ] && [ "$2" != "--password" ]; then
exit 3
fi
echo "gopass_ok"
dest: "{{ gopasspath }}"
mode: '0755'
- name: Try to read folder as passname using gopass
- name: Try to read folder as passname using gopass mock
set_fact:
newpass: "{{ lookup('community.general.passwordstore', 'folder') }}"
- name: Verify password received from gopass
- name: Verify password received from gopass mock
assert:
that:
- newpass == "gopass_ok"

View File

@@ -1,2 +1,3 @@
passwordstore_packages:
- gopass
- pass

View File

@@ -1,2 +1,3 @@
passwordstore_packages:
- gopass
- pass

View File

@@ -1,2 +1,3 @@
passwordstore_packages:
- gopass
- pass

View File

@@ -1,3 +1,4 @@
passwordstore_packages:
- gopass
- gnupg
- password-store

View File

@@ -1,2 +1,3 @@
dependencies:
- setup_pkg_mgr
- setup_remote_constraints

View File

@@ -62,6 +62,7 @@
pip:
name: "{{ item }}"
virtualenv: "{{ process_venv }}"
extra_args: "-c {{ remote_constraints }}"
loop:
- setuptools==44
- python-daemon

View File

@@ -12,3 +12,4 @@
- include: 'remove_nosave.yml'
- include: 'update_cache.yml'
- include: 'locally_installed_package.yml'
- include: 'reason.yml'

View File

@@ -1,9 +1,13 @@
---
- vars:
http_port: 27617
reg_pkg: ed
url_pkg: lemon
url_pkg_filename: url.pkg.zst
url_pkg_path: '/tmp/'
url_pkg_url: 'http://localhost:{{http_port}}/{{url_pkg_filename}}'
file_pkg: hdparm
file_pkg_path: /tmp/pkg.zst
file_pkg_path: /tmp/file.pkg.zst
extra_pkg: core/sdparm
extra_pkg_outfmt: sdparm
block:
@@ -15,11 +19,33 @@
- '{{file_pkg}}'
- '{{extra_pkg}}'
state: absent
- name: Make sure that url package is not cached
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Get URL for {{url_pkg}}
command:
cmd: pacman --sync --print-format "%l" {{url_pkg}}
register: url_pkg_url
register: url_pkg_stdout
- name: Download {{url_pkg}} pkg
get_url:
url: '{{url_pkg_stdout.stdout}}'
dest: '{{url_pkg_path}}/{{url_pkg_filename}}'
- name: Download {{url_pkg}} pkg sig
get_url:
url: '{{url_pkg_stdout.stdout}}.sig'
dest: '{{url_pkg_path}}/{{url_pkg_filename}}.sig'
- name: Host {{url_pkg}}
shell:
cmd: 'python -m http.server --directory {{url_pkg_path}} {{http_port}} >/dev/null 2>&1'
async: 90
poll: 0
- name: Wait for http.server to come up online
wait_for:
host: 'localhost'
port: '{{http_port}}'
state: started
- name: Get URL for {{file_pkg}}
command:
@@ -34,26 +60,50 @@
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
check_mode: True
register: install_1
- name: Install packages from url (check mode, cached)
pacman:
name:
- '{{url_pkg_url}}'
check_mode: True
register: install_1c
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Install packages from mixed sources
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
register: install_2
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Install packages from mixed sources - (idempotency)
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
register: install_3
- name: Install packages from url - (idempotency, cached)
pacman:
name:
- '{{url_pkg_url}}'
register: install_3c
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Install packages with their regular names (idempotency)
pacman:
@@ -62,54 +112,89 @@
- '{{url_pkg}}'
- '{{file_pkg}}'
register: install_4
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Install new package with already installed packages from mixed sources
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
- '{{extra_pkg}}'
register: install_5
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Uninstall packages - mixed sources (check mode)
pacman:
state: absent
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
check_mode: True
register: uninstall_1
- name: Uninstall packages - url (check mode, cached)
pacman:
state: absent
name:
- '{{url_pkg_url}}'
check_mode: True
register: uninstall_1c
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Uninstall packages - mixed sources
pacman:
state: absent
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
register: uninstall_2
- name: Delete cached {{url_pkg}}
file:
path: '/var/cache/pacman/pkg/{{url_pkg_filename}}'
state: absent
- name: Uninstall packages - mixed sources (idempotency)
pacman:
state: absent
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{url_pkg_url}}'
- '{{file_pkg_path}}'
register: uninstall_3
- name: Uninstall package - url (idempotency, cached)
pacman:
state: absent
name:
- '{{url_pkg_url}}'
register: uninstall_3c
- assert:
that:
- install_1 is changed
- install_1.msg == 'Would have installed 3 packages'
- install_1.packages|sort() == [reg_pkg, url_pkg, file_pkg]|sort()
- install_1c is changed
- install_1c.msg == 'Would have installed 1 packages'
- install_1c.packages|sort() == [url_pkg]
- install_2 is changed
- install_2.msg == 'Installed 3 package(s)'
- install_1.packages|sort() == [reg_pkg, url_pkg, file_pkg]|sort()
- install_2.packages|sort() == [reg_pkg, url_pkg, file_pkg]|sort()
- install_3 is not changed
- install_3.msg == 'package(s) already installed'
- install_3c is not changed
- install_3c.msg == 'package(s) already installed'
- install_4 is not changed
- install_4.msg == 'package(s) already installed'
- install_5 is changed
@@ -118,8 +203,13 @@
- uninstall_1 is changed
- uninstall_1.msg == 'Would have removed 3 packages'
- uninstall_1.packages | length() == 3 # pkgs have versions here
- uninstall_1c is changed
- uninstall_1c.msg == 'Would have removed 1 packages'
- uninstall_1c.packages | length() == 1 # pkgs have versions here
- uninstall_2 is changed
- uninstall_2.msg == 'Removed 3 package(s)'
- uninstall_2.packages | length() == 3
- uninstall_3 is not changed
- uninstall_3.msg == 'package(s) already absent'
- uninstall_3c is not changed
- uninstall_3c.msg == 'package(s) already absent'

View File
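The repeated "Delete cached" tasks above rely on pacman caching a URL-installed package under /var/cache/pacman/pkg/ by the URL's basename. That assumption can be sketched as:

```python
# Sketch of the caching assumption the tests above exercise: the cached file
# name is the basename of the download URL inside pacman's package cache.
import posixpath

def cached_pkg_path(url, cache_dir='/var/cache/pacman/pkg'):
    return posixpath.join(cache_dir, posixpath.basename(url))

print(cached_pkg_path('http://localhost:27617/url.pkg.zst'))
```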

@@ -0,0 +1,97 @@
---
- vars:
reg_pkg: ed
url_pkg: lemon
file_pkg: hdparm
file_pkg_path: /tmp/pkg.zst
extra_pkg: core/sdparm
extra_pkg_outfmt: sdparm
block:
- name: Make sure that test packages are not installed
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg}}'
- '{{file_pkg}}'
- '{{extra_pkg}}'
state: absent
- name: Get URL for {{url_pkg}}
command:
cmd: pacman --sync --print-format "%l" {{url_pkg}}
register: url_pkg_url
- name: Get URL for {{file_pkg}}
command:
cmd: pacman --sync --print-format "%l" {{file_pkg}}
register: file_pkg_url
- name: Download {{file_pkg}} pkg
get_url:
url: '{{file_pkg_url.stdout}}'
dest: '{{file_pkg_path}}'
- name: Install packages from mixed sources as dependency (check mode)
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{file_pkg_path}}'
reason: dependency
check_mode: True
register: install_1
- name: Install packages from mixed sources as explicit
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{file_pkg_path}}'
reason: explicit
register: install_2
- name: Install packages from mixed sources with new packages being installed as dependency - (idempotency)
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{file_pkg_path}}'
reason: dependency
register: install_3
- name: Install new package with already installed packages from mixed sources as dependency
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{file_pkg_path}}'
- '{{extra_pkg}}'
reason: dependency
register: install_4
- name: Set install reason for all packages to dependency
pacman:
name:
- '{{reg_pkg}}'
- '{{url_pkg_url.stdout}}'
- '{{file_pkg_path}}'
- '{{extra_pkg}}'
reason: dependency
reason_for: all
register: install_5
- assert:
that:
- install_1 is changed
- install_1.msg == 'Would have installed 3 packages'
- install_1.packages|sort() == [reg_pkg, url_pkg, file_pkg]|sort()
- install_2 is changed
- install_2.msg == 'Installed 3 package(s)'
- install_2.packages|sort() == [reg_pkg, url_pkg, file_pkg]|sort()
- install_3 is not changed
- install_3.msg == 'package(s) already installed'
- install_4 is changed
- install_4.msg == 'Installed 1 package(s)'
- install_4.packages == [extra_pkg_outfmt]
- install_5 is changed
- install_5.msg == 'Installed 3 package(s)'
- install_5.packages|sort() == [reg_pkg, url_pkg, file_pkg]|sort()

View File

@@ -4,3 +4,4 @@ skip/freebsd
skip/osx
skip/macos
skip/rhel8.4 # TODO make sure that tests work on 8.4 as well!
disabled # TODO

View File

@@ -13,7 +13,7 @@ plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-in
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc # unused param - removed in 6.0.0
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter

View File

@@ -8,7 +8,7 @@ plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-in
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc # unused param - removed in 6.0.0
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter

View File

@@ -8,7 +8,7 @@ plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-in
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc # unused param - removed in 6.0.0
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter

View File

@@ -9,7 +9,7 @@ plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-in
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc # unused param - removed in 6.0.0
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter

View File

@@ -0,0 +1,161 @@
# -*- coding: utf-8 -*-
# (c) 2022, Jonathan Lung <lungj@heresjono.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from argparse import ArgumentParser
from ansible_collections.community.general.tests.unit.compat import unittest
from ansible_collections.community.general.tests.unit.compat.mock import patch
from ansible.errors import AnsibleError
from ansible.module_utils import six
from ansible.plugins.loader import lookup_loader
from ansible_collections.community.general.plugins.lookup.bitwarden import LookupModule, Bitwarden, BitwardenException
MOCK_RECORDS = [
{
"collectionIds": [],
"deletedDate": None,
"favorite": False,
"fields": [
{
"linkedId": None,
"name": "a_new_secret",
"type": 1,
"value": "this is a new secret"
},
{
"linkedId": None,
"name": "not so secret",
"type": 0,
"value": "not secret"
}
],
"folderId": "3b12a9da-7c49-40b8-ad33-aede017a7ead",
"id": "90992f63-ddb6-4e76-8bfc-aede016ca5eb",
"login": {
"password": "passwordA3",
"passwordRevisionDate": "2022-07-26T23:03:23.399Z",
"totp": None,
"username": "userA"
},
"name": "a_test",
"notes": None,
"object": "item",
"organizationId": None,
"passwordHistory": [
{
"lastUsedDate": "2022-07-26T23:03:23.405Z",
"password": "a_new_secret: this is secret"
},
{
"lastUsedDate": "2022-07-26T23:03:23.399Z",
"password": "passwordA2"
},
{
"lastUsedDate": "2022-07-26T22:59:52.885Z",
"password": "passwordA"
}
],
"reprompt": 0,
"revisionDate": "2022-07-26T23:03:23.743Z",
"type": 1
},
{
"collectionIds": [],
"deletedDate": None,
"favorite": False,
"folderId": None,
"id": "5ebd4d31-104c-49fc-a09c-aedf003d28ad",
"login": {
"password": "b",
"passwordRevisionDate": None,
"totp": None,
"username": "a"
},
"name": "dupe_name",
"notes": None,
"object": "item",
"organizationId": None,
"reprompt": 0,
"revisionDate": "2022-07-27T03:42:40.353Z",
"type": 1
},
{
"collectionIds": [],
"deletedDate": None,
"favorite": False,
"folderId": None,
"id": "90657653-6695-496d-9431-aedf003d3015",
"login": {
"password": "d",
"passwordRevisionDate": None,
"totp": None,
"username": "c"
},
"name": "dupe_name",
"notes": None,
"object": "item",
"organizationId": None,
"reprompt": 0,
"revisionDate": "2022-07-27T03:42:46.673Z",
"type": 1
}
]
class MockBitwarden(Bitwarden):
logged_in = True
def _get_matches(self, search_value, search_field="name"):
return list(filter(lambda record: record[search_field] == search_value, MOCK_RECORDS))
class LoggedOutMockBitwarden(MockBitwarden):
logged_in = False
class TestLookupModule(unittest.TestCase):
def setUp(self):
self.lookup = lookup_loader.get('community.general.bitwarden')
@patch('ansible_collections.community.general.plugins.lookup.bitwarden._bitwarden', new=MockBitwarden())
def test_bitwarden_plugin_no_match(self):
# A name that matches no record should produce an empty result list.
self.assertEqual([], self.lookup.run(['not_here'], field='password')[0])
@patch('ansible_collections.community.general.plugins.lookup.bitwarden._bitwarden', new=MockBitwarden())
def test_bitwarden_plugin_fields(self):
# Every field of entry 0, "a_test", should be retrievable by name.
record = MOCK_RECORDS[0]
record_name = record['name']
for k, v in six.iteritems(record['login']):
self.assertEqual([v],
self.lookup.run([record_name], field=k)[0])
@patch('ansible_collections.community.general.plugins.lookup.bitwarden._bitwarden', new=MockBitwarden())
def test_bitwarden_plugin_duplicates(self):
# There are two records with name dupe_name; we need to be order-insensitive when
# checking what was retrieved.
self.assertEqual(set(['b', 'd']),
set(self.lookup.run(['dupe_name'], field='password')[0]))
@patch('ansible_collections.community.general.plugins.lookup.bitwarden._bitwarden', new=MockBitwarden())
def test_bitwarden_plugin_full_item(self):
# Try to retrieve the full record of the first entry where the name is "a_test".
self.assertEqual([MOCK_RECORDS[0]],
self.lookup.run(['a_test'])[0])
@patch('ansible_collections.community.general.plugins.lookup.bitwarden._bitwarden', LoggedOutMockBitwarden())
def test_bitwarden_plugin_logged_out(self):
record = MOCK_RECORDS[0]
record_name = record['name']
with self.assertRaises(AnsibleError):
self.lookup.run([record_name], field='password')
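The mock's `_get_matches` above reduces the plugin's search to simple field equality over a record list. A minimal standalone sketch of that matching behavior (the helper name and the trimmed record shapes are illustrative stand-ins for `MOCK_RECORDS`, not the plugin's API):

```python
# Sketch of the field-equality matching that MockBitwarden._get_matches performs.
def get_matches(records, search_value, search_field="name"):
    """Return every record whose search_field equals search_value."""
    return [record for record in records if record.get(search_field) == search_value]

records = [
    {"name": "a_test", "login": {"username": "userA", "password": "passwordA3"}},
    {"name": "dupe_name", "login": {"username": "a", "password": "b"}},
    {"name": "dupe_name", "login": {"username": "c", "password": "d"}},
]

no_match = get_matches(records, "not_here")   # no record named "not_here"
dupes = get_matches(records, "dupe_name")     # both dupe_name records
```

As in the duplicate test above, every match is returned, so callers must compare results order-insensitively.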


@@ -26,6 +26,7 @@ from ansible_collections.community.general.tests.unit.compat.mock import patch
from ansible.errors import AnsibleError
from ansible.module_utils import six
from ansible.plugins.loader import lookup_loader
from ansible_collections.community.general.plugins.lookup.lastpass import LookupModule, LPass, LPassException
@@ -126,6 +127,9 @@ class LoggedOutMockLPass(MockLPass):
class TestLPass(unittest.TestCase):
def setUp(self):
self.lookup = lookup_loader.get('community.general.lastpass')
def test_lastpass_cli_path(self):
lp = MockLPass(path='/dev/null')
self.assertEqual('/dev/null', lp.cli_path)
@@ -158,30 +162,27 @@ class TestLPass(unittest.TestCase):
class TestLastpassPlugin(unittest.TestCase):
def setUp(self):
self.lookup = lookup_loader.get('community.general.lastpass')
@patch('ansible_collections.community.general.plugins.lookup.lastpass.LPass', new=MockLPass)
def test_lastpass_plugin_normal(self):
for entry in MOCK_ENTRIES:
entry_id = entry.get('id')
for k, v in six.iteritems(entry):
self.assertEqual(v.strip(),
self.lookup.run([entry_id], field=k)[0])
@patch('ansible_collections.community.general.plugins.lookup.lastpass.LPass', LoggedOutMockLPass)
def test_lastpass_plugin_logged_out(self):
entry = MOCK_ENTRIES[0]
entry_id = entry.get('id')
with self.assertRaises(AnsibleError):
self.lookup.run([entry_id], field='password')
@patch('ansible_collections.community.general.plugins.lookup.lastpass.LPass', DisconnectedMockLPass)
def test_lastpass_plugin_disconnected(self):
entry = MOCK_ENTRIES[0]
entry_id = entry.get('id')
with self.assertRaises(AnsibleError):
self.lookup.run([entry_id], field='password')


@@ -100,6 +100,19 @@ valid_inventory = {
"upgradable_pkgs": {
"sqlite": VersionTuple(current="3.36.0-1", latest="3.37.0-1"),
},
"pkg_reasons": {
"file": "explicit",
"filesystem": "explicit",
"findutils": "explicit",
"gawk": "explicit",
"gettext": "explicit",
"grep": "explicit",
"gzip": "explicit",
"pacman": "explicit",
"pacman-mirrorlist": "dependency",
"sed": "explicit",
"sqlite": "explicit",
},
}
empty_inventory = {
@@ -108,6 +121,7 @@ empty_inventory = {
"installed_groups": {},
"available_groups": {},
"upgradable_pkgs": {},
"pkg_reasons": {},
}
@@ -255,6 +269,27 @@ class TestPacman:
""",
"",
),
( # pacman --query --explicit
0,
"""file 5.41-1
filesystem 2021.11.11-1
findutils 4.8.0-1
gawk 5.1.1-1
gettext 0.21-1
grep 3.7-1
gzip 1.11-1
pacman 6.0.1-2
sed 4.8-1
sqlite 3.36.0-1
""",
"",
),
( # pacman --query --deps
0,
"""pacman-mirrorlist 20211114-1
""",
"",
),
],
None,
),
@@ -272,6 +307,8 @@ class TestPacman:
"",
"warning: config file /etc/pacman.conf, line 34: directive 'TotalDownload' in section 'options' not recognized.",
),
(0, "", ""),
(0, "", ""),
],
None,
),
@@ -288,6 +325,8 @@ class TestPacman:
"partial\npkg\\nlist",
"some warning",
),
(0, "", ""),
(0, "", ""),
],
AnsibleFailJson,
),
@@ -375,6 +414,8 @@ class TestPacman:
(["pacman", "--query", "--groups"], {'check_rc': True}, 0, '', ''),
(["pacman", "--sync", "--groups", "--groups"], {'check_rc': True}, 0, '', ''),
(["pacman", "--query", "--upgrades"], {'check_rc': False}, 0, '', ''),
(["pacman", "--query", "--explicit"], {'check_rc': True}, 0, 'foo 1.0.0-1', ''),
(["pacman", "--query", "--deps"], {'check_rc': True}, 0, '', ''),
],
False,
),
@@ -843,7 +884,7 @@ class TestPacman:
],
"state": "present",
},
["otherpkg", "somepackage", "sudo"],
[
Package("sudo", "sudo"),
Package("grep", "grep"),
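The new fixtures pair the stdout of `pacman --query --explicit` and `pacman --query --deps` with the `pkg_reasons` map checked above. A hedged sketch of how such output could be folded into that map (the parsing helper is illustrative, not the module's actual code):

```python
def parse_pkg_reasons(explicit_stdout, deps_stdout):
    """Map each package name to its install reason, based on which query listed it."""
    reasons = {}
    for stdout, reason in ((explicit_stdout, "explicit"), (deps_stdout, "dependency")):
        for line in stdout.splitlines():
            if line.strip():
                # Each line looks like "file 5.41-1": name first, version second.
                reasons[line.split()[0]] = reason
    return reasons

explicit_out = "file 5.41-1\nsqlite 3.36.0-1\n"
deps_out = "pacman-mirrorlist 20211114-1\n"
pkg_reasons = parse_pkg_reasons(explicit_out, deps_out)
```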


@@ -0,0 +1,733 @@
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import shutil
import uuid
import tarfile
import tempfile
import os
from ansible_collections.community.general.tests.unit.compat.mock import patch
from ansible_collections.community.general.tests.unit.compat import unittest
from ansible.module_utils import basic
import ansible_collections.community.general.plugins.modules.remote_management.redfish.wdc_redfish_command as module
from ansible_collections.community.general.tests.unit.plugins.modules.utils import AnsibleExitJson, AnsibleFailJson
from ansible_collections.community.general.tests.unit.plugins.modules.utils import set_module_args, exit_json, fail_json
MOCK_SUCCESSFUL_HTTP_EMPTY_RESPONSE = {
"ret": True,
"data": {
}
}
MOCK_GET_ENCLOSURE_RESPONSE_SINGLE_TENANT = {
"ret": True,
"data": {
"SerialNumber": "12345"
}
}
MOCK_GET_ENCLOSURE_RESPONSE_MULTI_TENANT = {
"ret": True,
"data": {
"SerialNumber": "12345-A"
}
}
MOCK_URL_ERROR = {
"ret": False,
"msg": "This is a mock URL error",
"status": 500
}
MOCK_SUCCESSFUL_RESPONSE_WITH_UPDATE_SERVICE_RESOURCE = {
"ret": True,
"data": {
"UpdateService": {
"@odata.id": "/UpdateService"
}
}
}
MOCK_SUCCESSFUL_RESPONSE_WITH_SIMPLE_UPDATE_AND_FW_ACTIVATE = {
"ret": True,
"data": {
"Actions": {
"#UpdateService.SimpleUpdate": {
"target": "mocked value"
},
"Oem": {
"WDC": {
"#UpdateService.FWActivate": {
"title": "Activate the downloaded firmware.",
"target": "/redfish/v1/UpdateService/Actions/UpdateService.FWActivate"
}
}
}
}
}
}
MOCK_SUCCESSFUL_RESPONSE_WITH_ACTIONS = {
"ret": True,
"data": {
"Actions": {}
}
}
MOCK_GET_IOM_A_MULTI_TENANT = {
"ret": True,
"data": {
"Id": "IOModuleAFRU"
}
}
MOCK_GET_IOM_B_MULTI_TENANAT = {
"ret": True,
"data": {
"error": {
"message": "IOM Module B cannot be read"
}
}
}
MOCK_READY_FOR_FW_UPDATE = {
"ret": True,
"entries": {
"Description": "Ready for FW update",
"StatusCode": 0
}
}
MOCK_FW_UPDATE_IN_PROGRESS = {
"ret": True,
"entries": {
"Description": "FW update in progress",
"StatusCode": 1
}
}
MOCK_WAITING_FOR_ACTIVATION = {
"ret": True,
"entries": {
"Description": "FW update completed. Waiting for activation.",
"StatusCode": 2
}
}
MOCK_SIMPLE_UPDATE_STATUS_LIST = [
MOCK_READY_FOR_FW_UPDATE,
MOCK_FW_UPDATE_IN_PROGRESS,
MOCK_WAITING_FOR_ACTIVATION
]
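`MOCK_SIMPLE_UPDATE_STATUS_LIST` is consumed later via a mock's `side_effect`, so each successive call to the patched `get_simple_update_status` walks the simulated device through its update states. A minimal illustration of that pattern:

```python
from unittest.mock import MagicMock

# Successive calls return successive entries, mimicking a device progressing
# through Ready -> In progress -> Waiting for activation.
statuses = [
    {"StatusCode": 0, "Description": "Ready for FW update"},
    {"StatusCode": 1, "Description": "FW update in progress"},
    {"StatusCode": 2, "Description": "FW update completed. Waiting for activation."},
]
get_status = MagicMock(side_effect=statuses)
seen = [get_status()["StatusCode"] for _ in statuses]
```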
def get_bin_path(self, arg, required=False):
"""Mock AnsibleModule.get_bin_path"""
return arg
def get_exception_message(ansible_exit_json):
"""From an AnsibleExitJson exception, get the message string."""
return ansible_exit_json.exception.args[0]["msg"]
def is_changed(ansible_exit_json):
"""From an AnsibleExitJson exception, return the value of the changed flag"""
return ansible_exit_json.exception.args[0]["changed"]
def mock_simple_update(*args, **kwargs):
return {
"ret": True
}
def mocked_url_response(*args, **kwargs):
"""Mock to just return a generic string."""
return "/mockedUrl"
def mock_update_url(*args, **kwargs):
"""Mock of the update url"""
return "/UpdateService"
def mock_fw_activate_url(*args, **kwargs):
"""Mock of the FW Activate URL"""
return "/UpdateService.FWActivate"
def empty_return(*args, **kwargs):
"""Mock to just return an empty successful return."""
return {"ret": True}
def mock_get_simple_update_status_ready_for_fw_update(*args, **kwargs):
"""Mock to return simple update status Ready for FW update"""
return MOCK_READY_FOR_FW_UPDATE
def mock_get_request_enclosure_single_tenant(*args, **kwargs):
"""Mock for get_request for single-tenant enclosure."""
if args[1].endswith("/redfish/v1") or args[1].endswith("/redfish/v1/"):
return MOCK_SUCCESSFUL_RESPONSE_WITH_UPDATE_SERVICE_RESOURCE
elif args[1].endswith("/mockedUrl"):
return MOCK_SUCCESSFUL_HTTP_EMPTY_RESPONSE
elif args[1].endswith("Chassis/Enclosure"):
return MOCK_GET_ENCLOSURE_RESPONSE_SINGLE_TENANT
elif args[1].endswith("/UpdateService"):
return MOCK_SUCCESSFUL_RESPONSE_WITH_SIMPLE_UPDATE_AND_FW_ACTIVATE
else:
raise RuntimeError("Illegal call to get_request in test: " + args[1])
def mock_get_request_enclosure_multi_tenant(*args, **kwargs):
"""Mock for get_request with multi-tenant enclosure."""
if args[1].endswith("/redfish/v1") or args[1].endswith("/redfish/v1/"):
return MOCK_SUCCESSFUL_RESPONSE_WITH_UPDATE_SERVICE_RESOURCE
elif args[1].endswith("/mockedUrl"):
return MOCK_SUCCESSFUL_HTTP_EMPTY_RESPONSE
elif args[1].endswith("Chassis/Enclosure"):
return MOCK_GET_ENCLOSURE_RESPONSE_MULTI_TENANT
elif args[1].endswith("/UpdateService"):
return MOCK_SUCCESSFUL_RESPONSE_WITH_SIMPLE_UPDATE_AND_FW_ACTIVATE
elif args[1].endswith("/IOModuleAFRU"):
return MOCK_GET_IOM_A_MULTI_TENANT
elif args[1].endswith("/IOModuleBFRU"):
return MOCK_GET_IOM_B_MULTI_TENANAT
else:
raise RuntimeError("Illegal call to get_request in test: " + args[1])
def mock_post_request(*args, **kwargs):
"""Mock post_request with successful response."""
if args[1].endswith("/UpdateService.FWActivate"):
return {
"ret": True,
"data": ACTION_WAS_SUCCESSFUL_MESSAGE
}
else:
raise RuntimeError("Illegal POST call to: " + args[1])
def mock_get_firmware_inventory_version_1_2_3(*args, **kwargs):
return {
"ret": True,
"entries": [
{
"Id": "IOModuleA_OOBM",
"Version": "1.2.3"
},
{
"Id": "IOModuleB_OOBM",
"Version": "1.2.3"
}
]
}
ERROR_MESSAGE_UNABLE_TO_EXTRACT_BUNDLE_VERSION = "Unable to extract bundle version or multi-tenant status from update image tarfile"
ACTION_WAS_SUCCESSFUL_MESSAGE = "Action was successful"
class TestWdcRedfishCommand(unittest.TestCase):
def setUp(self):
self.mock_module_helper = patch.multiple(basic.AnsibleModule,
exit_json=exit_json,
fail_json=fail_json,
get_bin_path=get_bin_path)
self.mock_module_helper.start()
self.addCleanup(self.mock_module_helper.stop)
self.tempdir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.tempdir)
def test_module_fail_when_required_args_missing(self):
with self.assertRaises(AnsibleFailJson):
set_module_args({})
module.main()
def test_module_fail_when_unknown_category(self):
with self.assertRaises(AnsibleFailJson):
set_module_args({
'category': 'unknown',
'command': 'FWActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': [],
})
module.main()
def test_module_fail_when_unknown_command(self):
with self.assertRaises(AnsibleFailJson):
set_module_args({
'category': 'Update',
'command': 'unknown',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': [],
})
module.main()
def test_module_fw_activate_first_iom_unavailable(self):
"""Test that if the first IOM is not available, the 2nd one is used."""
ioms = [
"bad.example.com",
"good.example.com"
]
module_args = {
'category': 'Update',
'command': 'FWActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ioms
}
set_module_args(module_args)
def mock_get_request(*args, **kwargs):
"""Mock for get_request that will fail on the 'bad' IOM."""
if "bad.example.com" in args[1]:
return MOCK_URL_ERROR
else:
return mock_get_request_enclosure_single_tenant(*args, **kwargs)
with patch.multiple(module.WdcRedfishUtils,
_firmware_activate_uri=mock_fw_activate_url,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_get_request,
post_request=mock_post_request):
with self.assertRaises(AnsibleExitJson) as cm:
module.main()
self.assertEqual(ACTION_WAS_SUCCESSFUL_MESSAGE,
get_exception_message(cm))
def test_module_fw_activate_pass(self):
"""Test the FW Activate command in a passing scenario."""
# Run the same test twice -- once specifying ioms, and once specifying baseuri.
# Both should work the same way.
uri_specifiers = [
{
"ioms": ["example1.example.com"]
},
{
"baseuri": "example1.example.com"
}
]
for uri_specifier in uri_specifiers:
module_args = {
'category': 'Update',
'command': 'FWActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
}
module_args.update(uri_specifier)
set_module_args(module_args)
with patch.multiple("ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.WdcRedfishUtils",
_firmware_activate_uri=mock_fw_activate_url,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_single_tenant,
post_request=mock_post_request):
with self.assertRaises(AnsibleExitJson) as ansible_exit_json:
module.main()
self.assertEqual(ACTION_WAS_SUCCESSFUL_MESSAGE,
get_exception_message(ansible_exit_json))
self.assertTrue(is_changed(ansible_exit_json))
def test_module_fw_activate_service_does_not_support_fw_activate(self):
"""Test FW Activate when it is not supported."""
expected_error_message = "Service does not support FWActivate"
set_module_args({
'category': 'Update',
'command': 'FWActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"]
})
def mock_update_uri_response(*args, **kwargs):
return {
"ret": True,
"data": {} # No Actions
}
with patch.multiple(module.WdcRedfishUtils,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_update_uri_response):
with self.assertRaises(AnsibleFailJson) as cm:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(cm))
def test_module_update_and_activate_image_uri_not_http(self):
"""Test Update and Activate when URI is not http(s)"""
expected_error_message = "Bundle URI must be HTTP or HTTPS"
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "ftp://example.com/image"
})
with patch.multiple(module.WdcRedfishUtils,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return):
with self.assertRaises(AnsibleFailJson) as cm:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(cm))
def test_module_update_and_activate_target_not_ready_for_fw_update(self):
"""Test Update and Activate when target is not in the correct state."""
mock_status_code = 999
mock_status_description = "mock status description"
expected_error_message = "Target is not ready for FW update. Current status: {0} ({1})".format(
mock_status_code,
mock_status_description
)
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image"
})
with patch.object(module.WdcRedfishUtils, "get_simple_update_status") as mock_get_simple_update_status:
mock_get_simple_update_status.return_value = {
"ret": True,
"entries": {
"StatusCode": mock_status_code,
"Description": mock_status_description
}
}
with patch.multiple(module.WdcRedfishUtils,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return):
with self.assertRaises(AnsibleFailJson) as cm:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(cm))
def test_module_update_and_activate_bundle_not_a_tarfile(self):
"""Test Update and Activate when bundle is not a tarfile"""
mock_filename = os.path.abspath(__file__)
expected_error_message = ERROR_MESSAGE_UNABLE_TO_EXTRACT_BUNDLE_VERSION
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = mock_filename
with patch.multiple(module.WdcRedfishUtils,
get_simple_update_status=mock_get_simple_update_status_ready_for_fw_update,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return):
with self.assertRaises(AnsibleFailJson) as cm:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(cm))
def test_module_update_and_activate_bundle_contains_no_firmware_version(self):
"""Test Update and Activate when bundle contains no firmware version"""
expected_error_message = ERROR_MESSAGE_UNABLE_TO_EXTRACT_BUNDLE_VERSION
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = "empty_tarfile{0}.tar".format(uuid.uuid4())
empty_tarfile = tarfile.open(os.path.join(self.tempdir, tar_name), "w")
empty_tarfile.close()
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple(module.WdcRedfishUtils,
get_simple_update_status=mock_get_simple_update_status_ready_for_fw_update,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return):
with self.assertRaises(AnsibleFailJson) as cm:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(cm))
def test_module_update_and_activate_version_already_installed(self):
"""Test Update and Activate when the bundle version is already installed"""
mock_firmware_version = "1.2.3"
expected_error_message = ACTION_WAS_SUCCESSFUL_MESSAGE
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = self.generate_temp_bundlefile(mock_firmware_version=mock_firmware_version,
is_multi_tenant=False)
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple(module.WdcRedfishUtils,
get_firmware_inventory=mock_get_firmware_inventory_version_1_2_3,
get_simple_update_status=mock_get_simple_update_status_ready_for_fw_update,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_single_tenant):
with self.assertRaises(AnsibleExitJson) as result:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(result))
self.assertFalse(is_changed(result))
def test_module_update_and_activate_version_already_installed_multi_tenant(self):
"""Test Update and Activate on multi-tenant when version is already installed"""
mock_firmware_version = "1.2.3"
expected_error_message = ACTION_WAS_SUCCESSFUL_MESSAGE
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = self.generate_temp_bundlefile(mock_firmware_version=mock_firmware_version,
is_multi_tenant=True)
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple(module.WdcRedfishUtils,
get_firmware_inventory=mock_get_firmware_inventory_version_1_2_3,
get_simple_update_status=mock_get_simple_update_status_ready_for_fw_update,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_multi_tenant):
with self.assertRaises(AnsibleExitJson) as result:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(result))
self.assertFalse(is_changed(result))
def test_module_update_and_activate_pass(self):
"""Test Update and Activate (happy path)"""
mock_firmware_version = "1.2.2"
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = self.generate_temp_bundlefile(mock_firmware_version=mock_firmware_version,
is_multi_tenant=False)
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple("ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.WdcRedfishUtils",
get_firmware_inventory=mock_get_firmware_inventory_version_1_2_3,
simple_update=mock_simple_update,
_simple_update_status_uri=mocked_url_response,
# _find_updateservice_resource=empty_return,
# _find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_single_tenant,
post_request=mock_post_request):
with patch("ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.WdcRedfishUtils.get_simple_update_status"
) as mock_get_simple_update_status:
mock_get_simple_update_status.side_effect = MOCK_SIMPLE_UPDATE_STATUS_LIST
with self.assertRaises(AnsibleExitJson) as ansible_exit_json:
module.main()
self.assertTrue(is_changed(ansible_exit_json))
self.assertEqual(ACTION_WAS_SUCCESSFUL_MESSAGE, get_exception_message(ansible_exit_json))
def test_module_update_and_activate_pass_multi_tenant(self):
"""Test Update and Activate with multi-tenant (happy path)"""
mock_firmware_version = "1.2.2"
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = self.generate_temp_bundlefile(mock_firmware_version=mock_firmware_version,
is_multi_tenant=True)
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple(module.WdcRedfishUtils,
get_firmware_inventory=mock_get_firmware_inventory_version_1_2_3,
simple_update=mock_simple_update,
_simple_update_status_uri=mocked_url_response,
# _find_updateservice_resource=empty_return,
# _find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_multi_tenant,
post_request=mock_post_request):
with patch("ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.WdcRedfishUtils.get_simple_update_status"
) as mock_get_simple_update_status:
mock_get_simple_update_status.side_effect = MOCK_SIMPLE_UPDATE_STATUS_LIST
with self.assertRaises(AnsibleExitJson) as ansible_exit_json:
module.main()
self.assertTrue(is_changed(ansible_exit_json))
self.assertEqual(ACTION_WAS_SUCCESSFUL_MESSAGE, get_exception_message(ansible_exit_json))
def test_module_fw_update_multi_tenant_firmware_single_tenant_enclosure(self):
"""Test Update and Activate using multi-tenant bundle on single-tenant enclosure"""
mock_firmware_version = "1.1.1"
expected_error_message = "Enclosure multi-tenant is False but bundle multi-tenant is True"
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = self.generate_temp_bundlefile(mock_firmware_version=mock_firmware_version,
is_multi_tenant=True)
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple(module.WdcRedfishUtils,
get_firmware_inventory=mock_get_firmware_inventory_version_1_2_3,
get_simple_update_status=mock_get_simple_update_status_ready_for_fw_update,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_single_tenant):
with self.assertRaises(AnsibleFailJson) as result:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(result))
def test_module_fw_update_single_tenant_firmware_multi_tenant_enclosure(self):
"""Test Update and Activate using single-tenant bundle on multi-tenant enclosure"""
mock_firmware_version = "1.1.1"
expected_error_message = "Enclosure multi-tenant is True but bundle multi-tenant is False"
set_module_args({
'category': 'Update',
'command': 'UpdateAndActivate',
'username': 'USERID',
'password': 'PASSW0RD=21',
'ioms': ["example1.example.com"],
'update_image_uri': "http://example.com/image",
"update_creds": {
"username": "image_user",
"password": "image_password"
}
})
tar_name = self.generate_temp_bundlefile(mock_firmware_version=mock_firmware_version,
is_multi_tenant=False)
with patch('ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.fetch_file') as mock_fetch_file:
mock_fetch_file.return_value = os.path.join(self.tempdir, tar_name)
with patch.multiple(module.WdcRedfishUtils,
get_firmware_inventory=mock_get_firmware_inventory_version_1_2_3,
get_simple_update_status=mock_get_simple_update_status_ready_for_fw_update,
_firmware_activate_uri=mocked_url_response,
_update_uri=mock_update_url,
_find_updateservice_resource=empty_return,
_find_updateservice_additional_uris=empty_return,
get_request=mock_get_request_enclosure_multi_tenant):
with self.assertRaises(AnsibleFailJson) as result:
module.main()
self.assertEqual(expected_error_message,
get_exception_message(result))
def generate_temp_bundlefile(self,
mock_firmware_version,
is_multi_tenant):
"""Generate a temporary fake bundle file.
:param str mock_firmware_version: The simulated firmware version for the bundle.
:param bool is_multi_tenant: Is the simulated bundle multi-tenant?
This can be used for a mock FW update.
"""
tar_name = "tarfile{0}.tar".format(uuid.uuid4())
bundle_tarfile = tarfile.open(os.path.join(self.tempdir, tar_name), "w")
package_filename = "oobm-{0}.pkg".format(mock_firmware_version)
package_filename_path = os.path.join(self.tempdir, package_filename)
package_file = open(package_filename_path, "w")
package_file.close()
bundle_tarfile.add(os.path.join(self.tempdir, package_filename), arcname=package_filename)
bin_filename = "firmware.bin"
bin_filename_path = os.path.join(self.tempdir, bin_filename)
bin_file = open(bin_filename_path, "wb")
byte_to_write = b'\x80' if is_multi_tenant else b'\xFF'
bin_file.write(byte_to_write * 12)
bin_file.close()
for filename in [package_filename, bin_filename]:
bundle_tarfile.add(os.path.join(self.tempdir, filename), arcname=filename)
bundle_tarfile.close()
return tar_name
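`generate_temp_bundlefile` writes marker bytes (`\x80` for multi-tenant, `\xFF` otherwise) into a `firmware.bin` member. A hedged sketch of reading such a marker back out of the tarball; the helper name, the offset, and the `0x80` sentinel are assumptions for illustration, not the module's actual parsing logic:

```python
import os
import tarfile
import tempfile

def bundle_is_multi_tenant(tar_path, marker_offset=0):
    """Read firmware.bin from the bundle and test a marker byte.

    The offset and the 0x80 sentinel are illustrative assumptions.
    """
    with tarfile.open(tar_path) as bundle:
        data = bundle.extractfile("firmware.bin").read()
    return data[marker_offset] == 0x80

# Build a fake bundle the same way generate_temp_bundlefile does.
tmpdir = tempfile.mkdtemp()
tar_path = os.path.join(tmpdir, "bundle.tar")
bin_path = os.path.join(tmpdir, "firmware.bin")
with open(bin_path, "wb") as f:
    f.write(b"\x80" * 12)  # multi-tenant marker bytes
with tarfile.open(tar_path, "w") as bundle:
    bundle.add(bin_path, arcname="firmware.bin")
```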


@@ -0,0 +1,214 @@
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible_collections.community.general.tests.unit.compat.mock import patch
from ansible_collections.community.general.tests.unit.compat import unittest
from ansible.module_utils import basic
import ansible_collections.community.general.plugins.modules.remote_management.redfish.wdc_redfish_info as module
from ansible_collections.community.general.tests.unit.plugins.modules.utils import AnsibleExitJson, AnsibleFailJson
from ansible_collections.community.general.tests.unit.plugins.modules.utils import set_module_args, exit_json, fail_json

MOCK_SUCCESSFUL_RESPONSE_WITH_ACTIONS = {
    "ret": True,
    "data": {
        "Actions": {}
    }
}

MOCK_SUCCESSFUL_HTTP_EMPTY_RESPONSE = {
    "ret": True,
    "data": {
    }
}

MOCK_SUCCESSFUL_RESPONSE_WITH_UPDATE_SERVICE_RESOURCE = {
    "ret": True,
    "data": {
        "UpdateService": {
            "@odata.id": "/UpdateService"
        }
    }
}

MOCK_SUCCESSFUL_RESPONSE_WITH_SIMPLE_UPDATE_BUT_NO_FW_ACTIVATE = {
    "ret": True,
    "data": {
        "Actions": {
            "#UpdateService.SimpleUpdate": {
                "target": "mocked value"
            },
            "Oem": {
                "WDC": {}  # No #UpdateService.FWActivate
            }
        }
    }
}


def get_bin_path(self, arg, required=False):
    """Mock AnsibleModule.get_bin_path"""
    return arg


def get_redfish_facts(ansible_exit_json):
    """From an AnsibleExitJson exception, get the redfish facts dict."""
    return ansible_exit_json.exception.args[0]["redfish_facts"]


def get_exception_message(ansible_exit_json):
    """From an AnsibleExitJson exception, get the message string."""
    return ansible_exit_json.exception.args[0]["msg"]


class TestWdcRedfishInfo(unittest.TestCase):

    def setUp(self):
        self.mock_module_helper = patch.multiple(basic.AnsibleModule,
                                                 exit_json=exit_json,
                                                 fail_json=fail_json,
                                                 get_bin_path=get_bin_path)
        self.mock_module_helper.start()
        self.addCleanup(self.mock_module_helper.stop)

    def test_module_fail_when_required_args_missing(self):
        with self.assertRaises(AnsibleFailJson):
            set_module_args({})
            module.main()

    def test_module_fail_when_unknown_category(self):
        with self.assertRaises(AnsibleFailJson):
            set_module_args({
                'category': 'unknown',
                'command': 'SimpleUpdateStatus',
                'username': 'USERID',
                'password': 'PASSW0RD=21',
                'ioms': [],
            })
            module.main()

    def test_module_fail_when_unknown_command(self):
        with self.assertRaises(AnsibleFailJson):
            set_module_args({
                'category': 'Update',
                'command': 'unknown',
                'username': 'USERID',
                'password': 'PASSW0RD=21',
                'ioms': [],
            })
            module.main()

    def test_module_simple_update_status_pass(self):
        set_module_args({
            'category': 'Update',
            'command': 'SimpleUpdateStatus',
            'username': 'USERID',
            'password': 'PASSW0RD=21',
            'ioms': ["example1.example.com"],
        })

        def mock_simple_update_status(*args, **kwargs):
            return {
                "ret": True,
                "data": {
                    "Description": "Ready for FW update",
                    "ErrorCode": 0,
                    "EstimatedRemainingMinutes": 0,
                    "StatusCode": 0
                }
            }

        def mocked_string_response(*args, **kwargs):
            return "mockedUrl"

        def empty_return(*args, **kwargs):
            return {"ret": True}

        with patch.multiple(module.WdcRedfishUtils,
                            _simple_update_status_uri=mocked_string_response,
                            _find_updateservice_resource=empty_return,
                            _find_updateservice_additional_uris=empty_return,
                            get_request=mock_simple_update_status):
            with self.assertRaises(AnsibleExitJson) as ansible_exit_json:
                module.main()
            redfish_facts = get_redfish_facts(ansible_exit_json)
            self.assertEqual(mock_simple_update_status()["data"],
                             redfish_facts["simple_update_status"]["entries"])

    def test_module_simple_update_status_updateservice_resource_not_found(self):
        set_module_args({
            'category': 'Update',
            'command': 'SimpleUpdateStatus',
            'username': 'USERID',
            'password': 'PASSW0RD=21',
            'ioms': ["example1.example.com"],
        })
        with patch.object(module.WdcRedfishUtils, 'get_request') as mock_get_request:
            mock_get_request.return_value = {
                "ret": True,
                "data": {}  # Missing UpdateService property
            }
            with self.assertRaises(AnsibleFailJson) as ansible_exit_json:
                module.main()
            self.assertEqual("UpdateService resource not found",
                             get_exception_message(ansible_exit_json))

    def test_module_simple_update_status_service_does_not_support_simple_update(self):
        set_module_args({
            'category': 'Update',
            'command': 'SimpleUpdateStatus',
            'username': 'USERID',
            'password': 'PASSW0RD=21',
            'ioms': ["example1.example.com"],
        })

        def mock_get_request_function(uri):
            mock_url_string = "mockURL"
            if mock_url_string in uri:
                return {
                    "ret": True,
                    "data": {
                        "Actions": {  # No #UpdateService.SimpleUpdate
                        }
                    }
                }
            else:
                return {
                    "ret": True,
                    "data": mock_url_string
                }

        with patch.object(module.WdcRedfishUtils, 'get_request') as mock_get_request:
            mock_get_request.side_effect = mock_get_request_function
            with self.assertRaises(AnsibleFailJson) as ansible_exit_json:
                module.main()
            self.assertEqual("UpdateService resource not found",
                             get_exception_message(ansible_exit_json))

    def test_module_simple_update_status_service_does_not_support_fw_activate(self):
        set_module_args({
            'category': 'Update',
            'command': 'SimpleUpdateStatus',
            'username': 'USERID',
            'password': 'PASSW0RD=21',
            'ioms': ["example1.example.com"],
        })

        def mock_get_request_function(uri):
            if uri.endswith("/redfish/v1") or uri.endswith("/redfish/v1/"):
                return MOCK_SUCCESSFUL_RESPONSE_WITH_UPDATE_SERVICE_RESOURCE
            elif uri.endswith("/mockedUrl"):
                return MOCK_SUCCESSFUL_HTTP_EMPTY_RESPONSE
            elif uri.endswith("/UpdateService"):
                return MOCK_SUCCESSFUL_RESPONSE_WITH_SIMPLE_UPDATE_BUT_NO_FW_ACTIVATE
            else:
                raise RuntimeError("Illegal call to get_request in test: " + uri)

        with patch("ansible_collections.community.general.plugins.module_utils.wdc_redfish_utils.WdcRedfishUtils.get_request") as mock_get_request:
            mock_get_request.side_effect = mock_get_request_function
            with self.assertRaises(AnsibleFailJson) as ansible_exit_json:
                module.main()
            self.assertEqual("Service does not support FWActivate",
                             get_exception_message(ansible_exit_json))
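The test file above hinges on one pattern: `exit_json`/`fail_json` are patched on `AnsibleModule` with versions that raise exceptions, so `module.main()` can be intercepted with `assertRaises` instead of terminating the test process. A minimal self-contained sketch of that pattern (hypothetical `FakeModule`, no Ansible dependency — not code from this repository):

```python
from unittest.mock import patch


class AnsibleExitJson(Exception):
    """Raised instead of exiting; carries the result dict in args[0]."""


class FakeModule:
    """Stand-in for AnsibleModule: exit_json would normally end the process."""

    def exit_json(self, **kwargs):
        raise SystemExit(0)

    def run(self):
        self.exit_json(redfish_facts={"simple_update_status": {"entries": {"StatusCode": 0}}})


def exit_json(self, **kwargs):
    # Replacement: surface the result as an exception the test can catch.
    raise AnsibleExitJson(kwargs)


def get_redfish_facts(exc):
    # Mirrors the helper above: unpack the dict the exception carries.
    return exc.args[0]["redfish_facts"]


with patch.object(FakeModule, "exit_json", exit_json):
    try:
        FakeModule().run()
    except AnsibleExitJson as e:
        facts = get_redfish_facts(e)

print(facts["simple_update_status"]["entries"]["StatusCode"])  # 0
```

Because `patch.object` swaps the class attribute, the replacement binds as a normal method on every instance for the duration of the `with` block, which is exactly what `patch.multiple(basic.AnsibleModule, ...)` does in `setUp` above.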


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Author: Alexei Znamensky (russoz@gmail.com)
# Largely adapted from test_redhat_subscription by
# Jiri Hnidek (jhnidek@redhat.com)
@@ -110,6 +111,41 @@ TEST_CASES = [
            'value': '90',
        },
    ],
    [
        {
            'channel': 'xfce4-session',
            'property': '/general/SaveOnExit',
            'state': 'present',
            'value_type': 'bool',
            'value': False,
        },
        {
            'id': 'test_property_set_property_bool_false',
            'run_command.calls': [
                (
                    # Calling of following command will be asserted
                    ['/testbin/xfconf-query', '--channel', 'xfce4-session', '--property', '/general/SaveOnExit'],
                    # Was return code checked?
                    {'environ_update': {'LANGUAGE': 'C', 'LC_ALL': 'C'}, 'check_rc': False},
                    # Mock of returned code, stdout and stderr
                    (0, 'true\n', '',),
                ),
                (
                    # Calling of following command will be asserted
                    ['/testbin/xfconf-query', '--channel', 'xfce4-session', '--property', '/general/SaveOnExit',
                     '--create', '--type', 'bool', '--set', 'false'],
                    # Was return code checked?
                    {'environ_update': {'LANGUAGE': 'C', 'LC_ALL': 'C'}, 'check_rc': False},
                    # Mock of returned code, stdout and stderr
                    (0, 'false\n', '',),
                ),
            ],
            'changed': True,
            'previous_value': 'true',
            'value_type': 'bool',
            'value': 'False',
        },
    ],
    [
        {
            'channel': 'xfwm4',
@@ -232,7 +268,7 @@ def test_xfconf(mocker, capfd, patch_xfconf, testcase):
    # Mock function used for running commands first
    call_results = [item[2] for item in testcase['run_command.calls']]
    mock_run_command = mocker.patch(
        'ansible_collections.community.general.plugins.module_utils.mh.module_helper.AnsibleModule.run_command',
        'ansible.module_utils.basic.AnsibleModule.run_command',
        side_effect=call_results)

    # Try to run test case
@@ -252,12 +288,6 @@ def test_xfconf(mocker, capfd, patch_xfconf, testcase):
        assert results[test_result] == results['invocation']['module_args'][test_result], \
            "'{0}': '{1}' != '{2}'".format(test_result, results[test_result], results['invocation']['module_args'][test_result])

    for conditional_test_result in ('msg', 'value', 'previous_value'):
        if conditional_test_result in testcase:
            assert conditional_test_result in results, "'{0}' not found in {1}".format(conditional_test_result, results)
            assert results[conditional_test_result] == testcase[conditional_test_result], \
                "'{0}': '{1}' != '{2}'".format(conditional_test_result, results[conditional_test_result], testcase[conditional_test_result])

    assert mock_run_command.call_count == len(testcase['run_command.calls'])
    if mock_run_command.call_count:
        call_args_list = [(item[0][0], item[1]) for item in mock_run_command.call_args_list]
@@ -265,3 +295,14 @@ def test_xfconf(mocker, capfd, patch_xfconf, testcase):
        print("call args list =\n%s" % call_args_list)
        print("expected args list =\n%s" % expected_call_args_list)
        assert call_args_list == expected_call_args_list

    expected_cmd, dummy, expected_res = testcase['run_command.calls'][-1]
    assert results['cmd'] == expected_cmd
    assert results['stdout'] == expected_res[1]
    assert results['stderr'] == expected_res[2]

    for conditional_test_result in ('msg', 'value', 'previous_value'):
        if conditional_test_result in testcase:
            assert conditional_test_result in results, "'{0}' not found in {1}".format(conditional_test_result, results)
            assert results[conditional_test_result] == testcase[conditional_test_result], \
                "'{0}': '{1}' != '{2}'".format(conditional_test_result, results[conditional_test_result], testcase[conditional_test_result])
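The xfconf diff above drives the mocked `run_command` from a per-test list of canned `(rc, stdout, stderr)` tuples via `side_effect`, then compares the recorded calls against the expected ones. A minimal standalone sketch of that technique using only `unittest.mock` (command names here are illustrative, not the collection's code):

```python
from unittest.mock import Mock

# Canned results, one per expected call, returned in order by side_effect.
canned = [
    (0, 'true\n', ''),
    (0, 'false\n', ''),
]
run_command = Mock(side_effect=canned)

# Code under test would invoke run_command like this:
rc, out, err = run_command(['xfconf-query', '--channel', 'xfce4-session'], check_rc=False)
rc2, out2, err2 = run_command(['xfconf-query', '--set', 'false'], check_rc=False)

# Every call's (positional args, kwargs) is recorded on call_args_list,
# so tests can assert both what was run and how many times.
call_args_list = [(c[0][0], c[1]) for c in run_command.call_args_list]
assert (rc, out, err) == (0, 'true\n', '')
assert call_args_list == [
    (['xfconf-query', '--channel', 'xfce4-session'], {'check_rc': False}),
    (['xfconf-query', '--set', 'false'], {'check_rc': False}),
]
assert run_command.call_count == len(canned)
```

This also shows why the new assertions in the diff compare `results['cmd']`, `results['stdout']`, and `results['stderr']` against the *last* entry of `run_command.calls`: the mock replays the canned tuples in order, so the final tuple is what the module's last command produced.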


@@ -50,6 +50,7 @@ redis == 2.10.6 ; python_version < '2.7'
redis < 4.0.0 ; python_version >= '2.7' and python_version < '3.6'
redis ; python_version >= '3.6'
pycdlib < 1.13.0 ; python_version < '3' # 1.13.0 does not work with Python 2, while not declaring that
python-daemon <= 2.3.0 ; python_version < '3'
# freeze pylint and its requirements for consistent test results
astroid == 2.2.5