Compare commits


67 Commits
3.1.0 ... 3.2.0

Author SHA1 Message Date
Felix Fontein
f0c1b1065a Release 3.2.0. 2021-06-08 14:47:09 +02:00
patchback[bot]
a44356c966 Add domain option to onepassword lookup (#2735) (#2760)
* Add domain to onepassword lookup

* Add changelog

* Add default to domain documentation

* Improve format

* Fix sanity issue

* Add option type to documentation

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add domain to init

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit dab5d941e6)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-06-08 12:09:49 +02:00
patchback[bot]
33f9f0b05f with great powers come great responsibility (#2755) (#2759)
(cherry picked from commit eef645c3f7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-06-08 10:59:02 +02:00
patchback[bot]
f0f0704d64 Fixed sanity checks for cloud/scaleway/ modules (#2678) (#2756)
* fixed validation-modules for plugins/modules/cloud/scaleway/scaleway_image_info.py

* fixed validation-modules for plugins/modules/cloud/scaleway/scaleway_ip_info.py

* fixed validation-modules for plugins/modules/cloud/scaleway/scaleway_security_group_info.py

* fixed validation-modules for plugins/modules/cloud/scaleway/scaleway_server_info.py

* fixed validation-modules for plugins/modules/cloud/scaleway/scaleway_snapshot_info.py

* fixed validation-modules for plugins/modules/cloud/scaleway/scaleway_volume_info.py

* sanity fix

(cherry picked from commit 9f344d7165)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-06-08 10:58:36 +02:00
Felix Fontein
55fe140230 Prepare 3.2.0 release. 2021-06-08 09:27:27 +02:00
patchback[bot]
ac543f5ef0 flatpak: add tests in CI, add no_dependencies parameter (#2751) (#2754)
* Similar version restrictions as in the flatpak_remote tests.

* ...

* Try to work around missing dependencies.

* Revert "Try to work around missing dependencies."

This reverts commit 66a4e38566.

* Add changelog.

* App8 -> App2; make sure that there are two apps App1 and App2.

* Fix forgotten variable.

* Remove test notices.

* Seems like flatpak no longer supports file:// URLs.

The tests would need to be rewritten to offer the URL via http:// instead.

* Try local HTTP server for URL tests.

* ...

* Lint, add status check.

* Add boilerplate.

* Add 'ps aux'.

* Surrender to -f.

* Work around apparent flatpak bug.

* Fix YAML.

* Improve condition.

* Make sure test reruns behave better.

(cherry picked from commit bb37b67166)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-08 09:14:10 +02:00
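The flatpak test rework above replaced `file://` repo URLs with a local HTTP server. A minimal sketch of that setup, assuming a directory served on an ephemeral port (names are illustrative, not the actual test code):

```python
# Serve the current directory over http:// on a port the OS picks,
# since flatpak no longer accepts file:// URLs for remotes.
import http.server
import threading

server = http.server.ThreadingHTTPServer(
    ('127.0.0.1', 0),  # port 0: let the OS choose a free port
    http.server.SimpleHTTPRequestHandler,
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

print('serving current directory at http://127.0.0.1:%d/' % port)
server.shutdown()  # stop the background server when the tests are done
```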
patchback[bot]
dbc0fe8859 zypper_repository: fix idempotency on adding repo with releasever and basearch variables (#2722) (#2753)
* zypper_repository: Check idempotency on adding repo with releasever

* Name required when adding non-repo files.

* Initial try to fix releasever

* Replace re.sub with .replace

* name releaseverrepo releaseverrepo

* Change to ansible_distribution_version for removing repo

* improve asserts format

* add changelog

* Fix changelog formatting

Co-authored-by: Felix Fontein <felix@fontein.de>

* improve command used for retrieving releasever variable

Co-authored-by: Felix Fontein <felix@fontein.de>

* add basearch replace

* Add basearch to changelog fragment

* Check for releasever and basearch only when they are there

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 94a53adff1)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-06-08 08:47:15 +02:00
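The zypper_repository fix above hinges on expanding `$releasever` and `$basearch` before comparing a configured repo URL against existing ones; the commit swapped `re.sub` for plain `str.replace` since the tokens are fixed strings. A hedged sketch (helper name and URLs are illustrative, not the module's actual code):

```python
# Expand zypper repo variables so URLs can be compared literally;
# without this, the repo looks "different" and is re-added on every run.
def expand_repo_url(url, releasever, basearch):
    return url.replace('$releasever', releasever).replace('$basearch', basearch)

configured = 'http://example.com/SUSE/$releasever/$basearch/repo'
existing = 'http://example.com/SUSE/15.3/x86_64/repo'

print(expand_repo_url(configured, '15.3', 'x86_64') == existing)  # True
```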
patchback[bot]
42a1318fe3 Re-enable flatpak_remote tests (#2747) (#2749)
* Automate test repo creation, re-enable flatpak_remote tests.

* Linting.

* Another try.

(cherry picked from commit 4c50f1add7)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-07 21:39:17 +02:00
patchback[bot]
d25352dc06 Remove aminvakil from supershipit section as it is not needed anymore (#2743) (#2746)
(cherry picked from commit 7c3f2ae4af)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-06-07 17:02:17 +02:00
patchback[bot]
55682c52df Add aminvakil to committers (#2739) (#2742)
(cherry picked from commit 1e34df7ca0)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-06-07 16:01:30 +02:00
patchback[bot]
46781d9fd1 [PR #2731/6a41fba2 backport][stable-3] ModuleHelper - also uses LC_ALL to force language (#2736)
* ModuleHelper - also uses LC_ALL to force language (#2731)

* also uses LC_ALL to force language

* adjusted test_xfconf and test_cpanm

* added changelog fragment

* Update changelogs/fragments/2731-mh-cmd-locale.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* adjusted chglog frag per PR

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6a41fba2f8)

* snap revamp hasn't been backported yet.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-07 13:27:30 +02:00
patchback[bot]
4545d1c91e Bugfix + sanity checks for stacki_host (#2681) (#2733)
* fixed validation-modules for plugins/modules/remote_management/stacki/stacki_host.py

* sanity fix

* added changelog fragment

* extra fix to the documentation

* Update plugins/modules/remote_management/stacki/stacki_host.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/stacki/stacki_host.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* rollback params

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f74b83663b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-06-07 08:17:04 +02:00
patchback[bot]
6570dfeb7d iptables_state: fix async status call (-> action plugin) (#2711) (#2729)
* fix call to async_status (-> action plugin)

* add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* rename a local variable for readability

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 463c576a67)

Co-authored-by: quidame <quidame@poivron.org>
2021-06-06 18:10:29 +02:00
patchback[bot]
94c368f7df open_iscsi: allow same target selected portals login and override (#2684) (#2727)
* fix: include portal and port for logged on check

* refactor: remove extra space

* fix: allow None portal and port on target_loggedon test

* add auto_portal_startup argument

* fix: change param name for automatic_portal

* add changelog fragment

* refactor: Update changelogs/fragments/2684-open_iscsi-single-target-multiple-portal-overrides.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* add version added info to auto_portal_startup arg

* add example for auto_portal_startup

* fix: remove alias for auto_portal form arg_spec as well

* refactor: elaborate in fragment changelogs

Elaborate change

Co-authored-by: Amin Vakil <info@aminvakil.com>

* open_iscsi: elaborate changelog fragment

* Update plugins/modules/system/open_iscsi.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Amin Vakil <info@aminvakil.com>
(cherry picked from commit 9d8bea9d36)

Co-authored-by: The Binary <binary4bytes@gmail.com>
2021-06-05 23:04:03 +02:00
patchback[bot]
4cba1e60d9 Wire token param into consul_api #2124 (#2126) (#2726)
* Wire token param into consul_api #2124

* Update changelogs/fragments/2124-consul_kv-pass-token.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* #2124 renamed release fragment to match pr, removed parse_params.

* putting look back in, do some linting   #2124

* try more linting

* linting

* try overwriting defaults in parse_params with get_option vals, instead of removing that function completely.

* Revert "back to start, from 2nd approach: allow keyword arguments via parse_params for compatibility."

This reverts commit 748be8e366.

* Revert " linting"

This reverts commit 1d57374c3e.

* Revert " try more linting"

This reverts commit 91c8d06e6a.

* Revert "putting look back in, do some linting   #2124"

This reverts commit 87eeec7180.

* Revert " #2124 renamed release fragment to match pr, removed parse_params."

This reverts commit d2869b2f22.

* Revert "Update changelogs/fragments/2124-consul_kv-pass-token.yml"

This reverts commit c50b1cf9d4.

* Revert "Wire token param into consul_api #2124"

This reverts commit b60b6433a8.

* minimal changes for this PR relative to current upstream.

* superfluous newline in changelog fragment.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 0e6d70697c)

Co-authored-by: fkuep <flo.kuepper@gmail.com>
2021-06-05 23:03:53 +02:00
patchback[bot]
321fb6c974 Reduce stormssh searches based on host (#2568) (#2724)
* Reduce stormssh searches based on host

Because stormssh searches across all config values, the results need to be narrowed to entries whose host fully matches.

* Removed whitespaces in the blank line

* Added changelog fragment and tests for the fix.

* Added newline at the end of the changelog fragment

* Added newline at the end of the tests

* Fixed bug with name in tests

* Changed assertion for the existing host

* Update changelogs/fragments/2568-ssh_config-reduce-stormssh-searches-based-on-host.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Adjusted tests

* New line at the end of the tests

Co-authored-by: Anton Nikolaev <anikolaev@apple.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1a4af9bfc3)

Co-authored-by: Anton Nikolaev <drenout@gmail.com>
2021-06-05 18:03:14 +02:00
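The ssh_config fix above narrows stormssh's search, which otherwise matches the query against every value in an entry (so a substring like "example" can hit unrelated hosts). A minimal sketch of the filtering idea, with an illustrative data layout rather than stormssh's real structures:

```python
# Keep only entries whose 'host' field equals the requested host,
# discarding entries that merely contain the query somewhere in a value.
def hosts_matching(entries, host):
    return [e for e in entries if e.get('host') == host]

entries = [
    {'host': 'example.com', 'options': {'user': 'alice'}},
    {'host': 'www.example.com', 'options': {'user': 'bob'}},
]
print(hosts_matching(entries, 'example.com'))  # only the first entry
```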
patchback[bot]
eb4d7a4199 Terraform: ensure workspace is reset to current value (#2634) (#2720)
* fix: ensure workspace is reset to current value

* chore: linter

* chore: changelog

(cherry picked from commit c49a384a65)

Co-authored-by: christophemorio <49184206+christophemorio@users.noreply.github.com>
2021-06-04 21:12:53 +02:00
patchback[bot]
4b07d45b7e Fix repeated word in description of fs_type (#2717) (#2719)
(cherry picked from commit a343756e6f)

Co-authored-by: Alex Willmer <al.willmer@cgi.com>
2021-06-04 21:12:40 +02:00
Felix Fontein
d4a33433b4 Mention removal version more prominently. 2021-06-04 12:37:04 +02:00
patchback[bot]
e30b91cb8d Add new module/plugin maintainers to BOTMETA. (#2708) (#2712)
(cherry picked from commit d49783280e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-04 11:13:32 +02:00
patchback[bot]
b2b65c431b Fix action plugin BOTMETA entries. (#2707) (#2714)
(cherry picked from commit 4396ec9631)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-04 11:13:22 +02:00
Felix Fontein
9ade4f6dd6 Announce script removal. (#2697) 2021-06-04 10:38:04 +02:00
patchback[bot]
635d4f2138 Fix spurious test errors. (#2709) (#2710)
(cherry picked from commit 2e8746a8aa)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-04 10:24:55 +02:00
patchback[bot]
6549e41ab8 Add module sapcar_extract to make SAP administration easier. (#2596) (#2705)
* add sapcar

* integrate test

* test integration

* Revert "integrate test"

This reverts commit 17cbff4f02.

* add required

* change test

* change binary

* test

* add bin path

* change future

* change download logic

* change logic

* sanity

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* add url and error handling

* sanity

* Apply suggestions from code review

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* cleanup and fixes

* sanity

* add sec library

* add description

* remove blanks

* sanity

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Rainer Leber <rainer.leber@sva.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit a4f46b881a)

Co-authored-by: rainerleber <39616583+rainerleber@users.noreply.github.com>
2021-06-04 07:55:41 +02:00
patchback[bot]
6faface39e add module pacman_key (#778) (#2704)
* add module pacman_key

* add symlink and fix documentation for pacman_key

* documentation fix for pacman_key

* improve logic around user input

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Improve parameter checking

required_one_of=[] is neat.

Co-authored-by: Alexei Znamensky

* Revert "Improve parameter checking"

This reverts commit 044b0cbc85.

* Simplify a bunch of code.

* fix typos pointed out by yan12125

* replaced manual checks with required-if invocation

* added default keyring to documentation

* some initial tests

* updated metadata

* refactored to make sanity tests pass

* refactor to make sanity tests pass ... part deux

* refactor: simplify run_command invocations

* test: cover check-mode and some normal operation

* docs: fix grammatical errors

* rip out fingerprint code

a full length (40 characters) key ID is equivalent to the fingerprint.

* refactor tests, add a couple more

* test: added testcase for method: data

* Update plugins/modules/packaging/os/pacman_key.py

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* docs: correct yaml boolean type

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 5ddf0041ec)

Co-authored-by: George Rawlinson <george@rawlinson.net.nz>
2021-06-04 07:36:29 +02:00
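The "rip out fingerprint code" step above rests on a GnuPG detail: for v4 keys the fingerprint is 40 hex characters, and a full-length key ID is the fingerprint, so separate fingerprint handling is redundant. An illustrative check (not the module's code):

```python
import string

# True if key_id is a full 40-hex-digit v4 fingerprint (optionally 0x-prefixed);
# shorter key IDs are not fingerprints and would need separate lookup.
def is_full_fingerprint(key_id):
    kid = key_id.upper()
    if kid.startswith('0X'):
        kid = kid[2:]
    return len(kid) == 40 and all(c in string.hexdigits for c in kid)

print(is_full_fingerprint('A' * 40))    # True
print(is_full_fingerprint('DEADBEEF'))  # False: short key ID
```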
patchback[bot]
3b893ec421 BOTMETA.yml: remove myself from zypper_repository (#2701) (#2703)
(cherry picked from commit d93bc039b2)

Co-authored-by: Matthias Vogelgesang <matthias.vogelgesang@gmail.com>
2021-06-04 04:40:31 +00:00
patchback[bot]
65805e2dd6 keycloak_realm.py: Mark 'reset_password_allowed' as no_log=False (#2694) (#2698)
* keycloak_realm.py: Mark 'reset_password_allowed' as no_log=False

This value is not sensitive but Ansible will complain about it otherwise

* fixup! keycloak_realm.py: Mark 'reset_password_allowed' as no_log=False

* Apply all suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit fe5717c1aa)

Co-authored-by: Benjamin Schubert <contact@benschubert.me>
2021-06-03 22:17:50 +02:00
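The no_log change above works around Ansible's heuristic that treats any parameter with a secret-looking name (here, one containing "password") as sensitive unless the module says otherwise. A sketch of the pattern, as an illustrative fragment rather than the full keycloak_realm argument spec:

```python
# reset_password_allowed is a plain boolean toggle, not a secret, so it is
# explicitly marked no_log=False to silence Ansible's false-positive warning.
argument_spec = {
    'reset_password_allowed': {
        'type': 'bool',
        'no_log': False,
    },
}
print(argument_spec['reset_password_allowed']['no_log'])  # False
```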
patchback[bot]
297b50fb96 keycloak_realm.py: Fix the ssl_required parameter according to the API (#2693) (#2699)
* keycloak_realm.py: Fix the `ssl_required` parameter according to the API

The `ssl_required` parameter is a string and must be one of 'all',
'external' or 'none'. Passing a bool will make the server return a 500.

* fixup! keycloak_realm.py: Fix the `ssl_required` parameter according to the API

* Update changelogs/fragments/keycloak_realm_ssl_required.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit efbda2389d)

Co-authored-by: Benjamin Schubert <contact@benschubert.me>
2021-06-03 22:14:59 +02:00
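The ssl_required fix above replaces a boolean with one of the three strings the Keycloak API accepts; sending a bool made the server return a 500. A minimal sketch of validating the value up front so it fails fast with a clear message (helper name is illustrative):

```python
SSL_REQUIRED_CHOICES = ('all', 'external', 'none')

# Reject anything that is not one of the accepted strings before it
# ever reaches the server.
def validate_ssl_required(value):
    if value not in SSL_REQUIRED_CHOICES:
        raise ValueError(
            'ssl_required must be one of %s, got %r' % (SSL_REQUIRED_CHOICES, value)
        )
    return value

print(validate_ssl_required('external'))  # 'external'
```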
patchback[bot]
2edadb42fb Added SHA1 option to maven_artifact (#2662) (#2690)
* Added SHA1 option

* Add changelog fragment

* Update plugins/modules/packaging/language/maven_artifact.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/packaging/language/maven_artifact.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Combined hash functions

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/packaging/language/maven_artifact.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/packaging/language/maven_artifact.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Removed unused functions (rolled into _local_checksum)

* Update changelogs/fragments/2661-maven_artifact-add-sha1-option.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ca1506fb26)

Co-authored-by: Gene Gotimer <eugene.gotimer@steampunk.com>
2021-06-01 22:23:08 +02:00
patchback[bot]
4e1bf2d4ba nmcli: new arguments to ignore automatic dns servers and gateways (#2635) (#2689)
* nmcli: new arguments to ignore automatic dns servers and gateways

Closes #1087

* Add changelog fragment

* Address review comments

(cherry picked from commit 1ad85849af)

Co-authored-by: Chih-Hsuan Yen <yan12125@gmail.com>
2021-06-01 22:17:19 +02:00
patchback[bot]
b1a4a0ff21 Add filter docs (#2680) (#2687)
* Began with filter docs.

* Add more filters.

* Add time unit filters.

* Add TOC and filters to create identifiers.

* Add more filters.

* Add documentation from ansible/ansible for json_query and random_mac.

* Update docs/docsite/rst/filter_guide.rst

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 3516acf8d4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-06-01 20:38:12 +02:00
patchback[bot]
e74ea7c8b8 archive - Adding exclusion_patterns option (#2616) (#2686)
* Adding exclusion_patterns option

* Adding changelog fragment and Python 2.6 compatibility

* Minor refactoring for readability

* Removing unnecessary conditional

* Applying initial review suggestions

* Adding missed review suggestion

(cherry picked from commit b6c0cc0b61)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-05-31 08:16:40 +02:00
patchback[bot]
6590f5e082 Fixed sanity checks for cloud/online/ modules (#2677) (#2679)
* fixed validation-modules for plugins/modules/cloud/online/online_server_info.py

* fixed validation-modules for plugins/modules/cloud/online/online_user_info.py

* sanity fix

(cherry picked from commit bef3c04d1c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-29 17:08:56 +02:00
patchback[bot]
7483f71d31 iptables_state: fix broken query of async_status result (#2671) (#2676)
* use get() rather than querying the key directly

* add a changelog fragment

* re-enable CI tests

* Update changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f09c39b71e)

Co-authored-by: quidame <quidame@poivron.org>
2021-05-29 13:58:03 +02:00
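The iptables_state fix above ("use get() rather than querying the key directly") is the standard defensive pattern for result dictionaries whose keys may be absent. A tiny sketch with an illustrative result:

```python
# An async_status-style result may lack some keys; dict.get() returns
# None (or a chosen default) instead of raising KeyError.
result = {'finished': 1}

print(result.get('failed'))     # None, not a KeyError
print(result.get('failed', 0))  # 0, with an explicit default
```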
patchback[bot]
6b215e3a9c proxmox_kvm - Fixed vmid result when VM with name exists (#2648) (#2674)
* Fixed vmid result when VM with name exists

* Adding changelog fragment

(cherry picked from commit b281d3d699)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-05-29 10:50:11 +02:00
patchback[bot]
3723e458d3 composer: add composer_executable (#2650) (#2670)
* composer: add composer_executable

* Add changelog

* Improve documentation thanks to felixfontein

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c3cab7c68c)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-05-28 13:24:44 +02:00
patchback[bot]
0f8bb43723 Stop mentioning Freenode. We're on Libera.chat. (#2666) (#2669)
(cherry picked from commit 14813a6287)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-28 07:29:02 +02:00
patchback[bot]
f33530dd61 Add extra docs tests (#2663) (#2665)
* Add extra docs tests.

* Linting.

* Fix copy'n'paste error.

(cherry picked from commit 14f13904d6)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-28 06:49:01 +02:00
patchback[bot]
8f3043058e Fix drain example with correct wait values (#2603) (#2658)
(cherry picked from commit 95794f31e3)

Co-authored-by: Merouane Atig <merwan@users.noreply.github.com>
2021-05-27 20:18:19 +02:00
patchback[bot]
3987b8a291 xml: Add an example for absent (#2644) (#2656)
Element node can be deleted based upon the attribute
value.

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 795125fec4)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-05-27 20:18:07 +02:00
patchback[bot]
f7403a0b34 random_string: a new lookup plugin (#2572) (#2659)
New lookup plugin to generate random string based upon
constraints.

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 43c12b82fa)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-05-27 20:16:50 +02:00
patchback[bot]
0a676406b3 minor refactors on plugins/modules/cloud/misc (#2557) (#2660)
* minor refactors on plugins/modules/cloud/misc

* added changelog fragment

* removed unreachable statement

* Update plugins/modules/cloud/misc/terraform.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/misc/rhevm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* adjusted per PR comment

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3afcf7e75d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-27 20:15:49 +02:00
patchback[bot]
5a7d234d80 Terraform overwrite init (#2573) (#2654)
* feat: implement overwrite_init option

* chore: changelog

(cherry picked from commit 285639a4f9)

Co-authored-by: christophemorio <49184206+christophemorio@users.noreply.github.com>
2021-05-27 20:15:33 +02:00
patchback[bot]
fb9730f75e meta/runtime.yml and __init__.py cleanup (#2632) (#2653)
* Remove superfluous __init__.py files.

* Reformat and sort meta/runtime.yml.

* The ovirt modules have been removed.

* Add changelog entry.

(cherry picked from commit 7cd96d963e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-27 20:15:22 +02:00
patchback[bot]
928aeafe1d hana_query module: add a maintainer (#2647) (#2652)
(cherry picked from commit dc793ea32b)

Co-authored-by: Andrew Klychkov <aklychko@redhat.com>
2021-05-27 19:07:11 +02:00
patchback[bot]
5b68665571 Add module hana_query to make SAP HANA administration easier. (#2623) (#2651)
* new

* move link

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* add more interesting return value in test

* remove unused objects

* removed unneeded function

* extend test output

* Update tests/unit/plugins/modules/database/saphana/test_hana_query.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Rainer Leber <rainer.leber@sva.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit b79969da68)

Co-authored-by: rainerleber <39616583+rainerleber@users.noreply.github.com>
2021-05-27 19:07:04 +02:00
patchback[bot]
e6b84acd1e fix a regression in initialization_from_null_state() (iptables-nft > 1.8.2) (#2604) (#2646)
(cherry picked from commit 909e9fe950)

Co-authored-by: quidame <quidame@poivron.org>
2021-05-27 07:16:36 +00:00
patchback[bot]
c242993291 Temporarily disable iptables_state tests. (#2641) (#2643)
(cherry picked from commit b45298bc43)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-27 08:28:45 +02:00
patchback[bot]
4f3de5658e Add one-liner lookup example (#2615) (#2638)
* Add one-liner lookup example

* Remove trailing whitespace

* Update plugins/lookup/tss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/tss.py

Co-authored-by: Amin Vakil <info@aminvakil.com>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Amin Vakil <info@aminvakil.com>
(cherry picked from commit 26757edfb2)

Co-authored-by: Sylvia van Os <sylvia@hackerchick.me>
2021-05-27 08:25:49 +02:00
patchback[bot]
301fcc3b7e influxdb_user: Fix bug introduced by PR 2499 (#2614) (#2640)
* Update influxdb_user.py

Fixed function name

* Create 2614-influxdb_user-fix-issue-introduced-in-PR#2499

Added changelog

* Rename 2614-influxdb_user-fix-issue-introduced-in-PR#2499 to 2614-influxdb_user-fix-issue-introduced-in-PR#2499.yml

Fixed extension

* Update changelogs/fragments/2614-influxdb_user-fix-issue-introduced-in-PR#2499.yml

Co-authored-by: Amin Vakil <info@aminvakil.com>

Co-authored-by: Amin Vakil <info@aminvakil.com>
(cherry picked from commit 4aa50962cb)

Co-authored-by: sgalea87 <43749726+sgalea87@users.noreply.github.com>
2021-05-27 08:23:21 +02:00
patchback[bot]
0f0e9b2dca Use become test framework for sudosu tests. (#2629) (#2631)
(cherry picked from commit 0b4a2bea01)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-05-26 10:48:33 +02:00
patchback[bot]
ed0636dc27 redis cache - better parsing of connection uri (#2579) (#2622)
* better parsing of connection uri

* added changelog fragment

* fixed tests for ansible 2.9

* Update tests/unit/plugins/cache/test_redis.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/unit/plugins/cache/test_redis.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Adjustments from PR

* Update test_redis.py

* Update test_redis.py

* Update plugins/cache/redis.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/cache/redis.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/unit/plugins/cache/test_redis.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 4764a5deba)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-26 10:00:02 +02:00
patchback[bot]
057321c6c6 Add CONTRIBUTING.md (#2602) (#2626)
* Initial file shamelessly copied from community.mysql

* Add some notes on pull requests

* Add CONTRIBUTING.md link to README.md

* Add quick-start development guide link

* Apply felixfontein's suggestions

Co-authored-by: Felix Fontein <felix@fontein.de>

* add note about rebasing and merge commits

Co-authored-by: Felix Fontein <felix@fontein.de>

* add note about easyfix and waiting_on_contributor tags

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit d0f8eac7fd)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2021-05-26 09:59:52 +02:00
patchback[bot]
1a4814de53 ini_file - added note in documentation for utf-8 bom (#2599) (#2620)
* added note in documentation for utf-8 bom

* Update plugins/modules/files/ini_file.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit aa74cf4d61)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-26 07:29:19 +02:00
patchback[bot]
89b67a014b jenkins_plugin: HTTP Error 405: Method Not Allowed on disable/enable plugin #2510 (#2511) (#2619)
* define POST method for pluginManager api requests

Jenkins makeEnable/makeDisable API requests require the POST method

* add changelog fragment

* fix my yoda lang thx to aminvakil

Co-authored-by: Amin Vakil <info@aminvakil.com>

* update changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Amin Vakil <info@aminvakil.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6df3685d42)

Co-authored-by: Alexander Moiseenko <brainsam@yandex.ru>
2021-05-26 07:22:03 +02:00
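The jenkins_plugin fix above switches the pluginManager makeEnable/makeDisable calls to POST, since Jenkins rejects GET on those endpoints with "405 Method Not Allowed". A hedged illustration of forcing POST with the standard library (URL is illustrative; no request is actually sent here):

```python
import urllib.request

url = 'https://jenkins.example.com/pluginManager/plugin/git/makeDisable'

# Supplying data (or method='POST') makes urllib issue a POST instead of a GET.
req = urllib.request.Request(url, data=b'', method='POST')
print(req.get_method())  # 'POST'
```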
patchback[bot]
57bfbdc407 Use str() to get exception message (#2590) (#2611)
(cherry picked from commit 63012eef82)

Co-authored-by: DasSkelett <dasskelett@gmail.com>
2021-05-25 13:59:54 +02:00
patchback[bot]
e19dffbf29 json_query, no more 'unknown type' errors (#2607) (#2613)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit d871399220)

Co-authored-by: Brian Coca <bcoca@users.noreply.github.com>
2021-05-25 13:59:42 +02:00
patchback[bot]
113e7cdfa0 rhsm_release: Fix the issue that rhsm_release module considers 8, 7Client and 7Workstation as invalid releases (#2571) (#2606)
* rhsm_release: Fix the issue that rhsm_release module considers 8, 7Client and 7Workstation as invalid releases.

* Fix the unit test error: the new release_matcher could pass a wider range of patterns, but that does not cause extra issues for the module as a whole.

* Submit the changelog fragment.

* Update changelogs/fragments/2571-rhsm_release-fix-release_matcher.yaml

Co-authored-by: Amin Vakil <info@aminvakil.com>

Co-authored-by: Amin Vakil <info@aminvakil.com>
(cherry picked from commit 593d622438)

Co-authored-by: Tong He <68936428+unnecessary-username@users.noreply.github.com>
2021-05-24 20:28:18 +00:00
patchback[bot]
c12be67a69 ini_file - opening file as utf-8-sig (#2578) (#2591)
* opening file as utf-8-sig

* added changelog fragment

* using io.open()

* Update tests/integration/targets/ini_file/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit cc293f90a2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-22 22:42:02 +02:00
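The ini_file change above reads files with the `utf-8-sig` codec: files saved by some Windows editors start with a UTF-8 byte-order mark (BOM), and `utf-8-sig` strips it when present while behaving like plain `utf-8` otherwise, so the file's first key parses correctly either way. A small sketch:

```python
# A BOM-prefixed ini file: the three leading bytes are the UTF-8 BOM.
raw_with_bom = b'\xef\xbb\xbf[section]\nkey = value\n'

text = raw_with_bom.decode('utf-8-sig')
print(text.startswith('['))  # True: the BOM is gone

# Plain UTF-8 input is unaffected:
print(b'[section]\n'.decode('utf-8-sig'))
```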
patchback[bot]
3a076fd585 Massive adjustment in integration tests for changed and failed (#2577) (#2584)
* Replaced ".changed ==" with "is [not] changed". Same for failed

* Mr Quote refused to go

(cherry picked from commit d7e55db99b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-22 14:53:16 +02:00
patchback[bot]
4ef05a6483 ovirt4 inventory script (#2461) (#2583)
* update configparser

* changelog

* handle multiple python version

* Update changelogs/fragments/2461-ovirt4-fix-configparser.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update ovirt4.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3100c32a00)

Co-authored-by: abikouo <79859644+abikouo@users.noreply.github.com>
2021-05-22 14:36:50 +02:00
patchback[bot]
936dd28395 java_cert - fix incorrect certificate alias on pkcs12 import (#2560) (#2581)
* fix wrong certificate alias used when importing pkcs12; modify error output, as stdout is more relevant than stderr

* add changelog fragment

* fix changelog fragment

(cherry picked from commit 8f083d5d85)

Co-authored-by: absynth76 <58172580+absynth76@users.noreply.github.com>
2021-05-22 13:46:32 +02:00
patchback[bot]
e3b47899c5 Add missing author name (#2570) (#2576)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 852e240525)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-05-21 19:44:08 +02:00
patchback[bot]
fd8193e0bd Add comment_visibility parameter for comment operation for jira module (#2556) (#2566)
* Add comment_visibility parameter for comment operation for jira module

Co-authored-by: felixfontein <felix@fontein.de>

* Update plugins/modules/web_infrastructure/jira.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/web_infrastructure/jira.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* addressed pep8 E711

* Added missing parameter.

* params is not in use anymore.

* It appears other modules are using options, where in documentation they use suboptions. Inconsistency?

* adjusted indentation

* tweaked suboptions, fixed documentation

* Added fragment

* Update changelogs/fragments/2556-add-comment_visibility-parameter-for-comment-operation-of-jira-module.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/web_infrastructure/jira.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: felixfontein <felix@fontein.de>
(cherry picked from commit 7a169af053)

Co-authored-by: momcilo78 <momcilo@majic.rs>
2021-05-20 23:18:41 +02:00
patchback[bot]
fa477ebb35 ModuleHelper: CmdMixin custom function for processing cmd results (#2564) (#2565)
* MH: custom function for processing cmd results

* added changelog fragment

* removed case of process_output being a str

(cherry picked from commit 1403f5edcc)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-20 20:11:08 +02:00
patchback[bot]
43e766dd44 removed supporting code for testing module "nuage" - no longer exists here (#2559) (#2563)
(cherry picked from commit 452a185a23)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-05-19 22:23:36 +02:00
Felix Fontein
b25e0f360c Next release will be 3.2.0. 2021-05-18 15:00:48 +02:00
187 changed files with 4267 additions and 830 deletions

.github/BOTMETA.yml

@@ -1,17 +1,12 @@
automerge: true
files:
plugins/:
supershipit: aminvakil russoz
changelogs/fragments/:
support: community
$actions:
labels: action
$actions/aireos.py:
labels: aireos cisco networking
$actions/ironware.py:
maintainers: paulquack
labels: ironware networking
$actions/shutdown.py:
$actions/system/iptables_state.py:
maintainers: quidame
$actions/system/shutdown.py:
maintainers: nitzmahone samdoran aminvakil
$becomes/:
labels: become
@@ -120,6 +115,8 @@ files:
$lookups/nios:
maintainers: $team_networking sganesh-infoblox
labels: infoblox networking
$lookups/random_string.py:
maintainers: Akasurde
$module_utils/:
labels: module_utils
$module_utils/gitlab.py:
@@ -347,6 +344,8 @@ files:
$modules/database/mssql/mssql_db.py:
maintainers: vedit Jmainguy kenichi-ogawa-1988
labels: mssql_db
$modules/database/saphana/hana_query.py:
maintainers: rainerleber
$modules/database/vertica/:
maintainers: dareko
$modules/files/archive.py:
@@ -650,6 +649,9 @@ files:
maintainers: elasticdog indrajitr tchernomax
labels: pacman
ignore: elasticdog
$modules/packaging/os/pacman_key.py:
maintainers: grawlinson
labels: pacman
$modules/packaging/os/pkgin.py:
maintainers: $team_solaris L2G jasperla szinck martinm82
labels: pkgin solaris
@@ -715,8 +717,9 @@ files:
labels: zypper
ignore: dirtyharrycallahan robinro
$modules/packaging/os/zypper_repository.py:
maintainers: $team_suse matze
maintainers: $team_suse
labels: zypper
ignore: matze
$modules/remote_management/cobbler/:
maintainers: dagwieers
$modules/remote_management/hpilo/:
@@ -845,6 +848,8 @@ files:
labels: interfaces_file
$modules/system/iptables_state.py:
maintainers: quidame
$modules/system/shutdown.py:
maintainers: nitzmahone samdoran aminvakil
$modules/system/java_cert.py:
maintainers: haad absynth76
$modules/system/java_keystore.py:


@@ -6,6 +6,100 @@ Community General Release Notes
This changelog describes changes after version 2.0.0.
v3.2.0
======
Release Summary
---------------
Regular bugfix and feature release.
Minor Changes
-------------
- Remove unnecessary ``__init__.py`` files from ``plugins/`` (https://github.com/ansible-collections/community.general/pull/2632).
- archive - added ``exclusion_patterns`` option to exclude files or subdirectories from archives (https://github.com/ansible-collections/community.general/pull/2616).
- cloud_init_data_facts - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- composer - add ``composer_executable`` option (https://github.com/ansible-collections/community.general/issues/2649).
- flatpak - add ``no_dependencies`` parameter (https://github.com/ansible/ansible/pull/55452, https://github.com/ansible-collections/community.general/pull/2751).
- ini_file - opening file with encoding ``utf-8-sig`` (https://github.com/ansible-collections/community.general/issues/2189).
- jira - add comment visibility parameter for comment operation (https://github.com/ansible-collections/community.general/pull/2556).
- maven_artifact - added ``checksum_alg`` option to support SHA1 checksums in order to support FIPS systems (https://github.com/ansible-collections/community.general/pull/2662).
- module_helper module utils - method ``CmdMixin.run_command()`` now accepts ``process_output`` specifying a function to process the outcome of the underlying ``module.run_command()`` (https://github.com/ansible-collections/community.general/pull/2564).
- nmcli - add new options to ignore automatic DNS servers and gateways (https://github.com/ansible-collections/community.general/issues/1087).
- onepassword lookup plugin - add ``domain`` option (https://github.com/ansible-collections/community.general/issues/2734).
- open_iscsi - add ``auto_portal_startup`` parameter to allow ``node.startup`` setting per portal (https://github.com/ansible-collections/community.general/issues/2685).
- open_iscsi - also consider ``portal`` and ``port`` to check if already logged in or not (https://github.com/ansible-collections/community.general/issues/2683).
- proxmox_group_info - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- proxmox_kvm - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- rhevm - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- serverless - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- stacki_host - minor refactoring (https://github.com/ansible-collections/community.general/pull/2681).
- terraform - add option ``overwrite_init`` to skip init if exists (https://github.com/ansible-collections/community.general/pull/2573).
- terraform - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
Deprecated Features
-------------------
- All inventory and vault scripts will be removed from community.general in version 4.0.0. If you are referencing them, please update your references to the new `contrib-scripts GitHub repository <https://github.com/ansible-community/contrib-scripts>`_ so your workflow will not break once community.general 4.0.0 is released (https://github.com/ansible-collections/community.general/pull/2697).
Bugfixes
--------
- consul_kv lookup plugin - allow to set ``recurse``, ``index``, ``datacenter`` and ``token`` as keyword arguments (https://github.com/ansible-collections/community.general/issues/2124).
- cpanm - also use ``LC_ALL`` to enforce locale choice (https://github.com/ansible-collections/community.general/pull/2731).
- influxdb_user - fix bug which removed current privileges instead of appending them to existing ones (https://github.com/ansible-collections/community.general/issues/2609, https://github.com/ansible-collections/community.general/pull/2614).
- iptables_state - call ``async_status`` action plugin rather than its module (https://github.com/ansible-collections/community.general/issues/2700).
- iptables_state - fix a broken query of ``async_status`` result with current ansible-core development version (https://github.com/ansible-collections/community.general/issues/2627, https://github.com/ansible-collections/community.general/pull/2671).
- java_cert - fix issue with incorrect alias used on PKCS#12 certificate import (https://github.com/ansible-collections/community.general/pull/2560).
- jenkins_plugin - use POST method for sending request to jenkins API when ``state`` option is one of ``enabled``, ``disabled``, ``pinned``, ``unpinned``, or ``absent`` (https://github.com/ansible-collections/community.general/issues/2510).
- json_query filter plugin - avoid 'unknown type' errors for more Ansible internal types (https://github.com/ansible-collections/community.general/pull/2607).
- keycloak_realm - ``ssl_required`` changed from a boolean type to accept the strings ``none``, ``external`` or ``all``. This is not a breaking change since the module always failed when a boolean was supplied (https://github.com/ansible-collections/community.general/pull/2693).
- keycloak_realm - remove warning that ``reset_password_allowed`` needs to be marked as ``no_log`` (https://github.com/ansible-collections/community.general/pull/2694).
- module_helper module utils - ``CmdMixin`` must also use ``LC_ALL`` to enforce locale choice (https://github.com/ansible-collections/community.general/pull/2731).
- netcup_dns - use ``str(ex)`` instead of unreliable ``ex.message`` in exception handling to fix ``AttributeError`` in error cases (https://github.com/ansible-collections/community.general/pull/2590).
- ovirt4 inventory script - improve configparser creation to avoid crashes for options without values (https://github.com/ansible-collections/community.general/issues/674).
- proxmox_kvm - fixed ``vmid`` return value when VM with ``name`` already exists (https://github.com/ansible-collections/community.general/issues/2648).
- redis cache - improved connection string parsing (https://github.com/ansible-collections/community.general/issues/497).
- rhsm_release - fix the issue that module considers 8, 7Client and 7Workstation as invalid releases (https://github.com/ansible-collections/community.general/pull/2571).
- ssh_config - reduce stormssh searches based on host (https://github.com/ansible-collections/community.general/pull/2568/).
- stacki_host - when adding a new server, ``rack`` and ``rank`` must be passed, and network parameters are optional (https://github.com/ansible-collections/community.general/pull/2681).
- terraform - ensure the workspace is set back to its previous value when the apply fails (https://github.com/ansible-collections/community.general/pull/2634).
- xfconf - also use ``LC_ALL`` to enforce locale choice (https://github.com/ansible-collections/community.general/issues/2715).
- zypper_repository - fix idempotency on adding repository with ``$releasever`` and ``$basearch`` variables (https://github.com/ansible-collections/community.general/issues/1985).
New Plugins
-----------
Lookup
~~~~~~
- random_string - Generates random string
New Modules
-----------
Database
~~~~~~~~
saphana
^^^^^^^
- hana_query - Execute SQL on HANA
Files
~~~~~
- sapcar_extract - Manages SAP SAPCAR archives
Packaging
~~~~~~~~~
os
^^
- pacman_key - Manage pacman's list of trusted keys
v3.1.0
======

CONTRIBUTING.md

@@ -0,0 +1,32 @@
# Contributing
We follow [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) in all our contributions and interactions within this repository.
If you are a committer, also refer to the [collection's committer guidelines](https://github.com/ansible-collections/community.general/blob/main/commit-rights.md).
## Issue tracker
Whether you are looking for an opportunity to contribute or you found a bug and already know how to solve it, please go to the [issue tracker](https://github.com/ansible-collections/community.general/issues).
There you can find feature ideas to implement and reports about bugs to solve, or you can submit an issue to discuss your idea before implementing it, which can help you choose the right direction at the beginning of your work and potentially save a lot of time and effort.
Also, somebody may already have started discussing or working on the same or a similar idea,
so you can cooperate to create a better solution together.
* If you are interested in starting with an easy issue, look for [issues with an `easyfix` label](https://github.com/ansible-collections/community.general/labels/easyfix).
* Often issues that are waiting for contributors to pick up have [the `waiting_on_contributor` label](https://github.com/ansible-collections/community.general/labels/waiting_on_contributor).
## Open pull requests
Look through currently [open pull requests](https://github.com/ansible-collections/community.general/pulls).
You can help by reviewing them. Reviews help move pull requests to merge state. Some good pull requests cannot be merged only due to a lack of reviews. And it is always worth saying that good reviews are often more valuable than pull requests themselves.
Note that reviewing does not only mean code review, but also offering comments on new interfaces added to existing plugins/modules, interfaces of new plugins/modules, improving language (not everyone is a native English speaker), or testing bugfixes and new features!
Also, consider taking up a valuable, reviewed, but abandoned pull request: after politely asking the original author, you could complete it yourself.
* Try committing your changes with an informative but short commit message.
* All commits of a pull request branch will eventually be squashed into one commit at merge time. That does not mean you must have only one commit on your pull request, though!
* Please try not to force-push if it is not needed, so reviewers and other users looking at your pull request later can see the pull request commit history.
* Do not add merge commits to your PR. The bot will complain and you will have to rebase ([instructions for rebasing](https://docs.ansible.com/ansible/latest/dev_guide/developing_rebasing.html)) to remove them before your PR can be merged. To avoid that git automatically does merges during pulls, you can configure it to do rebases instead by running `git config pull.rebase true` inside the repository checkout.
You can also read [our Quick-start development guide](https://github.com/ansible/community-docs/blob/main/create_pr_quick_start_guide.rst).
If you find any inconsistencies or places in this document which can be improved, feel free to raise an issue or pull request to fix it.


@@ -50,6 +50,8 @@ export COLLECTIONS_PATH=$(pwd)/collections:$COLLECTIONS_PATH
You can find more information in the [developer guide for collections](https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html#contributing-to-collections), and in the [Ansible Community Guide](https://docs.ansible.com/ansible/latest/community/index.html).
Also for some notes specific to this collection see [our CONTRIBUTING documentation](https://github.com/ansible-collections/community.general/blob/main/CONTRIBUTING.md).
### Running tests
See [here](https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html#testing-collections).
@@ -58,10 +60,10 @@ See [here](https://docs.ansible.com/ansible/devel/dev_guide/developing_collectio
We have a dedicated Working Group for Ansible development.
-You can find other people interested on the following Freenode IRC channels -
+You can find other people interested on the following [Libera.chat](https://libera.chat/) IRC channels -
 - `#ansible` - For general use questions and support.
-- `#ansible-devel` - For discussions on developer topics and code related to features or bugs.
-- `#ansible-community` - For discussions on community topics and community meetings.
+- `#ansible-devel` - For discussions on developer topics and code related to features or bugs in ansible-core.
+- `#ansible-community` - For discussions on community topics and community meetings, and for general development questions for community collections.
For more information about communities, meetings and agendas see [Community Wiki](https://github.com/ansible/community/wiki/Community).


@@ -1156,3 +1156,133 @@ releases:
name: random_pet
namespace: null
release_date: '2021-05-18'
3.2.0:
changes:
bugfixes:
- consul_kv lookup plugin - allow to set ``recurse``, ``index``, ``datacenter``
and ``token`` as keyword arguments (https://github.com/ansible-collections/community.general/issues/2124).
- cpanm - also use ``LC_ALL`` to enforce locale choice (https://github.com/ansible-collections/community.general/pull/2731).
- influxdb_user - fix bug which removed current privileges instead of appending
them to existing ones (https://github.com/ansible-collections/community.general/issues/2609,
https://github.com/ansible-collections/community.general/pull/2614).
- iptables_state - call ``async_status`` action plugin rather than its module
(https://github.com/ansible-collections/community.general/issues/2700).
- iptables_state - fix a broken query of ``async_status`` result with current
ansible-core development version (https://github.com/ansible-collections/community.general/issues/2627,
https://github.com/ansible-collections/community.general/pull/2671).
- java_cert - fix issue with incorrect alias used on PKCS#12 certificate import
(https://github.com/ansible-collections/community.general/pull/2560).
- jenkins_plugin - use POST method for sending request to jenkins API when ``state``
option is one of ``enabled``, ``disabled``, ``pinned``, ``unpinned``, or ``absent``
(https://github.com/ansible-collections/community.general/issues/2510).
- json_query filter plugin - avoid 'unknown type' errors for more Ansible internal
types (https://github.com/ansible-collections/community.general/pull/2607).
- keycloak_realm - ``ssl_required`` changed from a boolean type to accept the
strings ``none``, ``external`` or ``all``. This is not a breaking change since
the module always failed when a boolean was supplied (https://github.com/ansible-collections/community.general/pull/2693).
- keycloak_realm - remove warning that ``reset_password_allowed`` needs to be
marked as ``no_log`` (https://github.com/ansible-collections/community.general/pull/2694).
- module_helper module utils - ``CmdMixin`` must also use ``LC_ALL`` to enforce
locale choice (https://github.com/ansible-collections/community.general/pull/2731).
- netcup_dns - use ``str(ex)`` instead of unreliable ``ex.message`` in exception
handling to fix ``AttributeError`` in error cases (https://github.com/ansible-collections/community.general/pull/2590).
- ovirt4 inventory script - improve configparser creation to avoid crashes for
options without values (https://github.com/ansible-collections/community.general/issues/674).
- proxmox_kvm - fixed ``vmid`` return value when VM with ``name`` already exists
(https://github.com/ansible-collections/community.general/issues/2648).
- redis cache - improved connection string parsing (https://github.com/ansible-collections/community.general/issues/497).
- rhsm_release - fix the issue that module considers 8, 7Client and 7Workstation
as invalid releases (https://github.com/ansible-collections/community.general/pull/2571).
- ssh_config - reduce stormssh searches based on host (https://github.com/ansible-collections/community.general/pull/2568/).
- stacki_host - when adding a new server, ``rack`` and ``rank`` must be passed,
and network parameters are optional (https://github.com/ansible-collections/community.general/pull/2681).
- terraform - ensure the workspace is set back to its previous value when the
apply fails (https://github.com/ansible-collections/community.general/pull/2634).
- xfconf - also use ``LC_ALL`` to enforce locale choice (https://github.com/ansible-collections/community.general/issues/2715).
- zypper_repository - fix idempotency on adding repository with ``$releasever``
and ``$basearch`` variables (https://github.com/ansible-collections/community.general/issues/1985).
deprecated_features:
- All inventory and vault scripts will be removed from community.general in
version 4.0.0. If you are referencing them, please update your references
to the new `contrib-scripts GitHub repository <https://github.com/ansible-community/contrib-scripts>`_
so your workflow will not break once community.general 4.0.0 is released (https://github.com/ansible-collections/community.general/pull/2697).
minor_changes:
- Remove unnecessary ``__init__.py`` files from ``plugins/`` (https://github.com/ansible-collections/community.general/pull/2632).
- archive - added ``exclusion_patterns`` option to exclude files or subdirectories
from archives (https://github.com/ansible-collections/community.general/pull/2616).
- cloud_init_data_facts - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- composer - add ``composer_executable`` option (https://github.com/ansible-collections/community.general/issues/2649).
- flatpak - add ``no_dependencies`` parameter (https://github.com/ansible/ansible/pull/55452,
https://github.com/ansible-collections/community.general/pull/2751).
- ini_file - opening file with encoding ``utf-8-sig`` (https://github.com/ansible-collections/community.general/issues/2189).
- jira - add comment visibility parameter for comment operation (https://github.com/ansible-collections/community.general/pull/2556).
- maven_artifact - added ``checksum_alg`` option to support SHA1 checksums in
order to support FIPS systems (https://github.com/ansible-collections/community.general/pull/2662).
- module_helper module utils - method ``CmdMixin.run_command()`` now accepts
``process_output`` specifying a function to process the outcome of the underlying
``module.run_command()`` (https://github.com/ansible-collections/community.general/pull/2564).
- nmcli - add new options to ignore automatic DNS servers and gateways (https://github.com/ansible-collections/community.general/issues/1087).
- onepassword lookup plugin - add ``domain`` option (https://github.com/ansible-collections/community.general/issues/2734).
- open_iscsi - add ``auto_portal_startup`` parameter to allow ``node.startup``
setting per portal (https://github.com/ansible-collections/community.general/issues/2685).
- open_iscsi - also consider ``portal`` and ``port`` to check if already logged
in or not (https://github.com/ansible-collections/community.general/issues/2683).
- proxmox_group_info - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- proxmox_kvm - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- rhevm - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- serverless - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
- stacki_host - minor refactoring (https://github.com/ansible-collections/community.general/pull/2681).
- terraform - add option ``overwrite_init`` to skip init if exists (https://github.com/ansible-collections/community.general/pull/2573).
- terraform - minor refactor (https://github.com/ansible-collections/community.general/pull/2557).
release_summary: Regular bugfix and feature release.
fragments:
- 2126-consul_kv-pass-token.yml
- 2461-ovirt4-fix-configparser.yml
- 2510-jenkins_plugin_use_post_method.yml
- 2556-add-comment_visibility-parameter-for-comment-operation-of-jira-module.yml
- 2557-cloud-misc-refactor.yml
- 2560-java_cert-pkcs12-alias-bugfix.yml
- 2564-mh-cmd-process-output.yml
- 2568-ssh_config-reduce-stormssh-searches-based-on-host.yml
- 2571-rhsm_release-fix-release_matcher.yaml
- 2573-terraform-overwrite-init.yml
- 2578-ini-file-utf8-bom.yml
- 2579-redis-cache-ipv6.yml
- 2590-netcup_dns-exception-no-message-attr.yml
- 2614-influxdb_user-fix-issue-introduced-in-PR#2499.yml
- 2616-archive-exclusion_patterns-option.yml
- 2632-cleanup.yml
- 2634-terraform-switch-workspace.yml
- 2635-nmcli-add-ignore-auto-arguments.yml
- 2648-proxmox_kvm-fix-vmid-return-value.yml
- 2650-composer-add_composer_executable.yml
- 2661-maven_artifact-add-sha1-option.yml
- 2671-fix-broken-query-of-async_status-result.yml
- 2681-stacki-host-bugfix.yml
- 2684-open_iscsi-single-target-multiple-portal-overrides.yml
- 2711-fix-iptables_state-2700-async_status-call.yml
- 2722-zypper_repository-fix_idempotency_on_adding_repo_with_releasever.yml
- 2731-mh-cmd-locale.yml
- 2735-onepassword-add_domain_option.yml
- 2751-flatpak-no_dependencies.yml
- 3.2.0.yml
- json_query_more_types.yml
- keycloak-realm-no-log-password-reset.yml
- keycloak_realm_ssl_required.yml
- script-removal.yml
modules:
- description: Execute SQL on HANA
name: hana_query
namespace: database.saphana
- description: Manage pacman's list of trusted keys
name: pacman_key
namespace: packaging.os
- description: Manages SAP SAPCAR archives
name: sapcar_extract
namespace: files
plugins:
lookup:
- description: Generates random string
name: random_string
namespace: null
release_date: '2021-06-08'


@@ -67,6 +67,8 @@ Individuals who have been asked to become a part of this group have generally be
| Name | GitHub ID | IRC Nick | Other |
| ------------------- | -------------------- | ------------------ | -------------------- |
| Alexei Znamensky | russoz | russoz | |
| Amin Vakil | aminvakil | aminvakil | |
| Andrew Klychkov | andersson007 | andersson007_ | |
| Felix Fontein | felixfontein | felixfontein | |
| John R Barker | gundalow | gundalow | |


@@ -0,0 +1,5 @@
---
sections:
- title: Guides
toctree:
- filter_guide


@@ -0,0 +1,753 @@
.. _ansible_collections.community.general.docsite.filter_guide:
community.general Filter Guide
==============================
The :ref:`community.general collection <plugins_in_community.general>` offers several useful filter plugins.
.. contents:: Topics
Paths
-----
The ``path_join`` filter has been added in ansible-base 2.10. If you want to use this filter, but also need to support Ansible 2.9, you can use ``community.general``'s ``path_join`` shim, ``community.general.path_join``. This filter redirects to ``path_join`` for ansible-base 2.10 and ansible-core 2.11 or newer, and re-implements the filter for Ansible 2.9.
.. code-block:: yaml+jinja
# ansible-base 2.10 or newer:
path: {{ ('/etc', path, 'subdir', file) | path_join }}
# Also works with Ansible 2.9:
path: {{ ('/etc', path, 'subdir', file) | community.general.path_join }}
.. versionadded:: 3.0.0
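For reference, the joining behavior these examples rely on is the same idea as Python's ``os.path.join``: components are joined with the platform's separator. A plain-Python illustration (not the filter's actual code; the paths are made up):

```python
import os.path

# Join path components with the platform separator, as the filter does.
# On a POSIX control node this yields '/'-separated paths.
parts = ('/etc', 'myservice', 'subdir', 'config.yml')  # hypothetical paths
print(os.path.join(*parts))  # /etc/myservice/subdir/config.yml
```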
Abstract transformations
------------------------
Dictionaries
^^^^^^^^^^^^
You can use the ``dict_kv`` filter to create a single-entry dictionary with ``value | community.general.dict_kv(key)``:
.. code-block:: yaml+jinja
- name: Create a single-entry dictionary
debug:
msg: "{{ myvar | community.general.dict_kv('thatsmyvar') }}"
vars:
myvar: myvalue
- name: Create a list of dictionaries where the 'server' field is taken from a list
debug:
msg: >-
{{ myservers | map('community.general.dict_kv', 'server')
| map('combine', common_config) }}
vars:
common_config:
type: host
database: all
myservers:
- server1
- server2
This produces:
.. code-block:: ansible-output
TASK [Create a single-entry dictionary] **************************************************
ok: [localhost] => {
"msg": {
"thatsmyvar": "myvalue"
}
}
TASK [Create a list of dictionaries where the 'server' field is taken from a list] *******
ok: [localhost] => {
"msg": [
{
"database": "all",
"server": "server1",
"type": "host"
},
{
"database": "all",
"server": "server2",
"type": "host"
}
]
}
.. versionadded:: 2.0.0
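Conceptually, ``dict_kv`` just wraps a value in a single-entry dictionary. The two tasks above can be mirrored in plain Python (an illustration, not the plugin's code):

```python
def dict_kv(value, key):
    # value | community.general.dict_kv(key)  ->  {key: value}
    return {key: value}

# Single-entry dictionary:
print(dict_kv("myvalue", "thatsmyvar"))  # {'thatsmyvar': 'myvalue'}

# map('community.general.dict_kv', 'server') | map('combine', common_config):
common_config = {"type": "host", "database": "all"}
myservers = ["server1", "server2"]
print([{**dict_kv(server, "server"), **common_config} for server in myservers])
```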
If you need to convert a list of key-value pairs to a dictionary, you can use the ``dict`` function. Unfortunately, this function cannot be used with ``map``. For this, the ``community.general.dict`` filter can be used:
.. code-block:: yaml+jinja
- name: Create a dictionary with the dict function
debug:
msg: "{{ dict([[1, 2], ['a', 'b']]) }}"
- name: Create a dictionary with the community.general.dict filter
debug:
msg: "{{ [[1, 2], ['a', 'b']] | community.general.dict }}"
- name: Create a list of dictionaries with map and the community.general.dict filter
debug:
msg: >-
{{ values | map('zip', ['k1', 'k2', 'k3'])
| map('map', 'reverse')
| map('community.general.dict') }}
vars:
values:
- - foo
- 23
- a
- - bar
- 42
- b
This produces:
.. code-block:: ansible-output
TASK [Create a dictionary with the dict function] ****************************************
ok: [localhost] => {
"msg": {
"1": 2,
"a": "b"
}
}
TASK [Create a dictionary with the community.general.dict filter] ************************
ok: [localhost] => {
"msg": {
"1": 2,
"a": "b"
}
}
TASK [Create a list of dictionaries with map and the community.general.dict filter] ******
ok: [localhost] => {
"msg": [
{
"k1": "foo",
"k2": 23,
"k3": "a"
},
{
"k1": "bar",
"k2": 42,
"k3": "b"
}
]
}
.. versionadded:: 3.0.0
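The ``map``-based task above can also be mirrored in plain Python; note that ``dict(zip(keys, row))`` does in one step what the Jinja2 pipeline achieves with ``zip``, ``reverse``, and ``community.general.dict`` (illustration only):

```python
keys = ["k1", "k2", "k3"]
values = [["foo", 23, "a"], ["bar", 42, "b"]]

# Pair each row of values with the key names, then build a dictionary per row.
result = [dict(zip(keys, row)) for row in values]
print(result)  # [{'k1': 'foo', 'k2': 23, 'k3': 'a'}, {'k1': 'bar', 'k2': 42, 'k3': 'b'}]
```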
Grouping
^^^^^^^^
If you have a list of dictionaries, the Jinja2 ``groupby`` filter allows you to group the list by an attribute. This results in a list of ``(grouper, list)`` namedtuples, where ``list`` contains all dictionaries in which the selected attribute equals ``grouper``. If you know that for every ``grouper`` there will be at most one entry in that list, you can use the ``community.general.groupby_as_dict`` filter to convert the original list into a dictionary which maps ``grouper`` to the corresponding dictionary.
One example is ``ansible_facts.mounts``, which is a list of dictionaries where each has one ``device`` element to indicate the device which is mounted. Therefore, ``ansible_facts.mounts | community.general.groupby_as_dict('device')`` is a dictionary mapping a device to the mount information:
.. code-block:: yaml+jinja
- name: Output mount facts grouped by device name
debug:
var: ansible_facts.mounts | community.general.groupby_as_dict('device')
- name: Output mount facts grouped by mount point
debug:
var: ansible_facts.mounts | community.general.groupby_as_dict('mount')
This produces:
.. code-block:: ansible-output
TASK [Output mount facts grouped by device name] ******************************************
ok: [localhost] => {
"ansible_facts.mounts | community.general.groupby_as_dict('device')": {
"/dev/sda1": {
"block_available": 2000,
"block_size": 4096,
"block_total": 2345,
"block_used": 345,
"device": "/dev/sda1",
"fstype": "ext4",
"inode_available": 500,
"inode_total": 512,
"inode_used": 12,
"mount": "/boot",
"options": "rw,relatime,data=ordered",
"size_available": 56821,
"size_total": 543210,
"uuid": "ab31cade-d9c1-484d-8482-8a4cbee5241a"
},
"/dev/sda2": {
"block_available": 1234,
"block_size": 4096,
"block_total": 12345,
"block_used": 11111,
"device": "/dev/sda2",
"fstype": "ext4",
"inode_available": 1111,
"inode_total": 1234,
"inode_used": 123,
"mount": "/",
"options": "rw,relatime",
"size_available": 42143,
"size_total": 543210,
"uuid": "abcdef01-2345-6789-0abc-def012345678"
}
}
}
TASK [Output mount facts grouped by mount point] ******************************************
ok: [localhost] => {
"ansible_facts.mounts | community.general.groupby_as_dict('mount')": {
"/": {
"block_available": 1234,
"block_size": 4096,
"block_total": 12345,
"block_used": 11111,
"device": "/dev/sda2",
"fstype": "ext4",
"inode_available": 1111,
"inode_total": 1234,
"inode_used": 123,
"mount": "/",
"options": "rw,relatime",
"size_available": 42143,
"size_total": 543210,
"uuid": "bdf50b7d-4859-40af-8665-c637ee7a7808"
},
"/boot": {
"block_available": 2000,
"block_size": 4096,
"block_total": 2345,
"block_used": 345,
"device": "/dev/sda1",
"fstype": "ext4",
"inode_available": 500,
"inode_total": 512,
"inode_used": 12,
"mount": "/boot",
"options": "rw,relatime,data=ordered",
"size_available": 56821,
"size_total": 543210,
"uuid": "ab31cade-d9c1-484d-8482-8a4cbee5241a"
}
}
}
.. versionadded:: 3.0.0
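In plain Python, the behavior of ``groupby_as_dict`` can be sketched as follows (a simplified illustration with abbreviated, made-up mount data, not the plugin's implementation):

```python
def groupby_as_dict(dictionaries, attribute):
    # Map each dictionary's attribute value to the dictionary itself;
    # the attribute value must be unique across the whole list.
    result = {}
    for entry in dictionaries:
        key = entry[attribute]
        if key in result:
            raise ValueError("duplicate attribute value: %r" % (key,))
        result[key] = entry
    return result

mounts = [  # abbreviated, made-up facts
    {"device": "/dev/sda1", "mount": "/boot", "fstype": "ext4"},
    {"device": "/dev/sda2", "mount": "/", "fstype": "ext4"},
]
print(groupby_as_dict(mounts, "device")["/dev/sda1"]["mount"])  # /boot
```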
Merging lists of dictionaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have two lists of dictionaries and want to combine them into a list of merged dictionaries, where two dictionaries are merged if they coincide in one attribute, you can use the ``lists_mergeby`` filter.
.. code-block:: yaml+jinja
- name: Merge two lists by common attribute 'name'
debug:
var: list1 | community.general.lists_mergeby(list2, 'name')
vars:
list1:
- name: foo
extra: true
- name: bar
extra: false
- name: meh
extra: true
list2:
- name: foo
path: /foo
- name: baz
path: /bazzz
This produces:
.. code-block:: ansible-output
TASK [Merge two lists by common attribute 'name'] ****************************************
ok: [localhost] => {
"list1 | community.general.lists_mergeby(list2, 'name')": [
{
"extra": false,
"name": "bar"
},
{
"name": "baz",
"path": "/bazzz"
},
{
"extra": true,
"name": "foo",
"path": "/foo"
},
{
"extra": true,
"name": "meh"
}
]
}
.. versionadded:: 2.0.0
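A plain-Python sketch of this merge (an illustration, not the filter's implementation; it reproduces the output above, including the sort by the merge attribute):

```python
def lists_mergeby(list1, list2, attribute):
    # Collect entries by their merge attribute; entries from list2
    # override matching keys from list1. Output is sorted by attribute.
    merged = {}
    for item in list1 + list2:
        merged.setdefault(item[attribute], {}).update(item)
    return [merged[key] for key in sorted(merged)]

list1 = [{"name": "foo", "extra": True}, {"name": "bar", "extra": False},
         {"name": "meh", "extra": True}]
list2 = [{"name": "foo", "path": "/foo"}, {"name": "baz", "path": "/bazzz"}]
print(lists_mergeby(list1, list2, "name"))
```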
Working with times
------------------
The ``to_time_unit`` filter allows you to convert times from a human-readable string to a unit. For example, ``'4h 30min 12second' | community.general.to_time_unit('hour')`` gives the number of hours that corresponds to 4 hours, 30 minutes and 12 seconds.
There are shorthands to directly convert to various units, like ``to_hours``, ``to_minutes``, ``to_seconds``, and so on. The following table lists all units that can be used:
.. list-table:: Units
:widths: 25 25 25 25
:header-rows: 1
* - Unit name
- Unit value in seconds
- Unit strings for filter
- Shorthand filter
* - Millisecond
- 1/1000 second
- ``ms``, ``millisecond``, ``milliseconds``, ``msec``, ``msecs``, ``msecond``, ``mseconds``
- ``to_milliseconds``
* - Second
- 1 second
- ``s``, ``sec``, ``secs``, ``second``, ``seconds``
- ``to_seconds``
* - Minute
- 60 seconds
- ``m``, ``min``, ``mins``, ``minute``, ``minutes``
- ``to_minutes``
* - Hour
- 60*60 seconds
- ``h``, ``hour``, ``hours``
- ``to_hours``
* - Day
- 24*60*60 seconds
- ``d``, ``day``, ``days``
- ``to_days``
* - Week
- 7*24*60*60 seconds
- ``w``, ``week``, ``weeks``
- ``to_weeks``
* - Month
- 30*24*60*60 seconds
- ``mo``, ``month``, ``months``
- ``to_months``
* - Year
- 365*24*60*60 seconds
- ``y``, ``year``, ``years``
- ``to_years``
Note that months and years use a simplified representation: a month is 30 days, and a year is 365 days. If you need different definitions of months or years, you can pass them as keyword arguments. For example, if you want a year to be 365.25 days and a month to be 30.5 days, you can write ``'11months 4days' | community.general.to_years(year=365.25, month=30.5)``. These keyword arguments can be passed to ``to_time_unit`` and to all shorthand filters.
.. code-block:: yaml+jinja
- name: Convert string to seconds
debug:
msg: "{{ '30h 20m 10s 123ms' | community.general.to_time_unit('seconds') }}"
- name: Convert string to hours
debug:
msg: "{{ '30h 20m 10s 123ms' | community.general.to_hours }}"
- name: Convert string to years (using 365.25 days == 1 year)
debug:
msg: "{{ '400d 15h' | community.general.to_years(year=365.25) }}"
This produces:
.. code-block:: ansible-output
TASK [Convert string to seconds] **********************************************************
ok: [localhost] => {
"msg": "109210.123"
}
TASK [Convert string to hours] ************************************************************
ok: [localhost] => {
"msg": "30.336145277778"
}
TASK [Convert string to years (using 365.25 days == 1 year)] ******************************
ok: [localhost] => {
"msg": "1.096851471595"
}
.. versionadded:: 0.2.0
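The conversion can be approximated in plain Python. This sketch only recognizes the unit suffixes from the table above and is not the collection's actual implementation (which also validates its input):

```python
import re

# Unit values in seconds, as listed in the table above
# (simplified month and year).
UNITS = {
    "ms": 0.001, "s": 1, "m": 60, "h": 3600,
    "d": 86400, "w": 604800, "mo": 2592000, "y": 31536000,
}

def to_time_unit(value, unit="s"):
    """Parse a string like '30h 20m 10s 123ms' and return the total
    duration expressed in the requested unit."""
    total = 0.0
    for number, suffix in re.findall(r"([0-9.]+)\s*([a-z]+)", value.lower()):
        total += float(number) * UNITS[suffix]
    return total / UNITS[unit]

print(to_time_unit("30h 20m 10s 123ms", "s"))  # ~109210.123
print(to_time_unit("1h 30m", "h"))             # ~1.5
```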
Working with versions
---------------------
If you need to sort a list of version numbers, the Jinja ``sort`` filter is problematic. Since it sorts lexicographically, ``2.10`` will come before ``2.9``. To treat version numbers correctly, you can use the ``version_sort`` filter:
.. code-block:: yaml+jinja
- name: Sort list by version number
debug:
var: ansible_versions | community.general.version_sort
vars:
ansible_versions:
- '2.8.0'
- '2.11.0'
- '2.7.0'
- '2.10.0'
- '2.9.0'
This produces:
.. code-block:: ansible-output
TASK [Sort list by version number] ********************************************************
ok: [localhost] => {
"ansible_versions | community.general.version_sort": [
"2.7.0",
"2.8.0",
"2.9.0",
"2.10.0",
"2.11.0"
]
}
.. versionadded:: 2.2.0
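The underlying idea is to compare version components numerically instead of as strings. A minimal Python sketch, which only handles purely numeric dotted versions (the real filter is more robust):

```python
def version_key(version):
    """Split '2.10.0' into a tuple of integers so that sorting
    compares components numerically, not lexicographically."""
    return tuple(int(part) for part in version.split("."))

versions = ["2.8.0", "2.11.0", "2.7.0", "2.10.0", "2.9.0"]
print(sorted(versions, key=version_key))
# ['2.7.0', '2.8.0', '2.9.0', '2.10.0', '2.11.0']
```

A plain ``sorted(versions)`` would instead put ``2.10.0`` and ``2.11.0`` before ``2.7.0``, because ``'1' < '7'`` as characters.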
Creating identifiers
--------------------
The following filters allow you to create identifiers.
Hashids
^^^^^^^
`Hashids <https://hashids.org/>`_ allow you to convert sequences of integers to short, unique string identifiers. This filter needs the `hashids Python library <https://pypi.org/project/hashids/>`_ installed on the controller.
.. code-block:: yaml+jinja
- name: "Create hashid"
debug:
msg: "{{ [1234, 5, 6] | community.general.hashids_encode }}"
- name: "Decode hashid"
debug:
msg: "{{ 'jm2Cytn' | community.general.hashids_decode }}"
This produces:
.. code-block:: ansible-output
TASK [Create hashid] **********************************************************************
ok: [localhost] => {
"msg": "jm2Cytn"
}
TASK [Decode hashid] **********************************************************************
ok: [localhost] => {
"msg": [
1234,
5,
6
]
}
The hashids filters accept keyword arguments to allow fine-tuning the hashids generated:
:salt: String to use as salt when hashing.
:alphabet: String of 16 or more unique characters to produce a hash.
:min_length: Minimum length of hash produced.
.. versionadded:: 3.0.0
Random MACs
^^^^^^^^^^^
You can use the ``random_mac`` filter to complete a partial `MAC address <https://en.wikipedia.org/wiki/MAC_address>`_ to a random 6-byte MAC address.
.. code-block:: yaml+jinja
- name: "Create a random MAC starting with ff:"
debug:
msg: "{{ 'FF' | community.general.random_mac }}"
- name: "Create a random MAC starting with 00:11:22:"
debug:
msg: "{{ '00:11:22' | community.general.random_mac }}"
This produces:
.. code-block:: ansible-output
TASK [Create a random MAC starting with ff:] **********************************************
ok: [localhost] => {
"msg": "ff:69:d3:78:7f:b4"
}
TASK [Create a random MAC starting with 00:11:22:] ****************************************
ok: [localhost] => {
"msg": "00:11:22:71:5d:3b"
}
You can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
Conversions
-----------
Parsing CSV files
^^^^^^^^^^^^^^^^^
Ansible offers the :ref:`community.general.read_csv module <ansible_collections.community.general.read_csv_module>` to read CSV files. Sometimes you need to parse CSV data that is already available as a string instead. For this, the ``from_csv`` filter exists.
.. code-block:: yaml+jinja
- name: "Parse CSV from string"
debug:
msg: "{{ csv_string | community.general.from_csv }}"
vars:
csv_string: |
foo,bar,baz
1,2,3
you,this,then
This produces:
.. code-block:: ansible-output
TASK [Parse CSV from string] **************************************************************
ok: [localhost] => {
"msg": [
{
"bar": "2",
"baz": "3",
"foo": "1"
},
{
"bar": "this",
"baz": "then",
"foo": "you"
}
]
}
The ``from_csv`` filter has several keyword arguments to control its behavior:
:dialect: Dialect of the CSV file. Default is ``excel``. Other possible choices are ``excel-tab`` and ``unix``. If one of ``delimiter``, ``skipinitialspace`` or ``strict`` is specified, ``dialect`` is ignored.
:fieldnames: A sequence of column names to use. If not provided, the first line of the CSV is assumed to contain the column names.
:delimiter: Sets the delimiter to use. Default depends on the dialect used.
:skipinitialspace: Set to ``true`` to ignore space directly after the delimiter. Default depends on the dialect used (usually ``false``).
:strict: Set to ``true`` to error out on invalid CSV input.
.. versionadded:: 3.0.0
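Since the filter builds on Python's standard ``csv`` module, its behaviour can be sketched with ``csv.DictReader``. An illustrative sketch, not the filter's exact implementation:

```python
import csv
import io

def from_csv(text, dialect="excel", fieldnames=None):
    """Parse CSV text into a list of dictionaries.

    The first line supplies the column names unless fieldnames is given,
    mirroring the from_csv filter's default behaviour.
    """
    reader = csv.DictReader(io.StringIO(text), fieldnames=fieldnames, dialect=dialect)
    return [dict(row) for row in reader]

csv_string = "foo,bar,baz\n1,2,3\nyou,this,then\n"
print(from_csv(csv_string))
# [{'foo': '1', 'bar': '2', 'baz': '3'}, {'foo': 'you', 'bar': 'this', 'baz': 'then'}]
```

Note that all values come back as strings; converting ``"1"`` to an integer is up to the caller.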
Converting to JSON
^^^^^^^^^^^^^^^^^^
`JC <https://pypi.org/project/jc/>`_ is a CLI tool and Python library for interpreting the output of various CLI programs as JSON. It is also available as a filter in community.general. This filter needs the `jc Python library <https://pypi.org/project/jc/>`_ installed on the controller.
.. code-block:: yaml+jinja
- name: Run 'ls' to list files in /
command: ls /
register: result
- name: Parse the ls output
debug:
msg: "{{ result.stdout | community.general.jc('ls') }}"
This produces:
.. code-block:: ansible-output
TASK [Run 'ls' to list files in /] ********************************************************
changed: [localhost]
TASK [Parse the ls output] ****************************************************************
ok: [localhost] => {
"msg": [
{
"filename": "bin"
},
{
"filename": "boot"
},
{
"filename": "dev"
},
{
"filename": "etc"
},
{
"filename": "home"
},
{
"filename": "lib"
},
{
"filename": "proc"
},
{
"filename": "root"
},
{
"filename": "run"
},
{
"filename": "tmp"
}
]
}
.. versionadded:: 2.0.0
.. _ansible_collections.community.general.docsite.json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_.
Consider this data structure:
.. code-block:: yaml+jinja
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query:
.. code-block:: yaml+jinja
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
To extract all server names:
.. code-block:: yaml+jinja
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
To extract ports from cluster1:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma-separated string:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster:
.. code-block:: yaml+jinja
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster2'].{name: name, port: port}"
To extract the ports of all servers whose name starts with ``server1``:
.. code-block:: yaml+jinja
- name: Display ports of all servers with name starting with 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
To extract the ports of all servers whose name contains ``server1``:
.. code-block:: yaml+jinja
- name: Display ports of all servers with name containing 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
.. note:: When using ``starts_with`` and ``contains``, you have to use the ``to_json | from_json`` filter for correct parsing of the data structure.
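For readers who want to check what these queries select, the filtering can be reproduced with plain Python list comprehensions over a trimmed-down copy of the sample data (an illustrative sketch; the ``json_query`` filter itself delegates to **jmespath**):

```python
# Trimmed-down copy of the sample structure from above.
domain_definition = {
    "domain": {
        "server": [
            {"name": "server11", "cluster": "cluster1", "port": "8080"},
            {"name": "server12", "cluster": "cluster1", "port": "8090"},
            {"name": "server21", "cluster": "cluster2", "port": "9080"},
            {"name": "server22", "cluster": "cluster2", "port": "9090"},
        ]
    }
}
servers = domain_definition["domain"]["server"]

# Equivalent of domain.server[?cluster=='cluster1'].port
cluster1_ports = [s["port"] for s in servers if s["cluster"] == "cluster1"]
print(cluster1_ports)  # ['8080', '8090']

# Equivalent of domain.server[?starts_with(name,'server1')].port
server1_ports = [s["port"] for s in servers if s["name"].startswith("server1")]
print(server1_ports)  # ['8080', '8090']

# Equivalent of domain.server[?cluster=='cluster1'].{name: name, port: port}
pairs = [{"name": s["name"], "port": s["port"]} for s in servers
         if s["cluster"] == "cluster1"]
print(pairs)  # [{'name': 'server11', 'port': '8080'}, {'name': 'server12', 'port': '8090'}]
```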

View File

@@ -1,6 +1,6 @@
namespace: community
name: general
version: 3.1.0
version: 3.2.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)

View File

@@ -1,31 +1,5 @@
---
requires_ansible: '>=2.9.10'
action_groups:
ovirt:
- ovirt_affinity_label_facts
- ovirt_api_facts
- ovirt_cluster_facts
- ovirt_datacenter_facts
- ovirt_disk_facts
- ovirt_event_facts
- ovirt_external_provider_facts
- ovirt_group_facts
- ovirt_host_facts
- ovirt_host_storage_facts
- ovirt_network_facts
- ovirt_nic_facts
- ovirt_permission_facts
- ovirt_quota_facts
- ovirt_scheduling_policy_facts
- ovirt_snapshot_facts
- ovirt_storage_domain_facts
- ovirt_storage_template_facts
- ovirt_storage_vm_facts
- ovirt_tag_facts
- ovirt_template_facts
- ovirt_user_facts
- ovirt_vm_facts
- ovirt_vmpool_facts
plugin_routing:
connection:
docker:
@@ -40,15 +14,18 @@ plugin_routing:
nios:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios lookup plugin has been deprecated. Please use infoblox.nios_modules.nios_lookup instead.
warning_text: The community.general.nios lookup plugin has been deprecated.
Please use infoblox.nios_modules.nios_lookup instead.
nios_next_ip:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_next_ip lookup plugin has been deprecated. Please use infoblox.nios_modules.nios_next_ip instead.
warning_text: The community.general.nios_next_ip lookup plugin has been deprecated.
Please use infoblox.nios_modules.nios_next_ip instead.
nios_next_network:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_next_network lookup plugin has been deprecated. Please use infoblox.nios_modules.nios_next_network instead.
warning_text: The community.general.nios_next_network lookup plugin has been
deprecated. Please use infoblox.nios_modules.nios_next_network instead.
modules:
ali_instance_facts:
tombstone:
@@ -153,11 +130,13 @@ plugin_routing:
gcp_forwarding_rule:
tombstone:
removal_version: 2.0.0
warning_text: Use google.cloud.gcp_compute_forwarding_rule or google.cloud.gcp_compute_global_forwarding_rule instead.
warning_text: Use google.cloud.gcp_compute_forwarding_rule or google.cloud.gcp_compute_global_forwarding_rule
instead.
gcp_healthcheck:
tombstone:
removal_version: 2.0.0
warning_text: Use google.cloud.gcp_compute_health_check, google.cloud.gcp_compute_http_health_check or google.cloud.gcp_compute_https_health_check instead.
warning_text: Use google.cloud.gcp_compute_health_check, google.cloud.gcp_compute_http_health_check
or google.cloud.gcp_compute_https_health_check instead.
gcp_target_proxy:
tombstone:
removal_version: 2.0.0
@@ -168,37 +147,22 @@ plugin_routing:
warning_text: Use google.cloud.gcp_compute_url_map instead.
gcpubsub:
redirect: community.google.gcpubsub
gcpubsub_info:
redirect: community.google.gcpubsub_info
gcpubsub_facts:
tombstone:
removal_version: 3.0.0
warning_text: Use community.google.gcpubsub_info instead.
gcpubsub_info:
redirect: community.google.gcpubsub_info
gcspanner:
tombstone:
removal_version: 2.0.0
warning_text: Use google.cloud.gcp_spanner_database and/or google.cloud.gcp_spanner_instance instead.
warning_text: Use google.cloud.gcp_spanner_database and/or google.cloud.gcp_spanner_instance
instead.
github_hooks:
tombstone:
removal_version: 2.0.0
warning_text: Use community.general.github_webhook and community.general.github_webhook_info instead.
# Adding tombstones burns the old name, so we simply remove the entries:
# gluster_heal_info:
# tombstone:
# removal_version: 3.0.0
# warning_text: The gluster modules have migrated to the gluster.gluster collection. Use gluster.gluster.gluster_heal_info instead.
# gluster_peer:
# tombstone:
# removal_version: 3.0.0
# warning_text: The gluster modules have migrated to the gluster.gluster collection. Use gluster.gluster.gluster_peer instead.
# gluster_volume:
# tombstone:
# removal_version: 3.0.0
# warning_text: The gluster modules have migrated to the gluster.gluster collection. Use gluster.gluster.gluster_volume instead.
# helm:
# tombstone:
# removal_version: 3.0.0
# warning_text: Use community.kubernetes.helm instead.
warning_text: Use community.general.github_webhook and community.general.github_webhook_info
instead.
hetzner_failover_ip:
redirect: community.hrobot.failover_ip
hetzner_failover_ip_info:
@@ -246,11 +210,13 @@ plugin_routing:
logicmonitor:
tombstone:
removal_version: 1.0.0
warning_text: The logicmonitor_facts module is no longer maintained and the API used has been disabled in 2017.
warning_text: The logicmonitor_facts module is no longer maintained and the
API used has been disabled in 2017.
logicmonitor_facts:
tombstone:
removal_version: 1.0.0
warning_text: The logicmonitor_facts module is no longer maintained and the API used has been disabled in 2017.
warning_text: The logicmonitor_facts module is no longer maintained and the
API used has been disabled in 2017.
memset_memstore_facts:
tombstone:
removal_version: 3.0.0
@@ -295,74 +261,90 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use netapp.ontap.na_ontap_info instead.
nios_a_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_a_record module has been deprecated. Please use infoblox.nios_modules.nios_a_record instead.
nios_aaaa_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_aaaa_record module has been deprecated. Please use infoblox.nios_modules.nios_aaaa_record instead.
nios_cname_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_cname_record module has been deprecated. Please use infoblox.nios_modules.nios_cname_record instead.
nios_dns_view:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_dns_view module has been deprecated. Please use infoblox.nios_modules.nios_dns_view instead.
nios_fixed_address:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_fixed_address module has been deprecated. Please use infoblox.nios_modules.nios_fixed_address instead.
nios_host_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_host_record module has been deprecated. Please use infoblox.nios_modules.nios_host_record instead.
nios_member:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_member module has been deprecated. Please use infoblox.nios_modules.nios_member instead.
nios_mx_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_mx_record module has been deprecated. Please use infoblox.nios_modules.nios_mx_record instead.
nios_naptr_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_naptr_record module has been deprecated. Please use infoblox.nios_modules.nios_naptr_record instead.
nios_network:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_network module has been deprecated. Please use infoblox.nios_modules.nios_network instead.
nios_network_view:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_network_view module has been deprecated. Please use infoblox.nios_modules.nios_network_view instead.
nios_nsgroup:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_nsgroup module has been deprecated. Please use infoblox.nios_modules.nios_nsgroup instead.
nios_ptr_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_ptr_record module has been deprecated. Please use infoblox.nios_modules.nios_ptr_record instead.
nios_srv_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_srv_record module has been deprecated. Please use infoblox.nios_modules.nios_srv_record instead.
nios_txt_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_txt_record module has been deprecated. Please use infoblox.nios_modules.nios_txt_record instead.
nios_zone:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_zone module has been deprecated. Please use infoblox.nios_modules.nios_zone instead.
nginx_status_facts:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.nginx_status_info instead.
nios_a_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_a_record module has been deprecated.
Please use infoblox.nios_modules.nios_a_record instead.
nios_aaaa_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_aaaa_record module has been deprecated.
Please use infoblox.nios_modules.nios_aaaa_record instead.
nios_cname_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_cname_record module has been deprecated.
Please use infoblox.nios_modules.nios_cname_record instead.
nios_dns_view:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_dns_view module has been deprecated.
Please use infoblox.nios_modules.nios_dns_view instead.
nios_fixed_address:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_fixed_address module has been deprecated.
Please use infoblox.nios_modules.nios_fixed_address instead.
nios_host_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_host_record module has been deprecated.
Please use infoblox.nios_modules.nios_host_record instead.
nios_member:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_member module has been deprecated.
Please use infoblox.nios_modules.nios_member instead.
nios_mx_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_mx_record module has been deprecated.
Please use infoblox.nios_modules.nios_mx_record instead.
nios_naptr_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_naptr_record module has been deprecated.
Please use infoblox.nios_modules.nios_naptr_record instead.
nios_network:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_network module has been deprecated.
Please use infoblox.nios_modules.nios_network instead.
nios_network_view:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_network_view module has been deprecated.
Please use infoblox.nios_modules.nios_network_view instead.
nios_nsgroup:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_nsgroup module has been deprecated.
Please use infoblox.nios_modules.nios_nsgroup instead.
nios_ptr_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_ptr_record module has been deprecated.
Please use infoblox.nios_modules.nios_ptr_record instead.
nios_srv_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_srv_record module has been deprecated.
Please use infoblox.nios_modules.nios_srv_record instead.
nios_txt_record:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_txt_record module has been deprecated.
Please use infoblox.nios_modules.nios_txt_record instead.
nios_zone:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios_zone module has been deprecated.
Please use infoblox.nios_modules.nios_zone instead.
ome_device_info:
redirect: dellemc.openmanage.ome_device_info
one_image_facts:
@@ -396,7 +378,8 @@ plugin_routing:
oneview_logical_interconnect_group_facts:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.oneview_logical_interconnect_group_info instead.
warning_text: Use community.general.oneview_logical_interconnect_group_info
instead.
oneview_network_set_facts:
tombstone:
removal_version: 3.0.0
@@ -553,10 +536,10 @@ plugin_routing:
redirect: community.postgresql.postgresql_table
postgresql_tablespace:
redirect: community.postgresql.postgresql_tablespace
postgresql_user_obj_stat_info:
redirect: community.postgresql.postgresql_user_obj_stat_info
postgresql_user:
redirect: community.postgresql.postgresql_user
postgresql_user_obj_stat_info:
redirect: community.postgresql.postgresql_user_obj_stat_info
purefa_facts:
tombstone:
removal_version: 3.0.0
@@ -647,7 +630,8 @@ plugin_routing:
nios:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.nios document fragment has been deprecated. Please use infoblox.nios_modules.nios instead.
warning_text: The community.general.nios document fragment has been deprecated.
Please use infoblox.nios_modules.nios instead.
postgresql:
redirect: community.postgresql.postgresql
module_utils:
@@ -668,26 +652,30 @@ plugin_routing:
net_tools.nios.api:
deprecation:
removal_version: 5.0.0
warning_text: The community.general.net_tools.nios.api module_utils has been deprecated. Please use infoblox.nios_modules.api instead.
warning_text: The community.general.net_tools.nios.api module_utils has been
deprecated. Please use infoblox.nios_modules.api instead.
postgresql:
redirect: community.postgresql.postgresql
remote_management.dellemc.dellemc_idrac:
redirect: dellemc.openmanage.dellemc_idrac
remote_management.dellemc.ome:
redirect: dellemc.openmanage.ome
postgresql:
redirect: community.postgresql.postgresql
callback:
actionable:
tombstone:
removal_version: 2.0.0
warning_text: Use the 'default' callback plugin with 'display_skipped_hosts = no' and 'display_ok_hosts = no' options.
warning_text: Use the 'default' callback plugin with 'display_skipped_hosts
= no' and 'display_ok_hosts = no' options.
full_skip:
tombstone:
removal_version: 2.0.0
warning_text: Use the 'default' callback plugin with 'display_skipped_hosts = no' option.
warning_text: Use the 'default' callback plugin with 'display_skipped_hosts
= no' option.
stderr:
tombstone:
removal_version: 2.0.0
warning_text: Use the 'default' callback plugin with 'display_failed_stderr = yes' option.
warning_text: Use the 'default' callback plugin with 'display_failed_stderr
= yes' option.
inventory:
docker_machine:
redirect: community.docker.docker_machine

View File

@@ -40,19 +40,27 @@ class ActionModule(ActionBase):
"(=%s) to 0, and 'async' (=%s) to a value >2 and not greater than "
"'ansible_timeout' (=%s) (recommended).")
def _async_result(self, module_args, task_vars, timeout):
def _async_result(self, async_status_args, task_vars, timeout):
'''
Retrieve results of the asynchonous task, and display them in place of
the async wrapper results (those with the ansible_job_id key).
'''
async_status = self._task.copy()
async_status.args = async_status_args
async_status.action = 'ansible.builtin.async_status'
async_status.async_val = 0
async_action = self._shared_loader_obj.action_loader.get(
async_status.action, task=async_status, connection=self._connection,
play_context=self._play_context, loader=self._loader, templar=self._templar,
shared_loader_obj=self._shared_loader_obj)
if async_status.args['mode'] == 'cleanup':
return async_action.run(task_vars=task_vars)
# At least one iteration is required, even if timeout is 0.
for dummy in range(max(1, timeout)):
async_result = self._execute_module(
module_name='ansible.builtin.async_status',
module_args=module_args,
task_vars=task_vars,
wrap_async=False)
if async_result['finished'] == 1:
async_result = async_action.run(task_vars=task_vars)
if async_result.get('finished', 0) == 1:
break
time.sleep(min(1, timeout))
@@ -106,7 +114,7 @@ class ActionModule(ActionBase):
# longer on the controller); and set a backup file path.
module_args['_timeout'] = task_async
module_args['_back'] = '%s/iptables.state' % async_dir
async_status_args = dict(_async_dir=async_dir)
async_status_args = dict(mode='status')
confirm_cmd = 'rm -f %s' % module_args['_back']
starter_cmd = 'touch %s.starter' % module_args['_back']
remaining_time = max(task_async, max_timeout)
@@ -168,11 +176,7 @@ class ActionModule(ActionBase):
del result['invocation']['module_args'][key]
async_status_args['mode'] = 'cleanup'
dummy = self._execute_module(
module_name='ansible.builtin.async_status',
module_args=async_status_args,
task_vars=task_vars,
wrap_async=False)
dummy = self._async_result(async_status_args, task_vars, 0)
if not wrap_async:
# remove a temporary path we created

View File

View File

@@ -61,6 +61,7 @@ DOCUMENTATION = '''
type: integer
'''
import re
import time
import json
@@ -91,6 +92,8 @@ class CacheModule(BaseCacheModule):
performance.
"""
_sentinel_service_name = None
re_url_conn = re.compile(r'^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$')
re_sent_conn = re.compile(r'^(.*):(\d+)$')
def __init__(self, *args, **kwargs):
uri = ''
@@ -130,11 +133,18 @@ class CacheModule(BaseCacheModule):
self._db = self._get_sentinel_connection(uri, kw)
# normal connection
else:
connection = uri.split(':')
connection = self._parse_connection(self.re_url_conn, uri)
self._db = StrictRedis(*connection, **kw)
display.vv('Redis connection: %s' % self._db)
@staticmethod
def _parse_connection(re_patt, uri):
match = re_patt.match(uri)
if not match:
raise AnsibleError("Unable to parse connection string")
return match.groups()
def _get_sentinel_connection(self, uri, kw):
"""
get sentinel connection details from _uri
@@ -158,7 +168,7 @@ class CacheModule(BaseCacheModule):
except IndexError:
pass # password is optional
sentinels = [tuple(shost.split(':')) for shost in connections]
sentinels = [self._parse_connection(self.re_sent_conn, shost) for shost in connections]
display.vv('\nUsing redis sentinels: %s' % sentinels)
scon = Sentinel(sentinels, **kw)
try:

View File

@@ -35,9 +35,11 @@ def json_query(data, expr):
raise AnsibleError('You need to install "jmespath" prior to running '
'json_query filter')
# Hack to handle Ansible String Types
# Hack to handle Ansible Unsafe text, AnsibleMapping and AnsibleSequence
# See issue: https://github.com/ansible-collections/community.general/issues/320
jmespath.functions.REVERSE_TYPES_MAP['string'] = jmespath.functions.REVERSE_TYPES_MAP['string'] + ('AnsibleUnicode', 'AnsibleUnsafeText', )
jmespath.functions.REVERSE_TYPES_MAP['array'] = jmespath.functions.REVERSE_TYPES_MAP['array'] + ('AnsibleSequence', )
jmespath.functions.REVERSE_TYPES_MAP['object'] = jmespath.functions.REVERSE_TYPES_MAP['object'] + ('AnsibleMapping', )
try:
return jmespath.search(expr, data)
except jmespath.exceptions.JMESPathError as e:

View File

@@ -10,6 +10,8 @@ DOCUMENTATION = '''
name: stackpath_compute
short_description: StackPath Edge Computing inventory source
version_added: 1.2.0
author:
- UNKNOWN (@shayrybak)
extends_documentation_fragment:
- inventory_cache
- constructed

View File

@@ -171,10 +171,10 @@ class LookupModule(LookupBase):
paramvals = {
'key': params[0],
'token': None,
'recurse': False,
'index': None,
'datacenter': None
'token': self.get_option('token'),
'recurse': self.get_option('recurse'),
'index': self.get_option('index'),
'datacenter': self.get_option('datacenter')
}
# parameters specified?

View File

@@ -30,6 +30,11 @@ DOCUMENTATION = '''
aliases: ['vault_password']
section:
description: Item section containing the field to retrieve (case-insensitive). If absent will return first match from any section.
domain:
description: Domain of 1Password. Default is U(1password.com).
version_added: 3.2.0
default: '1password.com'
type: str
subdomain:
description: The 1Password subdomain to authenticate against.
username:
@@ -109,6 +114,7 @@ class OnePass(object):
self.logged_in = False
self.token = None
self.subdomain = None
self.domain = None
self.username = None
self.secret_key = None
self.master_password = None
@@ -168,7 +174,7 @@ class OnePass(object):
args = [
'signin',
'{0}.1password.com'.format(self.subdomain),
'{0}.{1}'.format(self.subdomain, self.domain),
to_bytes(self.username),
to_bytes(self.secret_key),
'--output=raw',
@@ -265,6 +271,7 @@ class LookupModule(LookupBase):
section = kwargs.get('section')
vault = kwargs.get('vault')
op.subdomain = kwargs.get('subdomain')
op.domain = kwargs.get('domain', '1password.com')
op.username = kwargs.get('username')
op.secret_key = kwargs.get('secret_key')
op.master_password = kwargs.get('master_password', kwargs.get('vault_password'))

View File

@@ -0,0 +1,220 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Abhijeet Kasurde <akasurde@redhat.com>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
name: random_string
author:
- Abhijeet Kasurde (@Akasurde)
short_description: Generates random string
version_added: '3.2.0'
description:
- Generates random string based upon the given constraints.
options:
length:
description: The length of the string.
default: 8
type: int
upper:
description:
- Include uppercase letters in the string.
default: true
type: bool
lower:
description:
- Include lowercase letters in the string.
default: true
type: bool
numbers:
description:
- Include numbers in the string.
default: true
type: bool
special:
description:
- Include special characters in the string.
- Special characters are taken from Python standard library C(string).
See L(the documentation of string.punctuation,https://docs.python.org/3/library/string.html#string.punctuation)
for which characters will be used.
- The choice of special characters can be changed by setting I(override_special).
default: true
type: bool
min_numeric:
description:
- Minimum number of numeric characters in the string.
- If set, overrides I(numbers=false).
default: 0
type: int
min_upper:
description:
- Minimum number of uppercase letters in the string.
- If set, overrides I(upper=false).
default: 0
type: int
min_lower:
description:
- Minimum number of lowercase letters in the string.
- If set, overrides I(lower=false).
default: 0
type: int
min_special:
description:
- Minimum number of special characters in the string.
default: 0
type: int
override_special:
description:
- Override the list of special characters to use in the string.
- If set, I(min_special) should be set to a non-default value.
type: str
override_all:
description:
- Override all values of I(numbers), I(upper), I(lower), and I(special) with
the given list of characters.
type: str
base64:
description:
- Returns base64 encoded string.
type: bool
default: false
"""
EXAMPLES = r"""
- name: Generate random string
ansible.builtin.debug:
var: lookup('community.general.random_string')
# Example result: ['DeadBeeF']
- name: Generate random string with length 12
ansible.builtin.debug:
var: lookup('community.general.random_string', length=12)
# Example result: ['Uan0hUiX5kVG']
- name: Generate base64 encoded random string
ansible.builtin.debug:
var: lookup('community.general.random_string', base64=True)
# Example result: ['NHZ6eWN5Qk0=']
- name: Generate a random string with at least 1 lower, 1 upper, 1 number and 1 special character
ansible.builtin.debug:
var: lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1)
# Example result: ['&Qw2|E[-']
- name: Generate a random string with all lower case characters
debug:
var: query('community.general.random_string', upper=false, numbers=false, special=false)
# Example result: ['exolxzyz']
- name: Generate random hexadecimal string
debug:
var: query('community.general.random_string', upper=false, lower=false, override_special=hex_chars, numbers=false)
vars:
hex_chars: '0123456789ABCDEF'
# Example result: ['D2A40737']
- name: Generate random hexadecimal string with override_all
debug:
var: query('community.general.random_string', override_all=hex_chars)
vars:
hex_chars: '0123456789ABCDEF'
# Example result: ['D2A40737']
"""
RETURN = r"""
_raw:
description: A one-element list containing a random string
type: list
elements: str
"""
import base64
import random
import string
from ansible.errors import AnsibleLookupError
from ansible.plugins.lookup import LookupBase
from ansible.module_utils._text import to_bytes, to_text
class LookupModule(LookupBase):
@staticmethod
def get_random(random_generator, chars, length):
if not chars:
raise AnsibleLookupError(
"Available characters cannot be None, please change constraints"
)
return "".join(random_generator.choice(chars) for dummy in range(length))
@staticmethod
def b64encode(string_value, encoding="utf-8"):
return to_text(
base64.b64encode(
to_bytes(string_value, encoding=encoding, errors="surrogate_or_strict")
)
)
def run(self, terms, variables=None, **kwargs):
number_chars = string.digits
lower_chars = string.ascii_lowercase
upper_chars = string.ascii_uppercase
special_chars = string.punctuation
random_generator = random.SystemRandom()
self.set_options(var_options=variables, direct=kwargs)
length = self.get_option("length")
base64_flag = self.get_option("base64")
override_all = self.get_option("override_all")
values = ""
available_chars_set = ""
if override_all:
# Override all the values
available_chars_set = override_all
else:
upper = self.get_option("upper")
lower = self.get_option("lower")
numbers = self.get_option("numbers")
special = self.get_option("special")
override_special = self.get_option("override_special")
if override_special:
special_chars = override_special
if upper:
available_chars_set += upper_chars
if lower:
available_chars_set += lower_chars
if numbers:
available_chars_set += number_chars
if special:
available_chars_set += special_chars
mapping = {
"min_numeric": number_chars,
"min_lower": lower_chars,
"min_upper": upper_chars,
"min_special": special_chars,
}
for m in mapping:
if self.get_option(m):
values += self.get_random(random_generator, mapping[m], self.get_option(m))
remaining_pass_len = length - len(values)
values += self.get_random(random_generator, available_chars_set, remaining_pass_len)
# Get pseudo randomization
shuffled_values = list(values)
# Randomize the order
random.shuffle(shuffled_values)
if base64_flag:
return [self.b64encode("".join(shuffled_values))]
return ["".join(shuffled_values)]
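The plugin's strategy (satisfy the per-class minimums first, top up from the combined pool, then shuffle) can be sketched standalone. This is a simplified illustration of the technique, not the plugin itself:

```python
import random
import string

def sketch_random_string(length=8, min_upper=0, min_lower=0, min_numeric=0):
    # Simplified version of the lookup's approach: draw the guaranteed
    # characters per class first, fill the remainder from the combined
    # pool, then shuffle so the guaranteed characters are not clustered
    # at the front of the result.
    rng = random.SystemRandom()
    chars = [rng.choice(string.ascii_uppercase) for _ in range(min_upper)]
    chars += [rng.choice(string.ascii_lowercase) for _ in range(min_lower)]
    chars += [rng.choice(string.digits) for _ in range(min_numeric)]
    pool = string.ascii_letters + string.digits
    chars += [rng.choice(pool) for _ in range(length - len(chars))]
    rng.shuffle(chars)
    return "".join(chars)

result = sketch_random_string(12, min_upper=2, min_numeric=2)
```

The final shuffle matters: without it, every generated string would begin with its "minimum" characters in a predictable class order.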


@@ -103,6 +103,14 @@ EXAMPLES = r"""
| items2dict(key_name='slug',
value_name='itemValue'))['password']
}}
- hosts: localhost
vars:
secret_password: >-
{{ ((lookup('community.general.tss', 1) | from_json).get('items') | items2dict(key_name='slug', value_name='itemValue'))['password'] }}
tasks:
- ansible.builtin.debug:
msg: the password is {{ secret_password }}
"""
from ansible.errors import AnsibleError, AnsibleOptionsError


@@ -152,16 +152,24 @@ class CmdMixin(object):
def process_command_output(self, rc, out, err):
return rc, out, err
def run_command(self, extra_params=None, params=None, *args, **kwargs):
def run_command(self, extra_params=None, params=None, process_output=None, *args, **kwargs):
self.vars.cmd_args = self._calculate_args(extra_params, params)
options = dict(self.run_command_fixed_options)
env_update = dict(options.get('environ_update', {}))
options['check_rc'] = options.get('check_rc', self.check_rc)
options.update(kwargs)
env_update = dict(options.get('environ_update', {}))
if self.force_lang:
env_update.update({'LANGUAGE': self.force_lang})
env_update.update({
'LANGUAGE': self.force_lang,
'LC_ALL': self.force_lang,
})
self.update_output(force_lang=self.force_lang)
options['environ_update'] = env_update
options.update(kwargs)
rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)
self.update_output(rc=rc, stdout=out, stderr=err)
return self.process_command_output(rc, out, err)
if process_output is None:
_process = self.process_command_output
else:
_process = process_output
return _process(rc, out, err)
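The new `process_output` parameter is a small callback hook; a standalone reduction of the dispatch (illustrative names, not the real mixin):

```python
# Illustrative reduction of the hook added above: a caller-supplied
# callable replaces the default processing of the (rc, out, err) triple.
def default_process(rc, out, err):
    return rc, out, err

def run_command(result, process_output=None):
    # "result" stands in for the triple returned by module.run_command(...).
    rc, out, err = result
    _process = default_process if process_output is None else process_output
    return _process(rc, out, err)

print(run_command((0, "ok\n", "")))  # (0, 'ok\n', '')
print(run_command((0, "ok\n", ""), lambda rc, out, err: out.strip()))  # ok
```

This lets individual callers post-process command output (for example, parse stdout) without subclassing just to override `process_command_output`.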


@@ -88,7 +88,7 @@ from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
CLOUD_INIT_PATH = "/var/lib/cloud/data/"
CLOUD_INIT_PATH = "/var/lib/cloud/data"
def gather_cloud_init_data_facts(module):
@@ -100,7 +100,7 @@ def gather_cloud_init_data_facts(module):
filter = module.params.get('filter')
if filter is None or filter == i:
res['cloud_init_data_facts'][i] = dict()
json_file = CLOUD_INIT_PATH + i + '.json'
json_file = os.path.join(CLOUD_INIT_PATH, i + '.json')
if os.path.exists(json_file):
f = open(json_file, 'rb')
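The path fix is worth a one-line demonstration: `os.path.join` inserts exactly one separator, so the constant no longer needs a trailing slash (the filename below is illustrative):

```python
import os.path

CLOUD_INIT_PATH = "/var/lib/cloud/data"
# join supplies the separator itself, which is why the trailing slash
# could be dropped from the constant.
json_file = os.path.join(CLOUD_INIT_PATH, "instance-data" + ".json")
print(json_file)  # /var/lib/cloud/data/instance-data.json
```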


@@ -95,7 +95,7 @@ class ProxmoxGroup:
self.group = dict()
# Data representation is not the same depending on API calls
for k, v in group.items():
if k == 'users' and type(v) == str:
if k == 'users' and isinstance(v, str):
self.group['users'] = v.split(',')
elif k == 'members':
self.group['users'] = group['members']


@@ -808,23 +808,23 @@ def get_vminfo(module, proxmox, node, vmid, **kwargs):
# Sanitize kwargs. Remove not defined args and ensure True and False converted to int.
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
# Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n]
# Convert all dict in kwargs to elements.
# For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n]
for k in list(kwargs.keys()):
if isinstance(kwargs[k], dict):
kwargs.update(kwargs[k])
del kwargs[k]
# Split information by type
re_net = re.compile(r'net[0-9]')
re_dev = re.compile(r'(virtio|ide|scsi|sata)[0-9]')
for k, v in kwargs.items():
if re.match(r'net[0-9]', k) is not None:
if re_net.match(k):
interface = k
k = vm[k]
k = re.search('=(.*?),', k).group(1)
mac[interface] = k
if (re.match(r'virtio[0-9]', k) is not None or
re.match(r'ide[0-9]', k) is not None or
re.match(r'scsi[0-9]', k) is not None or
re.match(r'sata[0-9]', k) is not None):
elif re_dev.match(k):
device = k
k = vm[k]
k = re.search('(.*?),', k).group(1)
@@ -835,16 +835,13 @@ def get_vminfo(module, proxmox, node, vmid, **kwargs):
results['vmid'] = int(vmid)
def settings(module, proxmox, vmid, node, name, **kwargs):
def settings(proxmox, vmid, node, **kwargs):
proxmox_node = proxmox.nodes(node)
# Sanitize kwargs. Remove not defined args and ensure True and False converted to int.
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
if proxmox_node.qemu(vmid).config.set(**kwargs) is None:
return True
else:
return False
return proxmox_node.qemu(vmid).config.set(**kwargs) is None
def wait_for_task(module, proxmox, node, taskid):
@@ -915,7 +912,8 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
if 'pool' in kwargs:
del kwargs['pool']
# Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n], ipconfig[n]
# Convert all dict in kwargs to elements.
# For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n], ipconfig[n]
for k in list(kwargs.keys()):
if isinstance(kwargs[k], dict):
kwargs.update(kwargs[k])
@@ -938,8 +936,9 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
# VM tags are expected to be valid and presented as a comma/semi-colon delimited string
if 'tags' in kwargs:
re_tag = re.compile(r'^[a-z0-9_][a-z0-9_\-\+\.]*$')
for tag in kwargs['tags']:
if not re.match(r'^[a-z0-9_][a-z0-9_\-\+\.]*$', tag):
if not re_tag.match(tag):
module.fail_json(msg='%s is not a valid tag' % tag)
kwargs['tags'] = ",".join(kwargs['tags'])
@@ -971,7 +970,7 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
if not wait_for_task(module, proxmox, node, taskid):
module.fail_json(msg='Reached timeout while waiting for creating VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
proxmox_node.tasks(taskid).log.get()[:1])
return False
return True
@@ -1209,14 +1208,14 @@ def main():
if delete is not None:
try:
settings(module, proxmox, vmid, node, name, delete=delete)
settings(proxmox, vmid, node, delete=delete)
module.exit_json(changed=True, vmid=vmid, msg="Settings has deleted on VM {0} with vmid {1}".format(name, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to delete settings on VM {0} with vmid {1}: '.format(name, vmid) + str(e))
if revert is not None:
try:
settings(module, proxmox, vmid, node, name, revert=revert)
settings(proxmox, vmid, node, revert=revert)
module.exit_json(changed=True, vmid=vmid, msg="Settings has reverted on VM {0} with vmid {1}".format(name, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to revert settings on VM {0} with vmid {1}: Maybe is not a pending task... '.format(name, vmid) + str(e))
@@ -1226,7 +1225,7 @@ def main():
if get_vm(proxmox, vmid) and not (update or clone):
module.exit_json(changed=False, vmid=vmid, msg="VM with vmid <%s> already exists" % vmid)
elif get_vmid(proxmox, name) and not (update or clone):
module.exit_json(changed=False, vmid=vmid, msg="VM with name <%s> already exists" % name)
module.exit_json(changed=False, vmid=get_vmid(proxmox, name)[0], msg="VM with name <%s> already exists" % name)
elif not (node, name):
module.fail_json(msg='node, name is mandatory for creating/updating vm')
elif not node_check(proxmox, node):


@@ -547,7 +547,7 @@ class RHEVConn(object):
def set_Memory_Policy(self, name, memory_policy):
VM = self.get_VM(name)
VM.memory_policy.guaranteed = int(int(memory_policy) * 1024 * 1024 * 1024)
VM.memory_policy.guaranteed = int(memory_policy) * 1024 * 1024 * 1024
try:
VM.update()
setMsg("The memory policy has been updated.")
@@ -1260,7 +1260,7 @@ def core(module):
r = RHEV(module)
state = module.params.get('state', 'present')
state = module.params.get('state')
if state == 'ping':
r.test()


@@ -139,16 +139,14 @@ from ansible.module_utils.basic import AnsibleModule
def read_serverless_config(module):
path = module.params.get('service_path')
full_path = os.path.join(path, 'serverless.yml')
try:
with open(os.path.join(path, 'serverless.yml')) as sls_config:
with open(full_path) as sls_config:
config = yaml.safe_load(sls_config.read())
return config
except IOError as e:
module.fail_json(msg="Could not open serverless.yml in {0}. err: {1}".format(path, str(e)))
module.fail_json(msg="Failed to open serverless config at {0}".format(
os.path.join(path, 'serverless.yml')))
module.fail_json(msg="Could not open serverless.yml in {0}. err: {1}".format(full_path, str(e)))
def get_service_name(module, stage):
@@ -182,7 +180,6 @@ def main():
service_path = module.params.get('service_path')
state = module.params.get('state')
functions = module.params.get('functions')
region = module.params.get('region')
stage = module.params.get('stage')
deploy = module.params.get('deploy', True)
@@ -193,7 +190,7 @@ def main():
if serverless_bin_path is not None:
command = serverless_bin_path + " "
else:
command = "serverless "
command = module.get_bin_path("serverless") + " "
if state == 'present':
command += 'deploy '


@@ -107,6 +107,12 @@ options:
you intend to provision an entirely new Terraform deployment.
default: false
type: bool
overwrite_init:
description:
- Run init even if C(.terraform/terraform.tfstate) already exists in I(project_path).
default: true
type: bool
version_added: '3.2.0'
backend_config:
description:
- A group of key-values to provide at init stage to the -backend-config parameter.
@@ -227,7 +233,7 @@ def get_version(bin_path):
def preflight_validation(bin_path, project_path, version, variables_args=None, plan_file=None):
if project_path in [None, ''] or '/' not in project_path:
if project_path is None or '/' not in project_path:
module.fail_json(msg="Path for Terraform project can not be None or ''.")
if not os.path.exists(bin_path):
module.fail_json(msg="Path for Terraform binary '{0}' doesn't exist on this host - check the path and try again please.".format(bin_path))
@@ -348,6 +354,7 @@ def main():
backend_config=dict(type='dict', default=None),
backend_config_files=dict(type='list', elements='path', default=None),
init_reconfigure=dict(required=False, type='bool', default=False),
overwrite_init=dict(type='bool', default=True),
),
required_if=[('state', 'planned', ['plan_file'])],
supports_check_mode=True,
@@ -367,6 +374,7 @@ def main():
backend_config = module.params.get('backend_config')
backend_config_files = module.params.get('backend_config_files')
init_reconfigure = module.params.get('init_reconfigure')
overwrite_init = module.params.get('overwrite_init')
if bin_path is not None:
command = [bin_path]
@@ -383,7 +391,8 @@ def main():
APPLY_ARGS = ('apply', '-no-color', '-input=false', '-auto-approve')
if force_init:
init_plugins(command[0], project_path, backend_config, backend_config_files, init_reconfigure, plugin_paths)
if overwrite_init or not os.path.isfile(os.path.join(project_path, ".terraform", "terraform.tfstate")):
init_plugins(command[0], project_path, backend_config, backend_config_files, init_reconfigure, plugin_paths)
workspace_ctx = get_workspace_context(command[0], project_path)
if workspace_ctx["current"] != workspace:
@@ -438,7 +447,14 @@ def main():
command.append(plan_file)
if needs_application and not module.check_mode and not state == 'planned':
rc, out, err = module.run_command(command, check_rc=True, cwd=project_path)
rc, out, err = module.run_command(command, check_rc=False, cwd=project_path)
if rc != 0:
if workspace_ctx["current"] != workspace:
select_workspace(command[0], project_path, workspace_ctx["current"])
module.fail_json(msg=err.rstrip(), rc=rc, stdout=out,
stdout_lines=out.splitlines(), stderr=err,
stderr_lines=err.splitlines(),
cmd=' '.join(command))
# checks out to decide if changes were made during execution
if ' 0 added, 0 changed' not in out and not state == "absent" or ' 0 destroyed' not in out:
changed = True
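The combined `force_init`/`overwrite_init` gate can be read off as a small predicate; a sketch with an illustrative helper name:

```python
import os.path

def should_run_init(project_path, force_init, overwrite_init=True):
    # Mirrors the condition above: init runs only when force_init is set,
    # and with overwrite_init disabled it is additionally skipped when a
    # local .terraform/terraform.tfstate already exists.
    if not force_init:
        return False
    state_file = os.path.join(project_path, ".terraform", "terraform.tfstate")
    return overwrite_init or not os.path.isfile(state_file)

print(should_run_init("/nonexistent", True, overwrite_init=False))  # True
print(should_run_init("/nonexistent", False))  # False
```

The default of `overwrite_init=true` preserves the old behaviour of always re-running init whenever `force_init` is set.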


@@ -32,11 +32,13 @@ EXAMPLES = r'''
'''
RETURN = r'''
---
online_server_info:
description: Response from Online API
description:
- Response from Online API.
- "For more details please refer to: U(https://console.online.net/en/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"online_server_info": [
{


@@ -7,7 +7,6 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: online_user_info
short_description: Gather information about Online user.
description:
@@ -16,7 +15,6 @@ author:
- "Remy Leone (@sieben)"
extends_documentation_fragment:
- community.general.online
'''
EXAMPLES = r'''
@@ -29,11 +27,12 @@ EXAMPLES = r'''
'''
RETURN = r'''
---
online_user_info:
description: Response from Online API
description:
- Response from Online API.
- "For more details please refer to: U(https://console.online.net/en/api/)."
returned: success
type: complex
type: dict
sample:
"online_user_info": {
"company": "foobar LLC",


@@ -19,9 +19,7 @@ author:
extends_documentation_fragment:
- community.general.scaleway
options:
region:
type: str
description:
@@ -51,9 +49,12 @@ EXAMPLES = r'''
RETURN = r'''
---
scaleway_image_info:
description: Response from Scaleway API
description:
- Response from Scaleway API.
- "For more details please refer to: U(https://developers.scaleway.com/en/products/instance/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"scaleway_image_info": [
{


@@ -49,9 +49,12 @@ EXAMPLES = r'''
RETURN = r'''
---
scaleway_ip_info:
description: Response from Scaleway API
description:
- Response from Scaleway API.
- "For more details please refer to: U(https://developers.scaleway.com/en/products/instance/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"scaleway_ip_info": [
{


@@ -49,9 +49,12 @@ EXAMPLES = r'''
RETURN = r'''
---
scaleway_security_group_info:
description: Response from Scaleway API
description:
- Response from Scaleway API.
- "For more details please refer to: U(https://developers.scaleway.com/en/products/instance/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"scaleway_security_group_info": [
{


@@ -49,9 +49,12 @@ EXAMPLES = r'''
RETURN = r'''
---
scaleway_server_info:
description: Response from Scaleway API
description:
- Response from Scaleway API.
- "For more details please refer to: U(https://developers.scaleway.com/en/products/instance/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"scaleway_server_info": [
{


@@ -49,9 +49,12 @@ EXAMPLES = r'''
RETURN = r'''
---
scaleway_snapshot_info:
description: Response from Scaleway API
description:
- Response from Scaleway API.
- "For more details please refer to: U(https://developers.scaleway.com/en/products/instance/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"scaleway_snapshot_info": [
{


@@ -49,9 +49,12 @@ EXAMPLES = r'''
RETURN = r'''
---
scaleway_volume_info:
description: Response from Scaleway API
description:
- Response from Scaleway API.
- "For more details please refer to: U(https://developers.scaleway.com/en/products/instance/api/)."
returned: success
type: complex
type: list
elements: dict
sample:
"scaleway_volume_info": [
{


@@ -174,7 +174,7 @@ def set_user_grants(module, client, user_name, grants):
if v['privilege'] != 'NO PRIVILEGES':
if v['privilege'] == 'ALL PRIVILEGES':
v['privilege'] = 'ALL'
parsed_grants.add(v)
parsed_grants.append(v)
# check if the current grants are included in the desired ones
for current_grant in parsed_grants:


@@ -0,0 +1,187 @@
#!/usr/bin/python
# Copyright: (c) 2021, Rainer Leber <rainerleber@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: hana_query
short_description: Execute SQL on HANA
version_added: 3.2.0
description: This module executes SQL statements on HANA with hdbsql.
options:
sid:
description: The system ID.
type: str
required: true
instance:
description: The instance number.
type: str
required: true
user:
description: A dedicated username. Defaults to C(SYSTEM).
type: str
default: SYSTEM
password:
description: The password to connect to the database.
type: str
required: true
autocommit:
description: Autocommit the statement.
type: bool
default: true
host:
description: The Host IP address. The port can be defined as well.
type: str
database:
description: Define the database on which to connect.
type: str
encrypted:
description: Use encrypted connection. Defaults to C(false).
type: bool
default: false
filepath:
description:
- One or more files each containing one SQL query to run.
- Must be a string or list containing strings.
type: list
elements: path
query:
description:
- SQL query to run.
- Must be a string or list containing strings. Please note that if you supply a string, it will be split by commas (C(,)) to a list.
It is better to supply a one-element list instead to avoid mangled input.
type: list
elements: str
notes:
- Does not support C(check_mode).
author:
- Rainer Leber (@rainerleber)
'''
EXAMPLES = r'''
- name: Simple select query
community.general.hana_query:
sid: "hdb"
instance: "01"
password: "Test123"
query: "select user_name from users"
- name: Run several queries
community.general.hana_query:
sid: "hdb"
instance: "01"
password: "Test123"
query:
- "select user_name from users;"
- select * from SYSTEM;
host: "localhost"
autocommit: False
- name: Run several queries from file
community.general.hana_query:
sid: "hdb"
instance: "01"
password: "Test123"
filepath:
- /tmp/HANA_CPU_UtilizationPerCore_2.00.020+.txt
- /tmp/HANA.txt
host: "localhost"
'''
RETURN = r'''
query_result:
description: List containing results of all queries executed (one sublist for every query).
returned: on success
type: list
elements: list
sample: [[{"Column": "Value1"}, {"Column": "Value2"}], [{"Column": "Value1"}, {"Column": "Value2"}]]
'''
import csv
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import StringIO
from ansible.module_utils._text import to_native
def csv_to_list(rawcsv):
reader_raw = csv.DictReader(StringIO(rawcsv))
reader = [dict((k, v.strip()) for k, v in row.items()) for row in reader_raw]
return list(reader)
def main():
module = AnsibleModule(
argument_spec=dict(
sid=dict(type='str', required=True),
instance=dict(type='str', required=True),
encrypted=dict(type='bool', required=False, default=False),
host=dict(type='str', required=False),
user=dict(type='str', required=False, default="SYSTEM"),
password=dict(type='str', required=True, no_log=True),
database=dict(type='str', required=False),
query=dict(type='list', elements='str', required=False),
filepath=dict(type='list', elements='path', required=False),
autocommit=dict(type='bool', required=False, default=True),
),
required_one_of=[('query', 'filepath')],
supports_check_mode=False,
)
rc, out, err, out_raw = [0, [], "", ""]
params = module.params
sid = (params['sid']).upper()
instance = params['instance']
user = params['user']
password = params['password']
autocommit = params['autocommit']
host = params['host']
database = params['database']
encrypted = params['encrypted']
filepath = params['filepath']
query = params['query']
bin_path = "/usr/sap/{sid}/HDB{instance}/exe/hdbsql".format(sid=sid, instance=instance)
try:
command = [module.get_bin_path(bin_path, required=True)]
except Exception as e:
module.fail_json(msg='Failed to find hdbsql at the expected path "{0}". Please check SID and instance number: "{1}"'.format(bin_path, to_native(e)))
if encrypted is True:
command.extend(['-attemptencrypt'])
if autocommit is False:
command.extend(['-z'])
if host is not None:
command.extend(['-n', host])
if database is not None:
command.extend(['-d', database])
# -x Suppresses additional output, such as the number of selected rows in a result set.
command.extend(['-x', '-i', instance, '-u', user, '-p', password])
if filepath is not None:
command.extend(['-I'])
for p in filepath:
# builds a command like hdbsql -i 01 -u SYSTEM -p secret123# -I /tmp/HANA_CPU_UtilizationPerCore_2.00.020+.txt,
# iterates through the files and appends each result to out.
query_command = command + [p]
(rc, out_raw, err) = module.run_command(query_command)
out.append(csv_to_list(out_raw))
if query is not None:
for q in query:
# builds a command like hdbsql -i 01 -u SYSTEM -p secret123# "select user_name from users",
# iterates through the queries and appends each result to out.
query_command = command + [q]
(rc, out_raw, err) = module.run_command(query_command)
out.append(csv_to_list(out_raw))
changed = True
module.exit_json(changed=changed, rc=rc, query_result=out, stderr=err)
if __name__ == '__main__':
main()
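The `csv_to_list` helper is self-contained enough to exercise directly; here is a standalone version (using `io.StringIO` instead of the six shim):

```python
import csv
from io import StringIO

def csv_to_list(rawcsv):
    # Parse hdbsql's CSV output into a list of dicts keyed by column name,
    # stripping surrounding whitespace from every cell value.
    reader_raw = csv.DictReader(StringIO(rawcsv))
    return [dict((k, v.strip()) for k, v in row.items()) for row in reader_raw]

rows = csv_to_list("USER_NAME\nSYSTEM \n MONITOR\n")
print(rows)  # [{'USER_NAME': 'SYSTEM'}, {'USER_NAME': 'MONITOR'}]
```

The per-cell `strip()` is what turns hdbsql's padded columns into clean values before they land in `query_result`.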


@@ -41,8 +41,16 @@ options:
exclude_path:
description:
- Remote absolute path, glob, or list of paths or globs for the file or files to exclude from I(path) list and glob expansion.
- Use I(exclusion_patterns) to instead exclude files or subdirectories below any of the paths from the I(path) list.
type: list
elements: path
exclusion_patterns:
description:
- Glob style patterns to exclude files or directories from the resulting archive.
- This differs from I(exclude_path) which applies only to the source paths from I(path).
type: list
elements: path
version_added: 3.2.0
force_archive:
description:
- Allows you to force the module to treat this as an archive even if only a single file is specified.
@@ -163,6 +171,8 @@ import re
import shutil
import tarfile
import zipfile
from fnmatch import fnmatch
from sys import version_info
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
@@ -186,6 +196,8 @@ else:
LZMA_IMP_ERR = format_exc()
HAS_LZMA = False
PY27 = version_info[0:2] >= (2, 7)
def to_b(s):
return to_bytes(s, errors='surrogate_or_strict')
@@ -214,6 +226,59 @@ def expand_paths(paths):
return expanded_path, is_globby
def matches_exclusion_patterns(path, exclusion_patterns):
return any(fnmatch(path, p) for p in exclusion_patterns)
def get_filter(exclusion_patterns, format):
def zip_filter(path):
return matches_exclusion_patterns(path, exclusion_patterns)
def tar_filter(tarinfo):
return None if matches_exclusion_patterns(tarinfo.name, exclusion_patterns) else tarinfo
return zip_filter if format == 'zip' or not PY27 else tar_filter
def get_archive_contains(format):
def archive_contains(archive, name):
try:
if format == 'zip':
archive.getinfo(name)
else:
archive.getmember(name)
except KeyError:
return False
return True
return archive_contains
def get_add_to_archive(format, filter):
def add_to_zip_archive(archive_file, path, archive_name):
try:
if not filter(path):
archive_file.write(path, archive_name)
except Exception as e:
return e
return None
def add_to_tar_archive(archive_file, path, archive_name):
try:
if PY27:
archive_file.add(path, archive_name, recursive=False, filter=filter)
else:
archive_file.add(path, archive_name, recursive=False, exclude=filter)
except Exception as e:
return e
return None
return add_to_zip_archive if format == 'zip' else add_to_tar_archive
def main():
module = AnsibleModule(
argument_spec=dict(
@@ -221,6 +286,7 @@ def main():
format=dict(type='str', default='gz', choices=['bz2', 'gz', 'tar', 'xz', 'zip']),
dest=dict(type='path'),
exclude_path=dict(type='list', elements='path'),
exclusion_patterns=dict(type='list', elements='path'),
force_archive=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
),
@@ -242,6 +308,8 @@ def main():
changed = False
state = 'absent'
exclusion_patterns = params['exclusion_patterns'] or []
# Simple or archive file compression (inapplicable with 'zip' since it's always an archive)
b_successes = []
@@ -262,6 +330,10 @@ def main():
# Only attempt to expand the exclude paths if it exists
b_expanded_exclude_paths = expand_paths(exclude_paths)[0] if exclude_paths else []
filter = get_filter(exclusion_patterns, fmt)
archive_contains = get_archive_contains(fmt)
add_to_archive = get_add_to_archive(fmt, filter)
# Only try to determine if we are working with an archive or not if we haven't set archive to true
if not force_archive:
# If we actually matched multiple files or TRIED to, then
@@ -384,38 +456,31 @@ def main():
n_fullpath = to_na(b_fullpath)
n_arcname = to_native(b_match_root.sub(b'', b_fullpath), errors='surrogate_or_strict')
try:
if fmt == 'zip':
arcfile.write(n_fullpath, n_arcname)
else:
arcfile.add(n_fullpath, n_arcname, recursive=False)
except Exception as e:
errors.append('%s: %s' % (n_fullpath, to_native(e)))
err = add_to_archive(arcfile, n_fullpath, n_arcname)
if err:
errors.append('%s: %s' % (n_fullpath, to_native(err)))
for b_filename in b_filenames:
b_fullpath = b_dirpath + b_filename
n_fullpath = to_na(b_fullpath)
n_arcname = to_n(b_match_root.sub(b'', b_fullpath))
try:
if fmt == 'zip':
arcfile.write(n_fullpath, n_arcname)
else:
arcfile.add(n_fullpath, n_arcname, recursive=False)
err = add_to_archive(arcfile, n_fullpath, n_arcname)
if err:
errors.append('Adding %s: %s' % (to_native(b_path), to_native(err)))
if archive_contains(arcfile, n_arcname):
b_successes.append(b_fullpath)
except Exception as e:
errors.append('Adding %s: %s' % (to_native(b_path), to_native(e)))
else:
path = to_na(b_path)
arcname = to_n(b_match_root.sub(b'', b_path))
if fmt == 'zip':
arcfile.write(path, arcname)
else:
arcfile.add(path, arcname, recursive=False)
b_successes.append(b_path)
err = add_to_archive(arcfile, path, arcname)
if err:
errors.append('Adding %s: %s' % (to_native(b_path), to_native(err)))
if archive_contains(arcfile, arcname):
b_successes.append(b_path)
except Exception as e:
expanded_fmt = 'zip' if fmt == 'zip' else ('tar.' + fmt)
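The exclusion machinery reduces to two small pieces: a glob test, and (for tar) a filter that drops members by returning `None`. A standalone sketch of both:

```python
import fnmatch

def matches_exclusion_patterns(path, exclusion_patterns):
    # Same glob-style test as the helper in the diff.
    return any(fnmatch.fnmatch(path, p) for p in exclusion_patterns)

def tar_filter(tarinfo, patterns):
    # tarfile's filter contract: return None to drop a member,
    # or the TarInfo object unchanged to keep it.
    return None if matches_exclusion_patterns(tarinfo.name, patterns) else tarinfo

patterns = ["*.log", "cache/*"]
print(matches_exclusion_patterns("build/debug.log", patterns))  # True
print(matches_exclusion_patterns("src/main.py", patterns))      # False
```

Note that `fnmatch`'s `*` also matches path separators, which is why `*.log` excludes log files in any subdirectory, not just at the archive root.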


@@ -79,6 +79,7 @@ options:
notes:
- While it is possible to add an I(option) without specifying a I(value), this makes no sense.
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
- As of community.general 3.2.0, UTF-8 BOM markers are discarded when reading files.
author:
- Jan-Piet Mens (@jpmens)
- Ales Nosek (@noseka1)
@@ -104,6 +105,7 @@ EXAMPLES = r'''
backup: yes
'''
import io
import os
import re
import tempfile
@@ -141,7 +143,7 @@ def do_ini(module, filename, section=None, option=None, value=None,
os.makedirs(destpath)
ini_lines = []
else:
with open(filename, 'r') as ini_file:
with io.open(filename, 'r', encoding="utf-8-sig") as ini_file:
ini_lines = ini_file.readlines()
if module._diff:
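The reason for the `utf-8-sig` codec is easy to demonstrate: it consumes a leading UTF-8 BOM, so the first section header parses cleanly instead of starting with U+FEFF:

```python
import codecs
import io
import os
import tempfile

# Write an INI file with a UTF-8 BOM, as some Windows editors do.
with tempfile.NamedTemporaryFile("wb", suffix=".ini", delete=False) as f:
    f.write(codecs.BOM_UTF8 + b"[main]\nkey = value\n")
    name = f.name

# "utf-8-sig" strips the BOM; plain "utf-8" would yield "\ufeff[main]".
with io.open(name, "r", encoding="utf-8-sig") as ini_file:
    first_line = ini_file.readlines()[0]
os.unlink(name)

print(first_line)  # [main]
```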


@@ -0,0 +1,219 @@
#!/usr/bin/python
# Copyright: (c) 2021, Rainer Leber <rainerleber@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
---
module: sapcar_extract
short_description: Manages SAP SAPCAR archives
version_added: "3.2.0"
description:
- Provides support for unpacking C(sar)/C(car) files with the SAPCAR binary from SAP and pulling
information back into Ansible.
options:
path:
description: The path to the SAR/CAR file.
type: path
required: true
dest:
description:
- The destination where SAPCAR extracts the SAR file. Missing folders will be created.
If this parameter is not provided, the module unpacks in the same folder as the SAR file.
type: path
binary_path:
description:
- The path to the SAPCAR binary, for example, C(/home/dummy/sapcar) or C(https://myserver/SAPCAR).
If this parameter is not provided, the module will look in C(PATH).
type: path
signature:
description:
- If C(true) the signature will be extracted.
default: false
type: bool
security_library:
description:
- The path to the security library, for example, C(/usr/sap/hostctrl/exe/libsapcrypto.so), for signature operations.
type: path
manifest:
description:
- The name of the manifest.
default: "SIGNATURE.SMF"
type: str
remove:
description:
- If C(true) the SAR/CAR file will be removed. B(This should be used with caution!)
default: false
type: bool
author:
- Rainer Leber (@RainerLeber)
notes:
- Always returns C(changed=true) in C(check_mode).
'''
EXAMPLES = """
- name: Extract SAR file
community.general.sapcar_extract:
path: "~/source/hana.sar"
- name: Extract SAR file with destination
community.general.sapcar_extract:
path: "~/source/hana.sar"
dest: "~/test/"
- name: Extract SAR file with destination, downloading SAPCAR from a webserver (can be a fileshare as well)
community.general.sapcar_extract:
path: "~/source/hana.sar"
dest: "~/dest/"
binary_path: "https://myserver/SAPCAR"
- name: Extract SAR file and delete SAR after extract
community.general.sapcar_extract:
path: "~/source/hana.sar"
remove: true
- name: Extract SAR file with manifest
community.general.sapcar_extract:
path: "~/source/hana.sar"
signature: true
- name: Extract SAR file with manifest and rename it
community.general.sapcar_extract:
path: "~/source/hana.sar"
manifest: "MyNewSignature.SMF"
signature: true
"""
import os
from tempfile import NamedTemporaryFile
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import open_url
from ansible.module_utils._text import to_native
def get_list_of_files(dir_name):
# create a list of file and directories
# names in the given directory
list_of_file = os.listdir(dir_name)
allFiles = list()
# Iterate over all the entries
for entry in list_of_file:
# Create full path
fullPath = os.path.join(dir_name, entry)
# If entry is a directory then get the list of files in this directory
if os.path.isdir(fullPath):
allFiles = allFiles + [fullPath]
allFiles = allFiles + get_list_of_files(fullPath)
else:
allFiles.append(fullPath)
return allFiles
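For reference, the recursive listing above is equivalent to a flat walk; a sketch (not part of the module) using `os.walk`, which returns the same set of directories and files:

```python
import os
import tempfile

def list_files_and_dirs(dir_name):
    # Flat-walk equivalent of the recursive get_list_of_files() above:
    # both collect every directory and file beneath dir_name.
    entries = []
    for root, dirs, files in os.walk(dir_name):
        for name in dirs + files:
            entries.append(os.path.join(root, name))
    return entries

# Exercise it against a throwaway tree.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "sub"))
open(os.path.join(base, "sub", "a.txt"), "w").close()

found = sorted(list_files_and_dirs(base))
assert found == sorted([os.path.join(base, "sub"),
                        os.path.join(base, "sub", "a.txt")])
```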
def download_SAPCAR(binary_path, module):
bin_path = None
# download sapcar binary if url is provided otherwise path is returned
if binary_path is not None:
if binary_path.startswith('https://') or binary_path.startswith('http://'):
random_file = NamedTemporaryFile(delete=False)
with open_url(binary_path) as response:
with random_file as out_file:
data = response.read()
out_file.write(data)
os.chmod(out_file.name, 0o700)
bin_path = out_file.name
module.add_cleanup_file(bin_path)
else:
bin_path = binary_path
return bin_path
def check_if_present(command, path, dest, signature, manifest, module):
# manipulating output from the SAR file to compare with already extracted files
iter_command = [command, '-tvf', path]
sar_out = module.run_command(iter_command)[1]
sar_raw = sar_out.split("\n")[1:]
if dest[-1] != "/":
dest = dest + "/"
sar_files = [dest + x.split(" ")[-1] for x in sar_raw if x]
# remove any SIGNATURE.SMF from the list because it will not be unpacked if signature is false
if not signature:
sar_files = [item for item in sar_files if '.SMF' not in item]
# if the signature is renamed, adjust the expected file list for the comparison.
if manifest != "SIGNATURE.SMF":
sar_files = [item for item in sar_files if '.SMF' not in item]
sar_files = sar_files + [manifest]
# get extracted files if present
files_extracted = get_list_of_files(dest)
# compare extracted files with files in sar file
present = all(elem in files_extracted for elem in sar_files)
return present
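The idempotency check above boils down to a subset test between the archive listing and the files on disk; a simplified sketch with hypothetical file names (not taken from a real SAR archive):

```python
# Hypothetical members of the SAR archive, already prefixed with dest.
sar_files = ["/tmp/dest/file1", "/tmp/dest/SIGNATURE.SMF"]

# Hypothetical files found on disk under dest.
files_extracted = ["/tmp/dest/file1", "/tmp/dest/SIGNATURE.SMF", "/tmp/dest/extra"]

# Extraction is considered done only if every archive member is on disk;
# extra files on disk do not matter.
present = all(elem in files_extracted for elem in sar_files)
assert present is True

# A single missing member flips the result and triggers extraction.
present = all(elem in files_extracted for elem in sar_files + ["/tmp/dest/file2"])
assert present is False
```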
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True),
dest=dict(type='path'),
binary_path=dict(type='path'),
signature=dict(type='bool', default=False),
security_library=dict(type='path'),
manifest=dict(type='str', default="SIGNATURE.SMF"),
remove=dict(type='bool', default=False),
),
supports_check_mode=True,
)
rc, out, err = [0, "", ""]
params = module.params
check_mode = module.check_mode
path = params['path']
dest = params['dest']
signature = params['signature']
security_library = params['security_library']
manifest = params['manifest']
remove = params['remove']
bin_path = download_SAPCAR(params['binary_path'], module)
if dest is None:
dest_head_tail = os.path.split(path)
dest = dest_head_tail[0] + '/'
else:
if not os.path.exists(dest):
os.makedirs(dest, 0o755)
if bin_path is not None:
command = [module.get_bin_path(bin_path, required=True)]
else:
try:
command = [module.get_bin_path('sapcar', required=True)]
except Exception as e:
module.fail_json(msg='Failed to find SAPCAR at the expected path or URL "{0}". Please check whether it is available: {1}'
.format(bin_path, to_native(e)))
present = check_if_present(command[0], path, dest, signature, manifest, module)
if not present:
command.extend(['-xvf', path, '-R', dest])
if security_library:
command.extend(['-L', security_library])
if signature:
command.extend(['-manifest', manifest])
if not check_mode:
(rc, out, err) = module.run_command(command, check_rc=True)
changed = True
else:
changed = False
out = "already unpacked"
if remove:
os.remove(path)
module.exit_json(changed=changed, message=rc, stdout=out,
stderr=err, command=' '.join(command))
if __name__ == '__main__':
main()


@@ -301,6 +301,23 @@ EXAMPLES = r'''
- floor: Grog storage
- construction_date: "1990" # Only strings are valid
- building: Grog factory
# Consider this XML for following example -
#
# <config>
# <element name="test1">
# <text>part to remove</text>
# </element>
# <element name="test2">
# <text>part to keep</text>
# </element>
# </config>
- name: Delete element node based upon attribute
community.general.xml:
path: bar.xml
xpath: /config/element[@name='test1']
state: absent
'''
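A standard-library sketch of the same removal (the `xml` module itself relies on lxml, which supports richer XPath; `xml.etree.ElementTree` handles this simple attribute predicate):

```python
import xml.etree.ElementTree as ET

# The XML from the example comment above.
doc = ET.fromstring(
    "<config>"
    "<element name='test1'><text>part to remove</text></element>"
    "<element name='test2'><text>part to keep</text></element>"
    "</config>"
)

# Select by attribute predicate and detach from the parent node,
# mirroring xpath=/config/element[@name='test1'] with state=absent.
for victim in doc.findall("element[@name='test1']"):
    doc.remove(victim)

remaining = [el.get("name") for el in doc.findall("element")]
assert remaining == ["test2"]
```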
RETURN = r'''


@@ -0,0 +1 @@
./database/saphana/hana_query.py


@@ -439,9 +439,10 @@ options:
ssl_required:
description:
- The realm ssl required option.
+      choices: ['all', 'external', 'none']
       aliases:
         - sslRequired
-      type: bool
+      type: str
sso_session_idle_timeout:
description:
- The realm sso session idle timeout.
@@ -654,10 +655,10 @@ def main():
registration_flow=dict(type='str', aliases=['registrationFlow']),
remember_me=dict(type='bool', aliases=['rememberMe']),
reset_credentials_flow=dict(type='str', aliases=['resetCredentialsFlow']),
-        reset_password_allowed=dict(type='bool', aliases=['resetPasswordAllowed']),
+        reset_password_allowed=dict(type='bool', aliases=['resetPasswordAllowed'], no_log=False),
         revoke_refresh_token=dict(type='bool', aliases=['revokeRefreshToken']),
         smtp_server=dict(type='dict', aliases=['smtpServer']),
-        ssl_required=dict(type='bool', aliases=['sslRequired']),
+        ssl_required=dict(choices=["external", "all", "none"], aliases=['sslRequired']),
sso_session_idle_timeout=dict(type='int', aliases=['ssoSessionIdleTimeout']),
sso_session_idle_timeout_remember_me=dict(type='int', aliases=['ssoSessionIdleTimeoutRememberMe']),
sso_session_max_lifespan=dict(type='int', aliases=['ssoSessionMaxLifespan']),


@@ -150,7 +150,7 @@ EXAMPLES = r'''
backend: www
wait: yes
drain: yes
-    wait_interval: 1
+    wait_interval: 60
wait_retries: 60
- name: Disable backend server in 'www' backend pool and drop open sessions to it


@@ -255,7 +255,7 @@ def main():
has_changed = True
except Exception as ex:
-        module.fail_json(msg=ex.message)
+        module.fail_json(msg=str(ex))
module.exit_json(changed=has_changed, result={"records": [record_data(r) for r in all_records]})
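The fix above is needed because Python 3 exceptions have no `.message` attribute; a minimal illustration of why `str(ex)` is the portable spelling (the error text below is made up):

```python
# Python 3 removed the .message attribute from BaseException, so
# formatting the error must go through str(ex) as the hunk above does.
try:
    raise ValueError("zone not found")  # hypothetical memset DNS error
except Exception as ex:
    assert not hasattr(ex, "message")
    msg = str(ex)

assert msg == "zone not found"
```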


@@ -77,6 +77,12 @@ options:
- Use the format C(192.0.2.1).
- This parameter is mutually exclusive with the never_default4 parameter.
type: str
gw4_ignore_auto:
description:
- Ignore automatically configured IPv4 routes.
type: bool
default: false
version_added: 3.2.0
routes4:
description:
- The list of ipv4 routes.
@@ -107,6 +113,12 @@ options:
- A list of DNS search domains.
elements: str
type: list
dns4_ignore_auto:
description:
- Ignore automatically configured IPv4 name servers.
type: bool
default: false
version_added: 3.2.0
method4:
description:
- Configuration method to be used for IPv4.
@@ -125,6 +137,12 @@ options:
- The IPv6 gateway for this interface.
- Use the format C(2001:db8::1).
type: str
gw6_ignore_auto:
description:
- Ignore automatically configured IPv6 routes.
type: bool
default: false
version_added: 3.2.0
dns6:
description:
- A list of up to 3 DNS servers.
@@ -136,6 +154,12 @@ options:
- A list of DNS search domains.
elements: str
type: list
dns6_ignore_auto:
description:
- Ignore automatically configured IPv6 name servers.
type: bool
default: false
version_added: 3.2.0
method6:
description:
- Configuration method to be used for IPv6.
@@ -648,16 +672,20 @@ class Nmcli(object):
self.type = module.params['type']
self.ip4 = module.params['ip4']
self.gw4 = module.params['gw4']
self.gw4_ignore_auto = module.params['gw4_ignore_auto']
self.routes4 = module.params['routes4']
self.route_metric4 = module.params['route_metric4']
self.never_default4 = module.params['never_default4']
self.dns4 = module.params['dns4']
self.dns4_search = module.params['dns4_search']
self.dns4_ignore_auto = module.params['dns4_ignore_auto']
self.method4 = module.params['method4']
self.ip6 = module.params['ip6']
self.gw6 = module.params['gw6']
self.gw6_ignore_auto = module.params['gw6_ignore_auto']
self.dns6 = module.params['dns6']
self.dns6_search = module.params['dns6_search']
self.dns6_ignore_auto = module.params['dns6_ignore_auto']
self.method6 = module.params['method6']
self.mtu = module.params['mtu']
self.stp = module.params['stp']
@@ -729,7 +757,9 @@ class Nmcli(object):
'ipv4.dhcp-client-id': self.dhcp_client_id,
'ipv4.dns': self.dns4,
'ipv4.dns-search': self.dns4_search,
'ipv4.ignore-auto-dns': self.dns4_ignore_auto,
'ipv4.gateway': self.gw4,
'ipv4.ignore-auto-routes': self.gw4_ignore_auto,
'ipv4.routes': self.routes4,
'ipv4.route-metric': self.route_metric4,
'ipv4.never-default': self.never_default4,
@@ -737,7 +767,9 @@ class Nmcli(object):
'ipv6.addresses': self.ip6,
'ipv6.dns': self.dns6,
'ipv6.dns-search': self.dns6_search,
'ipv6.ignore-auto-dns': self.dns6_ignore_auto,
'ipv6.gateway': self.gw6,
'ipv6.ignore-auto-routes': self.gw6_ignore_auto,
'ipv6.method': self.ipv6_method,
})
@@ -900,7 +932,11 @@ class Nmcli(object):
         if setting in ('bridge.stp',
                        'bridge-port.hairpin-mode',
                        'connection.autoconnect',
-                       'ipv4.never-default'):
+                       'ipv4.never-default',
+                       'ipv4.ignore-auto-dns',
+                       'ipv4.ignore-auto-routes',
+                       'ipv6.ignore-auto-dns',
+                       'ipv6.ignore-auto-routes'):
return bool
elif setting in ('ipv4.dns',
'ipv4.dns-search',
@@ -1116,17 +1152,21 @@ def main():
]),
ip4=dict(type='str'),
gw4=dict(type='str'),
gw4_ignore_auto=dict(type='bool', default=False),
routes4=dict(type='list', elements='str'),
route_metric4=dict(type='int'),
never_default4=dict(type='bool', default=False),
dns4=dict(type='list', elements='str'),
dns4_search=dict(type='list', elements='str'),
dns4_ignore_auto=dict(type='bool', default=False),
method4=dict(type='str', choices=['auto', 'link-local', 'manual', 'shared', 'disabled']),
dhcp_client_id=dict(type='str'),
ip6=dict(type='str'),
gw6=dict(type='str'),
gw6_ignore_auto=dict(type='bool', default=False),
dns6=dict(type='list', elements='str'),
dns6_search=dict(type='list', elements='str'),
dns6_ignore_auto=dict(type='bool', default=False),
method6=dict(type='str', choices=['ignore', 'auto', 'dhcp', 'link-local', 'manual', 'shared']),
# Bond Specific vars
mode=dict(type='str', default='balance-rr',
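The type dispatch extended in the hunks above can be sketched in isolation; a condensed stand-in (setting names are real nmcli properties, the function name is illustrative):

```python
def setting_type(setting):
    # Condensed version of the dispatch above: the new ignore-auto
    # flags join the existing boolean settings; anything else in this
    # sketch falls through to str.
    if setting in ('ipv4.never-default',
                   'ipv4.ignore-auto-dns',
                   'ipv4.ignore-auto-routes',
                   'ipv6.ignore-auto-dns',
                   'ipv6.ignore-auto-routes'):
        return bool
    return str

assert setting_type('ipv4.ignore-auto-dns') is bool
assert setting_type('ipv6.ignore-auto-routes') is bool
assert setting_type('ipv4.gateway') is str
```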


@@ -117,9 +117,14 @@ options:
default: false
type: bool
aliases: [ ignore-platform-reqs ]
composer_executable:
type: path
description:
- Path to composer executable on the remote host, if composer is not in C(PATH) or a custom composer is needed.
version_added: 3.2.0
requirements:
- php
-- composer installed in bin path (recommended /usr/local/bin)
+- composer installed in bin path (recommended /usr/local/bin) or specified in I(composer_executable)
notes:
- Default options that are always appended in each execution are --no-ansi, --no-interaction and --no-progress if available.
- We received reports about issues on macOS if composer was installed by Homebrew. Please use the official install method to avoid issues.
@@ -187,7 +192,11 @@ def composer_command(module, command, arguments="", options=None, global_command
else:
php_path = module.params['executable']
-    composer_path = module.get_bin_path("composer", True, ["/usr/local/bin"])
+    if module.params['composer_executable'] is None:
+        composer_path = module.get_bin_path("composer", True, ["/usr/local/bin"])
+    else:
+        composer_path = module.params['composer_executable']
cmd = "%s %s %s %s %s %s" % (php_path, composer_path, "global" if global_command else "", command, " ".join(options), arguments)
return module.run_command(cmd)
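The resolution order introduced above (explicit path beats `PATH` lookup) can be sketched without Ansible; `resolve_composer` and `find_in_path` are illustrative names, not module API:

```python
def resolve_composer(composer_executable, find_in_path):
    # composer_executable wins when set; otherwise fall back to a PATH
    # lookup, mirroring the module.get_bin_path() branch above.
    if composer_executable is None:
        return find_in_path("composer")
    return composer_executable

# A stand-in for get_bin_path that "finds" binaries in /usr/local/bin.
lookup = lambda name: "/usr/local/bin/" + name

assert resolve_composer(None, lookup) == "/usr/local/bin/composer"
assert resolve_composer("/opt/composer.phar", lookup) == "/opt/composer.phar"
```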
@@ -231,6 +240,7 @@ def main():
ignore_platform_reqs=dict(
default=False, type="bool", aliases=["ignore-platform-reqs"],
deprecated_aliases=[dict(name='ignore-platform-reqs', version='5.0.0', collection_name='community.general')]),
composer_executable=dict(type="path"),
),
required_if=[('global_command', False, ['working_dir'])],
supports_check_mode=True


@@ -129,10 +129,10 @@ options:
verify_checksum:
type: str
description:
-      - If C(never), the md5 checksum will never be downloaded and verified.
-      - If C(download), the md5 checksum will be downloaded and verified only after artifact download. This is the default.
-      - If C(change), the md5 checksum will be downloaded and verified if the destination already exist,
-        to verify if they are identical. This was the behaviour before 2.6. Since it downloads the md5 before (maybe)
+      - If C(never), the MD5/SHA1 checksum will never be downloaded and verified.
+      - If C(download), the MD5/SHA1 checksum will be downloaded and verified only after artifact download. This is the default.
+      - If C(change), the MD5/SHA1 checksum will be downloaded and verified if the destination already exists,
+        to verify if they are identical. This was the behaviour before 2.6. Since it downloads the checksum before (maybe)
downloading the artifact, and since some repository software, when acting as a proxy/cache, return a 404 error
if the artifact has not been cached yet, it may fail unexpectedly.
If you still need it, you should consider using C(always) instead - if you deal with a checksum, it is better to
@@ -141,6 +141,15 @@ options:
required: false
default: 'download'
choices: ['never', 'download', 'change', 'always']
checksum_alg:
type: str
description:
- If C(md5), checksums will use the MD5 algorithm. This is the default.
- If C(sha1), checksums will use the SHA1 algorithm. This can be used on systems configured to use
FIPS-compliant algorithms, since MD5 will be blocked on such systems.
default: 'md5'
choices: ['md5', 'sha1']
version_added: 3.2.0
directory_mode:
type: str
description:
@@ -507,7 +516,7 @@ class MavenDownloader:
raise ValueError(failmsg + " because of " + info['msg'] + "for URL " + url_to_use)
return None
-    def download(self, tmpdir, artifact, verify_download, filename=None):
+    def download(self, tmpdir, artifact, verify_download, filename=None, checksum_alg='md5'):
if (not artifact.version and not artifact.version_by_spec) or artifact.version == "latest":
artifact = Artifact(artifact.group_id, artifact.artifact_id, self.find_latest_version_available(artifact), None,
artifact.classifier, artifact.extension)
@@ -528,11 +537,11 @@ class MavenDownloader:
shutil.copyfileobj(response, f)
if verify_download:
-                invalid_md5 = self.is_invalid_md5(tempname, url)
-                if invalid_md5:
+                invalid_checksum = self.is_invalid_checksum(tempname, url, checksum_alg)
+                if invalid_checksum:
                     # if verify_change was set, the previous file would be deleted
                     os.remove(tempname)
-                    return invalid_md5
+                    return invalid_checksum
except Exception as e:
os.remove(tempname)
raise e
@@ -541,40 +550,45 @@ class MavenDownloader:
shutil.move(tempname, artifact.get_filename(filename))
return None
-    def is_invalid_md5(self, file, remote_url):
+    def is_invalid_checksum(self, file, remote_url, checksum_alg='md5'):
         if os.path.exists(file):
-            local_md5 = self._local_md5(file)
+            local_checksum = self._local_checksum(checksum_alg, file)
             if self.local:
                 parsed_url = urlparse(remote_url)
-                remote_md5 = self._local_md5(parsed_url.path)
+                remote_checksum = self._local_checksum(checksum_alg, parsed_url.path)
             else:
                 try:
-                    remote_md5 = to_text(self._getContent(remote_url + '.md5', "Failed to retrieve MD5", False), errors='strict')
+                    remote_checksum = to_text(self._getContent(remote_url + '.' + checksum_alg, "Failed to retrieve checksum", False), errors='strict')
                 except UnicodeError as e:
-                    return "Cannot retrieve a valid md5 from %s: %s" % (remote_url, to_native(e))
-                if(not remote_md5):
-                    return "Cannot find md5 from " + remote_url
+                    return "Cannot retrieve a valid %s checksum from %s: %s" % (checksum_alg, remote_url, to_native(e))
+                if not remote_checksum:
+                    return "Cannot find %s checksum from %s" % (checksum_alg, remote_url)
             try:
-                # Check if remote md5 only contains md5 or md5 + filename
-                _remote_md5 = remote_md5.split(None)[0]
-                remote_md5 = _remote_md5
-                # remote_md5 is empty so we continue and keep original md5 string
-                # This should not happen since we check for remote_md5 before
+                # Check if remote checksum only contains md5/sha1 or md5/sha1 + filename
+                _remote_checksum = remote_checksum.split(None)[0]
+                remote_checksum = _remote_checksum
+                # remote_checksum is empty so we continue and keep original checksum string
+                # This should not happen since we check for remote_checksum before
             except IndexError:
                 pass
-            if local_md5.lower() == remote_md5.lower():
+            if local_checksum.lower() == remote_checksum.lower():
                 return None
             else:
-                return "Checksum does not match: we computed " + local_md5 + " but the repository states " + remote_md5
+                return "Checksum does not match: we computed " + local_checksum + " but the repository states " + remote_checksum
         return "Path does not exist: " + file

-    def _local_md5(self, file):
-        md5 = hashlib.md5()
+    def _local_checksum(self, checksum_alg, file):
+        if checksum_alg.lower() == 'md5':
+            hash = hashlib.md5()
+        elif checksum_alg.lower() == 'sha1':
+            hash = hashlib.sha1()
+        else:
+            raise ValueError("Unknown checksum_alg %s" % checksum_alg)
         with io.open(file, 'rb') as f:
             for chunk in iter(lambda: f.read(8192), b''):
-                md5.update(chunk)
-        return md5.hexdigest()
+                hash.update(chunk)
+        return hash.hexdigest()
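The new `_local_checksum` dispatch can be exercised standalone; a self-contained sketch of the same chunked-hashing pattern (function name borrowed for illustration, run against a throwaway file):

```python
import hashlib
import io
import os
import tempfile

def local_checksum(checksum_alg, path):
    # Same dispatch as _local_checksum() above: md5 or sha1, streamed
    # in 8 KiB chunks so large artifacts are not loaded into memory.
    if checksum_alg.lower() == 'md5':
        digest = hashlib.md5()
    elif checksum_alg.lower() == 'sha1':
        digest = hashlib.sha1()
    else:
        raise ValueError("Unknown checksum_alg %s" % checksum_alg)
    with io.open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest()

fd, name = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello')

sha1_digest = local_checksum('sha1', name)
assert sha1_digest == hashlib.sha1(b'hello').hexdigest()
os.remove(name)
```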
def main():
@@ -599,6 +613,7 @@ def main():
client_key=dict(type="path", required=False),
keep_name=dict(required=False, default=False, type='bool'),
verify_checksum=dict(required=False, default='download', choices=['never', 'download', 'change', 'always']),
checksum_alg=dict(required=False, default='md5', choices=['md5', 'sha1']),
directory_mode=dict(type='str'),
),
add_file_common_args=True,
@@ -639,6 +654,7 @@ def main():
verify_checksum = module.params["verify_checksum"]
verify_download = verify_checksum in ['download', 'always']
verify_change = verify_checksum in ['change', 'always']
checksum_alg = module.params["checksum_alg"]
downloader = MavenDownloader(module, repository_url, local, headers)
@@ -683,12 +699,12 @@ def main():
b_dest = to_bytes(dest, errors='surrogate_or_strict')
-    if os.path.lexists(b_dest) and ((not verify_change) or not downloader.is_invalid_md5(dest, downloader.find_uri_for_artifact(artifact))):
+    if os.path.lexists(b_dest) and ((not verify_change) or not downloader.is_invalid_checksum(dest, downloader.find_uri_for_artifact(artifact), checksum_alg)):
         prev_state = "present"
     if prev_state == "absent":
         try:
-            download_error = downloader.download(module.tmpdir, artifact, verify_download, b_dest)
+            download_error = downloader.download(module.tmpdir, artifact, verify_download, b_dest, checksum_alg)
if download_error is None:
changed = True
else:


@@ -6,27 +6,6 @@
# Copyright: (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-# ATTENTION CONTRIBUTORS!
-#
-# TL;DR: Run this module's integration tests manually before opening a pull request
-#
-# Long explanation:
-# The integration tests for this module are currently NOT run on the Ansible project's continuous
-# delivery pipeline. So please: When you make changes to this module, make sure that you run the
-# included integration tests manually for both Python 2 and Python 3:
-#
-# Python 2:
-# ansible-test integration -v --docker fedora28 --docker-privileged --allow-unsupported --python 2.7 flatpak
-# Python 3:
-# ansible-test integration -v --docker fedora28 --docker-privileged --allow-unsupported --python 3.6 flatpak
-#
-# Because of external dependencies, the current integration tests are somewhat too slow and brittle
-# to be included right now. I have plans to rewrite the integration tests based on a local flatpak
-# repository so that they can be included into the normal CI pipeline.
-# //oolongbrothers
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
@@ -60,18 +39,28 @@ options:
name:
description:
- The name of the flatpak to manage.
-  - When used with I(state=present), I(name) can be specified as an C(http(s)) URL to a
+  - When used with I(state=present), I(name) can be specified as a URL to a
     C(flatpakref) file or the unique reverse DNS name that identifies a flatpak.
+  - Both C(https://) and C(http://) URLs are supported.
   - When supplying a reverse DNS name, you can use the I(remote) option to specify on what remote
     to look for the flatpak. An example for a reverse DNS name is C(org.gnome.gedit).
   - When used with I(state=absent), it is recommended to specify the name in the reverse DNS
     format.
-  - When supplying an C(http(s)) URL with I(state=absent), the module will try to match the
+  - When supplying a URL with I(state=absent), the module will try to match the
installed flatpak based on the name of the flatpakref to remove it. However, there is no
guarantee that the names of the flatpakref file and the reverse DNS name of the installed
flatpak do match.
type: str
required: true
no_dependencies:
description:
- Whether to omit installing runtime dependencies.
- This parameter is primarily implemented for integration testing this module.
There might however be some use cases where you would want to have this, like when you are
packaging your own flatpaks.
type: bool
default: false
version_added: 3.2.0
remote:
description:
- The flatpak remote (repository) to install the flatpak from.
@@ -94,10 +83,11 @@ EXAMPLES = r'''
name: https://s3.amazonaws.com/alexlarsson/spotify-repo/spotify.flatpakref
state: present
-- name: Install the gedit flatpak package
+- name: Install the gedit flatpak package without dependencies (not recommended)
   community.general.flatpak:
     name: https://git.gnome.org/browse/gnome-apps-nightly/plain/gedit.flatpakref
     state: present
+    no_dependencies: true
- name: Install the gedit package from flathub for current user
community.general.flatpak:
@@ -153,18 +143,21 @@ from ansible.module_utils.basic import AnsibleModule
OUTDATED_FLATPAK_VERSION_ERROR_MESSAGE = "Unknown option --columns=application"
-def install_flat(module, binary, remote, name, method):
+def install_flat(module, binary, remote, name, method, no_dependencies):
     """Add a new flatpak."""
     global result
     flatpak_version = _flatpak_version(module, binary)
+    command = [binary, "install", "--{0}".format(method)]
     if StrictVersion(flatpak_version) < StrictVersion('1.1.3'):
-        noninteractive_arg = "-y"
+        command += ["-y"]
     else:
-        noninteractive_arg = "--noninteractive"
+        command += ["--noninteractive"]
+    if no_dependencies:
+        command += ["--no-deps"]
     if name.startswith('http://') or name.startswith('https://'):
-        command = [binary, "install", "--{0}".format(method), noninteractive_arg, name]
+        command += [name]
     else:
-        command = [binary, "install", "--{0}".format(method), noninteractive_arg, remote, name]
+        command += [remote, name]
     _flatpak_command(module, module.check_mode, command)
     result['changed'] = True
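The incremental command assembly introduced here can be sketched without Ansible; this stand-in replaces the `StrictVersion('1.1.3')` gate with a plain tuple compare (assumed equivalent for dotted numeric versions):

```python
def build_install_command(binary, method, flatpak_version, no_dependencies, name, remote):
    # Mirrors the assembly above: base command, then version-dependent
    # non-interactive flag, then optional --no-deps, then the target.
    command = [binary, "install", "--{0}".format(method)]
    if tuple(int(p) for p in flatpak_version.split(".")) < (1, 1, 3):
        command += ["-y"]
    else:
        command += ["--noninteractive"]
    if no_dependencies:
        command += ["--no-deps"]
    if name.startswith(("http://", "https://")):
        command += [name]
    else:
        command += [remote, name]
    return command

cmd = build_install_command("flatpak", "system", "1.10.2", True,
                            "org.gnome.gedit", "flathub")
assert cmd == ["flatpak", "install", "--system", "--noninteractive",
               "--no-deps", "flathub", "org.gnome.gedit"]
```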
@@ -279,6 +272,7 @@ def main():
choices=['user', 'system']),
state=dict(type='str', default='present',
choices=['absent', 'present']),
no_dependencies=dict(type='bool', default=False),
executable=dict(type='path', default='flatpak')
),
supports_check_mode=True,
@@ -287,6 +281,7 @@ def main():
name = module.params['name']
state = module.params['state']
remote = module.params['remote']
no_dependencies = module.params['no_dependencies']
method = module.params['method']
executable = module.params['executable']
binary = module.get_bin_path(executable, None)
@@ -301,7 +296,7 @@ def main():
module.fail_json(msg="Executable '%s' was not found on the system." % executable, **result)
if state == 'present' and not flatpak_exists(module, binary, name, method):
-        install_flat(module, binary, remote, name, method)
+        install_flat(module, binary, remote, name, method, no_dependencies)
elif state == 'absent' and flatpak_exists(module, binary, name, method):
uninstall_flat(module, binary, name, method)


@@ -6,27 +6,6 @@
# Copyright: (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-# ATTENTION CONTRIBUTORS!
-#
-# TL;DR: Run this module's integration tests manually before opening a pull request
-#
-# Long explanation:
-# The integration tests for this module are currently NOT run on the Ansible project's continuous
-# delivery pipeline. So please: When you make changes to this module, make sure that you run the
-# included integration tests manually for both Python 2 and Python 3:
-#
-# Python 2:
-# ansible-test integration -v --docker fedora28 --docker-privileged --allow-unsupported --python 2.7 flatpak_remote
-# Python 3:
-# ansible-test integration -v --docker fedora28 --docker-privileged --allow-unsupported --python 3.6 flatpak_remote
-#
-# Because of external dependencies, the current integration tests are somewhat too slow and brittle
-# to be included right now. I have plans to rewrite the integration tests based on a local flatpak
-# repository so that they can be included into the normal CI pipeline.
-# //oolongbrothers
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type


@@ -0,0 +1,314 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, George Rawlinson <george@rawlinson.net.nz>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
---
module: pacman_key
author:
- George Rawlinson (@grawlinson)
version_added: "3.2.0"
short_description: Manage pacman's list of trusted keys
description:
- Add or remove gpg keys from the pacman keyring.
notes:
- Use full-length key ID (40 characters).
- Keys will be verified when using I(data), I(file), or I(url) unless I(verify) is overridden.
- Keys will be locally signed after being imported into the keyring.
- If the key ID exists in the keyring, the key will not be added unless I(force_update) is specified.
- I(data), I(file), I(url), and I(keyserver) are mutually exclusive.
- Supports C(check_mode).
requirements:
- gpg
- pacman-key
options:
id:
description:
- The 40 character identifier of the key.
- Including this allows check mode to correctly report the changed state.
- Do not specify a subkey ID, instead specify the primary key ID.
required: true
type: str
data:
description:
- The keyfile contents to add to the keyring.
- Must be of C(PGP PUBLIC KEY BLOCK) type.
type: str
file:
description:
- The path to a keyfile on the remote server to add to the keyring.
- Remote file must be of C(PGP PUBLIC KEY BLOCK) type.
type: path
url:
description:
- The URL to retrieve keyfile from.
- Remote file must be of C(PGP PUBLIC KEY BLOCK) type.
type: str
keyserver:
description:
- The keyserver used to retrieve key from.
type: str
verify:
description:
- Whether or not to verify the keyfile's key ID against specified key ID.
type: bool
default: true
force_update:
description:
- This forces the key to be updated if it already exists in the keyring.
type: bool
default: false
keyring:
description:
- The full path to the keyring folder on the remote server.
- If not specified, module will use pacman's default (C(/etc/pacman.d/gnupg)).
- Useful if the remote system requires an alternative gnupg directory.
type: path
default: /etc/pacman.d/gnupg
state:
description:
- Ensures that the key is present (added) or absent (revoked).
default: present
choices: [ absent, present ]
type: str
'''
EXAMPLES = '''
- name: Import a key via local file
community.general.pacman_key:
data: "{{ lookup('file', 'keyfile.asc') }}"
state: present
- name: Import a key via remote file
community.general.pacman_key:
file: /tmp/keyfile.asc
state: present
- name: Import a key via url
community.general.pacman_key:
id: 01234567890ABCDE01234567890ABCDE12345678
url: https://domain.tld/keys/keyfile.asc
state: present
- name: Import a key via keyserver
community.general.pacman_key:
id: 01234567890ABCDE01234567890ABCDE12345678
keyserver: keyserver.domain.tld
- name: Import a key into an alternative keyring
community.general.pacman_key:
id: 01234567890ABCDE01234567890ABCDE12345678
file: /tmp/keyfile.asc
keyring: /etc/pacman.d/gnupg-alternative
- name: Remove a key from the keyring
community.general.pacman_key:
id: 01234567890ABCDE01234567890ABCDE12345678
state: absent
'''
RETURN = r''' # '''
import os.path
import tempfile
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url
from ansible.module_utils._text import to_native
class PacmanKey(object):
def __init__(self, module):
self.module = module
# obtain binary paths for gpg & pacman-key
self.gpg = module.get_bin_path('gpg', required=True)
self.pacman_key = module.get_bin_path('pacman-key', required=True)
# obtain module parameters
keyid = module.params['id']
url = module.params['url']
data = module.params['data']
file = module.params['file']
keyserver = module.params['keyserver']
verify = module.params['verify']
force_update = module.params['force_update']
keyring = module.params['keyring']
state = module.params['state']
self.keylength = 40
# sanitise key ID & check if key exists in the keyring
keyid = self.sanitise_keyid(keyid)
key_present = self.key_in_keyring(keyring, keyid)
# check mode
if module.check_mode:
if state == "present":
changed = (key_present and force_update) or not key_present
module.exit_json(changed=changed)
elif state == "absent":
if key_present:
module.exit_json(changed=True)
module.exit_json(changed=False)
if state == "present":
if key_present and not force_update:
module.exit_json(changed=False)
if data:
file = self.save_key(data)
self.add_key(keyring, file, keyid, verify)
module.exit_json(changed=True)
elif file:
self.add_key(keyring, file, keyid, verify)
module.exit_json(changed=True)
elif url:
data = self.fetch_key(url)
file = self.save_key(data)
self.add_key(keyring, file, keyid, verify)
module.exit_json(changed=True)
elif keyserver:
self.recv_key(keyring, keyid, keyserver)
module.exit_json(changed=True)
elif state == "absent":
if key_present:
self.remove_key(keyring, keyid)
module.exit_json(changed=True)
module.exit_json(changed=False)
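The check-mode branch above reduces to a small truth table; a standalone sketch (function name is illustrative, the logic is copied from the branch above):

```python
def check_mode_changed(state, key_present, force_update):
    # Mirrors the check-mode logic above: present means "add unless
    # already there (or force_update)", absent means "remove if there".
    if state == "present":
        return (key_present and force_update) or not key_present
    return key_present  # state == "absent"

assert check_mode_changed("present", key_present=True, force_update=False) is False
assert check_mode_changed("present", key_present=True, force_update=True) is True
assert check_mode_changed("present", key_present=False, force_update=False) is True
assert check_mode_changed("absent", key_present=True, force_update=False) is True
assert check_mode_changed("absent", key_present=False, force_update=False) is False
```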
def is_hexadecimal(self, string):
"""Check if a given string is valid hexadecimal"""
try:
int(string, 16)
except ValueError:
return False
return True
def sanitise_keyid(self, keyid):
"""Sanitise given key ID.
Strips whitespace, uppercases all characters, and strips leading `0X`.
"""
sanitised_keyid = keyid.strip().upper().replace(' ', '').replace('0X', '')
if len(sanitised_keyid) != self.keylength:
self.module.fail_json(msg="key ID is not full-length: %s" % sanitised_keyid)
if not self.is_hexadecimal(sanitised_keyid):
self.module.fail_json(msg="key ID is not hexadecimal: %s" % sanitised_keyid)
return sanitised_keyid
def fetch_key(self, url):
"""Downloads a key from url"""
response, info = fetch_url(self.module, url)
if info['status'] != 200:
self.module.fail_json(msg="failed to fetch key at %s, error was %s" % (url, info['msg']))
return to_native(response.read())
def recv_key(self, keyring, keyid, keyserver):
"""Receives key via keyserver"""
cmd = [self.pacman_key, '--gpgdir', keyring, '--keyserver', keyserver, '--recv-keys', keyid]
self.module.run_command(cmd, check_rc=True)
self.lsign_key(keyring, keyid)
def lsign_key(self, keyring, keyid):
"""Locally sign key"""
cmd = [self.pacman_key, '--gpgdir', keyring]
self.module.run_command(cmd + ['--lsign-key', keyid], check_rc=True)
def save_key(self, data):
"Saves key data to a temporary file"
tmpfd, tmpname = tempfile.mkstemp()
self.module.add_cleanup_file(tmpname)
tmpfile = os.fdopen(tmpfd, "w")
tmpfile.write(data)
tmpfile.close()
return tmpname
def add_key(self, keyring, keyfile, keyid, verify):
"""Add key to pacman's keyring"""
if verify:
self.verify_keyfile(keyfile, keyid)
cmd = [self.pacman_key, '--gpgdir', keyring, '--add', keyfile]
self.module.run_command(cmd, check_rc=True)
self.lsign_key(keyring, keyid)
def remove_key(self, keyring, keyid):
"""Remove key from pacman's keyring"""
cmd = [self.pacman_key, '--gpgdir', keyring, '--delete', keyid]
self.module.run_command(cmd, check_rc=True)
def verify_keyfile(self, keyfile, keyid):
"""Verify that keyfile matches the specified key ID"""
if keyfile is None:
self.module.fail_json(msg="expected a key, got none")
elif keyid is None:
self.module.fail_json(msg="expected a key ID, got none")
rc, stdout, stderr = self.module.run_command(
[
self.gpg,
'--with-colons',
'--with-fingerprint',
'--batch',
'--no-tty',
'--show-keys',
keyfile
],
check_rc=True,
)
extracted_keyid = None
for line in stdout.splitlines():
if line.startswith('fpr:'):
extracted_keyid = line.split(':')[9]
break
if extracted_keyid != keyid:
self.module.fail_json(msg="key ID does not match. expected %s, got %s" % (keyid, extracted_keyid))
def key_in_keyring(self, keyring, keyid):
"Check if the key ID is in pacman's keyring"
rc, stdout, stderr = self.module.run_command(
[
self.gpg,
'--with-colons',
'--batch',
'--no-tty',
'--no-default-keyring',
'--keyring=%s/pubring.gpg' % keyring,
'--list-keys', keyid
],
check_rc=False,
)
if rc != 0:
if stderr.find("No public key") >= 0:
return False
else:
self.module.fail_json(msg="gpg returned an error: %s" % stderr)
return True
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str', required=True),
data=dict(type='str'),
file=dict(type='path'),
url=dict(type='str'),
keyserver=dict(type='str'),
verify=dict(type='bool', default=True),
force_update=dict(type='bool', default=False),
keyring=dict(type='path', default='/etc/pacman.d/gnupg'),
state=dict(type='str', default='present', choices=['absent', 'present']),
),
supports_check_mode=True,
mutually_exclusive=(('data', 'file', 'url', 'keyserver'),),
required_if=[('state', 'present', ('data', 'file', 'url', 'keyserver'), True)],
)
PacmanKey(module)
if __name__ == '__main__':
main()
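The fingerprint check in `verify_keyfile()` above hinges on parsing gpg's colon-delimited `--with-colons` output. A minimal standalone sketch of that parsing step, using illustrative sample output (not captured from a real gpg run):

```python
# Standalone sketch of the extraction done by PacmanKey.verify_keyfile():
# gpg --with-colons emits one colon-delimited record per line, and the full
# fingerprint is field 10 (index 9) of the first "fpr" record.
# SAMPLE_OUTPUT is illustrative sample data.
SAMPLE_OUTPUT = """\
pub:-:2048:1:89ABCDEF01234567:1609459200:::-:::scESC::::::23::0:
fpr:::::::::0123456789ABCDEF0123456789ABCDEF01234567:
uid:-::::1609459200::DEADBEEF::Example Packager <packager@example.com>::::::::::0:
"""


def extract_fingerprint(gpg_stdout):
    """Return the first fingerprint in colon-format gpg output, or None."""
    for line in gpg_stdout.splitlines():
        if line.startswith('fpr:'):
            return line.split(':')[9]
    return None


print(extract_fingerprint(SAMPLE_OUTPUT))
```

The module compares this 40-character fingerprint against the sanitised key ID and fails if they differ.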

View File

@@ -56,9 +56,9 @@ from ansible.module_utils.basic import AnsibleModule
import re
# Matches release-like values such as 7.2, 6.10, 10Server,
# but rejects unlikely values, like 100Server, 100.0, 1.100, etc.
release_matcher = re.compile(r'\b\d{1,2}(?:\.\d{1,2}|Server)\b')
# Matches release-like values such as 7.2, 5.10, 6Server, 8
# but rejects unlikely values, like 100Server, 1.100, 7server etc.
release_matcher = re.compile(r'\b\d{1,2}(?:\.\d{1,2}|Server|Client|Workstation|)\b')
def _sm_release(module, *args):
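The behavioural difference introduced by the broadened matcher can be checked directly; the sample values below mirror the ones named in the comments:

```python
import re

# The broadened matcher from the patch: the trailing empty alternative lets
# a bare major version such as "8" match, and Client/Workstation suffixes
# are now accepted alongside Server, while "7server" and "100Server" are
# still rejected.
release_matcher = re.compile(r'\b\d{1,2}(?:\.\d{1,2}|Server|Client|Workstation|)\b')

for value in ('7.2', '6Server', '8', '7server', '100Server'):
    print(value, bool(release_matcher.search(value)))
```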

View File

@@ -175,7 +175,7 @@ def _parse_repos(module):
module.fail_json(msg='Failed to execute "%s"' % " ".join(cmd), rc=rc, stdout=stdout, stderr=stderr)
def _repo_changes(realrepo, repocmp):
def _repo_changes(module, realrepo, repocmp):
"Check whether the 2 given repos have different settings."
for k in repocmp:
if repocmp[k] and k not in realrepo:
@@ -186,6 +186,16 @@ def _repo_changes(realrepo, repocmp):
valold = str(repocmp[k] or "")
valnew = v or ""
if k == "url":
if '$releasever' in valold or '$releasever' in valnew:
cmd = ['rpm', '-q', '--qf', '%{version}', '-f', '/etc/os-release']
rc, stdout, stderr = module.run_command(cmd, check_rc=True)
valnew = valnew.replace('$releasever', stdout)
valold = valold.replace('$releasever', stdout)
if '$basearch' in valold or '$basearch' in valnew:
cmd = ['rpm', '-q', '--qf', '%{arch}', '-f', '/etc/os-release']
rc, stdout, stderr = module.run_command(cmd, check_rc=True)
valnew = valnew.replace('$basearch', stdout)
valold = valold.replace('$basearch', stdout)
valold, valnew = valold.rstrip("/"), valnew.rstrip("/")
if valold != valnew:
return True
@@ -215,7 +225,7 @@ def repo_exists(module, repodata, overwrite_multiple):
return (False, False, None)
elif len(repos) == 1:
# Found an existing repo, look for changes
has_changes = _repo_changes(repos[0], repodata)
has_changes = _repo_changes(module, repos[0], repodata)
return (True, has_changes, repos)
elif len(repos) >= 2:
if overwrite_multiple:
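The URL comparison the patched `_repo_changes()` performs can be sketched in isolation. This is not the module's code: the hard-coded values below stand in for the output of the two `rpm -q --qf … -f /etc/os-release` calls:

```python
# Sketch of the patched comparison: $releasever and $basearch are expanded
# on both sides before comparing, and trailing slashes are ignored, so a
# templated repo URL is not reported as a change against its expanded form.
def urls_differ(old, new, releasever='15.3', basearch='x86_64'):
    for var, value in (('$releasever', releasever), ('$basearch', basearch)):
        old = old.replace(var, value)
        new = new.replace(var, value)
    return old.rstrip('/') != new.rstrip('/')


print(urls_differ('http://dl.example.com/SLE/$releasever/$basearch/',
                  'http://dl.example.com/SLE/15.3/x86_64'))
```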

View File

@@ -0,0 +1 @@
packaging/os/pacman_key.py

View File

@@ -12,46 +12,48 @@ DOCUMENTATION = '''
module: stacki_host
short_description: Add or remove host to stacki front-end
description:
- Use this module to add or remove hosts to a stacki front-end via API.
- U(https://github.com/StackIQ/stacki)
- Use this module to add or remove hosts to a stacki front-end via API.
- Information on stacki can be found at U(https://github.com/StackIQ/stacki).
options:
name:
description:
- Name of the host to be added to Stacki.
- Name of the host to be added to Stacki.
required: True
type: str
stacki_user:
description:
- Username for authenticating with Stacki API, but if not
specified, the environment variable C(stacki_user) is used instead.
- Username for authenticating with Stacki API, but if not specified, the environment variable C(stacki_user) is used instead.
required: True
type: str
stacki_password:
description:
- Password for authenticating with Stacki API, but if not
- Password for authenticating with Stacki API, but if not
specified, the environment variable C(stacki_password) is used instead.
required: True
type: str
stacki_endpoint:
description:
- URL for the Stacki API Endpoint.
- URL for the Stacki API Endpoint.
required: True
type: str
prim_intf_mac:
description:
- MAC Address for the primary PXE boot network interface.
- MAC Address for the primary PXE boot network interface.
- Currently not used by the module.
type: str
prim_intf_ip:
description:
- IP Address for the primary network interface.
- IP Address for the primary network interface.
- Currently not used by the module.
type: str
prim_intf:
description:
- Name of the primary network interface.
- Name of the primary network interface.
- Currently not used by the module.
type: str
force_install:
description:
- Set value to True to force node into install state if it already exists in stacki.
- Set value to C(true) to force node into install state if it already exists in stacki.
type: bool
default: no
state:
@@ -59,6 +61,30 @@ options:
- Set value to the desired state for the specified host.
type: str
choices: [ absent, present ]
default: present
appliance:
description:
- Appliance to be used in host creation.
- Required if I(state) is C(present) and host does not yet exist.
type: str
default: backend
rack:
description:
- Rack to be used in host creation.
- Required if I(state) is C(present) and host does not yet exist.
type: int
rank:
description:
- Rank to be used in host creation.
- In Stacki terminology, the rank is the position of the machine in a rack.
- Required if I(state) is C(present) and host does not yet exist.
type: int
network:
description:
- Network to be configured in the host.
- Currently not used by the module.
type: str
default: private
author:
- Hugh Ma (@bbyhuy) <Hugh.Ma@flextronics.com>
'''
@@ -128,7 +154,7 @@ class StackiHost(object):
'PASSWORD': module.params['stacki_password']}
# Get Initial CSRF
cred_a = self.do_request(self.module, self.endpoint, method="GET")
cred_a = self.do_request(self.endpoint, method="GET")
cookie_a = cred_a.headers.get('Set-Cookie').split(';')
init_csrftoken = None
for c in cookie_a:
@@ -145,8 +171,7 @@ class StackiHost(object):
login_endpoint = self.endpoint + "/login"
# Get Final CSRF and Session ID
login_req = self.do_request(self.module, login_endpoint, headers=header,
payload=urlencode(auth_creds), method='POST')
login_req = self.do_request(login_endpoint, headers=header, payload=urlencode(auth_creds), method='POST')
cookie_f = login_req.headers.get('Set-Cookie').split(';')
csrftoken = None
@@ -163,8 +188,8 @@ class StackiHost(object):
'Content-type': 'application/json',
'Cookie': login_req.headers.get('Set-Cookie')}
def do_request(self, module, url, payload=None, headers=None, method=None):
res, info = fetch_url(module, url, data=payload, headers=headers, method=method)
def do_request(self, url, payload=None, headers=None, method=None):
res, info = fetch_url(self.module, url, data=payload, headers=headers, method=method)
if info['status'] != 200:
self.module.fail_json(changed=False, msg=info['msg'])
@@ -172,24 +197,16 @@ class StackiHost(object):
return res
def stack_check_host(self):
res = self.do_request(self.module, self.endpoint, payload=json.dumps({"cmd": "list host"}), headers=self.header, method="POST")
if self.hostname in res.read():
return True
else:
return False
res = self.do_request(self.endpoint, payload=json.dumps({"cmd": "list host"}), headers=self.header, method="POST")
return self.hostname in res.read()
def stack_sync(self):
self.do_request(self.module, self.endpoint, payload=json.dumps({"cmd": "sync config"}), headers=self.header, method="POST")
self.do_request(self.module, self.endpoint, payload=json.dumps({"cmd": "sync host config"}), headers=self.header, method="POST")
self.do_request(self.endpoint, payload=json.dumps({"cmd": "sync config"}), headers=self.header, method="POST")
self.do_request(self.endpoint, payload=json.dumps({"cmd": "sync host config"}), headers=self.header, method="POST")
def stack_force_install(self, result):
data = dict()
changed = False
data['cmd'] = "set host boot {0} action=install" \
.format(self.hostname)
self.do_request(self.module, self.endpoint, payload=json.dumps(data), headers=self.header, method="POST")
data = {'cmd': "set host boot {0} action=install".format(self.hostname)}
self.do_request(self.endpoint, payload=json.dumps(data), headers=self.header, method="POST")
changed = True
self.stack_sync()
@@ -203,7 +220,7 @@ class StackiHost(object):
data['cmd'] = "add host {0} rack={1} rank={2} appliance={3}"\
.format(self.hostname, self.rack, self.rank, self.appliance)
self.do_request(self.module, self.endpoint, payload=json.dumps(data), headers=self.header, method="POST")
self.do_request(self.endpoint, payload=json.dumps(data), headers=self.header, method="POST")
self.stack_sync()
@@ -215,7 +232,7 @@ class StackiHost(object):
data['cmd'] = "remove host {0}"\
.format(self.hostname)
self.do_request(self.module, self.endpoint, payload=json.dumps(data), headers=self.header, method="POST")
self.do_request(self.endpoint, payload=json.dumps(data), headers=self.header, method="POST")
self.stack_sync()
@@ -258,8 +275,7 @@ def main():
.format(module.params['name'])
# Otherwise, state is present, but host doesn't exist, require more params to add host
elif module.params['state'] == 'present' and not host_exists:
for param in ['appliance', 'prim_intf',
'prim_intf_ip', 'network', 'prim_intf_mac']:
for param in ['appliance', 'rack', 'rank', 'prim_intf', 'prim_intf_ip', 'network', 'prim_intf_mac']:
if not module.params[param]:
missing_params.append(param)
if len(missing_params) > 0: # @FIXME replace with required_if

View File

@@ -0,0 +1 @@
./files/sapcar_extract.py

View File

@@ -304,7 +304,7 @@ def write_state(b_path, lines, changed):
return changed
def initialize_from_null_state(initializer, initcommand, table):
def initialize_from_null_state(initializer, initcommand, fallbackcmd, table):
'''
This ensures iptables-state output is suitable for iptables-restore to roll
back to it, i.e. iptables-save output is not empty. This also works for the
@@ -315,8 +315,14 @@ def initialize_from_null_state(initializer, initcommand, table):
commandline = list(initializer)
commandline += ['-t', table]
(rc, out, err) = module.run_command(commandline, check_rc=True)
dummy = module.run_command(commandline, check_rc=True)
(rc, out, err) = module.run_command(initcommand, check_rc=True)
if '*%s' % table not in out.splitlines():
# The last resort.
iptables_input = '*%s\n:OUTPUT ACCEPT\nCOMMIT\n' % table
dummy = module.run_command(fallbackcmd, data=iptables_input, check_rc=True)
(rc, out, err) = module.run_command(initcommand, check_rc=True)
return rc, out, err
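The last-resort ruleset that the patched helper pipes into `iptables-restore` (the new `FALLBACKCMD`) is built from the table name alone, per the line in the hunk above:

```python
# Sketch of the fallback input: a minimal ruleset declaring the missing
# table, so that the subsequent iptables-save run includes that table and
# its output becomes restorable.
def fallback_ruleset(table):
    return '*%s\n:OUTPUT ACCEPT\nCOMMIT\n' % table


print(fallback_ruleset('nat'))
```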
@@ -401,6 +407,7 @@ def main():
INITCOMMAND = [bin_iptables_save]
INITIALIZER = [bin_iptables, '-L', '-n']
TESTCOMMAND = [bin_iptables_restore, '--test']
FALLBACKCMD = [bin_iptables_restore]
if counters:
COMMANDARGS.append('--counters')
@@ -425,6 +432,7 @@ def main():
INITIALIZER.extend(['--modprobe', modprobe])
INITCOMMAND.extend(['--modprobe', modprobe])
TESTCOMMAND.extend(['--modprobe', modprobe])
FALLBACKCMD.extend(['--modprobe', modprobe])
SAVECOMMAND = list(COMMANDARGS)
SAVECOMMAND.insert(0, bin_iptables_save)
@@ -458,15 +466,15 @@ def main():
for t in TABLES:
if '*%s' % t in state_to_restore:
if len(stdout) == 0 or '*%s' % t not in stdout.splitlines():
(rc, stdout, stderr) = initialize_from_null_state(INITIALIZER, INITCOMMAND, t)
(rc, stdout, stderr) = initialize_from_null_state(INITIALIZER, INITCOMMAND, FALLBACKCMD, t)
elif len(stdout) == 0:
(rc, stdout, stderr) = initialize_from_null_state(INITIALIZER, INITCOMMAND, 'filter')
(rc, stdout, stderr) = initialize_from_null_state(INITIALIZER, INITCOMMAND, FALLBACKCMD, 'filter')
elif state == 'restored' and '*%s' % table not in state_to_restore:
module.fail_json(msg="Table %s to restore not defined in %s" % (table, path))
elif len(stdout) == 0 or '*%s' % table not in stdout.splitlines():
(rc, stdout, stderr) = initialize_from_null_state(INITIALIZER, INITCOMMAND, table)
(rc, stdout, stderr) = initialize_from_null_state(INITIALIZER, INITCOMMAND, FALLBACKCMD, table)
initial_state = filter_and_format_state(stdout)
if initial_state is None:

View File

@@ -278,7 +278,7 @@ def _export_public_cert_from_pkcs12(module, executable, pkcs_file, alias, passwo
(export_rc, export_stdout, export_err) = module.run_command(export_cmd, data=password, check_rc=False)
if export_rc != 0:
module.fail_json(msg="Internal module failure, cannot extract public certificate from pkcs12, error: %s" % export_err,
module.fail_json(msg="Internal module failure, cannot extract public certificate from pkcs12, error: %s" % export_stdout,
rc=export_rc)
with open(dest, 'w') as f:
@@ -498,7 +498,7 @@ def main():
if pkcs12_path:
# Extracting certificate with openssl
_export_public_cert_from_pkcs12(module, executable, pkcs12_path, cert_alias, pkcs12_pass, new_certificate)
_export_public_cert_from_pkcs12(module, executable, pkcs12_path, pkcs12_alias, pkcs12_pass, new_certificate)
elif path:
# Extracting the X509 digest is a bit easier. Keytool will print the PEM

View File

@@ -57,6 +57,11 @@ options:
- Whether the target node should be automatically connected at startup.
type: bool
aliases: [ automatic ]
auto_portal_startup:
description:
- Whether the target node portal should be automatically connected at startup.
type: bool
version_added: 3.2.0
discover:
description:
- Whether the list of target nodes on the portal should be
@@ -102,10 +107,18 @@ EXAMPLES = r'''
community.general.open_iscsi:
login: no
target: iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
- name: Override and disable automatic portal login on specific portal
community.general.open_iscsi:
login: false
portal: 10.1.1.250
auto_portal_startup: false
target: iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
'''
import glob
import os
import re
import socket
import time
@@ -158,12 +171,18 @@ def iscsi_discover(module, portal, port):
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_loggedon(module, target):
def target_loggedon(module, target, portal=None, port=None):
cmd = '%s --mode session' % iscsiadm_cmd
(rc, out, err) = module.run_command(cmd)
if portal is None:
portal = ""
if port is None:
port = ""
if rc == 0:
return target in out
search_re = "%s:%s.*%s" % (re.escape(portal), port, re.escape(target))
return re.search(search_re, out) is not None
elif rc == 21:
return False
else:
@@ -219,8 +238,14 @@ def target_device_node(module, target):
return devdisks
def target_isauto(module, target):
def target_isauto(module, target, portal=None, port=None):
cmd = '%s --mode node --targetname %s' % (iscsiadm_cmd, target)
if portal is not None:
if port is not None:
portal = '%s:%s' % (portal, port)
cmd = '%s --portal %s' % (cmd, portal)
(rc, out, err) = module.run_command(cmd)
if rc == 0:
@@ -233,16 +258,28 @@ def target_isauto(module, target):
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_setauto(module, target):
def target_setauto(module, target, portal=None, port=None):
cmd = '%s --mode node --targetname %s --op=update --name node.startup --value automatic' % (iscsiadm_cmd, target)
if portal is not None:
if port is not None:
portal = '%s:%s' % (portal, port)
cmd = '%s --portal %s' % (cmd, portal)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_setmanual(module, target):
def target_setmanual(module, target, portal=None, port=None):
cmd = '%s --mode node --targetname %s --op=update --name node.startup --value manual' % (iscsiadm_cmd, target)
if portal is not None:
if port is not None:
portal = '%s:%s' % (portal, port)
cmd = '%s --portal %s' % (cmd, portal)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
@@ -265,6 +302,7 @@ def main():
# actions
login=dict(type='bool', aliases=['state']),
auto_node_startup=dict(type='bool', aliases=['automatic']),
auto_portal_startup=dict(type='bool'),
discover=dict(type='bool', default=False),
show_nodes=dict(type='bool', default=False),
),
@@ -288,6 +326,7 @@ def main():
port = module.params['port']
login = module.params['login']
automatic = module.params['auto_node_startup']
automatic_portal = module.params['auto_portal_startup']
discover = module.params['discover']
show_nodes = module.params['show_nodes']
@@ -333,7 +372,7 @@ def main():
result['nodes'] = nodes
if login is not None:
loggedon = target_loggedon(module, target)
loggedon = target_loggedon(module, target, portal, port)
if (login and loggedon) or (not login and not loggedon):
result['changed'] |= False
if login:
@@ -368,6 +407,22 @@ def main():
result['changed'] |= True
result['automatic_changed'] = True
if automatic_portal is not None:
isauto = target_isauto(module, target, portal, port)
if (automatic_portal and isauto) or (not automatic_portal and not isauto):
result['changed'] |= False
result['automatic_portal_changed'] = False
elif not check:
if automatic_portal:
target_setauto(module, target, portal, port)
else:
target_setmanual(module, target, portal, port)
result['changed'] |= True
result['automatic_portal_changed'] = True
else:
result['changed'] |= True
result['automatic_portal_changed'] = True
module.exit_json(**result)
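The per-portal session matching added to `target_loggedon()` can be exercised on its own; `SESSION_LINE` is illustrative sample output, not captured from a real `iscsiadm` run:

```python
import re

# Sketch of the patched check: the `iscsiadm --mode session` output is
# searched for "portal:port ... target", so login state is now tracked per
# portal instead of matching the target name anywhere in the output.
def session_matches(output, target, portal='', port=''):
    pattern = '%s:%s.*%s' % (re.escape(portal), port, re.escape(target))
    return re.search(pattern, output) is not None


SESSION_LINE = 'tcp: [1] 10.1.1.250:3260,1 iqn.1986-03.com.sun:02:f8c1f9e0 (non-flash)'
print(session_matches(SESSION_LINE, 'iqn.1986-03.com.sun:02:f8c1f9e0',
                      '10.1.1.250', 3260))
```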

View File

@@ -100,7 +100,7 @@ options:
fs_type:
description:
- If specified and the partition does not exist, will set filesystem type to given partition.
- Parameter optional, but see notes below about negative negative C(part_start) values.
- Parameter optional, but see notes below about negative C(part_start) values.
type: str
version_added: '0.2.0'
resize:

View File

@@ -209,6 +209,8 @@ class SSHConfig():
hosts_removed = []
hosts_added = []
hosts_result = [host for host in hosts_result if host['host'] == self.host]
if hosts_result:
for host in hosts_result:
if state == 'absent':

View File

@@ -696,7 +696,8 @@ class JenkinsPlugin(object):
self._get_url_data(
url,
msg_status="Plugin not found. %s" % url,
msg_exception="%s has failed." % msg)
msg_exception="%s has failed." % msg,
method="POST")
def main():

View File

@@ -86,6 +86,25 @@ options:
- The comment text to add.
- Note that JIRA may not allow changing field values on specific transitions or states.
comment_visibility:
type: dict
description:
- Used to specify comment visibility.
- See U(https://developer.atlassian.com/cloud/jira/platform/rest/v2/api-group-issue-comments/#api-rest-api-2-issue-issueidorkey-comment-post) for details.
suboptions:
type:
description:
- Use type to specify which of the JIRA visibility restriction types will be used.
type: str
required: true
choices: [group, role]
value:
description:
- Use value to specify the value corresponding to the type of visibility restriction, for example the name of the group or role.
type: str
required: true
version_added: '3.2.0'
status:
type: str
required: false
@@ -223,6 +242,18 @@ EXAMPLES = r"""
operation: comment
comment: A comment added by Ansible
- name: Comment on issue with restricted visibility
community.general.jira:
uri: '{{ server }}'
username: '{{ user }}'
password: '{{ pass }}'
issue: '{{ issue.meta.key }}'
operation: comment
comment: A comment added by Ansible
comment_visibility:
type: role
value: Developers
# Assign an existing issue using edit
- name: Assign an issue using free-form fields
community.general.jira:
@@ -385,6 +416,10 @@ class JIRA(StateModuleHelper):
issuetype=dict(type='str', ),
issue=dict(type='str', aliases=['ticket']),
comment=dict(type='str', ),
comment_visibility=dict(type='dict', options=dict(
type=dict(type='str', choices=['group', 'role'], required=True),
value=dict(type='str', required=True)
)),
status=dict(type='str', ),
assignee=dict(type='str', ),
fields=dict(default={}, type='dict'),
@@ -445,6 +480,10 @@ class JIRA(StateModuleHelper):
data = {
'body': self.vars.comment
}
# if comment_visibility is specified restrict visibility
if self.vars.comment_visibility is not None:
data['visibility'] = self.vars.comment_visibility
url = self.vars.restbase + '/issue/' + self.vars.issue + '/comment'
self.vars.meta = self.post(url, data)
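The payload assembly shown in the hunk above reduces to a small, self-contained sketch; the visibility dict values are illustrative:

```python
# Sketch of the patched comment operation: when comment_visibility is
# given, it is passed through as the "visibility" member of the JIRA
# comment REST payload; otherwise the body contains only the comment text.
def build_comment_body(comment, comment_visibility=None):
    data = {'body': comment}
    if comment_visibility is not None:
        data['visibility'] = comment_visibility
    return data


body = build_comment_body('A comment added by Ansible',
                          comment_visibility={'type': 'role', 'value': 'Developers'})
print(body)
```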

View File

@@ -56,6 +56,7 @@ import sys
from collections import defaultdict
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.six import PY2
import json
@@ -106,14 +107,24 @@ def create_connection():
config_path = os.environ.get('OVIRT_INI_PATH', default_path)
# Create parser and add ovirt section if it doesn't exist:
config = configparser.SafeConfigParser(
defaults={
'ovirt_url': os.environ.get('OVIRT_URL'),
'ovirt_username': os.environ.get('OVIRT_USERNAME'),
'ovirt_password': os.environ.get('OVIRT_PASSWORD'),
'ovirt_ca_file': os.environ.get('OVIRT_CAFILE', ''),
}
)
if PY2:
config = configparser.SafeConfigParser(
defaults={
'ovirt_url': os.environ.get('OVIRT_URL'),
'ovirt_username': os.environ.get('OVIRT_USERNAME'),
'ovirt_password': os.environ.get('OVIRT_PASSWORD'),
'ovirt_ca_file': os.environ.get('OVIRT_CAFILE', ''),
}, allow_no_value=True
)
else:
config = configparser.ConfigParser(
defaults={
'ovirt_url': os.environ.get('OVIRT_URL'),
'ovirt_username': os.environ.get('OVIRT_USERNAME'),
'ovirt_password': os.environ.get('OVIRT_PASSWORD'),
'ovirt_ca_file': os.environ.get('OVIRT_CAFILE', ''),
}, allow_no_value=True
)
if not config.has_section('ovirt'):
config.add_section('ovirt')
config.read(config_path)
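The PY2/PY3 split in the hunk above exists because `SafeConfigParser` is a Python 2 name (deprecated and later removed on Python 3). A minimal sketch of the same selection using only the standard library, with an illustrative default value:

```python
# Sketch of the interpreter-dependent parser choice: Python 3 uses
# ConfigParser, Python 2 falls back to SafeConfigParser, both constructed
# with the same defaults and allow_no_value=True.
try:
    from configparser import ConfigParser                       # Python 3
except ImportError:
    from ConfigParser import SafeConfigParser as ConfigParser   # Python 2

config = ConfigParser(
    defaults={'ovirt_url': 'https://engine.example.com/ovirt-engine/api'},
    allow_no_value=True,
)
config.add_section('ovirt')
print(config.get('ovirt', 'ovirt_url'))
```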

View File

@@ -174,7 +174,7 @@
- name: Test that the file modes were changed
assert:
that:
- "archive_02_gz_stat.changed == False "
- archive_02_gz_stat is not changed
- "archive_02_gz_stat.stat.mode == '0600'"
- "'archived' in archive_bz2_result_02"
- "{{ archive_bz2_result_02['archived']| length}} == 3"
@@ -199,7 +199,7 @@
- name: Test that the file modes were changed
assert:
that:
- "archive_02_zip_stat.changed == False"
- archive_02_zip_stat is not changed
- "archive_02_zip_stat.stat.mode == '0600'"
- "'archived' in archive_zip_result_02"
- "{{ archive_zip_result_02['archived']| length}} == 3"
@@ -224,7 +224,7 @@
- name: Test that the file modes were changed
assert:
that:
- "archive_02_bz2_stat.changed == False"
- archive_02_bz2_stat is not changed
- "archive_02_bz2_stat.stat.mode == '0600'"
- "'archived' in archive_bz2_result_02"
- "{{ archive_bz2_result_02['archived']| length}} == 3"
@@ -248,7 +248,7 @@
- name: Test that the file modes were changed
assert:
that:
- "archive_02_xz_stat.changed == False"
- archive_02_xz_stat is not changed
- "archive_02_xz_stat.stat.mode == '0600'"
- "'archived' in archive_xz_result_02"
- "{{ archive_xz_result_02['archived']| length}} == 3"
@@ -294,7 +294,7 @@
- name: Assert that nonascii tests succeeded
assert:
that:
- "nonascii_result_0.changed == true"
- nonascii_result_0 is changed
- "nonascii_stat0.stat.exists == true"
- name: remove nonascii test
@@ -315,7 +315,7 @@
- name: Assert that nonascii tests succeeded
assert:
that:
- "nonascii_result_1.changed == true"
- nonascii_result_1 is changed
- "nonascii_stat_1.stat.exists == true"
- name: remove nonascii test
@@ -336,7 +336,7 @@
- name: Assert that nonascii tests succeeded
assert:
that:
- "nonascii_result_1.changed == true"
- nonascii_result_1 is changed
- "nonascii_stat_1.stat.exists == true"
- name: remove nonascii test
@@ -357,12 +357,25 @@
- name: Assert that nonascii tests succeeded
assert:
that:
- "nonascii_result_2.changed == true"
- nonascii_result_2 is changed
- "nonascii_stat_2.stat.exists == true"
- name: remove nonascii test
file: path="{{ output_dir }}/test-archive-nonascii-くらとみ.zip" state=absent
- name: Test exclusion_patterns option
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/test-archive-exclustion-patterns.tgz"
exclusion_patterns: b?r.*
register: exclusion_patterns_result
- name: Assert that exclusion_patterns only archives included files
assert:
that:
- exclusion_patterns_result is changed
- "'bar.txt' not in exclusion_patterns_result.archived"
- name: Remove backports.lzma if previously installed (pip)
pip: name=backports.lzma state=absent
when: backports_lzma_pip is changed

View File

@@ -1,4 +1,4 @@
unsupported
shippable/posix/group3
destructive
skip/aix
skip/freebsd
@@ -6,4 +6,3 @@ skip/osx
skip/macos
skip/rhel
needs/root
needs/privileged

View File

@@ -0,0 +1,65 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import posixpath
import sys
try:
from http.server import SimpleHTTPRequestHandler, HTTPServer
from urllib.parse import unquote
except ImportError:
from SimpleHTTPServer import SimpleHTTPRequestHandler
from BaseHTTPServer import HTTPServer
from urllib import unquote
# Argument parsing
if len(sys.argv) != 4:
print('Syntax: {0} <bind> <port> <path>'.format(sys.argv[0]))
sys.exit(-1)
HOST, PORT, PATH = sys.argv[1:4]
PORT = int(PORT)
# The HTTP request handler
class Handler(SimpleHTTPRequestHandler):
def translate_path(self, path):
# Modified from Python 3.6's version of SimpleHTTPRequestHandler
# to support using another base directory than CWD.
# abandon query parameters
path = path.split('?', 1)[0]
path = path.split('#', 1)[0]
# Don't forget explicit trailing slash when normalizing. Issue17324
trailing_slash = path.rstrip().endswith('/')
try:
path = unquote(path, errors='surrogatepass')
except (UnicodeDecodeError, TypeError) as exc:
path = unquote(path)
path = posixpath.normpath(path)
words = path.split('/')
words = filter(None, words)
path = PATH
for word in words:
if os.path.dirname(word) or word in (os.curdir, os.pardir):
# Ignore components that are not a simple file/directory name
continue
path = os.path.join(path, word)
if trailing_slash:
path += '/'
return path
# Run simple HTTP server
httpd = HTTPServer((HOST, PORT), Handler)
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
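The path handling in `Handler.translate_path()` above can be sketched as a plain function (simplified: the `surrogatepass` fallback is omitted); the request path and base directory below are illustrative:

```python
import os
import posixpath

try:
    from urllib.parse import unquote   # Python 3
except ImportError:
    from urllib import unquote         # Python 2


# Sketch of translate_path(): strip query and fragment, unquote and
# normalize, then join only plain path components onto the served base
# directory instead of the current working directory.
def translate(path, base):
    path = path.split('?', 1)[0].split('#', 1)[0]
    trailing_slash = path.rstrip().endswith('/')
    path = posixpath.normpath(unquote(path))
    out = base
    for word in (w for w in path.split('/') if w):
        if os.path.dirname(word) or word in (os.curdir, os.pardir):
            continue  # skip anything that is not a simple file/dir name
        out = os.path.join(out, word)
    if trailing_slash:
        out += '/'
    return out


print(translate('/repo/com.dummy.App1.flatpakref?raw=1', '/srv/files'))
```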

View File

@@ -1,2 +1,3 @@
dependencies:
- prepare_tests
- setup_flatpak_remote

View File

@@ -4,8 +4,8 @@
- name: Test addition of absent flatpak (check mode)
flatpak:
name: org.gnome.Characters
remote: flathub
name: com.dummy.App1
remote: dummy-remote
state: present
register: addition_result
check_mode: true
@@ -13,13 +13,13 @@
- name: Verify addition of absent flatpak test result (check mode)
assert:
that:
- "addition_result.changed == true"
- addition_result is changed
msg: "Adding an absent flatpak shall mark module execution as changed"
- name: Test non-existent idempotency of addition of absent flatpak (check mode)
flatpak:
name: org.gnome.Characters
remote: flathub
name: com.dummy.App1
remote: dummy-remote
state: present
register: double_addition_result
check_mode: true
@@ -27,7 +27,7 @@
- name: Verify non-existent idempotency of addition of absent flatpak test result (check mode)
assert:
that:
- "double_addition_result.changed == true"
- double_addition_result is changed
msg: |
Adding an absent flatpak a second time shall still mark module execution
as changed in check mode
@@ -36,7 +36,7 @@
- name: Test removal of absent flatpak check mode
flatpak:
name: org.gnome.Characters
name: com.dummy.App1
state: absent
register: removal_result
check_mode: true
@@ -44,15 +44,15 @@
- name: Verify removal of absent flatpak test result (check mode)
assert:
that:
- "removal_result.changed == false"
- removal_result is not changed
msg: "Removing an absent flatpak shall mark module execution as not changed"
# state=present with url on absent flatpak
- name: Test addition of absent flatpak with url (check mode)
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
remote: flathub
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
remote: dummy-remote
state: present
register: url_addition_result
check_mode: true
@@ -60,13 +60,13 @@
- name: Verify addition of absent flatpak with url test result (check mode)
assert:
that:
- "url_addition_result.changed == true"
- url_addition_result is changed
msg: "Adding an absent flatpak from URL shall mark module execution as changed"
- name: Test non-existent idempotency of addition of absent flatpak with url (check mode)
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
remote: flathub
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
remote: dummy-remote
state: present
register: double_url_addition_result
check_mode: true
@@ -76,7 +76,7 @@
result (check mode)
assert:
that:
- "double_url_addition_result.changed == true"
- double_url_addition_result is changed
msg: |
Adding an absent flatpak from URL a second time shall still mark module execution
as changed in check mode
@@ -85,7 +85,7 @@
- name: Test removal of absent flatpak with url not doing anything (check mode)
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
state: absent
register: url_removal_result
check_mode: true
@@ -93,18 +93,17 @@
- name: Verify removal of absent flatpak with url test result (check mode)
assert:
that:
- "url_removal_result.changed == false"
- url_removal_result is not changed
msg: "Removing an absent flatpak shall mark module execution as not changed"
# - Tests with present flatpak -------------------------------------------------
# state=present on present flatpak
- name: Test addition of present flatpak (check mode)
flatpak:
name: org.gnome.Calculator
remote: flathub
name: com.dummy.App2
remote: dummy-remote
state: present
register: addition_present_result
check_mode: true
@@ -112,14 +111,14 @@
- name: Verify addition test result of present flatpak (check mode)
assert:
that:
- "addition_present_result.changed == false"
- addition_present_result is not changed
msg: "Adding a present flatpak shall mark module execution as not changed"
# state=absent on present flatpak
- name: Test removal of present flatpak (check mode)
flatpak:
name: org.gnome.Calculator
name: com.dummy.App2
state: absent
register: removal_present_result
check_mode: true
@@ -127,12 +126,12 @@
- name: Verify removal of present flatpak test result (check mode)
assert:
that:
- "removal_present_result.changed == true"
- removal_present_result is changed
msg: "Removing a present flatpak shall mark module execution as changed"
- name: Test non-existent idempotency of removal (check mode)
flatpak:
name: org.gnome.Calculator
name: com.dummy.App2
state: absent
register: double_removal_present_result
check_mode: true
@@ -140,7 +139,7 @@
- name: Verify non-existent idempotency of removal (check mode)
assert:
that:
- "double_removal_present_result.changed == true"
- double_removal_present_result is changed
msg: |
Removing a present flatpak a second time shall still mark module execution
as changed in check mode
@@ -149,8 +148,8 @@
- name: Test addition with url of present flatpak (check mode)
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Calculator.flatpakref
remote: flathub
name: http://127.0.0.1:8000/repo/com.dummy.App2.flatpakref
remote: dummy-remote
state: present
register: url_addition_present_result
check_mode: true
@@ -158,14 +157,14 @@
- name: Verify addition with url of present flatpak test result (check mode)
assert:
that:
- "url_addition_present_result.changed == false"
- url_addition_present_result is not changed
msg: "Adding a present flatpak from URL shall mark module execution as not changed"
# state=absent with url on present flatpak
- name: Test removal with url of present flatpak (check mode)
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Calculator.flatpakref
name: http://127.0.0.1:8000/repo/com.dummy.App2.flatpakref
state: absent
register: url_removal_present_result
check_mode: true
@@ -173,13 +172,13 @@
- name: Verify removal with url of present flatpak test result (check mode)
assert:
that:
- "url_removal_present_result.changed == true"
- url_removal_present_result is changed
msg: "Removing a present flatpak from URL shall mark module execution as changed"
- name: Test non-existent idempotency of removal with url of present flatpak (check mode)
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Calculator.flatpakref
remote: flathub
name: http://127.0.0.1:8000/repo/com.dummy.App2.flatpakref
remote: dummy-remote
state: absent
register: double_url_removal_present_result
check_mode: true
@@ -189,5 +188,5 @@
flatpak test result (check mode)
assert:
that:
- "double_url_removal_present_result.changed == true"
- double_url_removal_present_result is changed
msg: Removing an absent flatpak a second time shall still mark module execution as changed


@@ -30,8 +30,8 @@
- name: Test executable override
flatpak:
name: org.gnome.Characters
remote: flathub
name: com.dummy.App1
remote: dummy-remote
state: present
executable: nothing-that-exists
ignore_errors: true
@@ -40,8 +40,8 @@
- name: Verify executable override test result
assert:
that:
- "executable_override_result.failed == true"
- "executable_override_result.changed == false"
- executable_override_result is failed
- executable_override_result is not changed
msg: "Specifying non-existing executable shall fail module execution"
- import_tasks: check_mode.yml
@@ -57,5 +57,20 @@
vars:
method: system
always:
- name: Check HTTP server status
async_status:
jid: "{{ webserver_status.ansible_job_id }}"
ignore_errors: true
- name: List processes
command: ps aux
- name: Stop HTTP server
command: >-
pkill -f -- '{{ remote_tmp_dir }}/serve.py'
when: |
ansible_distribution in ('Fedora', 'Ubuntu')
ansible_distribution == 'Fedora' or
ansible_distribution == 'Ubuntu' and not ansible_distribution_major_version | int < 16


@@ -4,32 +4,58 @@
state: present
become: true
when: ansible_distribution == 'Fedora'
- block:
- name: Activate flatpak ppa on Ubuntu
apt_repository:
repo: ppa:alexlarsson/flatpak
state: present
mode: '0644'
when: ansible_lsb.major_release | int < 18
- name: Install flatpak package on Ubuntu
apt:
name: flatpak
state: present
become: true
when: ansible_distribution == 'Ubuntu'
- name: Enable flathub for user
- name: Install dummy remote for user
flatpak_remote:
name: flathub
name: dummy-remote
state: present
flatpakrepo_url: https://dl.flathub.org/repo/flathub.flatpakrepo
flatpakrepo_url: /tmp/flatpak/repo/dummy-repo.flatpakrepo
method: user
- name: Enable flathub for system
- name: Install dummy remote for system
flatpak_remote:
name: flathub
name: dummy-remote
state: present
flatpakrepo_url: https://dl.flathub.org/repo/flathub.flatpakrepo
flatpakrepo_url: /tmp/flatpak/repo/dummy-repo.flatpakrepo
method: system
- name: Remove (if necessary) flatpak for testing check mode on absent flatpak
flatpak:
name: com.dummy.App1
remote: dummy-remote
state: absent
no_dependencies: true
- name: Add flatpak for testing check mode on present flatpak
flatpak:
name: org.gnome.Calculator
remote: flathub
name: com.dummy.App2
remote: dummy-remote
state: present
no_dependencies: true
- name: Copy HTTP server
copy:
src: serve.py
dest: '{{ remote_tmp_dir }}/serve.py'
mode: '0755'
- name: Start HTTP server
command: '{{ remote_tmp_dir }}/serve.py 127.0.0.1 8000 /tmp/flatpak/'
async: 120
poll: 0
register: webserver_status
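The `serve.py` helper started above is not shown in this diff; it is invoked as `serve.py 127.0.0.1 8000 /tmp/flatpak/` to expose the dummy repo over HTTP for the URL-based tests. A minimal sketch of such a static file server (the function name `make_server` and the overall structure are assumptions, not the actual upstream script) might look like:

```python
#!/usr/bin/env python
# Hypothetical sketch of a serve.py-style helper: serve a directory
# over HTTP at a given address, e.g. serve.py 127.0.0.1 8000 /tmp/flatpak/
import http.server
import os
import sys


def make_server(host, port, directory):
    """Build an HTTP server that serves files from `directory` at host:port."""
    # SimpleHTTPRequestHandler serves the current working directory
    os.chdir(directory)
    return http.server.HTTPServer((host, port), http.server.SimpleHTTPRequestHandler)


if __name__ == '__main__':
    host, port, directory = sys.argv[1], int(sys.argv[2]), sys.argv[3]
    make_server(host, port, directory).serve_forever()
```

Running it under `async`/`poll: 0` as the task does lets the play continue while the server keeps serving the flatpakref files; the later `pkill -f` task in the `always` block tears it down.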


@@ -2,114 +2,140 @@
- name: Test addition - {{ method }}
flatpak:
name: org.gnome.Characters
remote: flathub
name: com.dummy.App1
remote: dummy-remote
state: present
method: "{{ method }}"
no_dependencies: true
register: addition_result
- name: Verify addition test result - {{ method }}
assert:
that:
- "addition_result.changed == true"
- addition_result is changed
msg: "state=present shall add flatpak when absent"
- name: Test idempotency of addition - {{ method }}
flatpak:
name: org.gnome.Characters
remote: flathub
name: com.dummy.App1
remote: dummy-remote
state: present
method: "{{ method }}"
no_dependencies: true
register: double_addition_result
- name: Verify idempotency of addition test result - {{ method }}
assert:
that:
- "double_addition_result.changed == false"
- double_addition_result is not changed
msg: "state=present shall not do anything when flatpak is already present"
# state=absent
- name: Test removal - {{ method }}
flatpak:
name: org.gnome.Characters
name: com.dummy.App1
state: absent
method: "{{ method }}"
no_dependencies: true
register: removal_result
- name: Verify removal test result - {{ method }}
assert:
that:
- "removal_result.changed == true"
- removal_result is changed
msg: "state=absent shall remove flatpak when present"
- name: Test idempotency of removal - {{ method }}
flatpak:
name: org.gnome.Characters
name: com.dummy.App1
state: absent
method: "{{ method }}"
no_dependencies: true
register: double_removal_result
- name: Verify idempotency of removal test result - {{ method }}
assert:
that:
- "double_removal_result.changed == false"
- double_removal_result is not changed
msg: "state=absent shall not do anything when flatpak is not present"
# state=present with url as name
- name: Test addition with url - {{ method }}
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
remote: flathub
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
remote: dummy-remote
state: present
method: "{{ method }}"
no_dependencies: true
register: url_addition_result
- name: Verify addition test result - {{ method }}
assert:
that:
- "url_addition_result.changed == true"
- url_addition_result is changed
msg: "state=present with url as name shall add flatpak when absent"
- name: Test idempotency of addition with url - {{ method }}
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
remote: flathub
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
remote: dummy-remote
state: present
method: "{{ method }}"
no_dependencies: true
register: double_url_addition_result
- name: Verify idempotency of addition with url test result - {{ method }}
assert:
that:
- "double_url_addition_result.changed == false"
- double_url_addition_result is not changed
msg: "state=present with url as name shall not do anything when flatpak is already present"
# state=absent with url as name
- name: Test removal with url - {{ method }}
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
state: absent
method: "{{ method }}"
no_dependencies: true
register: url_removal_result
ignore_errors: true
- name: Verify removal test result - {{ method }}
- name: Verify removal test result failed - {{ method }}
# It looks like flatpak has a bug when the hostname contains a port. If this is the case, it emits
# the following message, which we check for. If another error happens, we fail.
# Upstream issue: https://github.com/flatpak/flatpak/issues/4307
# (The second message happens with Ubuntu 18.04.)
assert:
that:
- "url_removal_result.changed == true"
msg: "state=absent with url as name shall remove flatpak when present"
- >-
url_removal_result.msg in [
"error: Invalid branch 127.0.0.1:8000: Branch can't contain :",
"error: Invalid id http:: Name can't contain :",
]
when: url_removal_result is failed
- name: Test idempotency of removal with url - {{ method }}
flatpak:
name: https://flathub.org/repo/appstream/org.gnome.Characters.flatpakref
state: absent
method: "{{ method }}"
register: double_url_removal_result
- when: url_removal_result is not failed
block:
- name: Verify idempotency of removal with url test result - {{ method }}
assert:
that:
- "double_url_removal_result.changed == false"
msg: "state=absent with url as name shall not do anything when flatpak is not present"
- name: Verify removal test result - {{ method }}
assert:
that:
- url_removal_result is changed
msg: "state=absent with url as name shall remove flatpak when present"
- name: Test idempotency of removal with url - {{ method }}
flatpak:
name: http://127.0.0.1:8000/repo/com.dummy.App1.flatpakref
state: absent
method: "{{ method }}"
no_dependencies: true
register: double_url_removal_result
- name: Verify idempotency of removal with url test result - {{ method }}
assert:
that:
- double_url_removal_result is not changed
msg: "state=absent with url as name shall not do anything when flatpak is not present"


@@ -6,4 +6,3 @@ skip/osx
skip/macos
skip/rhel
needs/root
disabled # FIXME


@@ -13,7 +13,7 @@
- name: Verify addition of absent flatpak remote test result (check mode)
assert:
that:
- "addition_result.changed == true"
- addition_result is changed
msg: "Adding an absent flatpak remote shall mark module execution as changed"
- name: Test non-existent idempotency of addition of absent flatpak remote (check mode)
@@ -29,7 +29,7 @@
test result (check mode)
assert:
that:
- "double_addition_result.changed == true"
- double_addition_result is changed
msg: |
Adding an absent flatpak remote a second time shall still mark module execution
as changed in check mode
@@ -46,7 +46,7 @@
- name: Verify removal of absent flatpak remote test result (check mode)
assert:
that:
- "removal_result.changed == false"
- removal_result is not changed
msg: "Removing an absent flatpak remote shall mark module execution as not changed"
@@ -65,7 +65,7 @@
- name: Verify addition of present flatpak remote test result (check mode)
assert:
that:
- "addition_result.changed == false"
- addition_result is not changed
msg: "Adding a present flatpak remote shall mark module execution as not changed"
# state=absent
@@ -80,7 +80,7 @@
- name: Verify removal of present flatpak remote test result (check mode)
assert:
that:
- "removal_result.changed == true"
- removal_result is changed
msg: "Removing a present flatpak remote shall mark module execution as changed"
- name: Test non-existent idempotency of removal of present flatpak remote (check mode)
@@ -95,7 +95,7 @@
test result (check mode)
assert:
that:
- "double_removal_result.changed == true"
- double_removal_result is changed
msg: |
Removing a present flatpak remote a second time shall still mark module execution
as changed in check mode


@@ -40,8 +40,8 @@
- name: Verify executable override test result
assert:
that:
- "executable_override_result.failed == true"
- "executable_override_result.changed == false"
- executable_override_result is failed
- executable_override_result is not changed
msg: "Specifying non-existing executable shall fail module execution"
- import_tasks: check_mode.yml


@@ -11,7 +11,7 @@
- name: Verify addition test result - {{ method }}
assert:
that:
- "addition_result.changed == true"
- addition_result is changed
msg: "state=present shall add flatpak when absent"
- name: Test idempotency of addition - {{ method }}
@@ -25,7 +25,7 @@
- name: Verify idempotency of addition test result - {{ method }}
assert:
that:
- "double_addition_result.changed == false"
- double_addition_result is not changed
msg: "state=present shall not do anything when flatpak is already present"
- name: Test updating remote url does not do anything - {{ method }}
@@ -39,7 +39,7 @@
- name: Verify updating remote url does not do anything - {{ method }}
assert:
that:
- "url_update_result.changed == false"
- url_update_result is not changed
msg: "Trying to update the URL of an existing flatpak remote shall not do anything"
@@ -55,7 +55,7 @@
- name: Verify removal test result - {{ method }}
assert:
that:
- "removal_result.changed == true"
- removal_result is changed
msg: "state=absent shall remove flatpak when present"
- name: Test idempotency of removal - {{ method }}
@@ -68,5 +68,5 @@
- name: Verify idempotency of removal test result - {{ method }}
assert:
that:
- "double_removal_result.changed == false"
- double_removal_result is not changed
msg: "state=absent shall not do anything when flatpak is not present"


@@ -17,9 +17,9 @@
- name: assert set changed and value is correct
assert:
that:
- set_result.changed == true
- set_result is changed
- set_result.diff.before == "\n"
- set_result.diff.after == option_value + "\n"
- get_result.changed == false
- get_result is not changed
- get_result.config_value == option_value
...


@@ -19,9 +19,9 @@
- name: assert set changed and value is correct with state=present
assert:
that:
- set_result.changed == true
- set_result is changed
- set_result.diff.before == "\n"
- set_result.diff.after == option_value + "\n"
- get_result.changed == false
- get_result is not changed
- get_result.config_value == option_value
...

Some files were not shown because too many files have changed in this diff.