Compare commits


70 Commits
4.0.2 ... 4.2.0

Author SHA1 Message Date
Felix Fontein
6661917370 Release 4.2.0. 2021-12-21 11:58:13 +01:00
patchback[bot]
ec0bd3143a Add additional auth support to Gitlab (#705) (#3918) (#3929)
* Add additional auth support to Gitlab (#705)

- removed unused imports from module_utils.gitlab
- fix bug in gitlab_project to check if avatar_path is provided

* add doc_fragment and argument_spec for gitlab auth

* doc fixes and remove avatar_path bug fix

* small doc changes, pass validate_certs to requests call

* update changelog

(cherry picked from commit 52ad0a5fbb)

Co-authored-by: Josh <josham@users.noreply.github.com>
2021-12-20 22:20:40 +01:00
patchback[bot]
cce68def8b fix gitlab_project avatar_path open when undefined bug (#3926) (#3927) (#3928)
* fix gitlab_project avatar_path open when undefined bug (#3926)

* remove changelog fragment

(cherry picked from commit 11fcf661bf)

Co-authored-by: Josh <josham@users.noreply.github.com>
2021-12-20 20:22:29 +01:00
patchback[bot]
6f5ad22d28 Disable snap tests. (#3922) (#3923)
(cherry picked from commit 51838adf8c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-12-20 10:58:42 +01:00
patchback[bot]
6c53a09eef xfconf - using aggregated base class (#3919) (#3920)
* xfconf - using aggregated base class

* added changelog fragment

* fixed typo

(cherry picked from commit daabb53a2b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-20 10:15:24 +01:00
patchback[bot]
9b6e75f7f4 Icinga2 Inventory Plugin - Error handling and inventory name changer (#3906) (#3915)
* Added inventory_attr and filter error handling

* Added inventory_attr and filter error handling

* Added inventory_attr and filter error handling

* Added inventory_attr and filter error handling

* Added changelog

* Added inventory_attr and filter error handling

* Added inventory_attr and filter error handling

* Applying requested changes

* Fixes for tests

* Added inventory_attr and filter error handling

* Error handling

* Error handling

* Error handling

* Modifications to unit tests

* Remove pitfall

(cherry picked from commit 8da2c630d8)

Co-authored-by: Cliff Hults <BongoEADGC6@users.noreply.github.com>
2021-12-19 14:18:57 +01:00
patchback[bot]
e09650140d Fix nrdp string arguments without an encoding (#3909) (#3912)
* Fix nrdp string arguments without an encoding

* added changelog fragment

Signed-off-by: Jesse Harris <zigford@gmail.com>

* Update changelogs/fragments/3909-nrdp_fix_string_args_without_encoding.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 40ffd559ac)

Co-authored-by: Jesse Harris <zigford@gmail.com>
2021-12-17 22:40:29 +01:00
patchback[bot]
67388be1a9 jira - fixed 'body' dict key error (#3867) (#3914)
* fixed

* added changelog fragment

* improved fail output when placing JIRA API requests

* Update plugins/modules/web_infrastructure/jira.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e6c773a4f3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-17 22:13:43 +01:00
patchback[bot]
130d07948a proxmox - fixing onboot parameter causing module failure when not defined (#3874) (#3902)
* fixing onboot parameter when not supplied

* adding changelog fragment

(cherry picked from commit 00a1152bb1)

Co-authored-by: Andrew Pantuso <ajpantuso@gmail.com>
2021-12-14 07:00:32 +01:00
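A hedged sketch of the kind of guard the onboot fix above implies; the helper name is hypothetical, not the module's actual code:

```python
def onboot_flag(onboot):
    # Hypothetical helper: before the fix, an unsupplied onboot (None)
    # was coerced and the module failed. Leaving None untouched means
    # "do not set the flag at all" instead of raising.
    if onboot is None:
        return None
    return int(bool(onboot))
```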
patchback[bot]
5d6fcaef53 LXD inventory: Support virtual machines (#3519) (#3900)
* LXD 4.x compatibility (Containers and VMs)

* add changelog fragment

* update fixture

* update plugin options

* backwards compatible alias

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/3519-inventory-support-lxd-4.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* add lxd 4.0 requirement

* Filter for type of virtualization added. Due to duplication in the namespace, "type" is not used as the keyword but "nature".

* add type filter

Since the first version of this inventory plugin only supports containers,
a filter function was added to filter between containers and
virtual machines or both.
By default only containers are displayed, as in the first version of the plugin.
This behavior will change in the future.

* rename C(nature) to C(type)

The term "nature" does not fit into the lxd namespace.
Therefore I renamed nature to type.

* update changelog fragment

* Update plugins/inventory/lxd.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* rename typefilter to type_filter

* fix tests with type_filter

* Update plugins/inventory/lxd.py

* Update plugins/inventory/lxd.py

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Frank Dornheim <“dornheim@posteo.de@users.noreply.github.com”>
(cherry picked from commit 8825ef4711)

Co-authored-by: Élie <elie@deloumeau.fr>
2021-12-14 06:42:47 +01:00
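The type filter described in the LXD inventory commit above can be sketched as follows; the function name and data shape are illustrative assumptions, not the plugin's actual internals:

```python
# Hypothetical sketch of the LXD type_filter: keep only instances whose
# type matches the configured filter, defaulting to containers only to
# preserve the plugin's original behavior.
def filter_instances(instances, type_filter="container"):
    if type_filter == "both":
        return list(instances)
    return [i for i in instances if i.get("type") == type_filter]
```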
patchback[bot]
f044a83c49 Pass missing vlan-related options (flags, ingress, egress) to nmcli (#3896) (#3899)
* Pass missing vlan-related options (flags, ingress, egress) to nmcli

Signed-off-by: Jean-Francois Panisset <panisset@gmail.com>

* Follow style: comma on last parameter

Signed-off-by: Jean-Francois Panisset <panisset@gmail.com>

* PEP8 code style fix

Signed-off-by: Jean-Francois Panisset <panisset@gmail.com>

* add missing changelog fragment

Signed-off-by: Jean-Francois Panisset <panisset@gmail.com>
(cherry picked from commit 6cec2e2f58)

Co-authored-by: Jean-Francois Panisset <32653482+jfpanisset@users.noreply.github.com>
2021-12-13 21:59:37 +01:00
patchback[bot]
e3f7e8dadf Docs improvements. (#3893) (#3894)
(cherry picked from commit 59bbaeed77)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-12-12 11:46:31 +01:00
patchback[bot]
8d1a028dbd Modules for managing HPE iLO (#3740) (#3892)
* Adding HPE ilo modules

* lint fix

* symlink created

* Fan message enhancement

* Removed comments

* Added uniform construct

* Update plugins/module_utils/redfish_utils.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/module_utils/redfish_utils.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/ilo_redfish_config.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Added info module and minor changes

* lint fixes

* lint fixes

* lint fixes

* lint fixes

* Added tests and modifed ilo_redfish_info

* Modified tests

* lint fix

* result overwrite fixed

* result overwrite fixed

* Added result

* Changed RESULT

* Modified contains

* Added License

* lint fix

* Changed RESULT

* lint fix

* Changed return

* Changed return

* Update plugins/modules/remote_management/redfish/ilo_redfish_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/ilo_redfish_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/ilo_redfish_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/ilo_redfish_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/ilo_redfish_config.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/ilo_redfish_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Added - changed

* Modified changed attribute

* Changed modified

* lint fix

* Removed req

* Minor changes

* Update plugins/modules/remote_management/redfish/ilo_redfish_info.py

Co-authored-by: Rajeevalochana Kallur <rajeevalochana.kallur@hpe.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8508e3fa6f)

Co-authored-by: Bhavya <44067558+Bhavya06@users.noreply.github.com>
2021-12-11 21:56:10 +01:00
patchback[bot]
8823e5c061 hponcfg - revamped the module using ModuleHelper (#3840) (#3891)
* hponcfg - revamped the module using ModuleHelper

* added changelog fragment

* fixed imports

* Update plugins/modules/remote_management/hpilo/hponcfg.py

* fixed

(cherry picked from commit 7cbe1bcf63)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-11 21:35:23 +01:00
patchback[bot]
102456d033 add dnsimple_info module, see issue #3569 (#3739) (#3890)
* add dnsimple_info module, see issue #3569

https://github.com/ansible-collections/community.general/issues/3569#issuecomment-945002861

* Update plugins/modules/net_tools/dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update BOTMETA.yml

Update dnsimple_info.py

Create dnsimple_info.py

Create dnsimple_info.py

pep8

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update dnsimple_info.py

add returns

pep8 spacing

Update dnsimple_info.py

Update dnsimple_info.py

change return results to list

fix time stamps

Update dnsimple_info.py

remove extra comma

Update plugins/modules/net_tools/dnsimple_info.py

Update test_dnsimple_info.py

Update dnsimple_info.py

fix descriptions

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

missing punctuation throughout docs

Update dnsimple_info.py

add elements in descriptions

Update dnsimple_info.py

indentation error

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

Update dnsimple_info.py

refactor, remove unneeded arguments

refactor and error handling

formatting

add unit test

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update plugins/modules/net_tools/dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

assert fail/exit

Update test_dnsimple_info.py

pep8 fixes

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Update test_dnsimple_info.py

Co-Authored-By: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 2547932e3d)

Co-authored-by: Edward Hilgendorf <edward@hilgendorf.me>
2021-12-11 21:29:27 +01:00
patchback[bot]
aad4c55d3d lxc_container - invoke run_command passing list (#3851) (#3886)
* lxc_container - invoke run_command passing list

* added changelog fragment

* Update plugins/modules/cloud/lxc/lxc_container.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 9a100e099e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-10 06:43:19 +01:00
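Many entries in this range convert run_command invocations from a single string to a list. A minimal illustration of why the list form is safer, shown here with the standard library's subprocess rather than Ansible's run_command:

```python
import subprocess
import sys

# With a list, each element becomes exactly one argv entry, so values
# containing spaces or shell metacharacters need no quoting or escaping
# and can never be reinterpreted by a shell.
path = "file with spaces.txt"
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", path],
    capture_output=True, text=True,
)
assert result.stdout.strip() == path  # arrives as a single, intact argument
```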
patchback[bot]
e31c98f17f jira - Add support for Bearer token auth (#3838) (#3884)
* jira - Add support for Bearer token auth

* jira - Add support for Bearer token auth

* added changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix indent issue

* fix overindent

* jira - Add support for Bearer token auth

* jira - Add support for Bearer token auth

* added changelog fragment

* minor doc fix to be clearer.

Be clear about the exclusivity between username and token
as well as password and token.

* Update changelogs/fragments/3838-jira-token.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/web_infrastructure/jira.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/web_infrastructure/jira.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit cbc9742747)

Co-authored-by: Kambiz Aghaiepour <kambiz@aghaiepour.com>
2021-12-09 22:05:02 +01:00
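The commit above adds Bearer token support to the jira module. A hedged sketch of the difference between the two Authorization header styles, with token mutually exclusive with username/password as the commit notes; the helper name is hypothetical, not the module's code:

```python
import base64

def auth_header(username=None, password=None, token=None):
    # Hypothetical helper: token auth is mutually exclusive with
    # username/password auth.
    if token:
        return {"Authorization": "Bearer %s" % token}
    creds = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    return {"Authorization": "Basic %s" % creds}
```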
patchback[bot]
6a5dfc5579 aix_lvg - invoke run_command passing list (#3834) (#3883)
* aix_lvg - invoke run_command passing list

* added changelog fragment

(cherry picked from commit 4bddf9e12c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-09 22:01:28 +01:00
Felix Fontein
ab7efef9df nmcli: adding ipv6 address list support (#3776) (#3885)
* rebase

* Add changelog fragment

* add suggestions

* split PR into two

* Add multiple address support but with #3768 fiexed

* rebase

* clean some merge artifacts

* update the wording

(cherry picked from commit 90c0980e8d)

Co-authored-by: Alex Groshev <38885591+haddystuff@users.noreply.github.com>
2021-12-09 22:00:33 +01:00
patchback[bot]
ca9c763b57 aix_filesystems - invoke run_command passing list (#3833) (#3882)
* aix_filesystems - invoke run_command passing list

* added changelog fragment

(cherry picked from commit 70f73f42f8)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-09 22:00:12 +01:00
patchback[bot]
cfeb40ed23 Update lxd connection to use all documented vars for options (#3798) (#3881)
* Update lxd connection to use documented vars

* Add a changelog fragment

* Add clarification to changelog description

* Shorten changelog fragment description

(cherry picked from commit 8f6866dba6)

Co-authored-by: Conner Crosby <conner@cavcrosby.tech>
2021-12-09 21:58:06 +01:00
patchback[bot]
c495d136fa add module gitlab_branch (#3795) (#3879)
* add module gitlab_branch

* Update plugins/modules/source_control/gitlab/gitlab_branch.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_branch.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_branch.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update gitlab_branch.py

* Update gitlab_branch.py

* Update gitlab_branch.py

* add integration tests

* Update BOTMETA.yml

* Update gitlab_branch.py

* Update tests/integration/targets/gitlab_branch/aliases

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update main.yml

Co-authored-by: paitrault <aymeric.paitrault@inetum.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c69e4f4ac9)

Co-authored-by: paytroff <paytroff@gmail.com>
2021-12-09 21:19:13 +01:00
patchback[bot]
d9e2d6682b small docs update for timezone module (#3876) (#3878)
* small docs update for timezone module
fixes #3242

* Update plugins/modules/system/timezone.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c14eafd63f)

Co-authored-by: Anatoly Pugachev <matorola@gmail.com>
2021-12-09 21:19:03 +01:00
Felix Fontein
d7fe288ffd Prepare 4.2.0 release. 2021-12-08 20:22:04 +01:00
patchback[bot]
7de89699f7 update scaleway maintainers (#3472) (#3873)
* update scaleway maintainers

* Fix

* Fix sieben -> remyleone

Co-authored-by: scaleway-bot <github@scaleway.com>
(cherry picked from commit 80d650f60a)

Co-authored-by: Rémy Léone <remy.leone@gmail.com>
2021-12-08 20:20:59 +01:00
patchback[bot]
b0a9cceeb5 interfaces_file: unit tests improved (#3863) (#3869)
* interfaces_file: fixed unit tests and added README, added test cases for #3862

* typo fix for interfaces_file unit tests README.md

Co-authored-by: Felix Fontein <felix@fontein.de>

* typo fix for interfaces_file unit tests README.md

Co-authored-by: Felix Fontein <felix@fontein.de>

* typo fix for interfaces_file unit tests README.md

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0c828d9d01)

Co-authored-by: Roman Belyakovsky <ihryamzik@gmail.com>
2021-12-08 12:51:25 +01:00
patchback[bot]
b08f0b2f82 interfaces_file - fixed dup options bug (#3862) (#3866)
* interfaces_file - fixed dup options bug

* added changelog fragment

(cherry picked from commit 3dd5b0d343)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-08 05:54:48 +00:00
patchback[bot]
f23f409bd6 MH additional tests (#3850) (#3859)
(cherry picked from commit d50f30c618)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-05 22:14:16 +01:00
patchback[bot]
cfea62793f MH decorators - added decorators for check_mode (#3849) (#3860)
* MH decorators - added decorators for check_mode

* added changelog fragment

(cherry picked from commit fb79c2998e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-05 22:14:08 +01:00
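A hedged sketch of what a check_mode decorator like the one described above might look like; the decorator name and method shape are assumptions, not the actual ModuleHelper API:

```python
import functools

def check_mode_skip(func):
    # Skip the wrapped action entirely when the module runs in check
    # mode, so no changes are made on the target host.
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if self.check_mode:
            return None
        return func(self, *args, **kwargs)
    return wrapper
```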
patchback[bot]
62bda91466 Add stable-4 to nightly CI jobs; make stable-2 weekly. (#3852) (#3857)
(cherry picked from commit 727c9a4032)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-12-05 17:41:15 +01:00
patchback[bot]
473d5fa2af Moved changelog fragment file to the right directory (#3853) (#3858)
* moved changelog fragment file to the right directory

* fixed filename

(cherry picked from commit 4f4150117d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-05 17:40:43 +01:00
patchback[bot]
cc76d684d5 opentelemetry: honour ignore errors (#3837) (#3847)
* opentelemetry: honour the ignore_errors

* fix-encoding-pragma

* Add changelog fragment

* opentelemetry: ignore produces unset span status

(cherry picked from commit ce6d0a749e)

Co-authored-by: Victor Martinez <victormartinezrubio@gmail.com>
2021-12-04 19:55:17 +01:00
patchback[bot]
7a6770c731 nmcli - add support for addr-gen-mode and ip6-privacy options (#3802) (#3845)
* Add support for addr-gen-mode and ip6-privacy options

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* try to solve conflict

* add suggested code + fix some of its issues

* Update plugins/modules/net_tools/nmcli.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 142a660571)

Co-authored-by: Alex Groshev <38885591+haddystuff@users.noreply.github.com>
2021-12-04 19:18:49 +01:00
patchback[bot]
d2214af6e8 java_cert - invoke run_command passing list (#3835) (#3842)
* java_cert - invoke run_command passing list

* added changelog fragment

(cherry picked from commit 6b91c56c4e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-03 08:07:15 +01:00
patchback[bot]
fad1220869 monit - invoke run_command passing list (#3821) (#3832)
* monit - invoke run_command passing list

* added changelog fragment

* fixed unit test

* further adjustments

* fixed handling of command_args

* better handling of command_args

(cherry picked from commit 52d4907480)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-02 08:12:52 +01:00
patchback[bot]
fe09516235 svc - invoke run_command passing list (#3829) (#3830)
* svc - invoke run_command passing list

* added changelog fragment

(cherry picked from commit ccb74ffd7c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-01 20:43:16 +01:00
patchback[bot]
78cd8886f4 ip_netns - invoke run_command passing list (#3822) (#3828)
* ip_netns - invoke run_command passing list

* added changelog fragment

(cherry picked from commit ba9578f12a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-01 13:19:28 +01:00
patchback[bot]
6b99d48f06 logstash_plugin - invoke run_command passing list (#3808) (#3827)
* logstash_plugin - invoke run_command passing list

* added changelog fragment

* rogue chglog frag escaped its cage and was seen running around in a different PR

(cherry picked from commit c587d21ba0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-01 07:09:49 +01:00
patchback[bot]
6e0e17a7e3 xattr - invoke run_command passing list (#3806) (#3820)
* xattr - invoke run_command passing list

* added changelog fragment

* Update plugins/modules/files/xattr.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 2edbabd30f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-12-01 06:58:39 +01:00
patchback[bot]
90de95c7b2 pipx - fixed --include-apps bug (#3800) (#3818)
* pipx - fixed --include-apps bug

* added changelog fragment

* skipped freebsd for the last test

(cherry picked from commit bc619bcefc)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-30 08:33:31 +01:00
patchback[bot]
07c6b8b24e ModuleHelper - deprecate attribute VarDict (#3801) (#3819)
* ModuleHelper - deprecate attribute VarDict

* added changelog fragment

(cherry picked from commit 2896131ca7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-30 08:32:56 +01:00
patchback[bot]
d106de6d51 python_requirements_info - improvements (#3797) (#3816)
* python_requirements_info - improvements

- returns python version broken down into its components
- minor refactoring

* adjusted indentation in the documentation blocks

* added changelog fragment

* fixes from PR review + assertion in test

(cherry picked from commit ff0c065ca2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-30 08:32:42 +01:00
patchback[bot]
e96101fb3f Improve modules gitlab (#3792) (#3815)
* correction doc

* Update gitlab_group.py

* improve gitlab

* Update changelogs/3766-improve_gitlab_group_and_project.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_group.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_group.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_group.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_group.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* correction

* correction sanity project

* Update plugins/modules/source_control/gitlab/gitlab_project.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* modif condition default_branch arg

* Update gitlab_project.py

change indent of default_branch inside if initialize_with_readme

Co-authored-by: paitrault <aymeric.paitrault@inetum.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c6dcae5fda)

Co-authored-by: paytroff <paytroff@gmail.com>
2021-11-30 06:53:17 +01:00
patchback[bot]
a60d55f03c ansible_galaxy_install - minor documentation fix (#3804) (#3814)
* ansible_galaxy_install - minor documentation fix

* further adjustments

(cherry picked from commit 49bdc0f218)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-30 06:53:07 +01:00
patchback[bot]
d6a09ada98 iso_extract - invoke run_command passing list (#3805) (#3812)
* iso_extract - invoke run_command passing list

* added changelog fragment

(cherry picked from commit d60edc4ac1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-30 06:53:00 +01:00
patchback[bot]
9ddb75a3a2 logentries - invoke run_command passing list (#3807) (#3811)
* logentries - invoke run_command passing list

* added changelog fragment

(cherry picked from commit cb0ade4323)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-30 06:52:49 +01:00
patchback[bot]
b85ff2a997 Fixing ip address without mask bug (#3784) (#3803)
* change ip6 type to list of str and fix problem with setting addresses without netmask

* change ip6 type to list of str and fix problem with setting addresses without netmask

* Add changelog fragment

* add suggestions

* fix no mask using bug

* Make change independent from feature branch

(cherry picked from commit aae3ae1a8e)

Co-authored-by: Alex Groshev <38885591+haddystuff@users.noreply.github.com>
2021-11-30 06:01:50 +01:00
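The fix above concerns addresses supplied without a netmask. A minimal sketch of the idea; the helper name and default prefix are illustrative assumptions, not the nmcli module's actual logic:

```python
def ensure_prefix(address, default_prefix=32):
    # nmcli expects CIDR notation; when the caller omits the prefix
    # length, fall back to a default instead of failing.
    if "/" not in address:
        return "%s/%d" % (address, default_prefix)
    return address
```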
patchback[bot]
3d1ca5638b python_requirements_info - fail when version operator used without version (#3785) (#3793)
* python_requirements_info - fail when version operator used without version

* added changelog fragment

* simplified way of achieving the same result

(cherry picked from commit 59c1859fb3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-26 20:53:58 +01:00
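The validation described in the python_requirements_info entry above can be sketched as follows; a requirement like "pytz>" names an operator but no version and should fail early. The function and regex are illustrative assumptions, not the module's code:

```python
import re

def parse_requirement(req):
    # Hypothetical parser: split "name", optional operator, optional
    # version, and reject an operator that arrives without a version.
    match = re.match(r"^\s*([^<>=!~\s]+)\s*(?:([<>=!~]{1,2})\s*(\S+)?)?\s*$", req)
    if not match:
        raise ValueError("invalid requirement: %r" % req)
    name, op, version = match.groups()
    if op and not version:
        raise ValueError("operator %r given without a version in %r" % (op, req))
    return name, op, version
```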
patchback[bot]
35fd4700bf MH DeprecateAttrsMixin (#3727) (#3794)
* initial commit for deprecate_attrs

* completed tests

* added spaces

* test now works when there is more than one deprecation

* trying == instead of eq in jinja

* new approach to testing

* removed extraneous debug message

(cherry picked from commit 887b3882dc)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-11-26 20:53:49 +01:00
patchback[bot]
9add9df7d6 Keycloak: add sssd provider for user federation (#3780) (#3788)
* add sssd provider

* add changelog fragment

* fix message

* add version

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1cc6938ae3)

Co-authored-by: Laurent Paumier <30328363+laurpaum@users.noreply.github.com>
2021-11-25 13:23:21 +01:00
Felix Fontein
cdb747b41d Next expected release is 4.2.0. 2021-11-23 06:44:41 +01:00
Felix Fontein
7a70fda784 Release 4.1.0. 2021-11-23 05:52:35 +01:00
patchback[bot]
13d1b9569e terraform: ensuring command options are applied during build_plan (#3726) (#3778)
* Fixes parameters missing in planned state

* Added new line at end of file

* Added changelog fragment for pr 3726

* Added changes mentioned by felixfontein

* Removed blank space for pep8 validation

* Update changelogs/fragments/3726-terraform-missing-parameters-planned-fix.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/misc/terraform.py

extend needs to be a list

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Thomas Arringe <thomas.arringe@fouredge.se>
Co-authored-by: Thomas Arringe <Thomas.Arringe@ica.se>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 946430e1fb)

Co-authored-by: egnirra <37709886+egnirra@users.noreply.github.com>
2021-11-23 05:49:18 +01:00
patchback[bot]
c8c5021773 pacman: add stdout and stderr as return parameters (#3758) (#3775)
* pacman: add stdout and stderr as return parameters

Following the model of ansible.builtin.apt

* Bugfix to PR: fix documentation formatting

* Add changelog fragment 3758-pacman-add-stdout-stderr.yml

* Apply suggestions from code review

* Update changelogs/fragments/3758-pacman-add-stdout-stderr.yml

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c2068641f4)

Co-authored-by: Célestin Matte <tohwiq@gmail.com>
2021-11-22 20:01:50 +01:00
patchback[bot]
206ac72bd8 extend open_iscsi to allow rescanning a session to discover new mapped LUN's #3763 (#3765) (#3774)
* According to issue 3767, adding a session rescan flag to add and utilize mapped_luns after login into a portal and target.

- Feature Pull Request

open_iscsi rescan flag

Verbatim command output, before and after the change:
``` yaml
      - name: Rescan Targets
        open_iscsi:
          rescan: true
          target: "{{ item.0 }}"
        register: iscsi_rescan
        loop:
          - iqn.1994-05.com.redhat:8c4ea31d28e
        tags:
          - rescan
```
```bash
    TASK [Rescan Targets] ********************************************************************************************************************************************************************
    changed: [node1] => (item=['iqn.1994-05.com.redhat:8c4ea31d28e'])
    changed: [node2] => (item=['iqn.1994-05.com.redhat:8c4ea31d28e'])

    TASK [Output rescan output] **************************************************************************************************************************************************************
    ok: [node1] => {
        "iscsi_rescan": {
            "changed": true,
            "msg": "All items completed",
            "results": [
                {
                    "ansible_loop_var": "item",
                    "changed": true,
                    "failed": false,
                    "invocation": {
                        "module_args": {
                            "auto_node_startup": null,
                            "discover": false,
                            "login": null,
                            "node_auth": "CHAP",
                            "node_pass": null,
                            "node_user": null,
                            "port": "3260",
                            "portal": null,
                            "rescan": true,
                            "show_nodes": false,
                            "target": "iqn.1994-05.com.redhat:8c4ea31d28e'"
                        }
                    },
                    "item": [
                        "iqn.1994-05.com.redhat:8c4ea31d28e"
                    ],
                    "sessions": [
                        "Rescanning session [sid: 3, target: iqn.1994-05.com.redhat:8c4ea31d28e, portal: 127.0.0.1,3260]",
                        "Rescanning session [sid: 1, target: iqn.1994-05.com.redhat:8c4ea31d28e, portal: 127.0.0.2,3260]",
                        "Rescanning session [sid: 2, target: iqn.1994-05.com.redhat:8c4ea31d28e, portal: 127.0.0.3,3260]",
                        ""
                    ]
                }
            ]
        }
    }
    ok: [node2] => {
        "iscsi_rescan": {
            "changed": true,
            "msg": "All items completed",
            "results": [
                {
                    "ansible_loop_var": "item",
                    "changed": true,
                    "failed": false,
                    "invocation": {
                        "module_args": {
                            "auto_node_startup": null,
                            "discover": false,
                            "login": null,
                            "node_auth": "CHAP",
                            "node_pass": null,
                            "node_user": null,
                            "port": "3260",
                            "portal": null,
                            "rescan": true,
                            "show_nodes": false,
                            "target": "iqn.1994-05.com.redhat:8c4ea31d28e"
                        }
                    },
                    "item": [
                        "iqn.1994-05.com.redhat:8c4ea31d28e"
                    ],
                    "sessions": [
                        "Rescanning session [sid: 3, target: iqn.1994-05.com.redhat:8c4ea31d28e, portal: 127.0.0.1,3260]",
                        "Rescanning session [sid: 2, target: iqn.1994-05.com.redhat:8c4ea31d28e, portal: 127.0.0.2,3260]",
                        "Rescanning session [sid: 1, target: iqn.1994-05.com.redhat:8c4ea31d28e, portal: 127.0.0.3,3260]",
                        ""
                    ]
                }
            ]
        }
    }
```

* minor_changes:
  - open_iscsi - extended module to allow rescanning of established session for one or all targets. (https://github.com/ansible-collections/community.general/issues/3763)

* Fixed comment according to the recommendation.

* Update plugins/modules/system/open_iscsi.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 921417c4b5)

Co-authored-by: Michaela Lang <94735640+michaelalang@users.noreply.github.com>
2021-11-22 19:50:44 +01:00
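The task output quoted above returns each rescanned session as a line of the form `Rescanning session [sid: N, target: ..., portal: IP,PORT]`. As a minimal sketch (this helper is hypothetical, not part of the module), those lines can be parsed into structured records like so:

```python
import re

# Matches the "Rescanning session" lines shown in the task output above.
SESSION_RE = re.compile(
    r"Rescanning session \[sid: (?P<sid>\d+), "
    r"target: (?P<target>[^,]+), "
    r"portal: (?P<portal>[\d.]+),(?P<port>\d+)\]"
)


def parse_sessions(lines):
    """Return a dict per rescanned session, skipping blank lines."""
    sessions = []
    for line in lines:
        match = SESSION_RE.match(line)
        if match:
            sessions.append({
                "sid": int(match.group("sid")),
                "target": match.group("target"),
                "portal": match.group("portal"),
                "port": int(match.group("port")),
            })
    return sessions
```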
patchback[bot]
1ee123bb1e Xen orchestra inventory: Added groups, keyed_groups and compose support (#3766) (#3772)
* Xen orchestra inventory: Added groups, keyed_groups and compose support

* Update plugins/inventory/xen_orchestra.py

Remove extra params declaration

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 336f9465cb)

Co-authored-by: Samori Gorse <samori@codeinstyle.io>
2021-11-22 19:27:00 +01:00
patchback[bot]
aa0bcad9df RevBits PAM Secret Server Plugin (#3405) (#3771)
* RevBits PAM Secret Server Plugin

* Update revbitspss.py

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fixes based on feedback from Ansible

* Fixes for auto tests

* module updated

* f string changed

* maintainer added

* maintainer added

* maintainer added

* review updates

* test added

* test added

* test added

* revisions updates

* revisions updates

* revisions updates

* file removed

* unit test added

* suggestions updated

* suggestions updated

* Update plugins/lookup/revbitspss.py

* Update plugins/lookup/revbitspss.py

* Update plugins/lookup/revbitspss.py

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Zubair Hassan <zubair.hassan@invozone.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 25e80762aa)

Co-authored-by: RevBits, LLC <74629760+RevBits@users.noreply.github.com>
2021-11-22 19:26:48 +01:00
patchback[bot]
6e51690a95 Support IPMI encryption key parameter in ipmi_boot (#3702) (#3770)
* Support IPMI encryption key parameter in ipmi_boot

* Support py2 on hex parsing, error handling

Change parsing hex string to support python2 and add error handling to it based on feedback.

* Don't explicitly set required to false

* Add version_added to key arg

* Add changelog fragment

* Add IPMI encryption key arg to ipmi_power

* Fix the formatting of changelog fragment

(cherry picked from commit 4013d0c9ca)

Co-authored-by: bluikko <14869000+bluikko@users.noreply.github.com>
2021-11-22 13:49:54 +01:00
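The commit above mentions parsing the hex-string encryption key in a way that works on both Python 2 and 3, with error handling. A minimal illustrative sketch (not the module's actual implementation) using `binascii.unhexlify`, which behaves the same on both interpreters:

```python
import binascii


def parse_encryption_key(key_hex):
    """Parse a user-supplied IPMI encryption key given as a hex string.

    binascii.unhexlify works on Python 2 and 3 alike; parse failures
    are converted into a readable ValueError instead of a traceback.
    """
    try:
        return binascii.unhexlify(key_hex)
    except (TypeError, binascii.Error) as exc:
        raise ValueError("Invalid hex encryption key: %s" % exc)
```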
patchback[bot]
3eab4faf0b Bugfix: github_repo does not apply defaults on existing repos (#2386) (#3769)
* github_repo: do not apply defaults on already existing repos

* Fixed sanity

* Fixed doc defaults

* Added changelog

* Fix "or" statement and some formatting

* Improve description change check

* Added api_url parameter for unit tests and default values for private and description parameters

* Added force_defaults parameter

* Improved docs

* Fixed doc anchors for force_defaults parameter

* Update plugins/modules/source_control/github/github_repo.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 17c3708f31)

Co-authored-by: Álvaro Torres Cogollo <atorrescogollo@gmail.com>
2021-11-22 13:49:29 +01:00
Felix Fontein
c6589a772b Prepare 4.1.0 release. 2021-11-20 09:16:49 +01:00
patchback[bot]
cb06c4ff77 [PR #3344/fef02c0f backport][stable-4] Xen orchestra inventory plugin (#3760)
* Xen orchestra inventory plugin (#3344)

* wip

* Renamed xo env variable with ANSIBLE prefix

* Suppress 3.x import and boilerplate errors

* Added shinuza as maintainer

* Do not use automatic field numbering spec

* Removed f string

* Fixed sanity checks

* wip tests

* Added working tests

* Fixed a bug when login fails

* Update plugins/inventory/xen_orchestra.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit fef02c0fba)

* Replace usage of packaging.version with distutils.version.LooseVersion. (#3762)

(cherry picked from commit 08067f08df)

Co-authored-by: Samori Gorse <samori@codeinstyle.io>
Co-authored-by: Felix Fontein <felix@fontein.de>
2021-11-20 09:14:01 +01:00
patchback[bot]
78316fbb75 lxd_container: support lxd instance types (#3661) (#3761)
* lxd_container: support lxd instance types

Update the lxd_container module to enable the new LXD API endpoint,
which supports different types of instances, such as containers and virtual machines.
The type attributes can be set explicitly to create containers or virtual machines.

* lxd_container: rename references from containers to instances

* lxd_container: add an example of creating vms

* lxd_container: update doc

* lxd_container: fix pylint

* resolve conversation

* remove type from config

* remove outdated validation related to the instance api

* correct diff

* changing last bits

* add missing dot

(cherry picked from commit 58eb94fff3)

Co-authored-by: rchicoli <rchicoli@users.noreply.github.com>
2021-11-20 08:36:04 +01:00
patchback[bot]
c7df82652f change ip4 type to list of str (#3738) (#3757)
* change ip4 type to list of str

* Add several tests and change documentation

* Update changelogs/fragments/1088-nmcli_add_multiple_addresses_support.yml

Co-authored-by: Andrew Pantuso <ajpantuso@gmail.com>

Co-authored-by: Andrew Pantuso <ajpantuso@gmail.com>
(cherry picked from commit 50c2f3a97d)

Co-authored-by: Alex Groshev <38885591+haddystuff@users.noreply.github.com>
2021-11-19 07:27:36 +01:00
patchback[bot]
342a1a7faa Fix collection dependency installation in CI. (#3753) (#3756)
(cherry picked from commit 17b4c6972f)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-11-19 07:25:26 +01:00
patchback[bot]
17431bd42c CI: Replace RHEL 8.4 by RHEL 8.5 for devel (#3747) (#3749)
* Replace RHEL 8.4 by RHEL 8.5 for devel.

* Install virtualenv.

* Revert "Install virtualenv."

This reverts commit 22ba0d074e.

* Just do another skip...

(cherry picked from commit 26c7995c82)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-11-17 22:22:46 +01:00
patchback[bot]
eacf663999 listen_ports_facts: Added support for ss (#3708) (#3744)
(cherry picked from commit 245cee0ece)

Co-authored-by: Jan Gaßner <40096303+moonrail@users.noreply.github.com>
2021-11-16 20:06:56 +01:00
patchback[bot]
84a806be08 Add GetHostInterfaces command to redfish_info (#3693) (#3743)
* Add GetHostInterfaces command to redfish_info

Adding a GetHostInterfaces command to redfish_info in order to report the
following:
- Properties about the HostInterface(s) like Status, InterfaceEnabled, etc
- ManagerEthernetInterface (info on BMC -> host NIC)
- HostEthernetInterfaces (list of NICs for host -> BMC connectivity)

fixes #3692

* add fragment

* fixup for linter

* redfish_utils.py cleanup

- Remove unneeded Properties list from get_nic_inventory()
- Remove bogus key variable from get_hostinterfaces()
- Add additional Properties to collect from HostInterface objects

* fixup for stray deletion

(cherry picked from commit 98cca3c19c)

Co-authored-by: Jacob <jyundt@gmail.com>
2021-11-16 20:06:28 +01:00
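The commit above collects a set of properties (Status, InterfaceEnabled, and so on) from each HostInterface resource. A simplified sketch of that pattern, assuming the HostInterface JSON objects have already been fetched (the exact property list here is illustrative):

```python
def summarize_host_interfaces(host_interfaces):
    """Collect a few reported properties from HostInterface resources.

    host_interfaces is a list of already-fetched HostInterface JSON
    objects; only the keys that are actually present are copied.
    """
    properties = ["Id", "Status", "InterfaceEnabled", "HostInterfaceType"]
    summary = []
    for iface in host_interfaces:
        summary.append({key: iface[key] for key in properties if key in iface})
    return summary
```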
patchback[bot]
7db5c86dc8 gitlab: clean up modules and utils (#3694) (#3741)
* gitlab: remove dead code in module_utils

* gitlab: use snake_case consistently in methods and functions

* gitlab: use snake_case consistently in variables

* gitlab: fix pep8 indentation issues

* gitlab: add changelog fragment

* gitlab: apply suggestions from code review

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Chris Frage <git@sh0shin.org>

* gitlab: use consistent indentation

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Chris Frage <git@sh0shin.org>
(cherry picked from commit d29aecad26)

Co-authored-by: Nejc Habjan <hab.nejc@gmail.com>
2021-11-16 19:45:45 +01:00
patchback[bot]
1e82b5c580 redfish_config: Add support to configure Redfish Host Interface (#3632) (#3718)
* redfish_config: Add support to configure Redfish Host Interface

Adding another Manager command to redfish_config in order to set Redfish
Host Interface properties.

Fixes #3631

* add fragment

* fixup for fragment filename

* Update plugins/modules/remote_management/redfish/redfish_config.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add support for specifying HostInterface resource ID

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/remote_management/redfish/redfish_config.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/3632-add-redfish-host-interface-config-support.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6f47ddc29f)

Co-authored-by: Jacob <jyundt@gmail.com>
2021-11-16 19:45:30 +01:00
Felix Fontein
7f6be665f9 Next expected release is 4.1.0. 2021-11-16 09:10:42 +01:00
196 changed files with 6545 additions and 1287 deletions


@@ -24,14 +24,15 @@ schedules:
always: true
branches:
include:
- stable-2
- stable-3
- stable-4
- cron: 0 11 * * 0
displayName: Weekly (old stable branches)
always: true
branches:
include:
- stable-1
- stable-2
variables:
- name: checkoutPath
@@ -209,8 +210,8 @@ stages:
test: macos/11.1
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.4
test: rhel/8.4
- name: RHEL 8.5
test: rhel/8.5
- name: FreeBSD 12.2
test: freebsd/12.2
- name: FreeBSD 13.0

.github/BOTMETA.yml

@@ -156,13 +156,15 @@ files:
maintainers: conloos
$inventories/nmap.py: {}
$inventories/online.py:
maintainers: sieben
maintainers: remyleone
$inventories/opennebula.py:
maintainers: feldsam
labels: cloud opennebula
keywords: opennebula dynamic inventory script
$inventories/proxmox.py:
maintainers: $team_virt ilijamt
$inventories/xen_orchestra.py:
maintainers: shinuza
$inventories/icinga2.py:
maintainers: bongoeadgc6
$inventories/scaleway.py:
@@ -223,6 +225,8 @@ files:
maintainers: konstruktoid
$lookups/redis.py:
maintainers: $team_ansible_core jpmens
$lookups/revbitspss.py:
maintainers: RevBits
$lookups/shelvefile.py: {}
$lookups/tss.py:
maintainers: amigus endlesstrax
@@ -337,7 +341,7 @@ files:
$modules/cloud/oneandone/:
maintainers: aajdinov edevenport
$modules/cloud/online/:
maintainers: sieben
maintainers: remyleone
$modules/cloud/opennebula/:
maintainers: $team_opennebula
$modules/cloud/opennebula/one_host.py:
@@ -407,11 +411,11 @@ files:
$modules/cloud/scaleway/scaleway_ip_info.py:
maintainers: Spredzy
$modules/cloud/scaleway/scaleway_organization_info.py:
maintainers: sieben Spredzy
maintainers: Spredzy
$modules/cloud/scaleway/scaleway_security_group.py:
maintainers: DenBeke
$modules/cloud/scaleway/scaleway_security_group_info.py:
maintainers: sieben Spredzy
maintainers: Spredzy
$modules/cloud/scaleway/scaleway_security_group_rule.py:
maintainers: DenBeke
$modules/cloud/scaleway/scaleway_server_info.py:
@@ -615,6 +619,8 @@ files:
labels: cloudflare_dns
$modules/net_tools/dnsimple.py:
maintainers: drcapulet
$modules/net_tools/dnsimple_info.py:
maintainers: edhilgendorf
$modules/net_tools/dnsmadeeasy.py:
maintainers: briceburg
$modules/net_tools/gandi_livedns.py:
@@ -947,6 +953,8 @@ files:
maintainers: SamyCoenen
$modules/source_control/gitlab/gitlab_user.py:
maintainers: LennertMertens stgrace
$modules/source_control/gitlab/gitlab_branch.py:
maintainers: paytroff
$modules/source_control/hg.py:
maintainers: yeukhon
$modules/storage/emc/emc_vnx_sg_member.py:
@@ -1220,9 +1228,9 @@ macros:
team_opennebula: ilicmilan meerkampdvv rsmontero xorel nilsding
team_oracle: manojmeda mross22 nalsaber
team_purestorage: bannaych dnix101 genegr lionmax opslounge raekins sdodsley sile16
team_redfish: mraineri tomasg2012 xmadsen renxulei
team_redfish: mraineri tomasg2012 xmadsen renxulei rajeevkallur bhavya06
team_rhn: FlossWare alikins barnabycourt vritant
team_scaleway: QuentinBrosse abarbare jerome-quere kindermoumoute remyleone sieben
team_scaleway: remyleone abarbare
team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l
team_suse: commel dcermak evrardjp lrupp toabctl AnderEnder alxgu andytom sealor
team_virt: joshainglis karmab tleguern Thulium-Drake Ajpantuso


@@ -6,6 +6,129 @@ Community General Release Notes
This changelog describes changes after version 3.0.0.
v4.2.0
======
Release Summary
---------------
Regular bugfix and feature release.
Minor Changes
-------------
- aix_filesystem - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3833).
- aix_lvg - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3834).
- gitlab - add more token authentication support with the new options ``api_oauth_token`` and ``api_job_token`` (https://github.com/ansible-collections/community.general/issues/705).
- gitlab_group, gitlab_project - add new option ``avatar_path`` (https://github.com/ansible-collections/community.general/pull/3792).
- gitlab_project - add new option ``default_branch`` to gitlab_project (if ``readme = true``) (https://github.com/ansible-collections/community.general/pull/3792).
- hponcfg - revamped module using ModuleHelper (https://github.com/ansible-collections/community.general/pull/3840).
- icinga2 inventory plugin - added the ``display_name`` field to variables (https://github.com/ansible-collections/community.general/issues/3875, https://github.com/ansible-collections/community.general/pull/3906).
- icinga2 inventory plugin - inventory object names can be changed by setting ``inventory_attr`` in your config file to the host object's name, address, or display_name field (https://github.com/ansible-collections/community.general/issues/3875, https://github.com/ansible-collections/community.general/pull/3906).
- ip_netns - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3822).
- iso_extract - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3805).
- java_cert - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3835).
- jira - add support for Bearer token auth (https://github.com/ansible-collections/community.general/pull/3838).
- keycloak_user_federation - add sssd user federation support (https://github.com/ansible-collections/community.general/issues/3767).
- logentries - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3807).
- logstash_plugin - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3808).
- lxc_container - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3851).
- lxd connection plugin - make sure that ``ansible_lxd_host``, ``ansible_executable``, and ``ansible_lxd_executable`` work (https://github.com/ansible-collections/community.general/pull/3798).
- lxd inventory plugin - support virtual machines (https://github.com/ansible-collections/community.general/pull/3519).
- module_helper module utils - added decorators ``check_mode_skip`` and ``check_mode_skip_returns`` for skipping methods when ``check_mode=True`` (https://github.com/ansible-collections/community.general/pull/3849).
- monit - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3821).
- nmcli - add multiple addresses support for ``ip6`` parameter (https://github.com/ansible-collections/community.general/issues/1088).
- nmcli - add support for ``eui64`` and ``ipv6privacy`` parameters (https://github.com/ansible-collections/community.general/issues/3357).
- python_requirements_info - returns python version broken down into its components, and some minor refactoring (https://github.com/ansible-collections/community.general/pull/3797).
- svc - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3829).
- xattr - calling ``run_command`` with arguments as ``list`` instead of ``str`` (https://github.com/ansible-collections/community.general/pull/3806).
- xfconf - minor refactor on the base class for the module (https://github.com/ansible-collections/community.general/pull/3919).
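Many of the entries above switch ``run_command`` from a single command string to a list of arguments. The motivation can be sketched briefly: the string form has to be split (and any embedded spaces quoted) before execution, while the list form is already unambiguous. A minimal illustration using the standard library:

```python
import shlex

# A command whose argument contains a space: the string form needs
# explicit quoting, the list form does not.
cmd_str = "lsvg -L 'my volume group'"
cmd_list = ["lsvg", "-L", "my volume group"]

# shlex.split shows what the string form must be parsed into --
# exactly the list form, with the quotes resolved.
assert shlex.split(cmd_str) == cmd_list
```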
Deprecated Features
-------------------
- module_helper module utils - deprecated the attribute ``ModuleHelper.VarDict`` (https://github.com/ansible-collections/community.general/pull/3801).
Bugfixes
--------
- icinga2 inventory plugin - handle 404 error when filter produces no results (https://github.com/ansible-collections/community.general/issues/3875, https://github.com/ansible-collections/community.general/pull/3906).
- interfaces_file - fixed the check for existing option in interface (https://github.com/ansible-collections/community.general/issues/3841).
- jira - fixed bug where module returns error related to dictionary key ``body`` (https://github.com/ansible-collections/community.general/issues/3419).
- nmcli - fix returning "changed" when no mask set for IPv4 or IPv6 addresses on task rerun (https://github.com/ansible-collections/community.general/issues/3768).
- nmcli - pass ``flags``, ``ingress``, ``egress`` params to ``nmcli`` (https://github.com/ansible-collections/community.general/issues/1086).
- nrdp callback plugin - fix error ``string arguments without an encoding`` (https://github.com/ansible-collections/community.general/issues/3903).
- opentelemetry_plugin - honour ``ignore_errors`` when a task has failed instead of reporting an error (https://github.com/ansible-collections/community.general/pull/3837).
- pipx - passes the correct command line option ``--include-apps`` (https://github.com/ansible-collections/community.general/issues/3791).
- proxmox - fixed ``onboot`` parameter causing module failures when undefined (https://github.com/ansible-collections/community.general/issues/3844).
- python_requirements_info - fails if version operator used without version (https://github.com/ansible-collections/community.general/pull/3785).
New Modules
-----------
Net Tools
~~~~~~~~~
- dnsimple_info - Pull basic info from DNSimple API
Remote Management
~~~~~~~~~~~~~~~~~
redfish
^^^^^^^
- ilo_redfish_config - Sets or updates configuration attributes on HPE iLO with Redfish OEM extensions
- ilo_redfish_info - Gathers server information through iLO using Redfish APIs
Source Control
~~~~~~~~~~~~~~
gitlab
^^^^^^
- gitlab_branch - Create or delete a branch
v4.1.0
======
Release Summary
---------------
Regular bugfix and feature release.
Minor Changes
-------------
- gitlab - clean up modules and utils (https://github.com/ansible-collections/community.general/pull/3694).
- ipmi_boot - add support for user-specified IPMI encryption key (https://github.com/ansible-collections/community.general/issues/3698).
- ipmi_power - add support for user-specified IPMI encryption key (https://github.com/ansible-collections/community.general/issues/3698).
- listen_ports_facts - add support for ``ss`` command besides ``netstat`` (https://github.com/ansible-collections/community.general/pull/3708).
- lxd_container - adds ``type`` option, which also allows operating on virtual machines and not just containers (https://github.com/ansible-collections/community.general/pull/3661).
- nmcli - add multiple addresses support for ``ip4`` parameter (https://github.com/ansible-collections/community.general/issues/1088, https://github.com/ansible-collections/community.general/pull/3738).
- open_iscsi - extended module to allow rescanning of established session for one or all targets (https://github.com/ansible-collections/community.general/issues/3763).
- pacman - add ``stdout`` and ``stderr`` as return values (https://github.com/ansible-collections/community.general/pull/3758).
- redfish_command - add ``GetHostInterfaces`` command to enable reporting Redfish Host Interface information (https://github.com/ansible-collections/community.general/issues/3693).
- redfish_command - add ``SetHostInterface`` command to enable configuring the Redfish Host Interface (https://github.com/ansible-collections/community.general/issues/3632).
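The nmcli entry above changes ``ip4`` from a single string to a list of strings. Keeping existing playbooks working implies normalizing both forms to a list; a simplified sketch of that idea (the helper name is hypothetical):

```python
def normalize_addresses(value):
    """Accept either a single address string or a list of address strings.

    Old playbooks that pass one address keep working, while new ones
    may pass several; both forms come out as a plain list.
    """
    if value is None:
        return []
    if isinstance(value, str):
        return [value]
    return list(value)
```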
Bugfixes
--------
- github_repo - ``private`` and ``description`` attributes should not be set to default values when the repo already exists (https://github.com/ansible-collections/community.general/pull/2386).
- terraform - fix command options being ignored during planned/plan in function ``build_plan`` such as ``lock`` or ``lock_timeout`` (https://github.com/ansible-collections/community.general/issues/3707, https://github.com/ansible-collections/community.general/pull/3726).
New Plugins
-----------
Inventory
~~~~~~~~~
- xen_orchestra - Xen Orchestra inventory source
Lookup
~~~~~~
- revbitspss - Get secrets from RevBits PAM server
v4.0.2
======


@@ -1069,3 +1069,172 @@ releases:
- 4.0.2.yml
- deprecate-ansible-2.9-2.10.yml
release_date: '2021-11-16'
4.1.0:
changes:
bugfixes:
- github_repo - ``private`` and ``description`` attributes should not be set
to default values when the repo already exists (https://github.com/ansible-collections/community.general/pull/2386).
- terraform - fix command options being ignored during planned/plan in function
``build_plan`` such as ``lock`` or ``lock_timeout`` (https://github.com/ansible-collections/community.general/issues/3707,
https://github.com/ansible-collections/community.general/pull/3726).
minor_changes:
- gitlab - clean up modules and utils (https://github.com/ansible-collections/community.general/pull/3694).
- ipmi_boot - add support for user-specified IPMI encryption key (https://github.com/ansible-collections/community.general/issues/3698).
- ipmi_power - add support for user-specified IPMI encryption key (https://github.com/ansible-collections/community.general/issues/3698).
- listen_ports_facts - add support for ``ss`` command besides ``netstat`` (https://github.com/ansible-collections/community.general/pull/3708).
- lxd_container - adds ``type`` option, which also allows operating on virtual
machines and not just containers (https://github.com/ansible-collections/community.general/pull/3661).
- nmcli - add multiple addresses support for ``ip4`` parameter (https://github.com/ansible-collections/community.general/issues/1088,
https://github.com/ansible-collections/community.general/pull/3738).
- open_iscsi - extended module to allow rescanning of established session for
one or all targets (https://github.com/ansible-collections/community.general/issues/3763).
- pacman - add ``stdout`` and ``stderr`` as return values (https://github.com/ansible-collections/community.general/pull/3758).
- redfish_command - add ``GetHostInterfaces`` command to enable reporting Redfish
Host Interface information (https://github.com/ansible-collections/community.general/issues/3693).
- redfish_command - add ``SetHostInterface`` command to enable configuring the
Redfish Host Interface (https://github.com/ansible-collections/community.general/issues/3632).
release_summary: Regular bugfix and feature release.
fragments:
- 1088-nmcli_add_multiple_addresses_support.yml
- 2386-github_repo-fix-idempotency-issues.yml
- 3632-add-redfish-host-interface-config-support.yml
- 3661-lxd_container-add-vm-support.yml
- 3693-add-redfish-host-interface-info-support.yml
- 3694-gitlab-cleanup.yml
- 3702-ipmi-encryption-key.yml
- 3708-listen_ports_facts-add-ss-support.yml
- 3726-terraform-missing-parameters-planned-fix.yml
- 3758-pacman-add-stdout-stderr.yml
- 3765-extend-open_iscsi-with-rescan.yml
- 4.1.0.yml
plugins:
inventory:
- description: Xen Orchestra inventory source
name: xen_orchestra
namespace: null
lookup:
- description: Get secrets from RevBits PAM server
name: revbitspss
namespace: null
release_date: '2021-11-23'
4.2.0:
changes:
bugfixes:
- icinga2 inventory plugin - handle 404 error when filter produces no results
(https://github.com/ansible-collections/community.general/issues/3875, https://github.com/ansible-collections/community.general/pull/3906).
- interfaces_file - fixed the check for existing option in interface (https://github.com/ansible-collections/community.general/issues/3841).
- jira - fixed bug where module returns error related to dictionary key ``body``
(https://github.com/ansible-collections/community.general/issues/3419).
- nmcli - fix returning "changed" when no mask set for IPv4 or IPv6 addresses
on task rerun (https://github.com/ansible-collections/community.general/issues/3768).
- nmcli - pass ``flags``, ``ingress``, ``egress`` params to ``nmcli`` (https://github.com/ansible-collections/community.general/issues/1086).
- nrdp callback plugin - fix error ``string arguments without an encoding``
(https://github.com/ansible-collections/community.general/issues/3903).
- opentelemetry_plugin - honour ``ignore_errors`` when a task has failed instead
of reporting an error (https://github.com/ansible-collections/community.general/pull/3837).
- pipx - passes the correct command line option ``--include-apps`` (https://github.com/ansible-collections/community.general/issues/3791).
- proxmox - fixed ``onboot`` parameter causing module failures when undefined
(https://github.com/ansible-collections/community.general/issues/3844).
- python_requirements_info - fails if version operator used without version
(https://github.com/ansible-collections/community.general/pull/3785).
deprecated_features:
- module_helper module utils - deprecated the attribute ``ModuleHelper.VarDict``
(https://github.com/ansible-collections/community.general/pull/3801).
minor_changes:
- aix_filesystem - calling ``run_command`` with arguments as ``list`` instead
of ``str`` (https://github.com/ansible-collections/community.general/pull/3833).
- aix_lvg - calling ``run_command`` with arguments as ``list`` instead of ``str``
(https://github.com/ansible-collections/community.general/pull/3834).
- gitlab - add more token authentication support with the new options ``api_oauth_token``
and ``api_job_token`` (https://github.com/ansible-collections/community.general/issues/705).
- gitlab_group, gitlab_project - add new option ``avatar_path`` (https://github.com/ansible-collections/community.general/pull/3792).
- gitlab_project - add new option ``default_branch`` to gitlab_project (if ``readme
= true``) (https://github.com/ansible-collections/community.general/pull/3792).
- hponcfg - revamped module using ModuleHelper (https://github.com/ansible-collections/community.general/pull/3840).
- icinga2 inventory plugin - added the ``display_name`` field to variables (https://github.com/ansible-collections/community.general/issues/3875,
https://github.com/ansible-collections/community.general/pull/3906).
- icinga2 inventory plugin - inventory object names can be changed by setting ``inventory_attr``
in your config file to the host object's name, address, or display_name field
(https://github.com/ansible-collections/community.general/issues/3875, https://github.com/ansible-collections/community.general/pull/3906).
- ip_netns - calling ``run_command`` with arguments as ``list`` instead of ``str``
(https://github.com/ansible-collections/community.general/pull/3822).
- iso_extract - calling ``run_command`` with arguments as ``list`` instead of
``str`` (https://github.com/ansible-collections/community.general/pull/3805).
- java_cert - calling ``run_command`` with arguments as ``list`` instead of
``str`` (https://github.com/ansible-collections/community.general/pull/3835).
- jira - add support for Bearer token auth (https://github.com/ansible-collections/community.general/pull/3838).
- keycloak_user_federation - add sssd user federation support (https://github.com/ansible-collections/community.general/issues/3767).
- logentries - calling ``run_command`` with arguments as ``list`` instead of
``str`` (https://github.com/ansible-collections/community.general/pull/3807).
- logstash_plugin - calling ``run_command`` with arguments as ``list`` instead
of ``str`` (https://github.com/ansible-collections/community.general/pull/3808).
- lxc_container - calling ``run_command`` with arguments as ``list`` instead
of ``str`` (https://github.com/ansible-collections/community.general/pull/3851).
- lxd connection plugin - make sure that ``ansible_lxd_host``, ``ansible_executable``,
and ``ansible_lxd_executable`` work (https://github.com/ansible-collections/community.general/pull/3798).
- lxd inventory plugin - support virtual machines (https://github.com/ansible-collections/community.general/pull/3519).
- module_helper module utils - added decorators ``check_mode_skip`` and ``check_mode_skip_returns``
for skipping methods when ``check_mode=True`` (https://github.com/ansible-collections/community.general/pull/3849).
- monit - calling ``run_command`` with arguments as ``list`` instead of ``str``
(https://github.com/ansible-collections/community.general/pull/3821).
- nmcli - add multiple addresses support for ``ip6`` parameter (https://github.com/ansible-collections/community.general/issues/1088).
- nmcli - add support for ``eui64`` and ``ipv6privacy`` parameters (https://github.com/ansible-collections/community.general/issues/3357).
- python_requirements_info - returns python version broken down into its components,
and some minor refactoring (https://github.com/ansible-collections/community.general/pull/3797).
- svc - calling ``run_command`` with arguments as ``list`` instead of ``str``
(https://github.com/ansible-collections/community.general/pull/3829).
- xattr - calling ``run_command`` with arguments as ``list`` instead of ``str``
(https://github.com/ansible-collections/community.general/pull/3806).
- xfconf - minor refactor on the base class for the module (https://github.com/ansible-collections/community.general/pull/3919).
release_summary: Regular bugfix and feature release.
fragments:
- 1088-add_multiple_ipv6_address_support.yml
- 3357-nmcli-eui64-and-ipv6privacy.yml
- 3519-inventory-support-lxd-4.yml
- 3768-nmcli_fix_changed_when_no_mask_set.yml
- 3780-add-keycloak-sssd-user-federation.yml
- 3785-python_requirements_info-versionless-op.yaml
- 3792-improve_gitlab_group_and_project.yml
- 3797-python_requirements_info-improvements.yaml
- 3798-fix-lxd-connection-option-vars-support.yml
- 3800-pipx-include-apps.yaml
- 3801-mh-deprecate-vardict-attr.yaml
- 3805-iso_extract-run_command-list.yaml
- 3806-xattr-run_command-list.yaml
- 3807-logentries-run_command-list.yaml
- 3808-logstash_plugin-run_command-list.yaml
- 3821-monit-run-list.yaml
- 3822-ip_netns-run-list.yaml
- 3829-svc-run-list.yaml
- 3833-aix_filesystem-run-list.yaml
- 3834-aix-lvg-run-list.yaml
- 3835-java-cert-run-list.yaml
- 3837-opentelemetry_plugin-honour_ignore_errors.yaml
- 3838-jira-token.yaml
- 3840-hponcfg-mh-revamp.yaml
- 3849-mh-check-mode-decos.yaml
- 3851-lxc-container-run-list.yaml
- 3862-interfaces-file-fix-dup-option.yaml
- 3867-jira-fix-body.yaml
- 3874-proxmox-fix-onboot-param.yml
- 3875-icinga2-inv-fix.yml
- 3896-nmcli_vlan_missing_options.yaml
- 3909-nrdp_fix_string_args_without_encoding.yaml
- 3919-xfconf-baseclass.yaml
- 4.2.0.yml
- 705-gitlab-auth-support.yml
modules:
- description: Pull basic info from DNSimple API
name: dnsimple_info
namespace: net_tools
- description: Create or delete a branch
name: gitlab_branch
namespace: source_control.gitlab
- description: Sets or updates configuration attributes on HPE iLO with Redfish
OEM extensions
name: ilo_redfish_config
namespace: remote_management.redfish
- description: Gathers server information through iLO using Redfish APIs
name: ilo_redfish_info
namespace: remote_management.redfish
release_date: '2021-12-21'

View File

@@ -1,6 +1,6 @@
namespace: community
name: general
version: 4.0.2
version: 4.2.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)

View File

@@ -70,6 +70,7 @@ import os
import json
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.common.text.converters import to_bytes
from ansible.module_utils.urls import open_url
from ansible.plugins.callback import CallbackBase
@@ -143,7 +144,7 @@ class CallbackModule(CallbackBase):
body = {
'cmd': 'submitcheck',
'token': self.token,
'XMLDATA': bytes(xmldata)
'XMLDATA': to_bytes(xmldata)
}
try:
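
The `bytes(xmldata)` → `to_bytes(xmldata)` change above matters on Python 3, where `bytes()` on a `str` requires an explicit encoding. A minimal sketch of the behaviour, with `to_bytes_sketch` as a hypothetical stand-in for ansible's `to_bytes` helper:

```python
def to_bytes_sketch(obj, encoding='utf-8'):
    """Minimal stand-in for ansible's to_bytes: encode str, pass bytes through."""
    if isinstance(obj, bytes):
        return obj
    if isinstance(obj, str):
        return obj.encode(encoding)
    raise TypeError("obj must be str or bytes")

xmldata = "<checkresults></checkresults>"
try:
    bytes(xmldata)  # Python 3 raises: string argument without an encoding
except TypeError:
    pass

payload = to_bytes_sketch(xmldata)
assert payload == b"<checkresults></checkresults>"
```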

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# (C) 2021, Victor Martinez <VictorMartinezRubio@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -267,6 +268,8 @@ class OpenTelemetrySource(object):
elif host_data.status == 'skipped':
message = res['skip_reason'] if 'skip_reason' in res else 'skipped'
status = Status(status_code=StatusCode.UNSET)
elif host_data.status == 'ignored':
status = Status(status_code=StatusCode.UNSET)
span.set_status(status)
if isinstance(task_data.args, dict) and "gather_facts" not in task_data.action:
@@ -462,10 +465,15 @@ class CallbackModule(CallbackBase):
)
def v2_runner_on_failed(self, result, ignore_errors=False):
self.errors += 1
if ignore_errors:
status = 'ignored'
else:
status = 'failed'
self.errors += 1
self.opentelemetry.finish_task(
self.tasks_data,
'failed',
status,
result
)
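
The hunk above changes two things: failures with `ignore_errors` no longer bump the error counter, and the task is finished with an `'ignored'` status instead of a hard-coded `'failed'`. A minimal sketch of that logic, with `FailureCounter` as a hypothetical stand-in for the callback:

```python
class FailureCounter:
    def __init__(self):
        self.errors = 0
        self.finished = []  # statuses passed to finish_task in the real callback

    def v2_runner_on_failed(self, result, ignore_errors=False):
        # ignored failures get their own status and do not count as errors
        if ignore_errors:
            status = 'ignored'
        else:
            status = 'failed'
            self.errors += 1
        self.finished.append(status)

cb = FailureCounter()
cb.v2_runner_on_failed(result={}, ignore_errors=True)
cb.v2_runner_on_failed(result={})
assert cb.errors == 1
assert cb.finished == ['ignored', 'failed']
```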

View File

@@ -89,9 +89,9 @@ class Connection(ConnectionBase):
local_cmd.extend(["--project", self.get_option("project")])
local_cmd.extend([
"exec",
"%s:%s" % (self.get_option("remote"), self._host),
"%s:%s" % (self.get_option("remote"), self.get_option("remote_addr")),
"--",
self._play_context.executable, "-c", cmd
self.get_option("executable"), "-c", cmd
])
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
@@ -126,7 +126,7 @@ class Connection(ConnectionBase):
local_cmd.extend([
"file", "push",
in_path,
"%s:%s/%s" % (self.get_option("remote"), self._host, out_path)
"%s:%s/%s" % (self.get_option("remote"), self.get_option("remote_addr"), out_path)
])
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
@@ -145,7 +145,7 @@ class Connection(ConnectionBase):
local_cmd.extend(["--project", self.get_option("project")])
local_cmd.extend([
"file", "pull",
"%s:%s/%s" % (self.get_option("remote"), self._host, in_path),
"%s:%s/%s" % (self.get_option("remote"), self.get_option("remote_addr"), in_path),
out_path
])

View File

@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard files documentation fragment
DOCUMENTATION = r'''
requirements:
- requests (Python library U(https://pypi.org/project/requests/))
options:
api_token:
description:
- GitLab access token with API permissions.
type: str
api_oauth_token:
description:
- GitLab OAuth token for logging in.
type: str
version_added: 4.2.0
api_job_token:
description:
- GitLab CI job token for logging in.
type: str
version_added: 4.2.0
'''

View File

@@ -35,13 +35,23 @@ DOCUMENTATION = '''
type: string
required: true
host_filter:
description: An Icinga2 API valid host filter.
description:
- A valid Icinga2 API host filter. Leave blank for no filtering.
type: string
required: false
validate_certs:
description: Enables or disables SSL certificate verification.
type: boolean
default: true
inventory_attr:
description:
- Allows the override of the inventory name based on different attributes.
- This allows for changing the way limits are used.
- The current default, C(address), is sometimes not unique or present. We recommend using C(name) instead.
type: string
default: address
choices: ['name', 'display_name', 'address']
version_added: 4.2.0
'''
EXAMPLES = r'''
@@ -52,6 +62,7 @@ user: ansible
password: secure
host_filter: \"linux-servers\" in host.groups
validate_certs: false
inventory_attr: name
'''
import json
@@ -59,6 +70,7 @@ import json
from ansible.errors import AnsibleParserError
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible.module_utils.urls import open_url
from ansible.module_utils.six.moves.urllib.error import HTTPError
class InventoryModule(BaseInventoryPlugin, Constructable):
@@ -76,6 +88,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
self.icinga2_password = None
self.ssl_verify = None
self.host_filter = None
self.inventory_attr = None
self.cache_key = None
self.use_cache = None
@@ -114,9 +127,21 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
if data is not None:
request_args['data'] = json.dumps(data)
self.display.vvv("Request Args: %s" % request_args)
response = open_url(request_url, **request_args)
try:
response = open_url(request_url, **request_args)
except HTTPError as e:
try:
error_body = json.loads(e.read().decode())
self.display.vvv("Error returned: {0}".format(error_body))
except Exception:
error_body = {"status": None}
if e.code == 404 and error_body.get('status') == "No objects found.":
raise AnsibleParserError("Host filter returned no data. Please confirm your host_filter value is valid")
raise AnsibleParserError("Unexpected data returned: {0} -- {1}".format(e, error_body))
response_body = response.read()
json_data = json.loads(response_body.decode('utf-8'))
self.display.vvv("Returned Data: %s" % json.dumps(json_data, indent=4, sort_keys=True))
if 200 <= response.status <= 299:
return json_data
if response.status == 404 and json_data['status'] == "No objects found.":
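
The new `try`/`except` above catches the `HTTPError` that `open_url` raises on non-2xx responses, so a 404 whose JSON body says `"No objects found."` can be turned into a clear host-filter message. A self-contained sketch (with `ParserError` standing in for `AnsibleParserError`), driving it with a synthetic `urllib.error.HTTPError`:

```python
import io
import json
from urllib.error import HTTPError

class ParserError(Exception):
    """Stand-in for AnsibleParserError."""

def handle_http_error(e):
    # Mirror the plugin's handling: parse the JSON error body if possible.
    try:
        error_body = json.loads(e.read().decode())
    except Exception:
        error_body = {"status": None}
    if e.code == 404 and error_body.get('status') == "No objects found.":
        raise ParserError("Host filter returned no data. Please confirm your host_filter value is valid")
    raise ParserError("Unexpected data returned: {0} -- {1}".format(e, error_body))

# Simulate the API returning a 404 with Icinga2's "no objects" body.
body = io.BytesIO(json.dumps({"status": "No objects found."}).encode())
err = HTTPError("https://icinga2.example/v1/objects/hosts", 404, "Not Found", {}, body)
try:
    handle_http_error(err)
    outcome = None
except ParserError as exc:
    outcome = str(exc)
assert "Host filter returned no data" in outcome
```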
@@ -155,7 +180,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
"""Query for all hosts """
self.display.vvv("Querying Icinga2 for inventory")
query_args = {
"attrs": ["address", "state_type", "state", "groups"],
"attrs": ["address", "display_name", "state_type", "state", "groups"],
}
if self.host_filter is not None:
query_args['host_filter'] = self.host_filter
@@ -177,24 +202,35 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
"""Convert Icinga2 API data to JSON format for Ansible"""
groups_dict = {"_meta": {"hostvars": {}}}
for entry in json_data:
host_name = entry['name']
host_attrs = entry['attrs']
if self.inventory_attr == "name":
host_name = entry.get('name')
if self.inventory_attr == "address":
# When looking up the address for the inventory, fall back to the object name if missing
if host_attrs.get('address', '') != '':
host_name = host_attrs.get('address')
else:
host_name = entry.get('name')
if self.inventory_attr == "display_name":
host_name = host_attrs.get('display_name')
if host_attrs['state'] == 0:
host_attrs['state'] = 'on'
else:
host_attrs['state'] = 'off'
host_groups = host_attrs['groups']
host_addr = host_attrs['address']
self.inventory.add_host(host_addr)
host_groups = host_attrs.get('groups')
self.inventory.add_host(host_name)
for group in host_groups:
if group not in self.inventory.groups.keys():
self.inventory.add_group(group)
self.inventory.add_child(group, host_addr)
self.inventory.set_variable(host_addr, 'address', host_addr)
self.inventory.set_variable(host_addr, 'hostname', host_name)
self.inventory.set_variable(host_addr, 'state',
self.inventory.add_child(group, host_name)
# If the address attribute is populated, override ansible_host with the value
if host_attrs.get('address') != '':
self.inventory.set_variable(host_name, 'ansible_host', host_attrs.get('address'))
self.inventory.set_variable(host_name, 'hostname', entry.get('name'))
self.inventory.set_variable(host_name, 'display_name', host_attrs.get('display_name'))
self.inventory.set_variable(host_name, 'state',
host_attrs['state'])
self.inventory.set_variable(host_addr, 'state_type',
self.inventory.set_variable(host_name, 'state_type',
host_attrs['state_type'])
return groups_dict
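
The name-selection logic added above can be read in isolation: `address` stays the default `inventory_attr`, but an empty or missing address now falls back to the object name instead of producing an unusable host entry. A sketch with `pick_host_name` as a hypothetical helper name:

```python
def pick_host_name(entry, inventory_attr):
    """Mirror the inventory_attr handling from the Icinga2 inventory plugin."""
    host_attrs = entry['attrs']
    if inventory_attr == "name":
        return entry.get('name')
    if inventory_attr == "display_name":
        return host_attrs.get('display_name')
    # default 'address': fall back to the object name when the address is unset
    if host_attrs.get('address', '') != '':
        return host_attrs.get('address')
    return entry.get('name')

entry = {'name': 'web01', 'attrs': {'address': '', 'display_name': 'Web 01'}}
assert pick_host_name(entry, 'address') == 'web01'       # empty address -> object name
assert pick_host_name(entry, 'display_name') == 'Web 01'
```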
@@ -211,6 +247,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
self.icinga2_password = self.get_option('password')
self.ssl_verify = self.get_option('validate_certs')
self.host_filter = self.get_option('host_filter')
self.inventory_attr = self.get_option('inventory_attr')
# Not currently enabled
# self.cache_key = self.get_cache_key(path)
# self.use_cache = cache and self.get_option('cache')

View File

@@ -15,6 +15,7 @@ DOCUMENTATION = r'''
author: "Frank Dornheim (@conloos)"
requirements:
- ipaddress
- lxd >= 4.0
options:
plugin:
description: Token that ensures this is a source file for the 'lxd' plugin.
@@ -49,26 +50,38 @@ DOCUMENTATION = r'''
- If I(trust_password) is set, this module send a request for authentication before sending any requests.
type: str
state:
description: Filter the container according to the current status.
description: Filter the instance according to the current status.
type: str
default: none
choices: [ 'STOPPED', 'STARTING', 'RUNNING', 'none' ]
prefered_container_network_interface:
type_filter:
description:
- If a container has multiple network interfaces, select which one is the prefered as pattern.
- Filter the instances by type C(virtual-machine), C(container) or C(both).
- The first version of the inventory only supported containers.
type: str
default: container
choices: [ 'virtual-machine', 'container', 'both' ]
version_added: 4.2.0
prefered_instance_network_interface:
description:
- If an instance has multiple network interfaces, select the preferred one by name pattern.
- Combined with the first number that can be found e.g. 'eth' + 0.
- The option has been renamed from I(prefered_container_network_interface) to I(prefered_instance_network_interface) in community.general 3.8.0.
The old name still works as an alias.
type: str
default: eth
prefered_container_network_family:
aliases:
- prefered_container_network_interface
prefered_instance_network_family:
description:
- If a container has multiple network interfaces, which one is the prefered by family.
- If an instance has multiple network interfaces, select the preferred one by address family.
- Specify C(inet) for IPv4 and C(inet6) for IPv6.
type: str
default: inet
choices: [ 'inet', 'inet6' ]
groupby:
description:
- Create groups by the following keywords C(location), C(pattern), C(network_range), C(os), C(release), C(profile), C(vlanid).
- Create groups by the following keywords C(location), C(network_range), C(os), C(pattern), C(profile), C(release), C(type), C(vlanid).
- See example for syntax.
type: dict
'''
@@ -83,38 +96,49 @@ plugin: community.general.lxd
url: unix:/var/snap/lxd/common/lxd/unix.socket
state: RUNNING
# simple lxd.yml including virtual machines and containers
plugin: community.general.lxd
url: unix:/var/snap/lxd/common/lxd/unix.socket
type_filter: both
# grouping lxd.yml
groupby:
testpattern:
type: pattern
attribute: test
vlan666:
type: vlanid
attribute: 666
locationBerlin:
type: location
attribute: Berlin
osUbuntu:
type: os
attribute: ubuntu
releaseFocal:
type: release
attribute: focal
releaseBionic:
type: release
attribute: bionic
profileDefault:
type: profile
attribute: default
profileX11:
type: profile
attribute: x11
netRangeIPv4:
type: network_range
attribute: 10.98.143.0/24
netRangeIPv6:
type: network_range
attribute: fd42:bd00:7b11:2167:216:3eff::/24
osUbuntu:
type: os
attribute: ubuntu
testpattern:
type: pattern
attribute: test
profileDefault:
type: profile
attribute: default
profileX11:
type: profile
attribute: x11
releaseFocal:
type: release
attribute: focal
releaseBionic:
type: release
attribute: bionic
typeVM:
type: type
attribute: virtual-machine
typeContainer:
type: type
attribute: container
vlan666:
type: vlanid
attribute: 666
'''
import binascii
@@ -283,10 +307,10 @@ class InventoryModule(BaseInventoryPlugin):
network_configs = self.socket.do('GET', '/1.0/networks')
return [m.split('/')[3] for m in network_configs['metadata']]
def _get_containers(self):
"""Get Containernames
def _get_instances(self):
"""Get instance names
Returns all containernames
Returns all instance names
Args:
None
@@ -295,25 +319,27 @@ class InventoryModule(BaseInventoryPlugin):
Raises:
None
Returns:
list(names): names of all containers"""
# e.g. {'type': 'sync',
# 'status': 'Success',
# 'status_code': 200,
# 'operation': '',
# 'error_code': 0,
# 'error': '',
# 'metadata': ['/1.0/containers/udemy-ansible-ubuntu-2004']}
containers = self.socket.do('GET', '/1.0/containers')
return [m.split('/')[3] for m in containers['metadata']]
list(names): names of all instances"""
# e.g. {
# "metadata": [
# "/1.0/instances/foo",
# "/1.0/instances/bar"
# ],
# "status": "Success",
# "status_code": 200,
# "type": "sync"
# }
instances = self.socket.do('GET', '/1.0/instances')
return [m.split('/')[3] for m in instances['metadata']]
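
The endpoint switch above is the core of the VM support: `/1.0/instances` covers both containers and virtual machines, and the names are still the fourth path segment of each metadata URL. A sketch against the sample response shown in the comment:

```python
# Sample /1.0/instances response body, as in the docstring comment above.
response = {
    "metadata": [
        "/1.0/instances/foo",
        "/1.0/instances/bar",
    ],
    "status": "Success",
    "status_code": 200,
    "type": "sync",
}

# '/1.0/instances/foo'.split('/') -> ['', '1.0', 'instances', 'foo']
names = [m.split('/')[3] for m in response['metadata']]
assert names == ['foo', 'bar']
```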
def _get_config(self, branch, name):
"""Get inventory of container
"""Get inventory of instance
Get config of container
Get config of instance
Args:
str(branch): Name oft the API-Branch
str(name): Name of Container
str(name): Name of instance
Kwargs:
None
Source:
@@ -321,7 +347,7 @@ class InventoryModule(BaseInventoryPlugin):
Raises:
None
Returns:
dict(config): Config of the container"""
dict(config): Config of the instance"""
config = {}
if isinstance(branch, (tuple, list)):
config[name] = {branch[1]: self.socket.do('GET', '/1.0/{0}/{1}/{2}'.format(to_native(branch[0]), to_native(name), to_native(branch[1])))}
@@ -329,13 +355,13 @@ class InventoryModule(BaseInventoryPlugin):
config[name] = {branch: self.socket.do('GET', '/1.0/{0}/{1}'.format(to_native(branch), to_native(name)))}
return config
def get_container_data(self, names):
"""Create Inventory of the container
def get_instance_data(self, names):
"""Create Inventory of the instance
Iterate through the different branches of the containers and collect Informations.
Iterate through the different branches of the instances and collect information.
Args:
list(names): List of container names
list(names): List of instance names
Kwargs:
None
Raises:
@@ -344,20 +370,20 @@ class InventoryModule(BaseInventoryPlugin):
None"""
# tuple(('instances','metadata/templates')) to get section in branch
# e.g. /1.0/instances/<name>/metadata/templates
branches = ['containers', ('instances', 'state')]
container_config = {}
branches = ['instances', ('instances', 'state')]
instance_config = {}
for branch in branches:
for name in names:
container_config['containers'] = self._get_config(branch, name)
self.data = dict_merge(container_config, self.data)
instance_config['instances'] = self._get_config(branch, name)
self.data = dict_merge(instance_config, self.data)
def get_network_data(self, names):
"""Create Inventory of the container
"""Create Inventory of the instance
Iterate through the different branches of the containers and collect Informations.
Iterate through the different branches of the instances and collect information.
Args:
list(names): List of container names
list(names): List of instance names
Kwargs:
None
Raises:
@@ -376,26 +402,26 @@ class InventoryModule(BaseInventoryPlugin):
network_config['networks'] = {name: None}
self.data = dict_merge(network_config, self.data)
def extract_network_information_from_container_config(self, container_name):
def extract_network_information_from_instance_config(self, instance_name):
"""Returns the network interface configuration
Returns the network ipv4 and ipv6 config of the container without local-link
Returns the network ipv4 and ipv6 config of the instance without local-link
Args:
str(container_name): Name oft he container
str(instance_name): Name of the instance
Kwargs:
None
Raises:
None
Returns:
dict(network_configuration): network config"""
container_network_interfaces = self._get_data_entry('containers/{0}/state/metadata/network'.format(container_name))
instance_network_interfaces = self._get_data_entry('instances/{0}/state/metadata/network'.format(instance_name))
network_configuration = None
if container_network_interfaces:
if instance_network_interfaces:
network_configuration = {}
gen_interface_names = [interface_name for interface_name in container_network_interfaces if interface_name != 'lo']
gen_interface_names = [interface_name for interface_name in instance_network_interfaces if interface_name != 'lo']
for interface_name in gen_interface_names:
gen_address = [address for address in container_network_interfaces[interface_name]['addresses'] if address.get('scope') != 'link']
gen_address = [address for address in instance_network_interfaces[interface_name]['addresses'] if address.get('scope') != 'link']
network_configuration[interface_name] = []
for address in gen_address:
address_set = {}
@@ -406,24 +432,24 @@ class InventoryModule(BaseInventoryPlugin):
network_configuration[interface_name].append(address_set)
return network_configuration
def get_prefered_container_network_interface(self, container_name):
"""Helper to get the prefered interface of thr container
def get_prefered_instance_network_interface(self, instance_name):
"""Helper to get the preferred interface of the instance
Helper to get the prefered interface provide by neme pattern from 'prefered_container_network_interface'.
Helper to get the preferred interface provided by the name pattern from 'prefered_instance_network_interface'.
Args:
str(containe_name): name of container
str(instance_name): name of the instance
Kwargs:
None
Raises:
None
Returns:
str(prefered_interface): None or interface name"""
container_network_interfaces = self._get_data_entry('inventory/{0}/network_interfaces'.format(container_name))
instance_network_interfaces = self._get_data_entry('inventory/{0}/network_interfaces'.format(instance_name))
prefered_interface = None # init
if container_network_interfaces: # container have network interfaces
if instance_network_interfaces:  # instance has network interfaces
# generator if interfaces which start with the desired pattern
net_generator = [interface for interface in container_network_interfaces if interface.startswith(self.prefered_container_network_interface)]
net_generator = [interface for interface in instance_network_interfaces if interface.startswith(self.prefered_instance_network_interface)]
selected_interfaces = [] # init
for interface in net_generator:
selected_interfaces.append(interface)
@@ -431,13 +457,13 @@ class InventoryModule(BaseInventoryPlugin):
prefered_interface = sorted(selected_interfaces)[0]
return prefered_interface
def get_container_vlans(self, container_name):
"""Get VLAN(s) from container
def get_instance_vlans(self, instance_name):
"""Get VLAN(s) from instance
Helper to get the VLAN_ID from the container
Helper to get the VLAN_ID from the instance
Args:
str(containe_name): name of container
str(instance_name): name of the instance
Kwargs:
None
Raises:
@@ -450,13 +476,13 @@ class InventoryModule(BaseInventoryPlugin):
if self._get_data_entry('state/metadata/vlan/vid', data=self.data['networks'].get(network)):
network_vlans[network] = self._get_data_entry('state/metadata/vlan/vid', data=self.data['networks'].get(network))
# get networkdevices of container and return
# get network devices of instance and return
# e.g.
# "eth0":{ "name":"eth0",
# "network":"lxdbr0",
# "type":"nic"},
vlan_ids = {}
devices = self._get_data_entry('containers/{0}/containers/metadata/expanded_devices'.format(to_native(container_name)))
devices = self._get_data_entry('instances/{0}/instances/metadata/expanded_devices'.format(to_native(instance_name)))
for device in devices:
if 'network' in devices[device]:
if devices[device]['network'] in network_vlans:
@@ -492,14 +518,14 @@ class InventoryModule(BaseInventoryPlugin):
except KeyError:
return None
def _set_data_entry(self, container_name, key, value, path=None):
def _set_data_entry(self, instance_name, key, value, path=None):
"""Helper to save data
Helper to save the data in self.data
Detect if data is already in the branch and use dict_merge() to prevent the branch from being overwritten.
Args:
str(container_name): name of container
str(instance_name): name of instance
str(key): same as dict
*(value): same as dict
Kwargs:
@@ -510,24 +536,24 @@ class InventoryModule(BaseInventoryPlugin):
None"""
if not path:
path = self.data['inventory']
if container_name not in path:
path[container_name] = {}
if instance_name not in path:
path[instance_name] = {}
try:
if isinstance(value, dict) and key in path[container_name]:
path[container_name] = dict_merge(value, path[container_name][key])
if isinstance(value, dict) and key in path[instance_name]:
path[instance_name] = dict_merge(value, path[instance_name][key])
else:
path[container_name][key] = value
path[instance_name][key] = value
except KeyError as err:
raise AnsibleParserError("Unable to store information: {0}".format(to_native(err)))
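
The merge branch in `_set_data_entry` above only triggers when a dict value is stored under an existing key; then the old sub-branch is merged into the new value rather than replaced. A sketch with `dict_merge` as a minimal stand-in for ansible's utility (entries from the first argument win on conflict, nested dicts merge recursively):

```python
def dict_merge(a, b):
    """Recursively merge dict a into dict b, a taking precedence."""
    result = dict(b)
    for key, value in a.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = dict_merge(value, result[key])
        else:
            result[key] = value
    return result

# Storing another dict under an existing 'state' key merges rather than replaces.
merged = dict_merge({'state': {'power': 'on'}}, {'state': {'pid': 42}, 'os': 'ubuntu'})
assert merged == {'state': {'power': 'on', 'pid': 42}, 'os': 'ubuntu'}
```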
def extract_information_from_container_configs(self):
def extract_information_from_instance_configs(self):
"""Process configuration information
Preparation of the data
Args:
dict(configs): Container configurations
dict(configs): instance configurations
Kwargs:
None
Raises:
@@ -538,33 +564,35 @@ class InventoryModule(BaseInventoryPlugin):
if 'inventory' not in self.data:
self.data['inventory'] = {}
for container_name in self.data['containers']:
self._set_data_entry(container_name, 'os', self._get_data_entry(
'containers/{0}/containers/metadata/config/image.os'.format(container_name)))
self._set_data_entry(container_name, 'release', self._get_data_entry(
'containers/{0}/containers/metadata/config/image.release'.format(container_name)))
self._set_data_entry(container_name, 'version', self._get_data_entry(
'containers/{0}/containers/metadata/config/image.version'.format(container_name)))
self._set_data_entry(container_name, 'profile', self._get_data_entry(
'containers/{0}/containers/metadata/profiles'.format(container_name)))
self._set_data_entry(container_name, 'location', self._get_data_entry(
'containers/{0}/containers/metadata/location'.format(container_name)))
self._set_data_entry(container_name, 'state', self._get_data_entry(
'containers/{0}/containers/metadata/config/volatile.last_state.power'.format(container_name)))
self._set_data_entry(container_name, 'network_interfaces', self.extract_network_information_from_container_config(container_name))
self._set_data_entry(container_name, 'preferred_interface', self.get_prefered_container_network_interface(container_name))
self._set_data_entry(container_name, 'vlan_ids', self.get_container_vlans(container_name))
for instance_name in self.data['instances']:
self._set_data_entry(instance_name, 'os', self._get_data_entry(
'instances/{0}/instances/metadata/config/image.os'.format(instance_name)))
self._set_data_entry(instance_name, 'release', self._get_data_entry(
'instances/{0}/instances/metadata/config/image.release'.format(instance_name)))
self._set_data_entry(instance_name, 'version', self._get_data_entry(
'instances/{0}/instances/metadata/config/image.version'.format(instance_name)))
self._set_data_entry(instance_name, 'profile', self._get_data_entry(
'instances/{0}/instances/metadata/profiles'.format(instance_name)))
self._set_data_entry(instance_name, 'location', self._get_data_entry(
'instances/{0}/instances/metadata/location'.format(instance_name)))
self._set_data_entry(instance_name, 'state', self._get_data_entry(
'instances/{0}/instances/metadata/config/volatile.last_state.power'.format(instance_name)))
self._set_data_entry(instance_name, 'type', self._get_data_entry(
'instances/{0}/instances/metadata/type'.format(instance_name)))
self._set_data_entry(instance_name, 'network_interfaces', self.extract_network_information_from_instance_config(instance_name))
self._set_data_entry(instance_name, 'preferred_interface', self.get_prefered_instance_network_interface(instance_name))
self._set_data_entry(instance_name, 'vlan_ids', self.get_instance_vlans(instance_name))
def build_inventory_network(self, container_name):
"""Add the network interfaces of the container to the inventory
def build_inventory_network(self, instance_name):
"""Add the network interfaces of the instance to the inventory
Logic:
- if the container have no interface -> 'ansible_connection: local'
- get preferred_interface & prefered_container_network_family -> 'ansible_connection: ssh' & 'ansible_host: <IP>'
- first Interface from: network_interfaces prefered_container_network_family -> 'ansible_connection: ssh' & 'ansible_host: <IP>'
- if the instance has no interface -> 'ansible_connection: local'
- get preferred_interface & prefered_instance_network_family -> 'ansible_connection: ssh' & 'ansible_host: <IP>'
- first Interface from: network_interfaces prefered_instance_network_family -> 'ansible_connection: ssh' & 'ansible_host: <IP>'
Args:
str(container_name): name of container
str(instance_name): name of instance
Kwargs:
None
Raises:
@@ -572,45 +600,45 @@ class InventoryModule(BaseInventoryPlugin):
Returns:
None"""
def interface_selection(container_name):
"""Select container Interface for inventory
def interface_selection(instance_name):
"""Select instance Interface for inventory
Logic:
- get preferred_interface & prefered_container_network_family -> str(IP)
- first Interface from: network_interfaces prefered_container_network_family -> str(IP)
- get preferred_interface & prefered_instance_network_family -> str(IP)
- first Interface from: network_interfaces prefered_instance_network_family -> str(IP)
Args:
str(container_name): name of container
str(instance_name): name of instance
Kwargs:
None
Raises:
None
Returns:
dict(interface_name: ip)"""
prefered_interface = self._get_data_entry('inventory/{0}/preferred_interface'.format(container_name)) # name or None
prefered_container_network_family = self.prefered_container_network_family
prefered_interface = self._get_data_entry('inventory/{0}/preferred_interface'.format(instance_name)) # name or None
prefered_instance_network_family = self.prefered_instance_network_family
ip_address = ''
if prefered_interface:
interface = self._get_data_entry('inventory/{0}/network_interfaces/{1}'.format(container_name, prefered_interface))
interface = self._get_data_entry('inventory/{0}/network_interfaces/{1}'.format(instance_name, prefered_interface))
for config in interface:
if config['family'] == prefered_container_network_family:
if config['family'] == prefered_instance_network_family:
ip_address = config['address']
break
else:
interface = self._get_data_entry('inventory/{0}/network_interfaces'.format(container_name))
for config in interface:
if config['family'] == prefered_container_network_family:
ip_address = config['address']
break
interfaces = self._get_data_entry('inventory/{0}/network_interfaces'.format(instance_name))
for interface in interfaces.values():
for config in interface:
if config['family'] == prefered_instance_network_family:
ip_address = config['address']
break
return ip_address
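
The fallback branch rewritten above fixes a real bug: the old code iterated the interfaces dict directly (i.e. its keys), so `config['family']` indexed into a string; the new code iterates `interfaces.values()`, where each value is a list of address dicts. A sketch of the corrected selection, with `first_address` as a hypothetical helper name:

```python
def first_address(interfaces, family='inet'):
    """Return the first address of the given family across all interfaces."""
    for interface in interfaces.values():   # lists of address dicts, not keys
        for config in interface:
            if config['family'] == family:
                return config['address']
    return ''

interfaces = {
    'eth0': [
        {'family': 'inet6', 'address': 'fd42::10'},
        {'family': 'inet', 'address': '10.98.143.10'},
    ],
}
assert first_address(interfaces) == '10.98.143.10'
assert first_address(interfaces, 'inet6') == 'fd42::10'
```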
if self._get_data_entry('inventory/{0}/network_interfaces'.format(container_name)): # container have network interfaces
if self._get_data_entry('inventory/{0}/preferred_interface'.format(container_name)): # container have a preferred interface
self.inventory.set_variable(container_name, 'ansible_connection', 'ssh')
self.inventory.set_variable(container_name, 'ansible_host', interface_selection(container_name))
if self._get_data_entry('inventory/{0}/network_interfaces'.format(instance_name)):  # instance has network interfaces
self.inventory.set_variable(instance_name, 'ansible_connection', 'ssh')
self.inventory.set_variable(instance_name, 'ansible_host', interface_selection(instance_name))
else:
self.inventory.set_variable(container_name, 'ansible_connection', 'local')
self.inventory.set_variable(instance_name, 'ansible_connection', 'local')
def build_inventory_hosts(self):
"""Build host-part dynamic inventory
@@ -626,29 +654,33 @@ class InventoryModule(BaseInventoryPlugin):
None
Returns:
None"""
for container_name in self.data['inventory']:
# Only consider containers that match the "state" filter, if self.state is not None
for instance_name in self.data['inventory']:
instance_state = str(self._get_data_entry('inventory/{0}/state'.format(instance_name)) or "STOPPED").lower()
# Only consider instances that match the "state" filter, if self.state is not None
if self.filter:
if self.filter.lower() != self._get_data_entry('inventory/{0}/state'.format(container_name)).lower():
if self.filter.lower() != instance_state:
continue
# add container
self.inventory.add_host(container_name)
# add instance
self.inventory.add_host(instance_name)
# add network informations
self.build_inventory_network(container_name)
self.build_inventory_network(instance_name)
# add os
self.inventory.set_variable(container_name, 'ansible_lxd_os', self._get_data_entry('inventory/{0}/os'.format(container_name)).lower())
self.inventory.set_variable(instance_name, 'ansible_lxd_os', self._get_data_entry('inventory/{0}/os'.format(instance_name)).lower())
# add release
self.inventory.set_variable(container_name, 'ansible_lxd_release', self._get_data_entry('inventory/{0}/release'.format(container_name)).lower())
self.inventory.set_variable(instance_name, 'ansible_lxd_release', self._get_data_entry('inventory/{0}/release'.format(instance_name)).lower())
# add profile
self.inventory.set_variable(container_name, 'ansible_lxd_profile', self._get_data_entry('inventory/{0}/profile'.format(container_name)))
self.inventory.set_variable(instance_name, 'ansible_lxd_profile', self._get_data_entry('inventory/{0}/profile'.format(instance_name)))
# add state
self.inventory.set_variable(container_name, 'ansible_lxd_state', self._get_data_entry('inventory/{0}/state'.format(container_name)).lower())
self.inventory.set_variable(instance_name, 'ansible_lxd_state', instance_state)
# add type
self.inventory.set_variable(instance_name, 'ansible_lxd_type', self._get_data_entry('inventory/{0}/type'.format(instance_name)))
# add location information
if self._get_data_entry('inventory/{0}/location'.format(container_name)) != "none": # wrong type by lxd 'none' != 'None'
self.inventory.set_variable(container_name, 'ansible_lxd_location', self._get_data_entry('inventory/{0}/location'.format(container_name)))
if self._get_data_entry('inventory/{0}/location'.format(instance_name)) != "none": # wrong type by lxd 'none' != 'None'
self.inventory.set_variable(instance_name, 'ansible_lxd_location', self._get_data_entry('inventory/{0}/location'.format(instance_name)))
# add VLAN_ID information
if self._get_data_entry('inventory/{0}/vlan_ids'.format(container_name)):
self.inventory.set_variable(container_name, 'ansible_lxd_vlan_ids', self._get_data_entry('inventory/{0}/vlan_ids'.format(container_name)))
if self._get_data_entry('inventory/{0}/vlan_ids'.format(instance_name)):
self.inventory.set_variable(instance_name, 'ansible_lxd_vlan_ids', self._get_data_entry('inventory/{0}/vlan_ids'.format(instance_name)))
def build_inventory_groups_location(self, group_name):
"""create group by attribute: location
@@ -665,9 +697,9 @@ class InventoryModule(BaseInventoryPlugin):
if group_name not in self.inventory.groups:
self.inventory.add_group(group_name)
for container_name in self.inventory.hosts:
if 'ansible_lxd_location' in self.inventory.get_host(container_name).get_vars():
self.inventory.add_child(group_name, container_name)
for instance_name in self.inventory.hosts:
if 'ansible_lxd_location' in self.inventory.get_host(instance_name).get_vars():
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups_pattern(self, group_name):
"""create group by name pattern
@@ -686,10 +718,10 @@ class InventoryModule(BaseInventoryPlugin):
regex_pattern = self.groupby[group_name].get('attribute')
for container_name in self.inventory.hosts:
result = re.search(regex_pattern, container_name)
for instance_name in self.inventory.hosts:
result = re.search(regex_pattern, instance_name)
if result:
self.inventory.add_child(group_name, container_name)
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups_network_range(self, group_name):
"""check if IP is in network-class
@@ -712,14 +744,14 @@ class InventoryModule(BaseInventoryPlugin):
raise AnsibleParserError(
'Error while parsing network range {0}: {1}'.format(self.groupby[group_name].get('attribute'), to_native(err)))
for container_name in self.inventory.hosts:
if self.data['inventory'][container_name].get('network_interfaces') is not None:
for interface in self.data['inventory'][container_name].get('network_interfaces'):
for interface_family in self.data['inventory'][container_name].get('network_interfaces')[interface]:
for instance_name in self.inventory.hosts:
if self.data['inventory'][instance_name].get('network_interfaces') is not None:
for interface in self.data['inventory'][instance_name].get('network_interfaces'):
for interface_family in self.data['inventory'][instance_name].get('network_interfaces')[interface]:
try:
address = ipaddress.ip_address(to_text(interface_family['address']))
if address.version == network.version and address in network:
self.inventory.add_child(group_name, container_name)
self.inventory.add_child(group_name, instance_name)
except ValueError:
# Ignore invalid IP addresses returned by lxd
pass
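The membership test above leans on Python's standard `ipaddress` module: a host joins the group only when its address has the same IP version as the configured network and falls inside it, and invalid addresses returned by lxd are silently skipped. A minimal standalone sketch of just that check (hypothetical network value, not from the commit):

```python
import ipaddress

# Same logic as the inventory plugin's network-range grouping, detached
# from the plugin: version must match and the address must be in-range.
network = ipaddress.ip_network('10.0.0.0/24')

def in_range(addr):
    try:
        address = ipaddress.ip_address(addr)
    except ValueError:
        # lxd may report non-IP strings; ignore them, as the plugin does
        return False
    return address.version == network.version and address in network
```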
@@ -730,7 +762,7 @@ class InventoryModule(BaseInventoryPlugin):
Args:
str(group_name): Group name
Kwargs:
Noneself.data['inventory'][container_name][interface]
None
Raises:
None
Returns:
@@ -739,12 +771,12 @@ class InventoryModule(BaseInventoryPlugin):
if group_name not in self.inventory.groups:
self.inventory.add_group(group_name)
gen_containers = [
container_name for container_name in self.inventory.hosts
if 'ansible_lxd_os' in self.inventory.get_host(container_name).get_vars()]
for container_name in gen_containers:
if self.groupby[group_name].get('attribute').lower() == self.inventory.get_host(container_name).get_vars().get('ansible_lxd_os'):
self.inventory.add_child(group_name, container_name)
gen_instances = [
instance_name for instance_name in self.inventory.hosts
if 'ansible_lxd_os' in self.inventory.get_host(instance_name).get_vars()]
for instance_name in gen_instances:
if self.groupby[group_name].get('attribute').lower() == self.inventory.get_host(instance_name).get_vars().get('ansible_lxd_os'):
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups_release(self, group_name):
"""create group by attribute: release
@@ -761,12 +793,12 @@ class InventoryModule(BaseInventoryPlugin):
if group_name not in self.inventory.groups:
self.inventory.add_group(group_name)
gen_containers = [
container_name for container_name in self.inventory.hosts
if 'ansible_lxd_release' in self.inventory.get_host(container_name).get_vars()]
for container_name in gen_containers:
if self.groupby[group_name].get('attribute').lower() == self.inventory.get_host(container_name).get_vars().get('ansible_lxd_release'):
self.inventory.add_child(group_name, container_name)
gen_instances = [
instance_name for instance_name in self.inventory.hosts
if 'ansible_lxd_release' in self.inventory.get_host(instance_name).get_vars()]
for instance_name in gen_instances:
if self.groupby[group_name].get('attribute').lower() == self.inventory.get_host(instance_name).get_vars().get('ansible_lxd_release'):
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups_profile(self, group_name):
"""create group by attribute: profile
@@ -783,12 +815,12 @@ class InventoryModule(BaseInventoryPlugin):
if group_name not in self.inventory.groups:
self.inventory.add_group(group_name)
gen_containers = [
container_name for container_name in self.inventory.hosts.keys()
if 'ansible_lxd_profile' in self.inventory.get_host(container_name).get_vars().keys()]
for container_name in gen_containers:
if self.groupby[group_name].get('attribute').lower() in self.inventory.get_host(container_name).get_vars().get('ansible_lxd_profile'):
self.inventory.add_child(group_name, container_name)
gen_instances = [
instance_name for instance_name in self.inventory.hosts.keys()
if 'ansible_lxd_profile' in self.inventory.get_host(instance_name).get_vars().keys()]
for instance_name in gen_instances:
if self.groupby[group_name].get('attribute').lower() in self.inventory.get_host(instance_name).get_vars().get('ansible_lxd_profile'):
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups_vlanid(self, group_name):
"""create group by attribute: vlanid
@@ -805,12 +837,34 @@ class InventoryModule(BaseInventoryPlugin):
if group_name not in self.inventory.groups:
self.inventory.add_group(group_name)
gen_containers = [
container_name for container_name in self.inventory.hosts.keys()
if 'ansible_lxd_vlan_ids' in self.inventory.get_host(container_name).get_vars().keys()]
for container_name in gen_containers:
if self.groupby[group_name].get('attribute') in self.inventory.get_host(container_name).get_vars().get('ansible_lxd_vlan_ids').values():
self.inventory.add_child(group_name, container_name)
gen_instances = [
instance_name for instance_name in self.inventory.hosts.keys()
if 'ansible_lxd_vlan_ids' in self.inventory.get_host(instance_name).get_vars().keys()]
for instance_name in gen_instances:
if self.groupby[group_name].get('attribute') in self.inventory.get_host(instance_name).get_vars().get('ansible_lxd_vlan_ids').values():
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups_type(self, group_name):
"""create group by attribute: type
Args:
str(group_name): Group name
Kwargs:
None
Raises:
None
Returns:
None"""
# maybe we just want to expand one group
if group_name not in self.inventory.groups:
self.inventory.add_group(group_name)
gen_instances = [
instance_name for instance_name in self.inventory.hosts
if 'ansible_lxd_type' in self.inventory.get_host(instance_name).get_vars()]
for instance_name in gen_instances:
if self.groupby[group_name].get('attribute').lower() == self.inventory.get_host(instance_name).get_vars().get('ansible_lxd_type'):
self.inventory.add_child(group_name, instance_name)
def build_inventory_groups(self):
"""Build group-part dynamic inventory
@@ -839,6 +893,7 @@ class InventoryModule(BaseInventoryPlugin):
* 'release'
* 'profile'
* 'vlanid'
* 'type'
Args:
str(group_name): Group name
@@ -864,6 +919,8 @@ class InventoryModule(BaseInventoryPlugin):
self.build_inventory_groups_profile(group_name)
elif self.groupby[group_name].get('type') == 'vlanid':
self.build_inventory_groups_vlanid(group_name)
elif self.groupby[group_name].get('type') == 'type':
self.build_inventory_groups_type(group_name)
else:
raise AnsibleParserError('Unknown group type: {0}'.format(to_native(group_name)))
@@ -890,10 +947,30 @@ class InventoryModule(BaseInventoryPlugin):
self.build_inventory_hosts()
self.build_inventory_groups()
def cleandata(self):
"""Clean the dynamic inventory
The first version of the inventory only supported containers.
This will change in the future.
The following function cleans up the data and removes all items with the wrong type.
Args:
None
Kwargs:
None
Raises:
None
Returns:
None"""
iter_keys = list(self.data['instances'].keys())
for instance_name in iter_keys:
if self._get_data_entry('instances/{0}/instances/metadata/type'.format(instance_name)) != self.type_filter:
del self.data['instances'][instance_name]
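The `cleandata` pass above deletes every instance whose metadata type differs from the configured `type_filter`, iterating over a copied key list so deletion during iteration is safe. Detached from the plugin, with a simplified, hypothetical data layout, the same idea looks like this:

```python
def clean_instances(data, type_filter):
    """Drop every entry whose 'type' does not match type_filter.

    `data` stands in (simplified) for the plugin's self.data['instances']
    store; list(data.keys()) snapshots the keys so we can delete while
    looping.
    """
    for name in list(data.keys()):
        if data[name].get('type') != type_filter:
            del data[name]
    return data
```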
def _populate(self):
"""Return the hosts and groups
Returns the processed container configurations from the lxd import
Returns the processed instance configurations from the lxd import
Args:
None
@@ -906,10 +983,16 @@ class InventoryModule(BaseInventoryPlugin):
if len(self.data) == 0: # If no data is injected by unittests open socket
self.socket = self._connect_to_socket()
self.get_container_data(self._get_containers())
self.get_instance_data(self._get_instances())
self.get_network_data(self._get_networks())
self.extract_information_from_container_configs()
# The first version of the inventory only supported containers.
# This will change in the future.
# The following function cleans up the data.
if self.type_filter != 'both':
self.cleandata()
self.extract_information_from_instance_configs()
# self.display.vvv(self.save_json_data([os.path.abspath(__file__)]))
@@ -948,8 +1031,9 @@ class InventoryModule(BaseInventoryPlugin):
self.data = {} # store for inventory-data
self.groupby = self.get_option('groupby')
self.plugin = self.get_option('plugin')
self.prefered_container_network_family = self.get_option('prefered_container_network_family')
self.prefered_container_network_interface = self.get_option('prefered_container_network_interface')
self.prefered_instance_network_family = self.get_option('prefered_instance_network_family')
self.prefered_instance_network_interface = self.get_option('prefered_instance_network_interface')
self.type_filter = self.get_option('type_filter')
if self.get_option('state').lower() == 'none': # none in config is str()
self.filter = None
else:


@@ -8,7 +8,7 @@ __metaclass__ = type
DOCUMENTATION = r'''
name: online
author:
- Remy Leone (@sieben)
- Remy Leone (@remyleone)
short_description: Scaleway (previously Online SAS or Online.net) inventory source
description:
- Get inventory hosts from Scaleway (previously Online SAS or Online.net).


@@ -9,7 +9,7 @@ __metaclass__ = type
DOCUMENTATION = r'''
name: scaleway
author:
- Remy Leone (@sieben)
- Remy Leone (@remyleone)
short_description: Scaleway inventory source
description:
- Get inventory hosts from Scaleway.


@@ -0,0 +1,328 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2021 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: xen_orchestra
short_description: Xen Orchestra inventory source
version_added: 4.1.0
author:
- Dom Del Nano (@ddelnano) <ddelnano@gmail.com>
- Samori Gorse (@shinuza) <samorigorse@gmail.com>
requirements:
- websocket-client >= 1.0.0
description:
- Get inventory hosts from a Xen Orchestra deployment.
- 'Uses a configuration file as an inventory source, it must end in C(.xen_orchestra.yml) or C(.xen_orchestra.yaml).'
extends_documentation_fragment:
- constructed
- inventory_cache
options:
plugin:
description: The name of this plugin, it should always be set to C(community.general.xen_orchestra) for this plugin to recognize it as its own.
required: yes
choices: ['community.general.xen_orchestra']
type: str
api_host:
description:
- API host to XOA API.
- If the value is not specified in the inventory configuration, the value of environment variable C(ANSIBLE_XO_HOST) will be used instead.
type: str
env:
- name: ANSIBLE_XO_HOST
user:
description:
- Xen Orchestra user.
- If the value is not specified in the inventory configuration, the value of environment variable C(ANSIBLE_XO_USER) will be used instead.
required: yes
type: str
env:
- name: ANSIBLE_XO_USER
password:
description:
- Xen Orchestra password.
- If the value is not specified in the inventory configuration, the value of environment variable C(ANSIBLE_XO_PASSWORD) will be used instead.
required: yes
type: str
env:
- name: ANSIBLE_XO_PASSWORD
validate_certs:
description: Verify TLS certificate if using HTTPS.
type: boolean
default: true
use_ssl:
description: Use wss when connecting to the Xen Orchestra API
type: boolean
default: true
'''
EXAMPLES = '''
# file must be named xen_orchestra.yaml or xen_orchestra.yml
simple_config_file:
plugin: community.general.xen_orchestra
api_host: 192.168.1.255
user: xo
password: xo_pwd
validate_certs: true
use_ssl: true
groups:
kube_nodes: "'kube_node' in tags"
compose:
ansible_port: 2222
'''
import json
import ssl
from distutils.version import LooseVersion
from ansible.errors import AnsibleError
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
# 3rd party imports
try:
HAS_WEBSOCKET = True
import websocket
from websocket import create_connection
if LooseVersion(websocket.__version__) <= LooseVersion('1.0.0'):
raise ImportError
except ImportError as e:
HAS_WEBSOCKET = False
HALTED = 'Halted'
PAUSED = 'Paused'
RUNNING = 'Running'
SUSPENDED = 'Suspended'
POWER_STATES = [RUNNING, HALTED, SUSPENDED, PAUSED]
HOST_GROUP = 'xo_hosts'
POOL_GROUP = 'xo_pools'
def clean_group_name(label):
return label.lower().replace(' ', '-').replace('-', '_')
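Note the order of operations in `clean_group_name`: spaces become dashes first, and then every dash, including dashes already present in the label, becomes an underscore. A quick self-contained check of that behaviour:

```python
def clean_group_name(label):
    # identical transformation to the plugin's helper:
    # lowercase, spaces -> '-', then every '-' -> '_'
    return label.lower().replace(' ', '-').replace('-', '_')
```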
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
''' Host inventory parser for ansible using XenOrchestra as source. '''
NAME = 'community.general.xen_orchestra'
def __init__(self):
super(InventoryModule, self).__init__()
# from config
self.counter = -1
self.session = None
self.cache_key = None
self.use_cache = None
@property
def pointer(self):
self.counter += 1
return self.counter
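Every JSON-RPC request sent to Xen Orchestra needs a fresh `id`; the `pointer` property above supplies one by bumping a counter on each read. A stripped-down version of just that mechanism:

```python
class Counter:
    # mirrors the plugin's `pointer` property: the counter starts at -1
    # so the first access returns 0, and every access increments it
    def __init__(self):
        self.counter = -1

    @property
    def pointer(self):
        self.counter += 1
        return self.counter
```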
def create_connection(self, xoa_api_host):
validate_certs = self.get_option('validate_certs')
use_ssl = self.get_option('use_ssl')
proto = 'wss' if use_ssl else 'ws'
sslopt = None if validate_certs else {'cert_reqs': ssl.CERT_NONE}
self.conn = create_connection(
'{0}://{1}/api/'.format(proto, xoa_api_host), sslopt=sslopt)
def login(self, user, password):
payload = {'id': self.pointer, 'jsonrpc': '2.0', 'method': 'session.signIn', 'params': {
'username': user, 'password': password}}
self.conn.send(json.dumps(payload))
result = json.loads(self.conn.recv())
if 'error' in result:
raise AnsibleError(
'Could not connect: {0}'.format(result['error']))
def get_object(self, name):
payload = {'id': self.pointer, 'jsonrpc': '2.0',
'method': 'xo.getAllObjects', 'params': {'filter': {'type': name}}}
self.conn.send(json.dumps(payload))
answer = json.loads(self.conn.recv())
if 'error' in answer:
raise AnsibleError(
'Could not request: {0}'.format(answer['error']))
return answer['result']
def _get_objects(self):
self.create_connection(self.xoa_api_host)
self.login(self.xoa_user, self.xoa_password)
return {
'vms': self.get_object('VM'),
'pools': self.get_object('pool'),
'hosts': self.get_object('host'),
}
def _apply_constructable(self, name, variables):
strict = self.get_option('strict')
self._add_host_to_composed_groups(self.get_option('groups'), variables, name, strict=strict)
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), variables, name, strict=strict)
self._set_composite_vars(self.get_option('compose'), variables, name, strict=strict)
def _add_vms(self, vms, hosts, pools):
for uuid, vm in vms.items():
group = 'with_ip'
ip = vm.get('mainIpAddress')
entry_name = uuid
power_state = vm['power_state'].lower()
pool_name = self._pool_group_name_for_uuid(pools, vm['$poolId'])
host_name = self._host_group_name_for_uuid(hosts, vm['$container'])
self.inventory.add_host(entry_name)
# Grouping by power state
self.inventory.add_child(power_state, entry_name)
# Grouping by host
if host_name:
self.inventory.add_child(host_name, entry_name)
# Grouping by pool
if pool_name:
self.inventory.add_child(pool_name, entry_name)
# Grouping VMs with an IP together
if ip is None:
group = 'without_ip'
self.inventory.add_group(group)
self.inventory.add_child(group, entry_name)
# Adding meta
self.inventory.set_variable(entry_name, 'uuid', uuid)
self.inventory.set_variable(entry_name, 'ip', ip)
self.inventory.set_variable(entry_name, 'ansible_host', ip)
self.inventory.set_variable(entry_name, 'power_state', power_state)
self.inventory.set_variable(
entry_name, 'name_label', vm['name_label'])
self.inventory.set_variable(entry_name, 'type', vm['type'])
self.inventory.set_variable(
entry_name, 'cpus', vm['CPUs']['number'])
self.inventory.set_variable(entry_name, 'tags', vm['tags'])
self.inventory.set_variable(
entry_name, 'memory', vm['memory']['size'])
self.inventory.set_variable(
entry_name, 'has_ip', group == 'with_ip')
self.inventory.set_variable(
entry_name, 'is_managed', vm.get('managementAgentDetected', False))
self.inventory.set_variable(
entry_name, 'os_version', vm['os_version'])
self._apply_constructable(entry_name, self.inventory.get_host(entry_name).get_vars())
def _add_hosts(self, hosts, pools):
for host in hosts.values():
entry_name = host['uuid']
group_name = 'xo_host_{0}'.format(
clean_group_name(host['name_label']))
pool_name = self._pool_group_name_for_uuid(pools, host['$poolId'])
self.inventory.add_group(group_name)
self.inventory.add_host(entry_name)
self.inventory.add_child(HOST_GROUP, entry_name)
self.inventory.add_child(pool_name, entry_name)
self.inventory.set_variable(entry_name, 'enabled', host['enabled'])
self.inventory.set_variable(
entry_name, 'hostname', host['hostname'])
self.inventory.set_variable(entry_name, 'memory', host['memory'])
self.inventory.set_variable(entry_name, 'address', host['address'])
self.inventory.set_variable(entry_name, 'cpus', host['cpus'])
self.inventory.set_variable(entry_name, 'type', 'host')
self.inventory.set_variable(entry_name, 'tags', host['tags'])
self.inventory.set_variable(entry_name, 'version', host['version'])
self.inventory.set_variable(
entry_name, 'power_state', host['power_state'].lower())
self.inventory.set_variable(
entry_name, 'product_brand', host['productBrand'])
for pool in pools.values():
group_name = 'xo_pool_{0}'.format(
clean_group_name(pool['name_label']))
self.inventory.add_group(group_name)
def _add_pools(self, pools):
for pool in pools.values():
group_name = 'xo_pool_{0}'.format(
clean_group_name(pool['name_label']))
self.inventory.add_group(group_name)
# TODO: Refactor
def _pool_group_name_for_uuid(self, pools, pool_uuid):
for pool in pools:
if pool == pool_uuid:
return 'xo_pool_{0}'.format(
clean_group_name(pools[pool_uuid]['name_label']))
# TODO: Refactor
def _host_group_name_for_uuid(self, hosts, host_uuid):
for host in hosts:
if host == host_uuid:
return 'xo_host_{0}'.format(
clean_group_name(hosts[host_uuid]['name_label']
))
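Both TODO-marked helpers above scan every key of the dict even though the uuid they are looking for is itself a key; a possible refactor (hypothetical, not part of this commit) is a direct `dict.get` lookup:

```python
def clean_group_name(label):
    # same normalization as the plugin's helper
    return label.lower().replace(' ', '-').replace('-', '_')

def pool_group_name_for_uuid(pools, pool_uuid):
    # direct lookup instead of iterating over all keys; returns None
    # (as the original does implicitly) when the uuid is unknown
    pool = pools.get(pool_uuid)
    if pool:
        return 'xo_pool_{0}'.format(clean_group_name(pool['name_label']))
    return None
```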
def _populate(self, objects):
# Prepare general groups
self.inventory.add_group(HOST_GROUP)
self.inventory.add_group(POOL_GROUP)
for group in POWER_STATES:
self.inventory.add_group(group.lower())
self._add_pools(objects['pools'])
self._add_hosts(objects['hosts'], objects['pools'])
self._add_vms(objects['vms'], objects['hosts'], objects['pools'])
def verify_file(self, path):
valid = False
if super(InventoryModule, self).verify_file(path):
if path.endswith(('xen_orchestra.yaml', 'xen_orchestra.yml')):
valid = True
else:
self.display.vvv(
'Skipping due to inventory source not ending in "xen_orchestra.yaml" nor "xen_orchestra.yml"')
return valid
def parse(self, inventory, loader, path, cache=True):
if not HAS_WEBSOCKET:
raise AnsibleError('This plugin requires websocket-client 1.0.0 or higher: '
'https://github.com/websocket-client/websocket-client.')
super(InventoryModule, self).parse(inventory, loader, path)
# read config from file, this sets 'options'
self._read_config_data(path)
self.inventory = inventory
self.protocol = 'wss'
self.xoa_api_host = self.get_option('api_host')
self.xoa_user = self.get_option('user')
self.xoa_password = self.get_option('password')
self.cache_key = self.get_cache_key(path)
self.use_cache = cache and self.get_option('cache')
self.validate_certs = self.get_option('validate_certs')
if not self.get_option('use_ssl'):
self.protocol = 'ws'
objects = self._get_objects()
self._populate(objects)


@@ -93,7 +93,7 @@ DOCUMENTATION = '''
environment variable and keep I(endpoints), I(host), and I(port) unused.
seealso:
- module: community.general.etcd3
- ref: etcd_lookup
- ref: ansible_collections.community.general.etcd_lookup
description: The etcd v2 lookup.
requirements:


@@ -0,0 +1,107 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, RevBits <info@revbits.com>
# GNU General Public License v3.0+ (see COPYING or
# https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
name: revbitspss
author: RevBits (@RevBits) <info@revbits.com>
short_description: Get secrets from RevBits PAM server
version_added: 4.1.0
description:
- Uses the revbits_ansible Python SDK to get Secrets from RevBits PAM
Server using API key authentication with the REST API.
requirements:
- revbits_ansible - U(https://pypi.org/project/revbits_ansible/)
options:
_terms:
description:
- This will be an array of keys for secrets which you want to fetch from RevBits PAM.
required: true
type: list
elements: string
base_url:
description:
- This will be the base URL of the server, for example C(https://server-url-here).
required: true
type: string
api_key:
description:
- This will be the API key for authentication. You can get it from the RevBits PAM secret manager module.
required: true
type: string
"""
RETURN = r"""
_list:
description:
- The JSON responses which you can access with defined keys.
- If you are fetching secrets named UUID and PASSWORD, it will give you a dict of all secrets.
type: list
elements: dict
"""
EXAMPLES = r"""
- hosts: localhost
vars:
secret: >-
{{
lookup(
'community.general.revbitspss',
'UUIDPAM', 'DB_PASS',
base_url='https://server-url-here',
api_key='API_KEY_GOES_HERE'
)
}}
tasks:
- ansible.builtin.debug:
msg: >
UUIDPAM is {{ (secret['UUIDPAM']) }} and DB_PASS is {{ (secret['DB_PASS']) }}
"""
from ansible.plugins.lookup import LookupBase
from ansible.utils.display import Display
from ansible.errors import AnsibleError
from ansible.module_utils.six import raise_from
try:
from pam.revbits_ansible.server import SecretServer
except ImportError as imp_exc:
ANOTHER_LIBRARY_IMPORT_ERROR = imp_exc
else:
ANOTHER_LIBRARY_IMPORT_ERROR = None
display = Display()
class LookupModule(LookupBase):
@staticmethod
def Client(server_parameters):
return SecretServer(**server_parameters)
def run(self, terms, variables, **kwargs):
if ANOTHER_LIBRARY_IMPORT_ERROR:
raise_from(
AnsibleError('revbits_ansible must be installed to use this plugin'),
ANOTHER_LIBRARY_IMPORT_ERROR
)
self.set_options(var_options=variables, direct=kwargs)
secret_server = LookupModule.Client(
{
"base_url": self.get_option('base_url'),
"api_key": self.get_option('api_key'),
}
)
result = []
for term in terms:
try:
display.vvv(u"Secret Server lookup of Secret with ID %s" % term)
result.append({term: secret_server.get_pam_secret(term)})
except Exception as error:
raise AnsibleError("Secret Server lookup failure: %s" % error.message)
return result


@@ -7,54 +7,41 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from distutils.version import StrictVersion
from ansible.module_utils.basic import missing_required_lib
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.common.text.converters import to_native
try:
from urllib import quote_plus # Python 2.X
from urlparse import urljoin
except ImportError:
from urllib.parse import quote_plus # Python 3+
from urllib.parse import quote_plus, urljoin # Python 3+
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
import requests
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
def request(module, api_url, project, path, access_token, private_token, rawdata='', method='GET'):
url = "%s/v4/projects/%s%s" % (api_url, quote_plus(project), path)
headers = {}
if access_token:
headers['Authorization'] = "Bearer %s" % access_token
else:
headers['Private-Token'] = private_token
headers['Accept'] = "application/json"
headers['Content-Type'] = "application/json"
response, info = fetch_url(module=module, url=url, headers=headers, data=rawdata, method=method)
status = info['status']
content = ""
if response:
content = response.read()
if status == 204:
return True, content
elif status == 200 or status == 201:
return True, json.loads(content)
else:
return False, str(status) + ": " + content
def auth_argument_spec(spec=None):
arg_spec = (dict(
api_token=dict(type='str', no_log=True),
api_oauth_token=dict(type='str', no_log=True),
api_job_token=dict(type='str', no_log=True),
))
if spec:
arg_spec.update(spec)
return arg_spec
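The new `auth_argument_spec` builds the shared GitLab token options and layers any module-specific spec on top via `dict.update`. Standalone, the merge behaves like this (the `avatar_path` option is only an illustrative stand-in for a module-specific key):

```python
def auth_argument_spec(spec=None):
    # shared GitLab auth options, as defined in the hunk above
    arg_spec = dict(
        api_token=dict(type='str', no_log=True),
        api_oauth_token=dict(type='str', no_log=True),
        api_job_token=dict(type='str', no_log=True),
    )
    if spec:
        # module-specific options are merged in and may override defaults
        arg_spec.update(spec)
    return arg_spec
```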
def findProject(gitlab_instance, identifier):
def find_project(gitlab_instance, identifier):
try:
project = gitlab_instance.projects.get(identifier)
except Exception as e:
@@ -67,7 +54,7 @@ def findProject(gitlab_instance, identifier):
return project
def findGroup(gitlab_instance, identifier):
def find_group(gitlab_instance, identifier):
try:
project = gitlab_instance.groups.get(identifier)
except Exception as e:
@@ -76,12 +63,14 @@ def findGroup(gitlab_instance, identifier):
return project
def gitlabAuthentication(module):
def gitlab_authentication(module):
gitlab_url = module.params['api_url']
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
gitlab_oauth_token = module.params['api_oauth_token']
gitlab_job_token = module.params['api_job_token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
@@ -94,7 +83,16 @@ def gitlabAuthentication(module):
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
else:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, private_token=gitlab_token, api_version=4)
# We can create an oauth_token using a username and password
# https://docs.gitlab.com/ee/api/oauth2.html#authorization-code-flow
if gitlab_user:
data = {'grant_type': 'password', 'username': gitlab_user, 'password': gitlab_password}
resp = requests.post(urljoin(gitlab_url, "oauth/token"), data=data, verify=validate_certs)
resp_data = resp.json()
gitlab_oauth_token = resp_data["access_token"]
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, private_token=gitlab_token,
oauth_token=gitlab_oauth_token, job_token=gitlab_job_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:


@@ -0,0 +1,232 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2021-2022 Hewlett Packard Enterprise, Inc. All rights reserved.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible_collections.community.general.plugins.module_utils.redfish_utils import RedfishUtils
class iLORedfishUtils(RedfishUtils):
def get_ilo_sessions(self):
result = {}
# listing all sessions has always been slower than other operations
session_list = []
sessions_results = []
# Get these entries, but does not fail if not found
properties = ['Description', 'Id', 'Name', 'UserName']
# Changed self.sessions_uri to Hardcoded string.
response = self.get_request(
self.root_uri + self.service_root + "SessionService/Sessions/")
if not response['ret']:
return response
result['ret'] = True
data = response['data']
if 'Oem' in data:
if data["Oem"]["Hpe"]["Links"]["MySession"]["@odata.id"]:
current_session = data["Oem"]["Hpe"]["Links"]["MySession"]["@odata.id"]
for sessions in data[u'Members']:
# session_list[] are URIs
session_list.append(sessions[u'@odata.id'])
# for each session, get details
for uri in session_list:
session = {}
if uri != current_session:
response = self.get_request(self.root_uri + uri)
if not response['ret']:
return response
data = response['data']
for property in properties:
if property in data:
session[property] = data[property]
sessions_results.append(session)
result["msg"] = sessions_results
result["ret"] = True
return result
def set_ntp_server(self, mgr_attributes):
result = {}
setkey = mgr_attributes['mgr_attr_name']
nic_info = self.get_manager_ethernet_uri()
ethuri = nic_info["nic_addr"]
response = self.get_request(self.root_uri + ethuri)
if not response['ret']:
return response
result['ret'] = True
data = response['data']
payload = {"DHCPv4": {
"UseNTPServers": ""
}}
if data["DHCPv4"]["UseNTPServers"]:
payload["DHCPv4"]["UseNTPServers"] = False
res_dhv4 = self.patch_request(self.root_uri + ethuri, payload)
if not res_dhv4['ret']:
return res_dhv4
payload = {"DHCPv6": {
"UseNTPServers": ""
}}
if data["DHCPv6"]["UseNTPServers"]:
payload["DHCPv6"]["UseNTPServers"] = False
res_dhv6 = self.patch_request(self.root_uri + ethuri, payload)
if not res_dhv6['ret']:
return res_dhv6
datetime_uri = self.manager_uri + "DateTime"
response = self.get_request(self.root_uri + datetime_uri)
if not response['ret']:
return response
data = response['data']
ntp_list = data[setkey]
if len(ntp_list) == 2:
ntp_list.pop(0)
ntp_list.append(mgr_attributes['mgr_attr_value'])
payload = {setkey: ntp_list}
response1 = self.patch_request(self.root_uri + datetime_uri, payload)
if not response1['ret']:
return response1
return {'ret': True, 'changed': True, 'msg': "Modified %s" % mgr_attributes['mgr_attr_name']}
def set_time_zone(self, attr):
key = attr['mgr_attr_name']
uri = self.manager_uri + "DateTime/"
response = self.get_request(self.root_uri + uri)
if not response['ret']:
return response
data = response["data"]
if key not in data:
return {'ret': False, 'changed': False, 'msg': "Key %s not found" % key}
timezones = data["TimeZoneList"]
index = ""
for tz in timezones:
if attr['mgr_attr_value'] in tz["Name"]:
index = tz["Index"]
break
payload = {key: {"Index": index}}
response = self.patch_request(self.root_uri + uri, payload)
if not response['ret']:
return response
return {'ret': True, 'changed': True, 'msg': "Modified %s" % attr['mgr_attr_name']}
def set_dns_server(self, attr):
key = attr['mgr_attr_name']
nic_info = self.get_manager_ethernet_uri()
uri = nic_info["nic_addr"]
response = self.get_request(self.root_uri + uri)
if not response['ret']:
return response
data = response['data']
dns_list = data["Oem"]["Hpe"]["IPv4"][key]
if len(dns_list) == 3:
dns_list.pop(0)
dns_list.append(attr['mgr_attr_value'])
payload = {
"Oem": {
"Hpe": {
"IPv4": {
key: dns_list
}
}
}
}
response = self.patch_request(self.root_uri + uri, payload)
if not response['ret']:
return response
return {'ret': True, 'changed': True, 'msg': "Modified %s" % attr['mgr_attr_name']}
def set_domain_name(self, attr):
key = attr['mgr_attr_name']
nic_info = self.get_manager_ethernet_uri()
ethuri = nic_info["nic_addr"]
response = self.get_request(self.root_uri + ethuri)
if not response['ret']:
return response
data = response['data']
payload = {"DHCPv4": {
"UseDomainName": ""
}}
if data["DHCPv4"]["UseDomainName"]:
payload["DHCPv4"]["UseDomainName"] = False
res_dhv4 = self.patch_request(self.root_uri + ethuri, payload)
if not res_dhv4['ret']:
return res_dhv4
payload = {"DHCPv6": {
"UseDomainName": ""
}}
if data["DHCPv6"]["UseDomainName"]:
payload["DHCPv6"]["UseDomainName"] = False
res_dhv6 = self.patch_request(self.root_uri + ethuri, payload)
if not res_dhv6['ret']:
return res_dhv6
domain_name = attr['mgr_attr_value']
payload = {"Oem": {
"Hpe": {
key: domain_name
}
}}
response = self.patch_request(self.root_uri + ethuri, payload)
if not response['ret']:
return response
return {'ret': True, 'changed': True, 'msg': "Modified %s" % attr['mgr_attr_name']}
def set_wins_registration(self, mgrattr):
Key = mgrattr['mgr_attr_name']
nic_info = self.get_manager_ethernet_uri()
ethuri = nic_info["nic_addr"]
payload = {
"Oem": {
"Hpe": {
"IPv4": {
Key: False
}
}
}
}
response = self.patch_request(self.root_uri + ethuri, payload)
if not response['ret']:
return response
return {'ret': True, 'changed': True, 'msg': "Modified %s" % mgrattr['mgr_attr_name']}


@@ -52,3 +52,36 @@ def module_fails_on_exception(func):
self.module.fail_json(msg=msg, exception=traceback.format_exc(),
output=self.output, vars=self.vars.output(), **self.output)
return wrapper
def check_mode_skip(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
if not self.module.check_mode:
return func(self, *args, **kwargs)
return wrapper
def check_mode_skip_returns(callable=None, value=None):
def deco(func):
if callable is not None:
@wraps(func)
def wrapper_callable(self, *args, **kwargs):
if self.module.check_mode:
return callable(self, *args, **kwargs)
return func(self, *args, **kwargs)
return wrapper_callable
if value is not None:
@wraps(func)
def wrapper_value(self, *args, **kwargs):
if self.module.check_mode:
return value
return func(self, *args, **kwargs)
return wrapper_value
if callable is None and value is None:
return check_mode_skip
return deco
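A minimal, self-contained sketch of how `check_mode_skip` behaves; the `FakeModule` and `Runner` classes are stand-ins (in real modules `self.module` is an `AnsibleModule`):

```python
from functools import wraps

def check_mode_skip(func):
    # Skip the wrapped method entirely (returning None) when in check mode.
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if not self.module.check_mode:
            return func(self, *args, **kwargs)
    return wrapper

class FakeModule(object):
    def __init__(self, check_mode):
        self.check_mode = check_mode

class Runner(object):
    def __init__(self, check_mode):
        self.module = FakeModule(check_mode)

    @check_mode_skip
    def apply(self):
        return "changed"

# In check mode the side-effecting method is skipped and yields None:
assert Runner(check_mode=True).apply() is None
assert Runner(check_mode=False).apply() == "changed"
```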


@@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright: (c) 2020, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.basic import AnsibleModule
class DeprecateAttrsMixin(object):
def _deprecate_setup(self, attr, target, module):
if target is None:
target = self
if not hasattr(target, attr):
raise ValueError("Target {0} has no attribute {1}".format(target, attr))
if module is None:
if isinstance(target, AnsibleModule):
module = target
elif hasattr(target, "module") and isinstance(target.module, AnsibleModule):
module = target.module
else:
raise ValueError("Failed to automatically discover the AnsibleModule instance. Pass 'module' parameter explicitly.")
# setup internal state dicts
value_attr = "__deprecated_attr_value"
trigger_attr = "__deprecated_attr_trigger"
if not hasattr(target, value_attr):
setattr(target, value_attr, {})
if not hasattr(target, trigger_attr):
setattr(target, trigger_attr, {})
value_dict = getattr(target, value_attr)
trigger_dict = getattr(target, trigger_attr)
return target, module, value_dict, trigger_dict
def _deprecate_attr(self, attr, msg, version=None, date=None, collection_name=None, target=None, value=None, module=None):
target, module, value_dict, trigger_dict = self._deprecate_setup(attr, target, module)
value_dict[attr] = getattr(target, attr, value)
trigger_dict[attr] = False
def _trigger():
if not trigger_dict[attr]:
module.deprecate(msg, version=version, date=date, collection_name=collection_name)
trigger_dict[attr] = True
def _getter(_self):
_trigger()
return value_dict[attr]
def _setter(_self, new_value):
_trigger()
value_dict[attr] = new_value
# override attribute
prop = property(_getter)
setattr(target, attr, prop)
setattr(target, "_{0}_setter".format(attr), prop.setter(_setter))
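The mixin relies on `property` objects only taking effect when assigned on a class, not an instance (which is why the caller below passes `target=ModuleHelper`, the class itself). A minimal sketch of that mechanism, with illustrative names, firing the warning once per attribute:

```python
class Legacy(object):
    OPTION = "old-value"

def deprecate_class_attr(cls, attr, log):
    # Replace a plain class attribute with a property whose getter records
    # one warning on first access (mirrors the mixin's _trigger closure).
    value = getattr(cls, attr)
    fired = {"done": False}

    def getter(self):
        if not fired["done"]:
            log.append("%s is deprecated" % attr)
            fired["done"] = True
        return value

    setattr(cls, attr, property(getter))

log = []
deprecate_class_attr(Legacy, "OPTION", log)
obj = Legacy()
assert obj.OPTION == "old-value"        # value still readable through the property
obj.OPTION                              # second read does not warn again
assert log == ["OPTION is deprecated"]  # warning fired exactly once
```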


@@ -13,9 +13,10 @@ from ansible_collections.community.general.plugins.module_utils.mh.mixins.cmd im
from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin
from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyMixin
from ansible_collections.community.general.plugins.module_utils.mh.mixins.vars import VarsMixin, VarDict as _VD
from ansible_collections.community.general.plugins.module_utils.mh.mixins.deprecate_attrs import DeprecateAttrsMixin
class ModuleHelper(VarsMixin, DependencyMixin, ModuleHelperBase):
class ModuleHelper(DeprecateAttrsMixin, VarsMixin, DependencyMixin, ModuleHelperBase):
_output_conflict_list = ('msg', 'exception', 'output', 'vars', 'changed')
facts_name = None
output_params = ()
@@ -36,6 +37,15 @@ class ModuleHelper(VarsMixin, DependencyMixin, ModuleHelperBase):
fact=name in self.facts_params,
)
self._deprecate_attr(
attr="VarDict",
msg="ModuleHelper.VarDict attribute is deprecated, use VarDict from "
"the ansible_collections.community.general.plugins.module_utils.mh.mixins.vars module instead",
version="6.0.0",
collection_name="community.general",
target=ModuleHelper,
module=self.module)
def update_output(self, **kwargs):
self.update_vars(meta={"output": True}, **kwargs)


@@ -54,6 +54,17 @@ def proxmox_to_ansible_bool(value):
return True if value == 1 else False
def ansible_to_proxmox_bool(value):
'''Convert Ansible representation of a boolean to be proxmox-friendly'''
if value is None:
return None
if not isinstance(value, bool):
raise ValueError("%s must be of type bool not %s" % (value, type(value)))
return 1 if value else 0
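Exercising the two converters makes the Proxmox API contract concrete (the logic is copied from the hunk above into a standalone form for illustration):

```python
def proxmox_to_ansible_bool(value):
    '''Convert Proxmox representation of a boolean to be ansible-friendly.'''
    return True if value == 1 else False

def ansible_to_proxmox_bool(value):
    '''Convert Ansible representation of a boolean to be proxmox-friendly.'''
    if value is None:
        return None
    if not isinstance(value, bool):
        raise ValueError("%s must be of type bool not %s" % (value, type(value)))
    return 1 if value else 0

# Proxmox encodes booleans as 0/1; None passes through unchanged:
assert ansible_to_proxmox_bool(True) == 1
assert ansible_to_proxmox_bool(False) == 0
assert ansible_to_proxmox_bool(None) is None
assert proxmox_to_ansible_bool(1) is True
assert proxmox_to_ansible_bool(0) is False
```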
class ProxmoxAnsible(object):
"""Base class for Proxmox modules"""
def __init__(self, module):


@@ -1834,12 +1834,16 @@ class RedfishUtils(object):
result['ret'] = True
data = response['data']
for device in data[u'Fans']:
fan = {}
for property in properties:
if property in device:
fan[property] = device[property]
fan_results.append(fan)
# Checking if fans are present
if u'Fans' in data:
for device in data[u'Fans']:
fan = {}
for property in properties:
if property in device:
fan[property] = device[property]
fan_results.append(fan)
else:
return {'ret': False, 'msg': "No Fans present"}
result["entries"] = fan_results
return result
@@ -2029,15 +2033,28 @@ class RedfishUtils(object):
def get_multi_memory_inventory(self):
return self.aggregate_systems(self.get_memory_inventory)
def get_nic(self, resource_uri):
result = {}
properties = ['Name', 'Id', 'Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
'NameServers', 'MACAddress', 'PermanentMACAddress',
'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
nic = {}
for property in properties:
if property in data:
nic[property] = data[property]
result['entries'] = nic
return result
def get_nic_inventory(self, resource_uri):
result = {}
nic_list = []
nic_results = []
key = "EthernetInterfaces"
# Get these entries, but do not fail if not found
properties = ['Name', 'Id', 'Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
'NameServers', 'MACAddress', 'PermanentMACAddress',
'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
@@ -2061,18 +2078,9 @@ class RedfishUtils(object):
nic_list.append(nic[u'@odata.id'])
for n in nic_list:
nic = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
nic[property] = data[property]
nic_results.append(nic)
nic = self.get_nic(n)
if nic['ret']:
nic_results.append(nic['entries'])
result["entries"] = nic_results
return result
@@ -2697,39 +2705,14 @@ class RedfishUtils(object):
return self.aggregate_managers(self.get_manager_health_report)
def set_manager_nic(self, nic_addr, nic_config):
# Get EthernetInterface collection
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'EthernetInterfaces' not in data:
return {'ret': False, 'msg': "EthernetInterfaces resource not found"}
ethernetinterfaces_uri = data["EthernetInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
# Get the manager ethernet interface uri
nic_info = self.get_manager_ethernet_uri(nic_addr)
# Find target EthernetInterface
target_ethernet_uri = None
target_ethernet_current_setting = None
if nic_addr == 'null':
# Find root_uri matched EthernetInterface when nic_addr is not specified
nic_addr = (self.root_uri).split('/')[-1]
nic_addr = nic_addr.split(':')[0] # split port if existing
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if '"' + nic_addr.lower() + '"' in str(data).lower() or "'" + nic_addr.lower() + "'" in str(data).lower():
target_ethernet_uri = uri
target_ethernet_current_setting = data
break
if target_ethernet_uri is None:
return {'ret': False, 'msg': "No matched EthernetInterface found under Manager"}
if nic_info.get('nic_addr') is None:
return nic_info
else:
target_ethernet_uri = nic_info['nic_addr']
target_ethernet_current_setting = nic_info['ethernet_setting']
# Convert input to payload and check validity
payload = {}
@@ -2792,3 +2775,208 @@ class RedfishUtils(object):
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NIC"}
# A helper function to get the EthernetInterface URI
def get_manager_ethernet_uri(self, nic_addr='null'):
# Get EthernetInterface collection
response = self.get_request(self.root_uri + self.manager_uri)
if not response['ret']:
return response
data = response['data']
if 'EthernetInterfaces' not in data:
return {'ret': False, 'msg': "EthernetInterfaces resource not found"}
ethernetinterfaces_uri = data["EthernetInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if not response['ret']:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
# Find target EthernetInterface
target_ethernet_uri = None
target_ethernet_current_setting = None
if nic_addr == 'null':
# Find root_uri matched EthernetInterface when nic_addr is not specified
nic_addr = (self.root_uri).split('/')[-1]
nic_addr = nic_addr.split(':')[0] # split port if existing
for uri in uris:
response = self.get_request(self.root_uri + uri)
if not response['ret']:
return response
data = response['data']
data_string = json.dumps(data)
if nic_addr.lower() in data_string.lower():
target_ethernet_uri = uri
target_ethernet_current_setting = data
break
nic_info = {}
nic_info['nic_addr'] = target_ethernet_uri
nic_info['ethernet_setting'] = target_ethernet_current_setting
if target_ethernet_uri is None:
return {}
else:
return nic_info
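The interface lookup above serializes each candidate's JSON body and does a case-insensitive substring match on `nic_addr`. A standalone sketch of that matching, with illustrative URIs and payloads:

```python
import json

def find_matching_uri(interfaces, nic_addr):
    # interfaces maps candidate URIs to the JSON bodies already fetched for
    # them; a case-insensitive substring match on the serialized body mirrors
    # the loop in get_manager_ethernet_uri.
    for uri, data in interfaces.items():
        if nic_addr.lower() in json.dumps(data).lower():
            return uri
    return None

ifaces = {
    "/redfish/v1/Managers/1/EthernetInterfaces/1": {"HostName": "BMC-A"},
    "/redfish/v1/Managers/1/EthernetInterfaces/2": {"HostName": "bmc-b"},
}
assert find_matching_uri(ifaces, "BMC-B") == "/redfish/v1/Managers/1/EthernetInterfaces/2"
assert find_matching_uri(ifaces, "absent") is None
```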
def set_hostinterface_attributes(self, hostinterface_config, hostinterface_id=None):
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'HostInterfaces' not in data:
return {'ret': False, 'msg': "HostInterfaces resource not found"}
hostinterfaces_uri = data["HostInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + hostinterfaces_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if a.get('@odata.id')]
# Capture list of URIs that match a specified HostInterface resource ID
if hostinterface_id:
matching_hostinterface_uris = [uri for uri in uris if hostinterface_id in uri.split('/')[-1]]
if hostinterface_id and matching_hostinterface_uris:
hostinterface_uri = matching_hostinterface_uris.pop()
elif hostinterface_id and not matching_hostinterface_uris:
return {'ret': False, 'msg': "HostInterface ID %s not present." % hostinterface_id}
elif len(uris) == 1:
hostinterface_uri = uris.pop()
else:
return {'ret': False, 'msg': "HostInterface ID not defined and multiple interfaces detected."}
response = self.get_request(self.root_uri + hostinterface_uri)
if response['ret'] is False:
return response
current_hostinterface_config = response['data']
payload = {}
for property in hostinterface_config.keys():
value = hostinterface_config[property]
if property not in current_hostinterface_config:
return {'ret': False, 'msg': "Property %s in hostinterface_config is invalid" % property}
if isinstance(value, dict):
if isinstance(current_hostinterface_config[property], dict):
payload[property] = value
elif isinstance(current_hostinterface_config[property], list):
payload[property] = list()
payload[property].append(value)
else:
return {'ret': False, 'msg': "Value of property %s in hostinterface_config is invalid" % property}
else:
payload[property] = value
need_change = False
for property in payload.keys():
set_value = payload[property]
cur_value = current_hostinterface_config[property]
if not isinstance(set_value, dict) and not isinstance(set_value, list):
if set_value != cur_value:
need_change = True
if isinstance(set_value, dict):
for subprop in payload[property].keys():
if subprop not in current_hostinterface_config[property]:
need_change = True
break
sub_set_value = payload[property][subprop]
sub_cur_value = current_hostinterface_config[property][subprop]
if sub_set_value != sub_cur_value:
need_change = True
if isinstance(set_value, list):
if len(set_value) != len(cur_value):
need_change = True
continue
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in current_hostinterface_config[property][i]:
need_change = True
break
sub_set_value = payload[property][i][subprop]
sub_cur_value = current_hostinterface_config[property][i][subprop]
if sub_set_value != sub_cur_value:
need_change = True
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Host Interface already configured"}
response = self.patch_request(self.root_uri + hostinterface_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Host Interface"}
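The idempotency check above compares the requested payload against the current configuration, descending into nested structures. A simplified, runnable cut of that comparison (scalars and one level of nested dicts; the list branch is omitted here):

```python
def needs_change(payload, current):
    # Report True when any scalar in payload differs from current, descending
    # one level into nested dicts, mirroring the need_change loop above.
    for prop, set_value in payload.items():
        cur_value = current.get(prop)
        if isinstance(set_value, dict):
            for sub, sub_set in set_value.items():
                if not isinstance(cur_value, dict) or sub not in cur_value:
                    return True
                if sub_set != cur_value[sub]:
                    return True
        elif set_value != cur_value:
            return True
    return False

current = {"InterfaceEnabled": True, "Status": {"State": "Enabled"}}
assert needs_change({"InterfaceEnabled": True}, current) is False
assert needs_change({"Status": {"State": "Disabled"}}, current) is True
```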
def get_hostinterfaces(self):
result = {}
hostinterface_results = []
properties = ['Id', 'Name', 'Description', 'HostInterfaceType', 'Status',
'InterfaceEnabled', 'ExternallyAccessible', 'AuthenticationModes',
'AuthNoneRoleId', 'CredentialBootstrapping']
manager_uri_list = self.manager_uris
for manager_uri in manager_uri_list:
response = self.get_request(self.root_uri + manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'HostInterfaces' in data:
hostinterfaces_uri = data[u'HostInterfaces'][u'@odata.id']
else:
continue
response = self.get_request(self.root_uri + hostinterfaces_uri)
data = response['data']
if 'Members' in data:
for hostinterface in data['Members']:
hostinterface_uri = hostinterface['@odata.id']
hostinterface_response = self.get_request(self.root_uri + hostinterface_uri)
# dictionary for capturing individual HostInterface properties
hostinterface_data_temp = {}
if hostinterface_response['ret'] is False:
return hostinterface_response
hostinterface_data = hostinterface_response['data']
for property in properties:
if property in hostinterface_data:
if hostinterface_data[property] is not None:
hostinterface_data_temp[property] = hostinterface_data[property]
# Check for the presence of a ManagerEthernetInterface
# object, a link to a _single_ EthernetInterface that the
# BMC uses to communicate with the host.
if 'ManagerEthernetInterface' in hostinterface_data:
interface_uri = hostinterface_data['ManagerEthernetInterface']['@odata.id']
interface_response = self.get_nic(interface_uri)
if interface_response['ret'] is False:
return interface_response
hostinterface_data_temp['ManagerEthernetInterface'] = interface_response['entries']
# Check for the presence of a HostEthernetInterfaces
# object, a link to a _collection_ of EthernetInterfaces
# that the host uses to communicate with the BMC.
if 'HostEthernetInterfaces' in hostinterface_data:
interfaces_uri = hostinterface_data['HostEthernetInterfaces']['@odata.id']
interfaces_response = self.get_request(self.root_uri + interfaces_uri)
if interfaces_response['ret'] is False:
return interfaces_response
interfaces_data = interfaces_response['data']
if 'Members' in interfaces_data:
for interface in interfaces_data['Members']:
interface_uri = interface['@odata.id']
interface_response = self.get_nic(interface_uri)
if interface_response['ret'] is False:
return interface_response
# Check if this is the first
# HostEthernetInterfaces item and create empty
# list if so.
if 'HostEthernetInterfaces' not in hostinterface_data_temp:
hostinterface_data_temp['HostEthernetInterfaces'] = []
hostinterface_data_temp['HostEthernetInterfaces'].append(interface_response['entries'])
hostinterface_results.append(hostinterface_data_temp)
else:
continue
result["entries"] = hostinterface_results
if not result["entries"]:
return {'ret': False, 'msg': "No HostInterface objects found"}
return result


@@ -422,6 +422,7 @@ import shutil
import subprocess
import tempfile
import time
import shlex
try:
import lxc
@@ -661,9 +662,8 @@ class LxcContainerManagement(object):
"""
for key, value in variables_dict.items():
build_command.append(
'%s %s' % (key, value)
)
build_command.append(str(key))
build_command.append(str(value))
return build_command
def _get_vars(self, variables):
@@ -686,24 +686,6 @@ class LxcContainerManagement(object):
return_dict[v] = _var
return return_dict
def _run_command(self, build_command, unsafe_shell=False):
"""Return information from running an Ansible Command.
This will squash the build command list into a string and then
execute the command via Ansible. The output is returned to the method.
This output is returned as `return_code`, `stdout`, `stderr`.
:param build_command: Used for the command and all options.
:type build_command: ``list``
:param unsafe_shell: Enable or disable unsafe shell commands.
:type unsafe_shell: ``bool``
"""
return self.module.run_command(
' '.join(build_command),
use_unsafe_shell=unsafe_shell
)
def _config(self):
"""Configure an LXC container.
@@ -810,7 +792,7 @@ class LxcContainerManagement(object):
elif self.module.params.get('backing_store') == 'overlayfs':
build_command.append('--snapshot')
rc, return_data, err = self._run_command(build_command)
rc, return_data, err = self.module.run_command(build_command)
if rc != 0:
message = "Failed executing %s." % os.path.basename(clone_cmd)
self.failure(
@@ -843,7 +825,7 @@ class LxcContainerManagement(object):
build_command = [
self.module.get_bin_path('lxc-create', True),
'--name %s' % self.container_name,
'--name', self.container_name,
'--quiet'
]
@@ -869,10 +851,12 @@ class LxcContainerManagement(object):
log_path = os.getenv('HOME')
build_command.extend([
'--logfile %s' % os.path.join(
'--logfile',
os.path.join(
log_path, 'lxc-%s.log' % self.container_name
),
'--logpriority %s' % self.module.params.get(
'--logpriority',
self.module.params.get(
'container_log_level'
).upper()
])
@@ -880,9 +864,10 @@ class LxcContainerManagement(object):
# Add the template commands to the end of the command if there are any
template_options = self.module.params.get('template_options', None)
if template_options:
build_command.append('-- %s' % template_options)
build_command.append('--')
build_command += shlex.split(template_options)
rc, return_data, err = self._run_command(build_command)
rc, return_data, err = self.module.run_command(build_command)
if rc != 0:
message = "Failed executing lxc-create."
self.failure(
@@ -1186,7 +1171,7 @@ class LxcContainerManagement(object):
self.module.get_bin_path('lxc-config', True),
"lxc.bdev.lvm.vg"
]
rc, vg, err = self._run_command(build_command)
rc, vg, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1204,7 +1189,7 @@ class LxcContainerManagement(object):
build_command = [
self.module.get_bin_path('lvs', True)
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1231,7 +1216,7 @@ class LxcContainerManagement(object):
'--units',
'g'
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1262,7 +1247,7 @@ class LxcContainerManagement(object):
'--units',
'g'
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1311,7 +1296,7 @@ class LxcContainerManagement(object):
os.path.join(vg, source_lv),
"-L%sg" % snapshot_size_gb
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1336,7 +1321,7 @@ class LxcContainerManagement(object):
"/dev/%s/%s" % (vg, lv_name),
mount_point,
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1380,9 +1365,8 @@ class LxcContainerManagement(object):
'.'
]
rc, stdout, err = self._run_command(
build_command=build_command,
unsafe_shell=True
rc, stdout, err = self.module.run_command(
build_command
)
os.umask(old_umask)
@@ -1410,7 +1394,7 @@ class LxcContainerManagement(object):
"-f",
"%s/%s" % (vg, lv_name),
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1442,11 +1426,10 @@ class LxcContainerManagement(object):
self.module.get_bin_path('rsync', True),
'-aHAX',
fs_path,
temp_dir
temp_dir,
]
rc, stdout, err = self._run_command(
rc, stdout, err = self.module.run_command(
build_command,
unsafe_shell=True
)
if rc != 0:
self.failure(
@@ -1467,7 +1450,7 @@ class LxcContainerManagement(object):
self.module.get_bin_path('umount', True),
mount_point,
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,
@@ -1489,12 +1472,12 @@ class LxcContainerManagement(object):
build_command = [
self.module.get_bin_path('mount', True),
'-t overlayfs',
'-o lowerdir=%s,upperdir=%s' % (lowerdir, upperdir),
'-t', 'overlayfs',
'-o', 'lowerdir=%s,upperdir=%s' % (lowerdir, upperdir),
'overlayfs',
mount_point,
]
rc, stdout, err = self._run_command(build_command)
rc, stdout, err = self.module.run_command(build_command)
if rc != 0:
self.failure(
err=err,


@@ -11,29 +11,28 @@ __metaclass__ = type
DOCUMENTATION = '''
---
module: lxd_container
short_description: Manage LXD Containers
short_description: Manage LXD instances
description:
- Management of LXD containers
- Management of LXD containers and virtual machines.
author: "Hiroaki Nakamura (@hnakamur)"
options:
name:
description:
- Name of a container.
- Name of an instance.
type: str
required: true
architecture:
description:
- 'The architecture for the container (for example C(x86_64) or C(i686)).
- 'The architecture for the instance (for example C(x86_64) or C(i686)).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1).'
type: str
required: false
config:
description:
- 'The config for the container (for example C({"limits.cpu": "2"})).
- 'The config for the instance (for example C({"limits.cpu": "2"})).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1).'
- If the container already exists and its "config" values in metadata
obtained from GET /1.0/containers/<name>
U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#10containersname)
- If the instance already exists and its "config" values in metadata
obtained from the LXD API U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#instances-containers-and-virtual-machines)
are different, this module tries to apply the configurations.
- The keys starting with C(volatile.) are ignored for this comparison when I(ignore_volatile_options=true).
type: dict
@@ -50,25 +49,25 @@ options:
version_added: 3.7.0
profiles:
description:
- Profile to be used by the container.
- Profile to be used by the instance.
type: list
elements: str
devices:
description:
- 'The devices for the container
- 'The devices for the instance
(for example C({ "rootfs": { "path": "/dev/kvm", "type": "unix-char" }})).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1).'
type: dict
required: false
ephemeral:
description:
- Whether or not the container is ephemeral (for example C(true) or C(false)).
- Whether or not the instance is ephemeral (for example C(true) or C(false)).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1).
required: false
type: bool
source:
description:
- 'The source for the container
- 'The source for the instance
(e.g. { "type": "image",
"mode": "pull",
"server": "https://images.linuxcontainers.org",
@@ -86,39 +85,49 @@ options:
- absent
- frozen
description:
- Define the state of a container.
- Define the state of an instance.
required: false
default: started
type: str
target:
description:
- For cluster deployments. Will attempt to create a container on a target node.
If container exists elsewhere in a cluster, then container will not be replaced or moved.
- For cluster deployments. Will attempt to create an instance on a target node.
If the instance exists elsewhere in a cluster, then it will not be replaced or moved.
The name should match the name of the node you see in C(lxc cluster list).
type: str
required: false
version_added: 1.0.0
timeout:
description:
- A timeout for changing the state of the container.
- A timeout for changing the state of the instance.
- This is also used as a timeout for waiting until IPv4 addresses
are set to the all network interfaces in the container after
are set on all network interfaces in the instance after
starting or restarting.
required: false
default: 30
type: int
type:
description:
- Instance type can be either C(virtual-machine) or C(container).
required: false
default: container
choices:
- container
- virtual-machine
type: str
version_added: 4.1.0
wait_for_ipv4_addresses:
description:
- If this is true, the C(lxd_container) waits until IPv4 addresses
are set to the all network interfaces in the container after
are set on all network interfaces in the instance after
starting or restarting.
required: false
default: false
type: bool
force_stop:
description:
- If this is true, the C(lxd_container) forces to stop the container
when it stops or restarts the container.
- If this is true, the C(lxd_container) forcibly stops the instance
when stopping or restarting it.
required: false
default: false
type: bool
@@ -160,18 +169,18 @@ options:
required: false
type: str
notes:
- Containers must have a unique name. If you attempt to create a container
- Instances can be containers or virtual machines; both must have a unique name. If you attempt to create an instance
with a name that already exists in the user's namespace, the module will
simply return as "unchanged".
- There are two ways to run commands in containers, using the command
- There are two ways to run commands inside a container or virtual machine, using the command
module or using the ansible lxd connection plugin bundled in Ansible >=
2.1, the later requires python to be installed in the container which can
2.1; the latter requires Python to be installed in the instance, which can
be done with the command module.
- You can copy a file from the host to the container
- You can copy a file from the host to the instance
with the Ansible M(ansible.builtin.copy) and M(ansible.builtin.template) module and the `lxd` connection plugin.
See the example below.
- You can copy a file in the created container to the localhost
with `command=lxc file pull container_name/dir/filename filename`.
- You can copy a file from the created instance to localhost
with `command=lxc file pull instance_name/dir/filename filename`.
See the first example below.
'''
@@ -240,6 +249,7 @@ EXAMPLES = '''
community.general.lxd_container:
name: mycontainer
state: absent
type: container
# An example for restarting a container
- hosts: localhost
@@ -249,6 +259,7 @@ EXAMPLES = '''
community.general.lxd_container:
name: mycontainer
state: restarted
type: container
# An example for restarting a container using https to connect to the LXD server
- hosts: localhost
@@ -306,16 +317,36 @@ EXAMPLES = '''
mode: pull
alias: ubuntu/xenial/amd64
target: node02
# An example for creating a virtual machine
- hosts: localhost
connection: local
tasks:
- name: Create a virtual machine
community.general.lxd_container:
name: new-vm-1
type: virtual-machine
state: started
ignore_volatile_options: true
wait_for_ipv4_addresses: true
profiles: ["default"]
source:
protocol: simplestreams
type: image
mode: pull
server: https://images.linuxcontainers.org
alias: debian/11
timeout: 600
'''
RETURN = '''
addresses:
description: Mapping from the network device name to a list of IPv4 addresses in the container
description: Mapping from the network device name to a list of IPv4 addresses in the instance.
returned: when state is started or restarted
type: dict
sample: {"eth0": ["10.155.92.191"]}
old_state:
description: The old state of the container
description: The old state of the instance.
returned: when state is started or restarted
type: str
sample: "stopped"
@@ -325,7 +356,7 @@ logs:
type: list
sample: "(too long to be placed here)"
actions:
description: List of actions performed for the container.
description: List of actions performed for the instance.
returned: success
type: list
sample: '["create", "start"]'
@@ -384,6 +415,15 @@ class LXDContainerManagement(object):
self.addresses = None
self.target = self.module.params['target']
self.type = self.module.params['type']
# LXD Rest API provides additional endpoints for creating containers and virtual-machines.
self.api_endpoint = None
if self.type == 'container':
self.api_endpoint = '/1.0/containers'
elif self.type == 'virtual-machine':
self.api_endpoint = '/1.0/virtual-machines'
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = '{0}/.config/lxc/client.key'.format(os.environ['HOME'])
@@ -419,20 +459,20 @@ class LXDContainerManagement(object):
if param_val is not None:
self.config[attr] = param_val
def _get_container_json(self):
def _get_instance_json(self):
return self.client.do(
'GET', '/1.0/containers/{0}'.format(self.name),
'GET', '{0}/{1}'.format(self.api_endpoint, self.name),
ok_error_codes=[404]
)
def _get_container_state_json(self):
def _get_instance_state_json(self):
return self.client.do(
'GET', '/1.0/containers/{0}/state'.format(self.name),
'GET', '{0}/{1}/state'.format(self.api_endpoint, self.name),
ok_error_codes=[404]
)
@staticmethod
def _container_json_to_module_state(resp_json):
def _instance_json_to_module_state(resp_json):
if resp_json['type'] == 'error':
return 'absent'
return ANSIBLE_LXD_STATES[resp_json['metadata']['status']]
@@ -441,45 +481,45 @@ class LXDContainerManagement(object):
body_json = {'action': action, 'timeout': self.timeout}
if force_stop:
body_json['force'] = True
return self.client.do('PUT', '/1.0/containers/{0}/state'.format(self.name), body_json=body_json)
return self.client.do('PUT', '{0}/{1}/state'.format(self.api_endpoint, self.name), body_json=body_json)
def _create_container(self):
def _create_instance(self):
config = self.config.copy()
config['name'] = self.name
if self.target:
self.client.do('POST', '/1.0/containers?' + urlencode(dict(target=self.target)), config)
self.client.do('POST', '{0}?{1}'.format(self.api_endpoint, urlencode(dict(target=self.target))), config)
else:
self.client.do('POST', '/1.0/containers', config)
self.client.do('POST', self.api_endpoint, config)
self.actions.append('create')
def _start_container(self):
def _start_instance(self):
self._change_state('start')
self.actions.append('start')
def _stop_container(self):
def _stop_instance(self):
self._change_state('stop', self.force_stop)
self.actions.append('stop')
def _restart_container(self):
def _restart_instance(self):
self._change_state('restart', self.force_stop)
self.actions.append('restart')
def _delete_container(self):
self.client.do('DELETE', '/1.0/containers/{0}'.format(self.name))
def _delete_instance(self):
self.client.do('DELETE', '{0}/{1}'.format(self.api_endpoint, self.name))
self.actions.append('delete')
def _freeze_container(self):
def _freeze_instance(self):
self._change_state('freeze')
self.actions.append('freeze')
def _unfreeze_container(self):
def _unfreeze_instance(self):
self._change_state('unfreeze')
self.actions.append('unfreeze')
def _container_ipv4_addresses(self, ignore_devices=None):
def _instance_ipv4_addresses(self, ignore_devices=None):
ignore_devices = ['lo'] if ignore_devices is None else ignore_devices
resp_json = self._get_container_state_json()
resp_json = self._get_instance_state_json()
network = resp_json['metadata']['network'] or {}
network = dict((k, v) for k, v in network.items() if k not in ignore_devices) or {}
addresses = dict((k, [a['address'] for a in v['addresses'] if a['family'] == 'inet']) for k, v in network.items()) or {}
@@ -494,7 +534,7 @@ class LXDContainerManagement(object):
due = datetime.datetime.now() + datetime.timedelta(seconds=self.timeout)
while datetime.datetime.now() < due:
time.sleep(1)
addresses = self._container_ipv4_addresses()
addresses = self._instance_ipv4_addresses()
if self._has_all_ipv4_addresses(addresses):
self.addresses = addresses
return
@@ -504,72 +544,72 @@ class LXDContainerManagement(object):
def _started(self):
if self.old_state == 'absent':
self._create_container()
self._start_container()
self._create_instance()
self._start_instance()
else:
if self.old_state == 'frozen':
self._unfreeze_container()
self._unfreeze_instance()
elif self.old_state == 'stopped':
self._start_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._start_instance()
if self._needs_to_apply_instance_configs():
self._apply_instance_configs()
if self.wait_for_ipv4_addresses:
self._get_addresses()
def _stopped(self):
if self.old_state == 'absent':
self._create_container()
self._create_instance()
else:
if self.old_state == 'stopped':
if self._needs_to_apply_container_configs():
self._start_container()
self._apply_container_configs()
self._stop_container()
if self._needs_to_apply_instance_configs():
self._start_instance()
self._apply_instance_configs()
self._stop_instance()
else:
if self.old_state == 'frozen':
self._unfreeze_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._stop_container()
self._unfreeze_instance()
if self._needs_to_apply_instance_configs():
self._apply_instance_configs()
self._stop_instance()
def _restarted(self):
if self.old_state == 'absent':
self._create_container()
self._start_container()
self._create_instance()
self._start_instance()
else:
if self.old_state == 'frozen':
self._unfreeze_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._restart_container()
self._unfreeze_instance()
if self._needs_to_apply_instance_configs():
self._apply_instance_configs()
self._restart_instance()
if self.wait_for_ipv4_addresses:
self._get_addresses()
def _destroyed(self):
if self.old_state != 'absent':
if self.old_state == 'frozen':
self._unfreeze_container()
self._unfreeze_instance()
if self.old_state != 'stopped':
self._stop_container()
self._delete_container()
self._stop_instance()
self._delete_instance()
def _frozen(self):
if self.old_state == 'absent':
self._create_container()
self._start_container()
self._freeze_container()
self._create_instance()
self._start_instance()
self._freeze_instance()
else:
if self.old_state == 'stopped':
self._start_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._freeze_container()
self._start_instance()
if self._needs_to_apply_instance_configs():
self._apply_instance_configs()
self._freeze_instance()
def _needs_to_change_container_config(self, key):
def _needs_to_change_instance_config(self, key):
if key not in self.config:
return False
if key == 'config' and self.ignore_volatile_options: # the old behavior is to ignore configurations by keyword "volatile"
old_configs = dict((k, v) for k, v in self.old_container_json['metadata'][key].items() if not k.startswith('volatile.'))
old_configs = dict((k, v) for k, v in self.old_instance_json['metadata'][key].items() if not k.startswith('volatile.'))
for k, v in self.config['config'].items():
if k not in old_configs:
return True
@@ -577,7 +617,7 @@ class LXDContainerManagement(object):
return True
return False
elif key == 'config': # next default behavior
old_configs = dict((k, v) for k, v in self.old_container_json['metadata'][key].items())
old_configs = dict((k, v) for k, v in self.old_instance_json['metadata'][key].items())
for k, v in self.config['config'].items():
if k not in old_configs:
return True
@@ -585,39 +625,41 @@ class LXDContainerManagement(object):
return True
return False
else:
old_configs = self.old_container_json['metadata'][key]
old_configs = self.old_instance_json['metadata'][key]
return self.config[key] != old_configs
def _needs_to_apply_container_configs(self):
def _needs_to_apply_instance_configs(self):
return (
self._needs_to_change_container_config('architecture') or
self._needs_to_change_container_config('config') or
self._needs_to_change_container_config('ephemeral') or
self._needs_to_change_container_config('devices') or
self._needs_to_change_container_config('profiles')
self._needs_to_change_instance_config('architecture') or
self._needs_to_change_instance_config('config') or
self._needs_to_change_instance_config('ephemeral') or
self._needs_to_change_instance_config('devices') or
self._needs_to_change_instance_config('profiles')
)
def _apply_container_configs(self):
old_metadata = self.old_container_json['metadata']
def _apply_instance_configs(self):
old_metadata = self.old_instance_json['metadata']
body_json = {
'architecture': old_metadata['architecture'],
'config': old_metadata['config'],
'devices': old_metadata['devices'],
'profiles': old_metadata['profiles']
}
if self._needs_to_change_container_config('architecture'):
if self._needs_to_change_instance_config('architecture'):
body_json['architecture'] = self.config['architecture']
if self._needs_to_change_container_config('config'):
if self._needs_to_change_instance_config('config'):
for k, v in self.config['config'].items():
body_json['config'][k] = v
if self._needs_to_change_container_config('ephemeral'):
if self._needs_to_change_instance_config('ephemeral'):
body_json['ephemeral'] = self.config['ephemeral']
if self._needs_to_change_container_config('devices'):
if self._needs_to_change_instance_config('devices'):
body_json['devices'] = self.config['devices']
if self._needs_to_change_container_config('profiles'):
if self._needs_to_change_instance_config('profiles'):
body_json['profiles'] = self.config['profiles']
self.client.do('PUT', '/1.0/containers/{0}'.format(self.name), body_json=body_json)
self.actions.append('apply_container_configs')
self.client.do('PUT', '{0}/{1}'.format(self.api_endpoint, self.name), body_json=body_json)
self.actions.append('apply_instance_configs')
def run(self):
"""Run the main method."""
@@ -627,8 +669,8 @@ class LXDContainerManagement(object):
self.client.authenticate(self.trust_password)
self.ignore_volatile_options = self.module.params.get('ignore_volatile_options')
self.old_container_json = self._get_container_json()
self.old_state = self._container_json_to_module_state(self.old_container_json)
self.old_instance_json = self._get_instance_json()
self.old_state = self._instance_json_to_module_state(self.old_instance_json)
action = getattr(self, LXD_ANSIBLE_STATES[self.state])
action()
@@ -698,6 +740,11 @@ def main():
type='int',
default=30
),
type=dict(
type='str',
default='container',
choices=['container', 'virtual-machine'],
),
wait_for_ipv4_addresses=dict(
type='bool',
default=False
@@ -736,6 +783,7 @@ def main():
'This will change in the future. Please test your scripts'
'by "ignore_volatile_options: false". To keep the old behavior, set that option explicitly to "true"',
version='6.0.0', collection_name='community.general')
lxd_manage = LXDContainerManagement(module=module)
lxd_manage.run()
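The diff above replaces every hard-coded `/1.0/containers` path with a `self.api_endpoint` chosen from the new `type` option. A minimal sketch of that URL dispatch, with an illustrative `API_ENDPOINTS` mapping and `state_url` helper that are not part of the module itself:

```python
# Sketch of how the diff parameterizes the LXD REST path by instance type.
# The endpoint values mirror LXD's container vs. virtual-machine API routes;
# the mapping and helper names here are illustrative only.
API_ENDPOINTS = {
    'container': '/1.0/containers',
    'virtual-machine': '/1.0/virtual-machines',
}

def state_url(instance_type, name):
    # Builds the same '{endpoint}/{name}/state' URL the module PUTs to
    # when changing instance state (start/stop/restart/freeze).
    return '{0}/{1}/state'.format(API_ENDPOINTS[instance_type], name)

print(state_url('virtual-machine', 'vm01'))  # /1.0/virtual-machines/vm01/state
```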



@@ -359,6 +359,10 @@ except ImportError:
from ansible.module_utils.basic import AnsibleModule, env_fallback
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.proxmox import (
ansible_to_proxmox_bool
)
VZ_TYPE = None
@@ -605,14 +609,14 @@ def main():
netif=module.params['netif'],
mounts=module.params['mounts'],
ip_address=module.params['ip_address'],
onboot=int(module.params['onboot']),
onboot=ansible_to_proxmox_bool(module.params['onboot']),
cpuunits=module.params['cpuunits'],
nameserver=module.params['nameserver'],
searchdomain=module.params['searchdomain'],
force=int(module.params['force']),
force=ansible_to_proxmox_bool(module.params['force']),
pubkey=module.params['pubkey'],
features=",".join(module.params['features']) if module.params['features'] is not None else None,
unprivileged=int(module.params['unprivileged']),
unprivileged=ansible_to_proxmox_bool(module.params['unprivileged']),
description=module.params['description'],
hookscript=module.params['hookscript'])
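The proxmox change swaps bare `int(...)` casts for the shared `ansible_to_proxmox_bool` helper. A rough equivalent of that helper, under the assumption that its key difference from `int()` is passing `None` through instead of raising:

```python
# Rough sketch of the ansible_to_proxmox_bool conversion the diff adopts:
# Proxmox expects 0/1 integers for booleans, and unset (None) parameters
# must stay None rather than blow up in an int() cast.
def ansible_to_proxmox_bool(value):
    if value is None:
        return None
    return 1 if value else 0

print([ansible_to_proxmox_bool(v) for v in (True, False, None)])  # [1, 0, None]
```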


@@ -319,11 +319,25 @@ def remove_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'delete', workspace)
def build_plan(command, project_path, variables_args, state_file, targets, state, plan_path=None):
def build_plan(command, project_path, variables_args, state_file, targets, state, apply_args, plan_path=None):
if plan_path is None:
f, plan_path = tempfile.mkstemp(suffix='.tfplan')
plan_command = [command[0], 'plan', '-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path]
local_command = command.copy()
plan_command = [command[0], 'plan']
if state == "planned":
for c in local_command[1:]:
plan_command.append(c)
if state == "present":
for a in apply_args:
local_command.remove(a)
for c in local_command[1:]:
plan_command.append(c)
plan_command.extend(['-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path])
for t in targets:
plan_command.extend(['-target', t])
@@ -461,7 +475,7 @@ def main():
module.fail_json(msg='Could not find plan_file "{0}", check the path and try again.'.format(plan_file))
else:
plan_file, needs_application, out, err, command = build_plan(command, project_path, variables_args, state_file,
module.params.get('targets'), state, plan_file)
module.params.get('targets'), state, APPLY_ARGS, plan_file)
if state == 'present' and check_destroy and '- destroy' in out:
module.fail_json(msg="Aborting command because it would destroy some resources. "
"Consider switching the 'check_destroy' to false to suppress this error")
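The new `build_plan` signature threads `apply_args` through so that apply-only flags are stripped before the remaining user flags are forwarded to `terraform plan`. A simplified sketch of that flag routing (the `-out plan_path` and `-target` handling from the diff is omitted, and `apply_args` entries are assumed to be present in the base command):

```python
# Sketch of the plan-command assembly from the new build_plan():
# for state=planned all extra flags are forwarded; for state=present the
# apply-only flags (e.g. '-auto-approve') are removed first.
def plan_args(base_command, state, apply_args):
    local = list(base_command)          # command[0] is the terraform binary
    plan_cmd = [local[0], 'plan']
    if state == 'present':
        for a in apply_args:
            local.remove(a)             # assumed present in base_command
    if state in ('planned', 'present'):
        plan_cmd.extend(local[1:])      # forward remaining user flags
    plan_cmd.extend(['-input=false', '-no-color', '-detailed-exitcode'])
    return plan_cmd

print(plan_args(['terraform', '-auto-approve', '-lock=true'],
                'present', ['-auto-approve']))
```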


@@ -15,7 +15,7 @@ description:
- Gather information about the servers.
- U(https://www.online.net/en/dedicated-server)
author:
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.online


@@ -12,7 +12,7 @@ short_description: Gather information about Online user.
description:
- Gather information about the user.
author:
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.online
'''


@@ -16,7 +16,7 @@ DOCUMENTATION = '''
---
module: scaleway_compute
short_description: Scaleway compute management module
author: Remy Leone (@sieben)
author: Remy Leone (@remyleone)
description:
- "This module manages compute instances on Scaleway."
extends_documentation_fragment:


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway images available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.scaleway


@@ -13,7 +13,7 @@ DOCUMENTATION = '''
---
module: scaleway_ip
short_description: Scaleway IP management module
author: Remy Leone (@sieben)
author: Remy Leone (@remyleone)
description:
- This module manages IP on Scaleway account
U(https://developer.scaleway.com)


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway ips available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.scaleway


@@ -16,7 +16,7 @@ DOCUMENTATION = '''
---
module: scaleway_lb
short_description: Scaleway load-balancer management module
author: Remy Leone (@sieben)
author: Remy Leone (@remyleone)
description:
- "This module manages load-balancers on Scaleway."
extends_documentation_fragment:


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway organizations available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
options:
api_url:
description:


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway security groups available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
options:
region:
type: str


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway servers available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.scaleway


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway snapshot available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.scaleway


@@ -16,7 +16,7 @@ DOCUMENTATION = '''
---
module: scaleway_sshkey
short_description: Scaleway SSH keys management module
author: Remy Leone (@sieben)
author: Remy Leone (@remyleone)
description:
- This module manages SSH keys on Scaleway account
U(https://developer.scaleway.com)


@@ -16,7 +16,7 @@ DOCUMENTATION = '''
---
module: scaleway_user_data
short_description: Scaleway user_data management module
author: Remy Leone (@sieben)
author: Remy Leone (@remyleone)
description:
- "This module manages user_data on compute instances on Scaleway."
- "It can be used to configure cloud-init for instance"


@@ -15,7 +15,7 @@ description:
- Gather information about the Scaleway volumes available.
author:
- "Yanis Guenane (@Spredzy)"
- "Remy Leone (@sieben)"
- "Remy Leone (@remyleone)"
extends_documentation_fragment:
- community.general.scaleway


@@ -60,6 +60,7 @@ extends_documentation_fragment:
- community.general.redis.documentation
seealso:
- module: community.general.redis_data_incr
- module: community.general.redis_data_info
- module: community.general.redis
'''


@@ -47,7 +47,7 @@ notes:
run the C(GET) command on the key, otherwise the module will fail.
seealso:
- module: community.general.redis_set
- module: community.general.redis_data
- module: community.general.redis_data_info
- module: community.general.redis
'''


@@ -26,6 +26,8 @@ extends_documentation_fragment:
- community.general.redis
seealso:
- module: community.general.redis_data
- module: community.general.redis_data_incr
- module: community.general.redis_info
- module: community.general.redis
'''


@@ -0,0 +1 @@
./net_tools/dnsimple_info.py


@@ -85,11 +85,6 @@ import os.path
import shutil
import tempfile
try: # python 3.3+
from shlex import quote
except ImportError: # older python
from pipes import quote
from ansible.module_utils.basic import AnsibleModule
@@ -154,9 +149,9 @@ def main():
# Use 7zip when we have a binary, otherwise try to mount
if binary:
cmd = '%s x "%s" -o"%s" %s' % (binary, image, tmp_dir, ' '.join([quote(f) for f in extract_files]))
cmd = [binary, 'x', image, '-o%s' % tmp_dir] + extract_files
else:
cmd = 'mount -o loop,ro "%s" "%s"' % (image, tmp_dir)
cmd = [module.get_bin_path('mount'), '-o', 'loop,ro', image, tmp_dir]
rc, out, err = module.run_command(cmd)
if rc != 0:
@@ -201,7 +196,7 @@ def main():
result['changed'] = True
finally:
if not binary:
module.run_command('umount "%s"' % tmp_dir)
module.run_command([module.get_bin_path('umount'), tmp_dir])
shutil.rmtree(tmp_dir)


@@ -12,9 +12,9 @@ DOCUMENTATION = '''
module: xattr
short_description: Manage user defined extended attributes
description:
- Manages filesystem user defined extended attributes.
- Requires that extended attributes are enabled on the target filesystem
and that the setfattr/getfattr utilities are present.
- Manages filesystem user defined extended attributes.
- Requires that extended attributes are enabled on the target filesystem
and that the setfattr/getfattr utilities are present.
options:
path:
description:
@@ -34,13 +34,13 @@ options:
type: str
value:
description:
- The value to set the named name/key to, it automatically sets the C(state) to 'set'.
- The value to set the named name/key to, it automatically sets the I(state) to C(present).
type: str
state:
description:
- defines which state you want to do.
C(read) retrieves the current value for a C(key) (default)
C(present) sets C(name) to C(value), default if value is set
C(read) retrieves the current value for a I(key) (default)
C(present) sets I(path) to C(value), default if value is set
C(all) dumps all data
C(keys) retrieves all keys
C(absent) deletes the key
@@ -49,14 +49,14 @@ options:
default: read
follow:
description:
- If C(yes), dereferences symlinks and sets/gets attributes on symlink target,
- If C(true), dereferences symlinks and sets/gets attributes on symlink target,
otherwise acts on symlink itself.
type: bool
default: yes
default: true
notes:
- As of Ansible 2.3, the I(name) option has been changed to I(path) as default, but I(name) still works as well.
author:
- Brian Coca (@bcoca)
- Brian Coca (@bcoca)
'''
EXAMPLES = '''
@@ -116,7 +116,8 @@ def get_xattr(module, path, key, follow):
if key is None:
cmd.append('-d')
else:
cmd.append('-n %s' % key)
cmd.append('-n')
cmd.append(key)
cmd.append(path)
return _run_xattr(module, cmd, False)
@@ -127,8 +128,10 @@ def set_xattr(module, path, key, value, follow):
cmd = [module.get_bin_path('setfattr', True)]
if not follow:
cmd.append('-h')
cmd.append('-n %s' % key)
cmd.append('-v %s' % value)
cmd.append('-n')
cmd.append(key)
cmd.append('-v')
cmd.append(value)
cmd.append(path)
return _run_xattr(module, cmd)
@@ -139,7 +142,8 @@ def rm_xattr(module, path, key, follow):
cmd = [module.get_bin_path('setfattr', True)]
if not follow:
cmd.append('-h')
cmd.append('-x %s' % key)
cmd.append('-x')
cmd.append(key)
cmd.append(path)
return _run_xattr(module, cmd, False)
@@ -148,7 +152,7 @@ def rm_xattr(module, path, key, follow):
def _run_xattr(module, cmd, check_rc=True):
try:
(rc, out, err) = module.run_command(' '.join(cmd), check_rc=check_rc)
(rc, out, err) = module.run_command(cmd, check_rc=check_rc)
except Exception as e:
module.fail_json(msg="%s!" % to_native(e))
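Several diffs in this release (xattr, iso_extract, logentries, elasticsearch_plugin, monit, nmcli's `ip netns list`) make the same fix: pass `run_command` an argv list instead of a joined string. A small self-contained demonstration of why, using `shlex.split` to stand in for the re-tokenization a shell or string-mode `run_command` performs:

```python
import shlex

# Why the xattr module now passes run_command() an argv list rather than
# ' '.join(cmd): a value containing whitespace survives as one argument in
# the list form, but a join-then-resplit round trip breaks it apart.
key, value = 'user.comment', 'two words'
argv = ['setfattr', '-n', key, '-v', value, '/tmp/file']

joined = ' '.join(argv)
resplit = shlex.split(joined)

print(len(argv), len(resplit))  # 6 7 -- the round trip gains an extra token
```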


@@ -0,0 +1 @@
source_control/gitlab/gitlab_branch.py


@@ -64,6 +64,7 @@ options:
choices:
- ldap
- kerberos
- sssd
provider_type:
description:
@@ -83,9 +84,10 @@ options:
config:
description:
- Dict specifying the configuration options for the provider; the contents differ depending on
the value of I(provider_id). Examples are given below for C(ldap) and C(kerberos). It is easiest
to obtain valid config values by dumping an already-existing user federation configuration
through check-mode in the I(existing) field.
the value of I(provider_id). Examples are given below for C(ldap), C(kerberos) and C(sssd).
It is easiest to obtain valid config values by dumping an already-existing user federation
configuration through check-mode in the I(existing) field.
- The value C(sssd) has been supported since community.general 4.2.0.
type: dict
suboptions:
enabled:
@@ -531,6 +533,22 @@ EXAMPLES = '''
allowPasswordAuthentication: false
updateProfileFirstLogin: false
- name: Create sssd user federation
community.general.keycloak_user_federation:
auth_keycloak_url: https://keycloak.example.com/auth
auth_realm: master
auth_username: admin
auth_password: password
realm: my-realm
name: my-sssd
state: present
provider_id: sssd
provider_type: org.keycloak.storage.UserStorageProvider
config:
priority: 0
enabled: true
cachePolicy: DEFAULT
- name: Delete user federation
community.general.keycloak_user_federation:
auth_keycloak_url: https://keycloak.example.com/auth
@@ -765,7 +783,7 @@ def main():
realm=dict(type='str', default='master'),
id=dict(type='str'),
name=dict(type='str'),
provider_id=dict(type='str', aliases=['providerId'], choices=['ldap', 'kerberos']),
provider_id=dict(type='str', aliases=['providerId'], choices=['ldap', 'kerberos', 'sssd']),
provider_type=dict(type='str', aliases=['providerType'], default='org.keycloak.storage.UserStorageProvider'),
parent_id=dict(type='str', aliases=['parentId']),
mappers=dict(type='list', elements='dict', options=mapper_spec),
@@ -843,8 +861,8 @@ def main():
# special handling of mappers list to allow change detection
if module.params.get('mappers') is not None:
if module.params['provider_id'] == 'kerberos':
module.fail_json(msg='Cannot configure mappers for Kerberos federations.')
if module.params['provider_id'] in ['kerberos', 'sssd']:
module.fail_json(msg='Cannot configure mappers for {type} provider.'.format(type=module.params['provider_id']))
for change in module.params['mappers']:
change = dict((k, v) for k, v in change.items() if change[k] is not None)
if change.get('id') is None and change.get('name') is None:


@@ -0,0 +1 @@
remote_management/redfish/ilo_redfish_config.py


@@ -0,0 +1 @@
remote_management/redfish/ilo_redfish_info.py


@@ -63,7 +63,7 @@ def query_log_status(module, le_path, path, state="present"):
""" Returns whether a log is followed or not. """
if state == "present":
rc, out, err = module.run_command("%s followed %s" % (le_path, path))
rc, out, err = module.run_command([le_path, "followed", path])
if rc == 0:
return True
@@ -87,7 +87,7 @@ def follow_log(module, le_path, logs, name=None, logtype=None):
cmd.extend(['--name', name])
if logtype:
cmd.extend(['--type', logtype])
rc, out, err = module.run_command(' '.join(cmd))
rc, out, err = module.run_command(cmd)
if not query_log_status(module, le_path, log):
module.fail_json(msg="failed to follow '%s': %s" % (log, err.strip()))


@@ -82,7 +82,7 @@ PACKAGE_STATE_MAP = dict(
def is_plugin_present(module, plugin_bin, plugin_name):
cmd_args = [plugin_bin, "list", plugin_name]
rc, out, err = module.run_command(" ".join(cmd_args))
rc, out, err = module.run_command(cmd_args)
return rc == 0


@@ -122,7 +122,7 @@ class Monit(object):
return self._monit_version
def _get_monit_version(self):
rc, out, err = self.module.run_command('%s -V' % self.monit_bin_path, check_rc=True)
rc, out, err = self.module.run_command([self.monit_bin_path, '-V'], check_rc=True)
version_line = out.split('\n')[0]
raw_version = re.search(r"([0-9]+\.){1,2}([0-9]+)?", version_line).group()
return raw_version, tuple(map(int, raw_version.split('.')))
@@ -140,7 +140,7 @@ class Monit(object):
@property
def command_args(self):
return "-B" if self.monit_version() > (5, 18) else ""
return ["-B"] if self.monit_version() > (5, 18) else []
def get_status(self, validate=False):
"""Return the status of the process in monit.
@@ -149,7 +149,7 @@ class Monit(object):
"""
monit_command = "validate" if validate else "status"
check_rc = False if validate else True # 'validate' always has rc = 1
command = ' '.join([self.monit_bin_path, monit_command, self.command_args, self.process_name])
command = [self.monit_bin_path, monit_command] + self.command_args + [self.process_name]
rc, out, err = self.module.run_command(command, check_rc=check_rc)
return self._parse_status(out, err)
@@ -182,7 +182,8 @@ class Monit(object):
return status
def is_process_present(self):
rc, out, err = self.module.run_command('%s summary %s' % (self.monit_bin_path, self.command_args), check_rc=True)
command = [self.monit_bin_path, 'summary'] + self.command_args
rc, out, err = self.module.run_command(command, check_rc=True)
return bool(re.findall(r'\b%s\b' % self.process_name, out))
def is_process_running(self):
@@ -190,7 +191,7 @@ class Monit(object):
def run_command(self, command):
"""Runs a monit command, and returns the new status."""
return self.module.run_command('%s %s %s' % (self.monit_bin_path, command, self.process_name), check_rc=True)
return self.module.run_command([self.monit_bin_path, command, self.process_name], check_rc=True)
def wait_for_status_change(self, current_status):
running_status = self.get_status()
@@ -228,7 +229,7 @@ class Monit(object):
return current_status
def reload(self):
rc, out, err = self.module.run_command('%s reload' % self.monit_bin_path)
rc, out, err = self.module.run_command([self.monit_bin_path, 'reload'])
if rc != 0:
self.exit_fail('monit reload failed', stdout=out, stderr=err)
self.exit_success(state='reloaded')
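A side benefit of the Monit change is visible in `command_args` becoming `["-B"]` or `[]`: concatenating list segments lets an empty flag set vanish cleanly, whereas joining strings leaves a stray empty token. A sketch of the pattern (function name is illustrative, not from the module):

```python
# How the Monit wrapper now assembles argv from list segments: an empty
# extra_args list simply disappears, instead of producing '' in a joined
# string command line.
def build_status_cmd(bin_path, subcommand, extra_args, name):
    return [bin_path, subcommand] + extra_args + [name]

print(build_status_cmd('/usr/bin/monit', 'status', ['-B'], 'nginx'))
print(build_status_cmd('/usr/bin/monit', 'status', [], 'nginx'))
```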


@@ -0,0 +1,335 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Edward Hilgendorf, <edward@hilgendorf.me>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: dnsimple_info
short_description: Pull basic info from DNSimple API
version_added: "4.2.0"
description: Retrieve existing records and domains from DNSimple API.
options:
name:
description:
- The domain name to retrieve info from.
- Will return all associated records for this domain if specified.
- If not specified, will return all domains associated with the account ID.
type: str
account_id:
description: The account ID to query.
required: true
type: str
api_key:
description: The API key to use.
required: true
type: str
record:
description:
- The record to find.
- If specified, only this record will be returned instead of all records.
required: false
type: str
sandbox:
description: Whether or not to use sandbox environment.
required: false
default: false
type: bool
author:
- Edward Hilgendorf (@edhilgendorf)
'''
EXAMPLES = r'''
- name: Get all domains from an account
community.general.dnsimple_info:
account_id: "1234"
api_key: "1234"
- name: Get all records from a domain
community.general.dnsimple_info:
name: "example.com"
account_id: "1234"
api_key: "1234"
- name: Get all info from a matching record
community.general.dnsimple_info:
name: "example.com"
record: "subdomain"
account_id: "1234"
api_key: "1234"
'''
RETURN = r'''
dnsimple_domain_info:
description: Returns a list of dictionaries of all domains associated with the supplied account ID.
type: list
elements: dict
returned: success when I(name) is not specified
sample:
- account_id: 1234
created_at: '2021-10-16T21:25:42Z'
id: 123456
last_transferred_at:
name: example.com
reverse: false
secondary: false
updated_at: '2021-11-10T20:22:50Z'
contains:
account_id:
description: The account ID.
type: int
created_at:
description: When the domain entry was created.
type: str
id:
description: ID of the entry.
type: int
last_transferred_at:
description: Date the domain was transferred, or empty if not.
type: str
name:
description: Name of the record.
type: str
reverse:
description: Whether or not it is a reverse zone record.
type: bool
updated_at:
description: When the domain entry was updated.
type: str
dnsimple_records_info:
description: Returns a list of dictionaries with all records for the domain supplied.
type: list
elements: dict
returned: success when I(name) is specified, but I(record) is not
sample:
- content: ns1.dnsimple.com admin.dnsimple.com
created_at: '2021-10-16T19:07:34Z'
id: 12345
name: 'catheadbiscuit'
parent_id: null
priority: null
regions:
- global
system_record: true
ttl: 3600
type: SOA
updated_at: '2021-11-15T23:55:51Z'
zone_id: example.com
contains:
content:
description: Content of the returned record.
type: str
created_at:
description: When the domain entry was created.
type: str
id:
description: ID of the entry.
type: int
name:
description: Name of the record.
type: str
parent_id:
description: Parent record or null.
type: int
priority:
description: Priority setting of the record.
type: str
regions:
description: List of regions where the record is available.
type: list
system_record:
description: Whether or not it is a system record.
type: bool
ttl:
description: Record TTL.
type: int
type:
description: Record type.
type: str
updated_at:
description: When the domain entry was updated.
type: str
zone_id:
description: ID of the zone that the record is associated with.
type: str
dnsimple_record_info:
description: Returns a list of dictionaries that match the record supplied.
returned: success when I(name) and I(record) are specified
type: list
elements: dict
sample:
- content: 1.2.3.4
created_at: '2021-11-15T23:55:51Z'
id: 123456
name: catheadbiscuit
parent_id: null
priority: null
regions:
- global
system_record: false
ttl: 3600
type: A
updated_at: '2021-11-15T23:55:51Z'
zone_id: example.com
contains:
content:
description: Content of the returned record.
type: str
created_at:
description: When the domain entry was created.
type: str
id:
description: ID of the entry.
type: int
name:
description: Name of the record.
type: str
parent_id:
description: Parent record or null.
type: int
priority:
description: Priority setting of the record.
type: str
regions:
description: List of regions where the record is available.
type: list
system_record:
description: Whether or not it is a system record.
type: bool
ttl:
description: Record TTL.
type: int
type:
description: Record type.
type: str
updated_at:
description: When the domain entry was updated.
type: str
zone_id:
description: ID of the zone that the record is associated with.
type: str
'''
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import missing_required_lib
import json
try:
from requests import Request, Session
except ImportError:
HAS_ANOTHER_LIBRARY = False
ANOTHER_LIBRARY_IMPORT_ERROR = traceback.format_exc()
else:
HAS_ANOTHER_LIBRARY = True
def build_url(account, key, is_sandbox):
headers = {'Accept': 'application/json',
'Authorization': 'Bearer ' + key}
url = 'https://api{sandbox}.dnsimple.com/'.format(
sandbox=".sandbox" if is_sandbox else "") + 'v2/' + account
req = Request(url=url, headers=headers)
prepped_request = req.prepare()
return prepped_request
def iterate_data(module, request_object):
base_url = request_object.url
response = Session().send(request_object)
if 'pagination' in response.json():
data = response.json()["data"]
pages = response.json()["pagination"]["total_pages"]
if int(pages) > 1:
for page in range(1, pages):
page = page + 1
request_object.url = base_url + '&page=' + str(page)
new_results = Session().send(request_object)
data = data + new_results.json()["data"]
return(data)
else:
module.fail_json('API Call failed, check ID, key and sandbox values')
def record_info(dnsimple_mod, req_obj):
req_obj.url, req_obj.method = req_obj.url + '/zones/' + dnsimple_mod.params["name"] + '/records?name=' + dnsimple_mod.params["record"], 'GET'
return iterate_data(dnsimple_mod, req_obj)
def domain_info(dnsimple_mod, req_obj):
req_obj.url, req_obj.method = req_obj.url + '/zones/' + dnsimple_mod.params["name"] + '/records?per_page=100', 'GET'
return iterate_data(dnsimple_mod, req_obj)
def account_info(dnsimple_mod, req_obj):
req_obj.url, req_obj.method = req_obj.url + '/zones/?per_page=100', 'GET'
return iterate_data(dnsimple_mod, req_obj)
def main():
# define available arguments/parameters a user can pass to the module
fields = {
"account_id": {"required": True, "type": "str"},
"api_key": {"required": True, "type": "str", "no_log": True},
"name": {"required": False, "type": "str"},
"record": {"required": False, "type": "str"},
"sandbox": {"required": False, "type": "bool", "default": False}
}
result = {
'changed': False
}
module = AnsibleModule(
argument_spec=fields,
supports_check_mode=True
)
params = module.params
req = build_url(params['account_id'],
params['api_key'],
params['sandbox'])
if not HAS_ANOTHER_LIBRARY:
# Needs: from ansible.module_utils.basic import missing_required_lib
module.exit_json(
msg=missing_required_lib('another_library'),
exception=ANOTHER_LIBRARY_IMPORT_ERROR)
# At minimum we need account and key
if params['account_id'] and params['api_key']:
# If we have a record return info on that record
if params['name'] and params['record']:
result['dnsimple_record_info'] = record_info(module, req)
module.exit_json(**result)
# If we have the account only and domain, return records for the domain
elif params['name']:
result['dnsimple_records_info'] = domain_info(module, req)
module.exit_json(**result)
# If we have the account only, return domains
else:
result['dnsimple_domain_info'] = account_info(module, req)
module.exit_json(**result)
else:
module.fail_json(msg="Need at least account_id and api_key")
if __name__ == '__main__':
main()
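The dnsimple info helpers above all funnel through `iterate_data`, which walks the API's pages until the results run out. A minimal, self-contained sketch of that pagination pattern (the `fetch_page` callable is a hypothetical stub standing in for the module's re-sent `requests` object with `&page=N` appended):

```python
def iterate_pages(fetch_page, per_page=100):
    """Collect all records from a paginated API.

    fetch_page(page) returns the list of records for that page; a page
    shorter than per_page signals the last page.
    """
    page = 1
    data = []
    while True:
        batch = fetch_page(page)
        data.extend(batch)
        if len(batch) < per_page:
            break
        page += 1
    return data
```

This is a sketch under those assumptions, not the module's exact loop, which also mutates the shared request object's URL in place.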

View File

@@ -76,7 +76,7 @@ class Namespace(object):
def exists(self):
'''Check if the namespace already exists'''
rc, out, err = self.module.run_command('ip netns list')
rc, out, err = self.module.run_command(['ip', 'netns', 'list'])
if rc != 0:
self.module.fail_json(msg=to_text(err))
return self.name in out

View File

@@ -69,10 +69,11 @@ options:
type: str
ip4:
description:
- The IPv4 address to this interface.
- Use the format C(192.0.2.24/24).
- List of IPv4 addresses to this interface.
- Use the format C(192.0.2.24/24) or C(192.0.2.24).
- If defined and I(method4) is not specified, automatically set C(ipv4.method) to C(manual).
type: str
type: list
elements: str
gw4:
description:
- The IPv4 gateway for this interface.
@@ -142,10 +143,11 @@ options:
version_added: 3.3.0
ip6:
description:
- The IPv6 address to this interface.
- Use the format C(abbe::cafe).
- List of IPv6 addresses to this interface.
- Use the format C(abbe::cafe/128) or C(abbe::cafe).
- If defined and I(method6) is not specified, automatically set C(ipv6.method) to C(manual).
type: str
type: list
elements: str
gw6:
description:
- The IPv6 gateway for this interface.
@@ -182,6 +184,18 @@ options:
type: str
choices: [ignore, auto, dhcp, link-local, manual, shared, disabled]
version_added: 2.2.0
ip_privacy6:
description:
- If enabled, it makes the kernel generate a temporary IPv6 address in addition to the public one.
type: str
choices: [disabled, prefer-public-addr, prefer-temp-addr, unknown]
version_added: 4.2.0
addr_gen_mode6:
description:
- Configure method for creating the address for use with IPv6 Stateless Address Autoconfiguration.
type: str
choices: [eui64, stable-privacy]
version_added: 4.2.0
mtu:
description:
- The connection MTU, e.g. 9000. This can't be applied when creating the interface and is done once the interface has been created.
@@ -822,7 +836,9 @@ EXAMPLES = r'''
# nmcli_ethernet:
# - conn_name: em1
# ifname: em1
# ip4: '{{ tenant_ip }}'
# ip4:
# - '{{ tenant_ip }}'
# - '{{ second_tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: em2
# ifname: em2
@@ -844,6 +860,7 @@ EXAMPLES = r'''
# storage_ip: "192.0.2.91/23"
# external_ip: "198.51.100.23/21"
# tenant_ip: "203.0.113.77/23"
# second_tenant_ip: "204.0.113.77/23"
# ```
@@ -997,6 +1014,26 @@ EXAMPLES = r'''
type: ethernet
state: present
- name: Add second ip4 address
community.general.nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4:
- 192.0.2.100/24
- 192.0.3.100/24
state: present
- name: Add second ip6 address
community.general.nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip6:
- 2001:db8::cafe
- 2002:db8::cafe
state: present
- name: Add VxLan
community.general.nmcli:
type: vxlan
@@ -1157,6 +1194,8 @@ class Nmcli(object):
self.dns6_search = module.params['dns6_search']
self.dns6_ignore_auto = module.params['dns6_ignore_auto']
self.method6 = module.params['method6']
self.ip_privacy6 = module.params['ip_privacy6']
self.addr_gen_mode6 = module.params['addr_gen_mode6']
self.mtu = module.params['mtu']
self.stp = module.params['stp']
self.priority = module.params['priority']
@@ -1241,7 +1280,7 @@ class Nmcli(object):
# IP address options.
if self.ip_conn_type and not self.master:
options.update({
'ipv4.addresses': self.ip4,
'ipv4.addresses': self.enforce_ipv4_cidr_notation(self.ip4),
'ipv4.dhcp-client-id': self.dhcp_client_id,
'ipv4.dns': self.dns4,
'ipv4.dns-search': self.dns4_search,
@@ -1254,13 +1293,15 @@ class Nmcli(object):
'ipv4.never-default': self.never_default4,
'ipv4.method': self.ipv4_method,
'ipv4.may-fail': self.may_fail4,
'ipv6.addresses': self.ip6,
'ipv6.addresses': self.enforce_ipv6_cidr_notation(self.ip6),
'ipv6.dns': self.dns6,
'ipv6.dns-search': self.dns6_search,
'ipv6.ignore-auto-dns': self.dns6_ignore_auto,
'ipv6.gateway': self.gw6,
'ipv6.ignore-auto-routes': self.gw6_ignore_auto,
'ipv6.method': self.ipv6_method,
'ipv6.ip6-privacy': self.ip_privacy6,
'ipv6.addr-gen-mode': self.addr_gen_mode6
})
# Layer 2 options.
@@ -1332,6 +1373,9 @@ class Nmcli(object):
options.update({
'vlan.id': self.vlanid,
'vlan.parent': self.vlandev,
'vlan.flags': self.flags,
'vlan.ingress': self.ingress,
'vlan.egress': self.egress,
})
elif self.type == 'vxlan':
options.update({
@@ -1374,6 +1418,8 @@ class Nmcli(object):
elif setting == self.mtu_setting:
# MTU is 'auto' by default when detecting changes.
convert_func = self.mtu_to_string
elif setting == 'ipv6.ip6-privacy':
convert_func = self.ip6_privacy_to_num
elif setting_type is list:
# Convert lists to strings for nmcli create/modify commands.
convert_func = self.list_to_string
@@ -1427,6 +1473,23 @@ class Nmcli(object):
else:
return to_text(mtu)
@staticmethod
def ip6_privacy_to_num(privacy):
ip6_privacy_values = {
'disabled': '0',
'prefer-public-addr': '1 (enabled, prefer public IP)',
'prefer-temp-addr': '2 (enabled, prefer temporary IP)',
'unknown': '-1',
}
if privacy is None:
return None
if privacy not in ip6_privacy_values:
raise AssertionError('{privacy} is invalid ip_privacy6 option'.format(privacy=privacy))
return ip6_privacy_values[privacy]
@property
def slave_conn_type(self):
return self.type in (
@@ -1444,6 +1507,18 @@ class Nmcli(object):
'sit',
)
@staticmethod
def enforce_ipv4_cidr_notation(ip4_addresses):
if ip4_addresses is None:
return None
return [address if '/' in address else address + '/32' for address in ip4_addresses]
@staticmethod
def enforce_ipv6_cidr_notation(ip6_addresses):
if ip6_addresses is None:
return None
return [address if '/' in address else address + '/128' for address in ip6_addresses]
@staticmethod
def bool_to_string(boolean):
if boolean:
@@ -1468,7 +1543,9 @@ class Nmcli(object):
'ipv6.ignore-auto-routes',
'802-11-wireless.hidden'):
return bool
elif setting in ('ipv4.dns',
elif setting in ('ipv4.addresses',
'ipv6.addresses',
'ipv4.dns',
'ipv4.dns-search',
'ipv4.routes',
'ipv4.routing-rules',
@@ -1758,7 +1835,7 @@ def main():
'wifi',
'gsm',
]),
ip4=dict(type='str'),
ip4=dict(type='list', elements='str'),
gw4=dict(type='str'),
gw4_ignore_auto=dict(type='bool', default=False),
routes4=dict(type='list', elements='str'),
@@ -1771,13 +1848,15 @@ def main():
method4=dict(type='str', choices=['auto', 'link-local', 'manual', 'shared', 'disabled']),
may_fail4=dict(type='bool', default=True),
dhcp_client_id=dict(type='str'),
ip6=dict(type='str'),
ip6=dict(type='list', elements='str'),
gw6=dict(type='str'),
gw6_ignore_auto=dict(type='bool', default=False),
dns6=dict(type='list', elements='str'),
dns6_search=dict(type='list', elements='str'),
dns6_ignore_auto=dict(type='bool', default=False),
method6=dict(type='str', choices=['ignore', 'auto', 'dhcp', 'link-local', 'manual', 'shared', 'disabled']),
ip_privacy6=dict(type='str', choices=['disabled', 'prefer-public-addr', 'prefer-temp-addr', 'unknown']),
addr_gen_mode6=dict(type='str', choices=['eui64', 'stable-privacy']),
# Bond Specific vars
mode=dict(type='str', default='balance-rr',
choices=['802.3ad', 'active-backup', 'balance-alb', 'balance-rr', 'balance-tlb', 'balance-xor', 'broadcast']),
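The `enforce_ipv4_cidr_notation`/`enforce_ipv6_cidr_notation` helpers added in this diff share one idea: append a default prefix length when the user supplies a bare address. A condensed sketch of that behavior (the generic `enforce_cidr` name is illustrative, not the module's API):

```python
def enforce_cidr(addresses, default_prefix):
    # Mirrors the nmcli helpers: bare addresses get the host prefix
    # (/32 for IPv4, /128 for IPv6); addresses with a prefix pass through.
    if addresses is None:
        return None
    return [a if '/' in a else '%s/%d' % (a, default_prefix) for a in addresses]
```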

View File

@@ -127,7 +127,7 @@ ansible_sysname:
type: str
sample: ubuntu-user
ansible_syslocation:
description: The physical location of this node (e.g., `telephone closet, 3rd floor').
description: The physical location of this node (e.g., C(telephone closet, 3rd floor)).
returned: success
type: str
sample: Sitting on the Dock of the Bay

View File

@@ -10,7 +10,7 @@ __metaclass__ = type
DOCUMENTATION = """
module: ansible_galaxy_install
author:
- "Alexei Znamensky (@russoz)"
- "Alexei Znamensky (@russoz)"
short_description: Install Ansible roles or collections using ansible-galaxy
version_added: 3.5.0
description:
@@ -24,44 +24,46 @@ requirements:
options:
type:
description:
- The type of installation performed by C(ansible-galaxy).
- If I(type) is C(both), then I(requirements_file) must be passed and it may contain both roles and collections.
- "Note however that the opposite is not true: if using a I(requirements_file), then I(type) can be any of the three choices."
- "B(Ansible 2.9): The option C(both) will have the same effect as C(role)."
- The type of installation performed by C(ansible-galaxy).
- If I(type) is C(both), then I(requirements_file) must be passed and it may contain both roles and collections.
- "Note however that the opposite is not true: if using a I(requirements_file), then I(type) can be any of the three choices."
- "B(Ansible 2.9): The option C(both) will have the same effect as C(role)."
type: str
choices: [collection, role, both]
required: true
name:
description:
- Name of the collection or role being installed.
- Versions can be specified with C(ansible-galaxy) usual formats. For example, C(community.docker:1.6.1) or C(ansistrano.deploy,3.8.0).
- I(name) and I(requirements_file) are mutually exclusive.
- Name of the collection or role being installed.
- >
Versions can be specified with C(ansible-galaxy) usual formats.
For example, the collection C(community.docker:1.6.1) or the role C(ansistrano.deploy,3.8.0).
- I(name) and I(requirements_file) are mutually exclusive.
type: str
requirements_file:
description:
- Path to a file containing a list of requirements to be installed.
- It works for I(type) equals to C(collection) and C(role).
- I(name) and I(requirements_file) are mutually exclusive.
- "B(Ansible 2.9): It can only be used to install either I(type=role) or I(type=collection), but not both at the same run."
- Path to a file containing a list of requirements to be installed.
- It works for I(type) equals to C(collection) and C(role).
- I(name) and I(requirements_file) are mutually exclusive.
- "B(Ansible 2.9): It can only be used to install either I(type=role) or I(type=collection), but not both at the same run."
type: path
dest:
description:
- The path to the directory containing your collections or roles, according to the value of I(type).
- >
Please notice that C(ansible-galaxy) will not install collections with I(type=both), when I(requirements_file)
contains both roles and collections and I(dest) is specified.
- The path to the directory containing your collections or roles, according to the value of I(type).
- >
Please notice that C(ansible-galaxy) will not install collections with I(type=both), when I(requirements_file)
contains both roles and collections and I(dest) is specified.
type: path
force:
description:
- Force overwriting an existing role or collection.
- Using I(force=true) is mandatory when downgrading.
- "B(Ansible 2.9 and 2.10): Must be C(true) to upgrade roles and collections."
- Force overwriting an existing role or collection.
- Using I(force=true) is mandatory when downgrading.
- "B(Ansible 2.9 and 2.10): Must be C(true) to upgrade roles and collections."
type: bool
default: false
ack_ansible29:
description:
- Acknowledge using Ansible 2.9 with its limitations, and prevents the module from generating warnings about them.
- This option is completely ignored if using a version Ansible greater than C(2.9.x).
- Acknowledge using Ansible 2.9 with its limitations, and prevents the module from generating warnings about them.
- This option is completely ignored if using a version of Ansible greater than C(2.9.x).
type: bool
default: false
"""
@@ -114,9 +116,9 @@ RETURN = """
returned: always
installed_roles:
description:
- If I(requirements_file) is specified instead, returns dictionary with all the roles installed per path.
- If I(name) is specified, returns that role name and the version installed per path.
- "B(Ansible 2.9): Returns empty because C(ansible-galaxy) has no C(list) subcommand."
- If I(requirements_file) is specified instead, returns dictionary with all the roles installed per path.
- If I(name) is specified, returns that role name and the version installed per path.
- "B(Ansible 2.9): Returns empty because C(ansible-galaxy) has no C(list) subcommand."
type: dict
returned: always when installing roles
contains:
@@ -131,9 +133,9 @@ RETURN = """
ansistrano.deploy: 3.8.0
installed_collections:
description:
- If I(requirements_file) is specified instead, returns dictionary with all the collections installed per path.
- If I(name) is specified, returns that collection name and the version installed per path.
- "B(Ansible 2.9): Returns empty because C(ansible-galaxy) has no C(list) subcommand."
- If I(requirements_file) is specified instead, returns dictionary with all the collections installed per path.
- If I(name) is specified, returns that collection name and the version installed per path.
- "B(Ansible 2.9): Returns empty because C(ansible-galaxy) has no C(list) subcommand."
type: dict
returned: always when installing collections
contains:

View File

@@ -167,7 +167,7 @@ class PipX(CmdStateModuleHelper):
command_args_formats = dict(
state=dict(fmt=lambda v: [_state_map.get(v, v)]),
name_source=dict(fmt=lambda n, s: [s] if s else [n], stars=1),
install_deps=dict(fmt="--install-deps", style=ArgFormat.BOOLEAN),
install_deps=dict(fmt="--include-deps", style=ArgFormat.BOOLEAN),
inject_packages=dict(fmt=lambda v: v),
force=dict(fmt="--force", style=ArgFormat.BOOLEAN),
include_injected=dict(fmt="--include-injected", style=ArgFormat.BOOLEAN),

View File

@@ -102,6 +102,20 @@ packages:
returned: when upgrade is set to yes
type: list
sample: [ package, other-package ]
stdout:
description: Output from pacman.
returned: success, when needed
type: str
sample: ":: Synchronizing package databases... core is up to date :: Starting full system upgrade..."
version_added: 4.1.0
stderr:
description: Error output from pacman.
returned: success, when needed
type: str
sample: "warning: libtool: local (2.4.6+44+gb9b44533-14) is newer than core (2.4.6+42+gb88cebd5-15)\nwarning ..."
version_added: 4.1.0
'''
EXAMPLES = '''
@@ -236,9 +250,9 @@ def update_package_db(module, pacman_path):
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
return True
return stdout, stderr
else:
module.fail_json(msg="could not update package db")
module.fail_json(msg="could not update package db", stdout=stdout, stderr=stderr)
def upgrade(module, pacman_path):
@@ -273,11 +287,11 @@ def upgrade(module, pacman_path):
rc, stdout, stderr = module.run_command(cmdupgrade, check_rc=False)
if rc == 0:
if packages:
module.exit_json(changed=True, msg='System upgraded', packages=packages, diff=diff)
module.exit_json(changed=True, msg='System upgraded', packages=packages, diff=diff, stdout=stdout, stderr=stderr)
else:
module.exit_json(changed=False, msg='Nothing to upgrade', packages=packages)
else:
module.fail_json(msg="Could not upgrade")
module.fail_json(msg="Could not upgrade", stdout=stdout, stderr=stderr)
else:
module.exit_json(changed=False, msg='Nothing to upgrade', packages=packages)
@@ -293,6 +307,8 @@ def remove_packages(module, pacman_path, packages):
module.params["extra_args"] += " --nodeps --nodeps"
remove_c = 0
stdout_total = ""
stderr_total = ""
# Using a for loop in case of error, we can report the package that failed
for package in packages:
# Query the package first, to see if we even need to remove
@@ -304,8 +320,10 @@ def remove_packages(module, pacman_path, packages):
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to remove %s" % (package))
module.fail_json(msg="failed to remove %s" % (package), stdout=stdout, stderr=stderr)
stdout_total += stdout
stderr_total += stderr
if module._diff:
d = stdout.split('\n')[2].split(' ')[2:]
for i, pkg in enumerate(d):
@@ -316,7 +334,7 @@ def remove_packages(module, pacman_path, packages):
remove_c += 1
if remove_c > 0:
module.exit_json(changed=True, msg="removed %s package(s)" % remove_c, diff=diff)
module.exit_json(changed=True, msg="removed %s package(s)" % remove_c, diff=diff, stdout=stdout_total, stderr=stderr_total)
module.exit_json(changed=False, msg="package(s) already absent")
@@ -352,7 +370,7 @@ def install_packages(module, pacman_path, state, packages, package_files):
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_repos), stderr))
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_repos), stderr), stdout=stdout, stderr=stderr)
# Because we pass `--needed`, pacman returns a single line of ' there is nothing to do' if no change is performed.
# The check for > 3 is here because we pick the 4th line in normal operation.
@@ -371,7 +389,7 @@ def install_packages(module, pacman_path, state, packages, package_files):
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_files), stderr))
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_files), stderr), stdout=stdout, stderr=stderr)
# Because we pass `--needed`, pacman returns a single line of ' there is nothing to do' if no change is performed.
# The check for > 3 is here because we pick the 4th line in normal operation.
@@ -389,7 +407,7 @@ def install_packages(module, pacman_path, state, packages, package_files):
message = "But could not ensure 'latest' state for %s package(s) as remote version could not be fetched." % (package_err)
if install_c > 0:
module.exit_json(changed=True, msg="installed %s package(s). %s" % (install_c, message), diff=diff)
module.exit_json(changed=True, msg="installed %s package(s). %s" % (install_c, message), diff=diff, stdout=stdout, stderr=stderr)
module.exit_json(changed=False, msg="package(s) already installed. %s" % (message), diff=diff)
@@ -479,9 +497,9 @@ def main():
p['state'] = 'absent'
if p["update_cache"] and not module.check_mode:
update_package_db(module, pacman_path)
stdout, stderr = update_package_db(module, pacman_path)
if not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Updated the package master lists')
module.exit_json(changed=True, msg='Updated the package master lists', stdout=stdout, stderr=stderr)
if p['update_cache'] and module.check_mode and not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Would have updated the package cache')
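The pacman changes above thread `stdout`/`stderr` through every exit path by accumulating per-package output in `stdout_total`/`stderr_total`. A minimal sketch of that accumulation pattern (the `run` callable is a hypothetical stand-in for `module.run_command`):

```python
def remove_all(packages, run):
    # Accumulate output across the per-package loop so the final
    # result can report everything pacman printed, not just the last call.
    stdout_total, stderr_total, removed = "", "", 0
    for pkg in packages:
        rc, out, err = run(pkg)
        if rc != 0:
            raise RuntimeError("failed to remove %s" % pkg)
        stdout_total += out
        stderr_total += err
        removed += 1
    return removed, stdout_total, stderr_total
```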

View File

@@ -69,12 +69,13 @@ EXAMPLES = r'''
executable: /opt/hp/tools/hponcfg
'''
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.module_helper import (
CmdModuleHelper, ArgFormat
)
def main():
module = AnsibleModule(
class HPOnCfg(CmdModuleHelper):
module = dict(
argument_spec=dict(
src=dict(type='path', required=True, aliases=['path']),
minfw=dict(type='str'),
@@ -82,29 +83,24 @@ def main():
verbose=dict(default=False, type='bool'),
)
)
command_args_formats = dict(
src=dict(fmt=["-f", "{0}"]),
verbose=dict(fmt="-v", style=ArgFormat.BOOLEAN),
minfw=dict(fmt=["-m", "{0}"]),
)
check_rc = True
# Consider every action a change (not idempotent yet!)
changed = True
def __init_module__(self):
self.command = self.vars.executable
# Consider every action a change (not idempotent yet!)
self.changed = True
src = module.params['src']
minfw = module.params['minfw']
executable = module.params['executable']
verbose = module.params['verbose']
def __run__(self):
self.run_command(params=['src', 'verbose', 'minfw'])
options = ' -f %s' % src
if verbose:
options += ' -v'
if minfw:
options += ' -m %s' % minfw
rc, stdout, stderr = module.run_command('%s %s' % (executable, options))
if rc != 0:
module.fail_json(rc=rc, msg="Failed to run hponcfg", stdout=stdout, stderr=stderr)
module.exit_json(changed=changed, stdout=stdout, stderr=stderr)
def main():
HPOnCfg.execute()
if __name__ == '__main__':

View File

@@ -35,6 +35,12 @@ options:
- Password to connect to the BMC.
required: true
type: str
key:
description:
- Encryption key to connect to the BMC in hex format.
required: false
type: str
version_added: 4.1.0
bootdev:
description:
- Set boot device to use on next reboot
@@ -115,11 +121,13 @@ EXAMPLES = '''
name: test.testdomain.com
user: admin
password: password
key: 1234567890AABBCCDEFF000000EEEE12
bootdev: network
state: absent
'''
import traceback
import binascii
PYGHMI_IMP_ERR = None
try:
@@ -138,6 +146,7 @@ def main():
port=dict(default=623, type='int'),
user=dict(required=True, no_log=True),
password=dict(required=True, no_log=True),
key=dict(type='str', no_log=True),
state=dict(default='present', choices=['present', 'absent']),
bootdev=dict(required=True, choices=['network', 'hd', 'floppy', 'safe', 'optical', 'setup', 'default']),
persistent=dict(default=False, type='bool'),
@@ -162,10 +171,18 @@ def main():
if state == 'absent' and bootdev == 'default':
module.fail_json(msg="The bootdev 'default' cannot be used with state 'absent'.")
try:
if module.params['key']:
key = binascii.unhexlify(module.params['key'])
else:
key = None
except Exception as e:
module.fail_json(msg="Unable to convert 'key' from hex string.")
# --- run command ---
try:
ipmi_cmd = command.Command(
bmc=name, userid=user, password=password, port=port
bmc=name, userid=user, password=password, port=port, kg=key
)
module.debug('ipmi instantiated - name: "%s"' % name)
current = ipmi_cmd.get_bootdev()
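The new `key` option is accepted as a hex string and converted to bytes before being handed to pyghmi as `kg`. A sketch of that conversion and its failure mode (the `parse_kg_key` name is illustrative; the module does this inline):

```python
import binascii

def parse_kg_key(hex_key):
    # Convert the hex-encoded 'key' option to bytes, or None when unset.
    if not hex_key:
        return None
    try:
        return binascii.unhexlify(hex_key)
    except (binascii.Error, ValueError):
        raise ValueError("Unable to convert 'key' from hex string.")
```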

View File

@@ -35,6 +35,12 @@ options:
- Password to connect to the BMC.
required: true
type: str
key:
description:
- Encryption key to connect to the BMC in hex format.
required: false
type: str
version_added: 4.1.0
state:
description:
- Whether to ensure that the machine in desired state.
@@ -76,6 +82,7 @@ EXAMPLES = '''
'''
import traceback
import binascii
PYGHMI_IMP_ERR = None
try:
@@ -95,6 +102,7 @@ def main():
state=dict(required=True, choices=['on', 'off', 'shutdown', 'reset', 'boot']),
user=dict(required=True, no_log=True),
password=dict(required=True, no_log=True),
key=dict(type='str', no_log=True),
timeout=dict(default=300, type='int'),
),
supports_check_mode=True,
@@ -110,10 +118,18 @@ def main():
state = module.params['state']
timeout = module.params['timeout']
try:
if module.params['key']:
key = binascii.unhexlify(module.params['key'])
else:
key = None
except Exception as e:
module.fail_json(msg="Unable to convert 'key' from hex string.")
# --- run command ---
try:
ipmi_cmd = command.Command(
bmc=name, userid=user, password=password, port=port
bmc=name, userid=user, password=password, port=port, kg=key
)
module.debug('ipmi instantiated - name: "%s"' % name)

View File

@@ -0,0 +1,175 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2021-2022 Hewlett Packard Enterprise, Inc. All rights reserved.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: ilo_redfish_config
short_description: Sets or updates configuration attributes on HPE iLO with Redfish OEM extensions
version_added: 4.2.0
description:
- Builds Redfish URIs locally and sends them to iLO to
set or update a configuration attribute.
- For use with HPE iLO operations that require Redfish OEM extensions.
options:
category:
required: true
type: str
description:
- Command category to execute on iLO.
choices: ['Manager']
command:
required: true
description:
- List of commands to execute on iLO.
type: list
elements: str
baseuri:
required: true
description:
- Base URI of iLO.
type: str
username:
description:
- User for authentication with iLO.
type: str
password:
description:
- Password for authentication with iLO.
type: str
auth_token:
description:
- Security token for authentication with OOB controller.
type: str
timeout:
description:
- Timeout in seconds for URL requests to iLO controller.
default: 10
type: int
attribute_name:
required: true
description:
- Name of the attribute to be configured.
type: str
attribute_value:
required: false
description:
- Value of the attribute to be configured.
type: str
author:
- "Bhavya B (@bhavya06)"
'''
EXAMPLES = '''
- name: Disable WINS Registration
community.general.ilo_redfish_config:
category: Manager
command: SetWINSReg
baseuri: 15.X.X.X
username: Admin
password: Testpass123
attribute_name: WINSRegistration
- name: Set Time Zone
community.general.ilo_redfish_config:
category: Manager
command: SetTimeZone
baseuri: 15.X.X.X
username: Admin
password: Testpass123
attribute_name: TimeZone
attribute_value: Chennai
'''
RETURN = '''
msg:
description: Message with action result or error description
returned: always
type: str
sample: "Action was successful"
'''
CATEGORY_COMMANDS_ALL = {
"Manager": ["SetTimeZone", "SetDNSserver", "SetDomainName", "SetNTPServers", "SetWINSReg"]
}
from ansible_collections.community.general.plugins.module_utils.ilo_redfish_utils import iLORedfishUtils
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
def main():
result = {}
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True, choices=list(
CATEGORY_COMMANDS_ALL.keys())),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
attribute_name=dict(required=True),
attribute_value=dict(),
timeout=dict(type='int', default=10)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
category = module.params['category']
command_list = module.params['command']
creds = {"user": module.params['username'],
"pswd": module.params['password'],
"token": module.params['auth_token']}
timeout = module.params['timeout']
root_uri = "https://" + module.params['baseuri']
rf_utils = iLORedfishUtils(creds, root_uri, timeout, module)
mgr_attributes = {'mgr_attr_name': module.params['attribute_name'],
'mgr_attr_value': module.params['attribute_value']}
changed = False
offending = [
cmd for cmd in command_list if cmd not in CATEGORY_COMMANDS_ALL[category]]
if offending:
module.fail_json(msg=to_native("Invalid Command(s): '%s'. Allowed Commands = %s" % (
offending, CATEGORY_COMMANDS_ALL[category])))
if category == "Manager":
resource = rf_utils._find_managers_resource()
if not resource['ret']:
module.fail_json(msg=to_native(resource['msg']))
dispatch = dict(
SetTimeZone=rf_utils.set_time_zone,
SetDNSserver=rf_utils.set_dns_server,
SetDomainName=rf_utils.set_domain_name,
SetNTPServers=rf_utils.set_ntp_server,
SetWINSReg=rf_utils.set_wins_registration
)
for command in command_list:
result[command] = dispatch[command](mgr_attributes)
if 'changed' in result[command]:
changed |= result[command]['changed']
module.exit_json(ilo_redfish_config=result, changed=changed)
if __name__ == '__main__':
main()
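The module above validates the whole command list up front, collecting every command outside the allowed set so a single failure message can name them all. That check in isolation:

```python
CATEGORY_COMMANDS = {
    "Manager": ["SetTimeZone", "SetDNSserver", "SetDomainName",
                "SetNTPServers", "SetWINSReg"],
}

def invalid_commands(category, command_list):
    # Collect every disallowed command so they can all be reported at once.
    return [cmd for cmd in command_list if cmd not in CATEGORY_COMMANDS[category]]
```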

View File

@@ -0,0 +1,186 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2021-2022 Hewlett Packard Enterprise, Inc. All rights reserved.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: ilo_redfish_info
short_description: Gathers server information through iLO using Redfish APIs
version_added: 4.2.0
description:
- Builds Redfish URIs locally and sends them to iLO to
get information back.
- For use with HPE iLO operations that require Redfish OEM extensions.
options:
category:
required: true
description:
- List of categories to execute on iLO.
type: list
elements: str
command:
required: true
description:
- List of commands to execute on iLO.
type: list
elements: str
baseuri:
required: true
description:
- Base URI of iLO.
type: str
username:
description:
- User for authentication with iLO.
type: str
password:
description:
- Password for authentication with iLO.
type: str
auth_token:
description:
- Security token for authentication with iLO.
type: str
timeout:
description:
- Timeout in seconds for URL requests to iLO.
default: 10
type: int
author:
- "Bhavya B (@bhavya06)"
'''
EXAMPLES = '''
- name: Get iLO Sessions
community.general.ilo_redfish_info:
category: Sessions
command: GetiLOSessions
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
register: result_sessions
'''
RETURN = '''
ilo_redfish_info:
description: Returns iLO sessions.
type: dict
contains:
GetiLOSessions:
description: Returns the iLO session msg and whether the function executed successfully.
type: dict
contains:
ret:
description: Check variable to see if the information was successfully retrieved.
type: bool
msg:
description: Information of all active iLO sessions.
type: list
elements: dict
contains:
Description:
description: Provides a description of the resource.
type: str
Id:
description: The sessionId.
type: str
Name:
description: The name of the resource.
type: str
UserName:
description: Name to use to log in to the management processor.
type: str
returned: always
'''
CATEGORY_COMMANDS_ALL = {
"Sessions": ["GetiLOSessions"]
}
CATEGORY_COMMANDS_DEFAULT = {
"Sessions": "GetiLOSessions"
}
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible_collections.community.general.plugins.module_utils.ilo_redfish_utils import iLORedfishUtils
def main():
result = {}
category_list = []
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True, type='list', elements='str'),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
timeout=dict(type='int', default=10)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=True
)
creds = {"user": module.params['username'],
"pswd": module.params['password'],
"token": module.params['auth_token']}
timeout = module.params['timeout']
root_uri = "https://" + module.params['baseuri']
rf_utils = iLORedfishUtils(creds, root_uri, timeout, module)
# Build Category list
if "all" in module.params['category']:
for entry in CATEGORY_COMMANDS_ALL:
category_list.append(entry)
else:
# one or more categories specified
category_list = module.params['category']
for category in category_list:
command_list = []
# Build Command list for each Category
if category in CATEGORY_COMMANDS_ALL:
if not module.params['command']:
# True if we don't specify a command --> use default
command_list.append(CATEGORY_COMMANDS_DEFAULT[category])
elif "all" in module.params['command']:
for entry in CATEGORY_COMMANDS_ALL[category]:
command_list.append(entry)
# one or more commands
else:
command_list = module.params['command']
# Verify that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg="Invalid Command: %s" % cmd)
else:
# Fail if even one category given is invalid
module.fail_json(msg="Invalid Category: %s" % category)
# Organize by Categories / Commands
if category == "Sessions":
for command in command_list:
if command == "GetiLOSessions":
result[command] = rf_utils.get_ilo_sessions()
module.exit_json(ilo_redfish_info=result)
if __name__ == '__main__':
main()
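The command-list expansion in this module follows the redfish convention: an empty list falls back to the category default, `"all"` expands to every command for the category, and anything else is taken literally. A condensed sketch of those rules (the `build_command_list` helper name is illustrative):

```python
CATEGORY_COMMANDS_ALL = {"Sessions": ["GetiLOSessions"]}
CATEGORY_COMMANDS_DEFAULT = {"Sessions": "GetiLOSessions"}

def build_command_list(category, commands):
    # No commands -> category default; "all" -> every command; else as given.
    if not commands:
        return [CATEGORY_COMMANDS_DEFAULT[category]]
    if "all" in commands:
        return list(CATEGORY_COMMANDS_ALL[category])
    return list(commands)
```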

View File

@@ -100,6 +100,18 @@ options:
type: bool
default: false
version_added: 3.7.0
hostinterface_config:
required: false
description:
- Setting dict of HostInterface on OOB controller.
type: dict
version_added: '4.1.0'
hostinterface_id:
required: false
description:
- Redfish HostInterface instance ID if multiple HostInterfaces are present.
type: str
version_added: '4.1.0'
author: "Jose Delarosa (@jose-delarosa)"
'''
@@ -201,6 +213,27 @@ EXAMPLES = '''
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Disable Host Interface
community.general.redfish_config:
category: Manager
command: SetHostInterface
hostinterface_config:
InterfaceEnabled: false
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Enable Host Interface for HostInterface resource ID '2'
community.general.redfish_config:
category: Manager
command: SetHostInterface
hostinterface_config:
InterfaceEnabled: true
hostinterface_id: "2"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
'''
RETURN = '''
@@ -220,7 +253,7 @@ from ansible.module_utils.common.text.converters import to_native
CATEGORY_COMMANDS_ALL = {
"Systems": ["SetBiosDefaultSettings", "SetBiosAttributes", "SetBootOrder",
"SetDefaultBootOrder"],
"Manager": ["SetNetworkProtocols", "SetManagerNic"]
"Manager": ["SetNetworkProtocols", "SetManagerNic", "SetHostInterface"]
}
@@ -248,6 +281,8 @@ def main():
default={}
),
strip_etag_quotes=dict(type='bool', default=False),
hostinterface_config=dict(type='dict', default={}),
hostinterface_id=dict(),
),
required_together=[
('username', 'password'),
@@ -288,6 +323,12 @@ def main():
# Etag options
strip_etag_quotes = module.params['strip_etag_quotes']
# HostInterface config options
hostinterface_config = module.params['hostinterface_config']
# HostInterface instance ID
hostinterface_id = module.params['hostinterface_id']
# Build root URI
root_uri = "https://" + module.params['baseuri']
rf_utils = RedfishUtils(creds, root_uri, timeout, module,
@@ -331,6 +372,8 @@ def main():
result = rf_utils.set_network_protocols(module.params['network_protocols'])
elif command == "SetManagerNic":
result = rf_utils.set_manager_nic(nic_addr, nic_config)
elif command == "SetHostInterface":
result = rf_utils.set_hostinterface_attributes(hostinterface_config, hostinterface_id)
# Return data back or fail with proper message
if result['ret'] is True:


@@ -269,6 +269,14 @@ EXAMPLES = '''
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Get manager Redfish Host Interface inventory
community.general.redfish_info:
category: Manager
command: GetHostInterfaces
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
'''
RETURN = '''
@@ -293,7 +301,7 @@ CATEGORY_COMMANDS_ALL = {
"Sessions": ["GetSessions"],
"Update": ["GetFirmwareInventory", "GetFirmwareUpdateCapabilities", "GetSoftwareInventory"],
"Manager": ["GetManagerNicInventory", "GetVirtualMedia", "GetLogs", "GetNetworkProtocols",
"GetHealthReport"],
"GetHealthReport", "GetHostInterfaces"],
}
CATEGORY_COMMANDS_DEFAULT = {
@@ -475,6 +483,8 @@ def main():
result["network_protocols"] = rf_utils.get_network_protocols()
elif command == "GetHealthReport":
result["health_report"] = rf_utils.get_multi_manager_health_report()
elif command == "GetHostInterfaces":
result["host_interfaces"] = rf_utils.get_hostinterfaces()
# Return data back
module.exit_json(redfish_facts=result)


@@ -42,16 +42,18 @@ options:
description:
description:
- Description for the repository.
- Defaults to empty if I(force_defaults=true), which is the default in this module.
- Defaults to empty if I(force_defaults=false) when creating a new repository.
- This is only used when I(state) is C(present).
type: str
default: ''
required: false
private:
description:
- Whether the new repository should be private or not.
- Whether the repository should be private or not.
- Defaults to C(false) if I(force_defaults=true), which is the default in this module.
- Defaults to C(false) if I(force_defaults=false) when creating a new repository.
- This is only used when I(state) is C(present).
type: bool
default: no
required: false
state:
description:
@@ -72,6 +74,14 @@ options:
type: str
default: 'https://api.github.com'
version_added: "3.5.0"
force_defaults:
description:
- Overwrite current I(description) and I(private) attributes with defaults if set to C(true), which currently is the default.
- The default for this option will be deprecated in a future version of this collection, and eventually change to C(false).
type: bool
default: true
required: false
version_added: 4.1.0
requirements:
- PyGithub>=1.54
notes:
@@ -92,6 +102,7 @@ EXAMPLES = '''
description: "Just for fun"
private: yes
state: present
force_defaults: no
register: result
- name: Delete the repository
@@ -117,7 +128,7 @@ import sys
GITHUB_IMP_ERR = None
try:
from github import Github, GithubException
from github import Github, GithubException, GithubObject
from github.GithubException import UnknownObjectException
HAS_GITHUB_PACKAGE = True
except Exception:
@@ -135,7 +146,7 @@ def authenticate(username=None, password=None, access_token=None, api_url=None):
return Github(base_url=api_url, login_or_token=username, password=password)
def create_repo(gh, name, organization=None, private=False, description='', check_mode=False):
def create_repo(gh, name, organization=None, private=None, description=None, check_mode=False):
result = dict(
changed=False,
repo=dict())
@@ -151,16 +162,21 @@ def create_repo(gh, name, organization=None, private=False, description='', chec
except UnknownObjectException:
if not check_mode:
repo = target.create_repo(
name=name, private=private, description=description)
name=name,
private=GithubObject.NotSet if private is None else private,
description=GithubObject.NotSet if description is None else description,
)
result['repo'] = repo.raw_data
result['changed'] = True
changes = {}
if repo is None or repo.raw_data['private'] != private:
changes['private'] = private
if repo is None or repo.raw_data['description'] != description:
changes['description'] = description
if private is not None:
if repo is None or repo.raw_data['private'] != private:
changes['private'] = private
if description is not None:
if repo is None or repo.raw_data['description'] not in (description, description or None):
changes['description'] = description
if changes:
if not check_mode:
@@ -193,6 +209,10 @@ def delete_repo(gh, name, organization=None, check_mode=False):
def run_module(params, check_mode=False):
if params['force_defaults']:
params['description'] = params['description'] or ''
params['private'] = params['private'] or False
gh = authenticate(
username=params['username'], password=params['password'], access_token=params['access_token'],
api_url=params['api_url'])
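The None-handling introduced above implements a tri-state default: None means "do not send the field, let GitHub keep its own default". A minimal sketch of the pattern, with a plain sentinel standing in for PyGithub's GithubObject.NotSet:

```python
# NOT_SET stands in for PyGithub's GithubObject.NotSet sentinel:
# fields left as None are not sent, so the API applies its own default.
NOT_SET = object()

def build_create_kwargs(private=None, description=None):
    """Map None to the sentinel; pass explicit values through unchanged."""
    return {
        "private": NOT_SET if private is None else private,
        "description": NOT_SET if description is None else description,
    }
```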
@@ -216,17 +236,17 @@ def run_module(params, check_mode=False):
def main():
module_args = dict(
username=dict(type='str', required=False, default=None),
password=dict(type='str', required=False, default=None, no_log=True),
access_token=dict(type='str', required=False,
default=None, no_log=True),
username=dict(type='str'),
password=dict(type='str', no_log=True),
access_token=dict(type='str', no_log=True),
name=dict(type='str', required=True),
state=dict(type='str', required=False, default="present",
choices=["present", "absent"]),
organization=dict(type='str', required=False, default=None),
private=dict(type='bool', required=False, default=False),
description=dict(type='str', required=False, default=''),
private=dict(type='bool'),
description=dict(type='str'),
api_url=dict(type='str', required=False, default='https://api.github.com'),
force_defaults=dict(type='bool', default=True),
)
module = AnsibleModule(
argument_spec=module_args,


@@ -0,0 +1,184 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Werner Dijkerman (ikben@werner-dijkerman.nl)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: gitlab_branch
short_description: Create or delete a branch
version_added: 4.2.0
description:
- This module allows creating or deleting branches.
author:
- paytroff (@paytroff)
requirements:
- python >= 2.7
- python-gitlab >= 2.3.0
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
state:
description:
- Create or delete branch.
default: present
type: str
choices: ["present", "absent"]
project:
description:
- The path or name of the project.
required: true
type: str
branch:
description:
- The name of the branch that needs to be created.
required: true
type: str
ref_branch:
description:
- Reference branch to create from.
- This must be specified if I(state=present).
type: str
'''
EXAMPLES = '''
- name: Create branch branch2 from main
community.general.gitlab_branch:
api_url: https://gitlab.com
api_token: secret_access_token
project: "group1/project1"
branch: branch2
ref_branch: main
state: present
- name: Delete branch branch2
community.general.gitlab_branch:
api_url: https://gitlab.com
api_token: secret_access_token
project: "group1/project1"
branch: branch2
state: absent
'''
RETURN = '''
'''
import traceback
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.api import basic_auth_argument_spec
from distutils.version import LooseVersion
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
class GitlabBranch(object):
def __init__(self, module, project, gitlab_instance):
self.repo = gitlab_instance
self._module = module
self.project = self.get_project(project)
def get_project(self, project):
try:
return self.repo.projects.get(project)
except Exception as e:
return False
def get_branch(self, branch):
try:
return self.project.branches.get(branch)
except Exception as e:
return False
def create_branch(self, branch, ref_branch):
return self.project.branches.create({'branch': branch, 'ref': ref_branch})
def delete_branch(self, branch):
branch.unprotect()
return branch.delete()
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(
project=dict(type='str', required=True),
branch=dict(type='str', required=True),
ref_branch=dict(type='str', required=False),
state=dict(type='str', default="present", choices=["absent", "present"]),
)
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
required_if=[
['state', 'present', ['ref_branch'], True],
],
supports_check_mode=False
)
project = module.params['project']
branch = module.params['branch']
ref_branch = module.params['ref_branch']
state = module.params['state']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_version = gitlab.__version__
if LooseVersion(gitlab_version) < LooseVersion('2.3.0'):
module.fail_json(msg="community.general.gitlab_branch requires python-gitlab Python module >= 2.3.0 (installed version: [%s])."
" Please upgrade python-gitlab to version 2.3.0 or above." % gitlab_version)
gitlab_instance = gitlab_authentication(module)
this_gitlab = GitlabBranch(module=module, project=project, gitlab_instance=gitlab_instance)
this_branch = this_gitlab.get_branch(branch)
if not this_branch and state == "present":
r_branch = this_gitlab.get_branch(ref_branch)
if not r_branch:
module.fail_json(msg="Ref branch {b} does not exist.".format(b=ref_branch))
this_gitlab.create_branch(branch, ref_branch)
module.exit_json(changed=True, msg="Created the branch {b}.".format(b=branch))
elif this_branch and state == "present":
module.exit_json(changed=False, msg="Branch {b} already exists.".format(b=branch))
elif this_branch and state == "absent":
try:
this_gitlab.delete_branch(this_branch)
module.exit_json(changed=True, msg="Branch {b} deleted.".format(b=branch))
except Exception as e:
module.fail_json(msg="Error deleting branch.", exception=traceback.format_exc())
else:
module.exit_json(changed=False, msg="No changes are needed.")
if __name__ == '__main__':
main()
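The present/absent branching in main() above reduces to a small decision table. A pure-logic sketch (no GitLab calls; the function name and action labels are illustrative):

```python
def plan_action(branch_exists, state):
    """Return the action the module would take for a given branch/state pair."""
    if state == "present":
        # Create only when the branch is missing; otherwise nothing to do
        return "noop" if branch_exists else "create"
    # state == "absent": delete only when the branch actually exists
    return "delete" if branch_exists else "noop"
```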


@@ -14,7 +14,7 @@ DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys.
description:
- Adds, updates and removes project deploy keys
- Adds, updates and removes project deploy keys
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
@@ -22,13 +22,10 @@ requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- community.general.auth_basic
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- GitLab token for logging in.
type: str
project:
description:
- Id or Full path of project in the form of group/name.
@@ -126,51 +123,51 @@ from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.gitlab import findProject, gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, find_project, gitlab_authentication
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
self.deploy_key_object = None
'''
@param project Project object
@param key_title Title of the key
@param key_key String of the key
@param key_can_push Option of the deployKey
@param key_can_push Option of the deploy_key
@param options Deploy key options
'''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
def create_or_update_deploy_key(self, project, key_title, key_key, options):
changed = False
# note: unfortunately public key cannot be updated directly by
# GitLab REST API, so for that case we need to delete and
# then recreate the key
if self.deployKeyObject and self.deployKeyObject.key != key_key:
if self.deploy_key_object and self.deploy_key_object.key != key_key:
if not self._module.check_mode:
self.deployKeyObject.delete()
self.deployKeyObject = None
self.deploy_key_object.delete()
self.deploy_key_object = None
# Because we have already call existsDeployKey in main()
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
# Because we have already called exists_deploy_key in main()
if self.deploy_key_object is None:
deploy_key = self.create_deploy_key(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
changed, deploy_key = self.update_deploy_key(self.deploy_key_object, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
self.deploy_key_object = deploy_key
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
deploy_key.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
@@ -179,67 +176,67 @@ class GitLabDeployKey(object):
'''
@param project Project Object
@param arguments Attributes of the deployKey
@param arguments Attributes of the deploy_key
'''
def createDeployKey(self, project, arguments):
def create_deploy_key(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
deploy_key = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
return deploy_key
'''
@param deployKey Deploy Key Object
@param arguments Attributes of the deployKey
@param deploy_key Deploy Key Object
@param arguments Attributes of the deploy_key
'''
def updateDeployKey(self, deployKey, arguments):
def update_deploy_key(self, deploy_key, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
if getattr(deploy_key, arg_key) != arguments[arg_key]:
setattr(deploy_key, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
return (changed, deploy_key)
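update_deploy_key above uses a generic "compare, then set" loop that only touches attributes that differ and reports whether anything changed. The same pattern, isolated (the helper name is illustrative):

```python
def update_attrs(obj, arguments):
    """Set only the attributes that actually differ; report if any changed."""
    changed = False
    for key, value in arguments.items():
        # None means "not specified", so it never overwrites an existing value
        if value is not None and getattr(obj, key) != value:
            setattr(obj, key, value)
            changed = True
    return changed
```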
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list(all=True)
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey
def find_deploy_key(self, project, key_title):
deploy_keys = project.keys.list(all=True)
for deploy_key in deploy_keys:
if (deploy_key.title == key_title):
return deploy_key
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
# When project exists, object will be stored in self.projectObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
def exists_deploy_key(self, project, key_title):
# When project exists, object will be stored in self.project_object.
deploy_key = self.find_deploy_key(project, key_title)
if deploy_key:
self.deploy_key_object = deploy_key
return True
return False
def deleteDeployKey(self):
def delete_deploy_key(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
return self.deploy_key_object.delete()
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True, no_log=False),
@@ -251,13 +248,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True,
)
@@ -271,32 +271,32 @@ def main():
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
project = find_project(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exist" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
deploy_key_exists = gitlab_deploy_key.exists_deploy_key(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
if deploy_key_exists:
gitlab_deploy_key.delete_deploy_key()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exist")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
if gitlab_deploy_key.create_or_update_deploy_key(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
deploy_key=gitlab_deploy_key.deploy_key_object._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
deploy_key=gitlab_deploy_key.deploy_key_object._attrs)
if __name__ == '__main__':


@@ -22,13 +22,10 @@ requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- community.general.auth_basic
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- GitLab token for logging in.
type: str
name:
description:
- Name of the group you want to create.
@@ -83,6 +80,12 @@ options:
- Require all users in this group to setup two-factor authentication.
type: bool
version_added: 3.7.0
avatar_path:
description:
- Absolute path of the image used as the avatar. File size should not exceed 200 KB.
- This option is only used on creation, not for updates.
type: path
version_added: 4.2.0
'''
EXAMPLES = '''
@@ -169,19 +172,19 @@ from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.gitlab import findGroup, gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, find_group, gitlab_authentication
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
self.group_object = None
'''
@param group Group object
'''
def getGroupId(self, group):
def get_group_id(self, group):
if group is not None:
return group.id
return None
@@ -191,12 +194,12 @@ class GitLabGroup(object):
@param parent Parent group full path
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
def create_or_update_group(self, name, parent, options):
changed = False
# Because we have already call userExists in main()
if self.groupObject is None:
parent_id = self.getGroupId(parent)
if self.group_object is None:
parent_id = self.get_group_id(parent)
payload = {
'name': name,
@@ -211,10 +214,17 @@ class GitLabGroup(object):
payload['description'] = options['description']
if options.get('require_two_factor_authentication'):
payload['require_two_factor_authentication'] = options['require_two_factor_authentication']
group = self.createGroup(payload)
group = self.create_group(payload)
# add avatar to group
if options['avatar_path']:
try:
group.avatar = open(options['avatar_path'], 'rb')
except IOError as e:
self._module.fail_json(msg='Cannot open {0}: {1}'.format(options['avatar_path'], e))
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
changed, group = self.update_group(self.group_object, {
'name': name,
'description': options['description'],
'visibility': options['visibility'],
@@ -224,7 +234,7 @@ class GitLabGroup(object):
'require_two_factor_authentication': options['require_two_factor_authentication'],
})
self.groupObject = group
self.group_object = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
@@ -240,7 +250,7 @@ class GitLabGroup(object):
'''
@param arguments Attributes of the group
'''
def createGroup(self, arguments):
def create_group(self, arguments):
if self._module.check_mode:
return True
@@ -255,7 +265,7 @@ class GitLabGroup(object):
@param group Group Object
@param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
def update_group(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
@@ -266,8 +276,8 @@ class GitLabGroup(object):
return (changed, group)
def deleteGroup(self):
group = self.groupObject
def delete_group(self):
group = self.group_object
if len(group.projects.list()) >= 1:
self._module.fail_json(
@@ -285,19 +295,19 @@ class GitLabGroup(object):
@param name Name of the group
@param full_path Complete path of the Group including parent group path. <parent_path>/<group_path>
'''
def existsGroup(self, project_identifier):
# When group/user exists, object will be stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
def exists_group(self, project_identifier):
# When group/user exists, object will be stored in self.group_object.
group = find_group(self._gitlab, project_identifier)
if group:
self.groupObject = group
self.group_object = group
return True
return False
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
@@ -308,19 +318,23 @@ def main():
auto_devops_enabled=dict(type='bool'),
subgroup_creation_level=dict(type='str', choices=['maintainer', 'owner']),
require_two_factor_authentication=dict(type='bool'),
avatar_path=dict(type='path'),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True,
)
@@ -335,11 +349,12 @@ def main():
auto_devops_enabled = module.params['auto_devops_enabled']
subgroup_creation_level = module.params['subgroup_creation_level']
require_two_factor_authentication = module.params['require_two_factor_authentication']
avatar_path = module.params['avatar_path']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
# Define default group_path based on group_name
if group_path is None:
@@ -349,34 +364,35 @@ def main():
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
parent_group = find_group(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed to create GitLab group: parent group doesn't exist")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
group_exists = gitlab_group.exists_group(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
group_exists = gitlab_group.exists_group(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
gitlab_group.delete_group()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exist")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility,
"project_creation_level": project_creation_level,
"auto_devops_enabled": auto_devops_enabled,
"subgroup_creation_level": subgroup_creation_level,
"require_two_factor_authentication": require_two_factor_authentication,
}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if gitlab_group.create_or_update_group(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility,
"project_creation_level": project_creation_level,
"auto_devops_enabled": auto_devops_enabled,
"subgroup_creation_level": subgroup_creation_level,
"require_two_factor_authentication": require_two_factor_authentication,
"avatar_path": avatar_path,
}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.group_object._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.group_object._attrs)
if __name__ == '__main__':


@@ -12,78 +12,76 @@ DOCUMENTATION = r'''
module: gitlab_group_members
short_description: Manage group members on GitLab Server
description:
- This module allows to add and remove members to/from a group, or change a member's access level in a group on GitLab.
- This module allows to add and remove members to/from a group, or change a member's access level in a group on GitLab.
version_added: '1.2.0'
author: Zainab Alsaffar (@zanssa)
requirements:
- python-gitlab python module <= 1.15.0
- administrator rights on the GitLab server
extends_documentation_fragment: community.general.auth_basic
- python-gitlab python module <= 1.15.0
- administrator rights on the GitLab server
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- A personal access token to authenticate with the GitLab API.
required: true
gitlab_group:
description:
- The C(full_path) of the GitLab group the member is added to/removed from.
- Setting this to C(name) or C(path) is deprecated and will be removed in community.general 6.0.0. Use C(full_path) instead.
required: true
type: str
gitlab_user:
description:
- A username or a list of usernames to add to/remove from the GitLab group.
- Mutually exclusive with I(gitlab_users_access).
type: list
elements: str
access_level:
description:
- The access level for the user.
- Required if I(state=present), user state is set to present.
- Mutually exclusive with I(gitlab_users_access).
type: str
choices: ['guest', 'reporter', 'developer', 'maintainer', 'owner']
gitlab_users_access:
description:
- Provide a list of user to access level mappings.
- Every dictionary in this list specifies a user (by username) and the access level the user should have.
- Mutually exclusive with I(gitlab_user) and I(access_level).
- Use together with I(purge_users) to remove all users not specified here from the group.
type: list
elements: dict
suboptions:
name:
description: A username or a list of usernames to add to/remove from the GitLab group.
type: str
gitlab_group:
description:
- The C(full_path) of the GitLab group the member is added to/removed from.
- Setting this to C(name) or C(path) is deprecated and will be removed in community.general 6.0.0. Use C(full_path) instead.
required: true
type: str
  gitlab_user:
    description:
      - A username or a list of usernames to add to/remove from the GitLab group.
      - Mutually exclusive with I(gitlab_users_access).
    type: list
    elements: str
  access_level:
    description:
      - The access level for the user.
      - Required if I(state=present), user state is set to present.
      - Mutually exclusive with I(gitlab_users_access).
    type: str
    choices: ['guest', 'reporter', 'developer', 'maintainer', 'owner']
  gitlab_users_access:
    description:
      - Provide a list of user to access level mappings.
      - Every dictionary in this list specifies a user (by username) and the access level the user should have.
      - Mutually exclusive with I(gitlab_user) and I(access_level).
      - Use together with I(purge_users) to remove all users not specified here from the group.
    type: list
    elements: dict
    suboptions:
      name:
        description: A username or a list of usernames to add to/remove from the GitLab group.
        type: str
        required: true
      access_level:
        description:
          - The access level for the user.
          - Required if I(state=present), user state is set to present.
        type: str
        choices: ['guest', 'reporter', 'developer', 'maintainer', 'owner']
        required: true
    version_added: 3.6.0
  state:
    description:
      - State of the member in the group.
      - On C(present), it adds a user to a GitLab group.
      - On C(absent), it removes a user from a GitLab group.
    choices: ['present', 'absent']
    default: 'present'
    type: str
  purge_users:
    description:
      - Adds/remove users of the given access_level to match the given I(gitlab_user)/I(gitlab_users_access) list.
        If omitted do not purge orphaned members.
      - Is only used when I(state=present).
    type: list
    elements: str
    choices: ['guest', 'reporter', 'developer', 'maintainer', 'owner']
    version_added: 3.6.0
notes:
  - Supports C(check_mode).
'''
EXAMPLES = r'''
@@ -155,7 +153,7 @@ RETURN = r''' # '''
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible_collections.community.general.plugins.module_utils.gitlab import gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
import traceback
@@ -241,8 +239,8 @@ class GitLabGroup(object):
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', required=True, no_log=True),
gitlab_group=dict(type='str', required=True),
gitlab_user=dict(type='list', elements='str'),
state=dict(type='str', default='present', choices=['present', 'absent']),
@@ -262,16 +260,19 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['gitlab_user', 'gitlab_users_access'],
['access_level', 'gitlab_users_access'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
['gitlab_user', 'access_level'],
],
required_one_of=[
['api_username', 'api_token'],
['api_username', 'api_token', 'api_oauth_token', 'api_job_token'],
['gitlab_user', 'gitlab_users_access'],
],
required_if=[
@@ -288,7 +289,7 @@ def main():
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS
'owner': gitlab.OWNER_ACCESS,
}
gitlab_group = module.params['gitlab_group']
@@ -300,7 +301,7 @@ def main():
purge_users = [access_level_int[level] for level in purge_users]
# connect to gitlab server
gl = gitlabAuthentication(module)
gl = gitlab_authentication(module)
group = GitLabGroup(module, gl)
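The new `mutually_exclusive` and `required_one_of` rules above are enforced by `AnsibleModule` itself. A minimal pure-Python sketch of that validation logic (a hypothetical `check_auth_params` helper, not Ansible's actual implementation) shows how the four credential kinds interact:

```python
def check_auth_params(params, mutually_exclusive, required_one_of):
    """Validate auth parameters the way AnsibleModule does (simplified sketch)."""
    provided = {k for k, v in params.items() if v is not None}
    for group in mutually_exclusive:
        clash = provided.intersection(group)
        if len(clash) > 1:
            return "parameters are mutually exclusive: %s" % "|".join(sorted(clash))
    for group in required_one_of:
        if not provided.intersection(group):
            return "one of the following is required: %s" % ", ".join(group)
    return None  # no error message means the parameter set is acceptable


# The rule set added by this change: the token kinds exclude one another,
# but any single one of them (or username/password) is enough.
MUTUALLY_EXCLUSIVE = [
    ["api_username", "api_token"],
    ["api_token", "api_oauth_token"],
    ["api_token", "api_job_token"],
]
REQUIRED_ONE_OF = [
    ["api_username", "api_token", "api_oauth_token", "api_job_token"],
]
```

With these rules, passing only `api_token` is fine, passing nothing fails the `required_one_of` check, and passing both `api_token` and `api_oauth_token` fails the `mutually_exclusive` check.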


@@ -24,6 +24,7 @@ requirements:
- python-gitlab python module
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
state:
@@ -32,11 +33,6 @@ options:
default: present
type: str
choices: ["present", "absent"]
api_token:
description:
- GitLab access token with API permissions.
required: true
type: str
group:
description:
- The path and name of the group.
@@ -144,7 +140,7 @@ except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible_collections.community.general.plugins.module_utils.gitlab import gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
class GitlabGroupVariables(object):
@@ -170,9 +166,13 @@ class GitlabGroupVariables(object):
def create_variable(self, key, value, masked, protected, variable_type):
if self._module.check_mode:
return
return self.group.variables.create({"key": key, "value": value,
"masked": masked, "protected": protected,
"variable_type": variable_type})
return self.group.variables.create({
"key": key,
"value": value,
"masked": masked,
"protected": protected,
"variable_type": variable_type,
})
def update_variable(self, key, var, value, masked, protected, variable_type):
if var.value == value and var.protected == protected and var.masked == masked and var.variable_type == variable_type:
@@ -226,11 +226,14 @@ def native_python_main(this_gitlab, purge, var_list, state, module):
existing_variables[index] = None
if state == 'present':
single_change = this_gitlab.update_variable(key,
gitlab_keys[index],
value, masked,
protected,
variable_type)
single_change = this_gitlab.update_variable(
key,
gitlab_keys[index],
value,
masked,
protected,
variable_type,
)
change = single_change or change
if single_change:
return_value['updated'].append(key)
@@ -261,8 +264,8 @@ def native_python_main(this_gitlab, purge, var_list, state, module):
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(
api_token=dict(type='str', required=True, no_log=True),
group=dict(type='str', required=True),
purge=dict(type='bool', required=False, default=False),
vars=dict(type='dict', required=False, default=dict(), no_log=True),
@@ -273,13 +276,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True
)
@@ -291,7 +297,7 @@ def main():
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
this_gitlab = GitlabGroupVariables(module=module, gitlab_instance=gitlab_instance)


@@ -15,7 +15,7 @@ DOCUMENTATION = '''
module: gitlab_hook
short_description: Manages GitLab project hooks.
description:
- Adds, updates and removes project hook
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
@@ -23,13 +23,10 @@ requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- GitLab token for logging in.
type: str
project:
description:
- Id or Full path of the project in the form of group/name.
@@ -176,14 +173,14 @@ from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.gitlab import findProject, gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, find_project, gitlab_authentication
class GitLabHook(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.hookObject = None
self.hook_object = None
'''
@param project Project Object
@@ -191,12 +188,12 @@ class GitLabHook(object):
@param description Description of the group
@param parent Parent group full path
'''
def createOrUpdateHook(self, project, hook_url, options):
def create_or_update_hook(self, project, hook_url, options):
changed = False
# Because we have already call userExists in main()
if self.hookObject is None:
hook = self.createHook(project, {
if self.hook_object is None:
hook = self.create_hook(project, {
'url': hook_url,
'push_events': options['push_events'],
'push_events_branch_filter': options['push_events_branch_filter'],
@@ -208,10 +205,11 @@ class GitLabHook(object):
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
'token': options['token'],
})
changed = True
else:
changed, hook = self.updateHook(self.hookObject, {
changed, hook = self.update_hook(self.hook_object, {
'push_events': options['push_events'],
'push_events_branch_filter': options['push_events_branch_filter'],
'issues_events': options['issues_events'],
@@ -222,9 +220,10 @@ class GitLabHook(object):
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
'token': options['token'],
})
self.hookObject = hook
self.hook_object = hook
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url)
@@ -241,7 +240,7 @@ class GitLabHook(object):
@param project Project Object
@param arguments Attributes of the hook
'''
def createHook(self, project, arguments):
def create_hook(self, project, arguments):
if self._module.check_mode:
return True
@@ -253,7 +252,7 @@ class GitLabHook(object):
@param hook Hook Object
@param arguments Attributes of the hook
'''
def updateHook(self, hook, arguments):
def update_hook(self, hook, arguments):
changed = False
for arg_key, arg_value in arguments.items():
@@ -268,7 +267,7 @@ class GitLabHook(object):
@param project Project object
@param hook_url Url to call on event
'''
def findHook(self, project, hook_url):
def find_hook(self, project, hook_url):
hooks = project.hooks.list()
for hook in hooks:
if (hook.url == hook_url):
@@ -278,25 +277,25 @@ class GitLabHook(object):
@param project Project object
@param hook_url Url to call on event
'''
def existsHook(self, project, hook_url):
# When project exists, object will be stored in self.projectObject.
hook = self.findHook(project, hook_url)
def exists_hook(self, project, hook_url):
# When project exists, object will be stored in self.project_object.
hook = self.find_hook(project, hook_url)
if hook:
self.hookObject = hook
self.hook_object = hook
return True
return False
def deleteHook(self):
def delete_hook(self):
if self._module.check_mode:
return True
return self.hookObject.delete()
return self.hook_object.delete()
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
hook_url=dict(type='str', required=True),
@@ -317,13 +316,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True,
)
@@ -346,41 +348,42 @@ def main():
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
gitlab_hook = GitLabHook(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
project = find_project(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create hook: project %s doesn't exists" % project_identifier)
hook_exists = gitlab_hook.existsHook(project, hook_url)
hook_exists = gitlab_hook.exists_hook(project, hook_url)
if state == 'absent':
if hook_exists:
gitlab_hook.deleteHook()
gitlab_hook.delete_hook()
module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url)
else:
module.exit_json(changed=False, msg="Hook deleted or does not exists")
if state == 'present':
if gitlab_hook.createOrUpdateHook(project, hook_url, {
"push_events": push_events,
"push_events_branch_filter": push_events_branch_filter,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token}):
if gitlab_hook.create_or_update_hook(project, hook_url, {
"push_events": push_events,
"push_events_branch_filter": push_events_branch_filter,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token,
}):
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hook_object._attrs)
else:
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hook_object._attrs)
if __name__ == '__main__':
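The renamed `find_hook` above is a linear scan of the project's hooks for a matching URL. The same lookup as a self-contained function over plain dicts (a sketch; the real method iterates python-gitlab `Hook` objects returned by `project.hooks.list()`):

```python
def find_hook(hooks, hook_url):
    """Return the first hook whose 'url' matches hook_url, or None."""
    for hook in hooks:
        if hook.get("url") == hook_url:
            return hook
    return None
```

`exists_hook` then just caches the result on the instance (`self.hook_object`) so `create_or_update_hook` and `delete_hook` can reuse it without a second API round trip.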


@@ -23,13 +23,10 @@ requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- GitLab token for logging in.
type: str
group:
description:
- Id or the full path of the group of which this projects belongs to.
@@ -162,6 +159,18 @@ options:
- Enable shared runners for this project.
type: bool
version_added: "3.7.0"
avatar_path:
description:
- Absolute path image to configure avatar. File size should not exceed 200 kb.
- This option is only used on creation, not for updates.
type: path
version_added: "4.2.0"
default_branch:
description:
- Default branch name for a new project.
- This option is only used on creation, not for updates. This is also only used if I(initialize_with_readme=true).
type: str
version_added: "4.2.0"
'''
EXAMPLES = r'''
@@ -197,6 +206,19 @@ EXAMPLES = r'''
initialize_with_readme: true
state: present
delegate_to: localhost
- name: get the initial root password
ansible.builtin.shell: |
grep 'Password:' /etc/gitlab/initial_root_password | sed -e 's/Password\: \(.*\)/\1/'
register: initial_root_password
- name: Create a GitLab Project using a username/password via oauth_token
community.general.gitlab_project:
api_url: https://gitlab.example.com/
api_username: root
api_password: "{{ initial_root_password }}"
name: my_second_project
group: "10481470"
'''
RETURN = r'''
@@ -237,21 +259,21 @@ from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.gitlab import findGroup, findProject, gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, find_group, find_project, gitlab_authentication
class GitLabProject(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.projectObject = None
self.project_object = None
'''
@param project_name Name of the project
@param namespace Namespace Object (User or Group)
@param options Options of the project
'''
def createOrUpdateProject(self, project_name, namespace, options):
def create_or_update_project(self, project_name, namespace, options):
changed = False
project_options = {
'name': project_name,
@@ -273,20 +295,31 @@ class GitLabProject(object):
'shared_runners_enabled': options['shared_runners_enabled'],
}
# Because we have already call userExists in main()
if self.projectObject is None:
if self.project_object is None:
project_options.update({
'path': options['path'],
'import_url': options['import_url'],
})
if options['initialize_with_readme']:
project_options['initialize_with_readme'] = options['initialize_with_readme']
project_options = self.getOptionsWithValue(project_options)
project = self.createProject(namespace, project_options)
if options['default_branch']:
project_options['default_branch'] = options['default_branch']
project_options = self.get_options_with_value(project_options)
project = self.create_project(namespace, project_options)
# add avatar to project
if options['avatar_path']:
try:
project.avatar = open(options['avatar_path'], 'rb')
except IOError as e:
self._module.fail_json(msg='Cannot open {0}: {1}'.format(options['avatar_path'], e))
changed = True
else:
changed, project = self.updateProject(self.projectObject, project_options)
changed, project = self.update_project(self.project_object, project_options)
self.projectObject = project
self.project_object = project
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name)
@@ -302,7 +335,7 @@ class GitLabProject(object):
@param namespace Namespace Object (User or Group)
@param arguments Attributes of the project
'''
def createProject(self, namespace, arguments):
def create_project(self, namespace, arguments):
if self._module.check_mode:
return True
@@ -317,7 +350,7 @@ class GitLabProject(object):
'''
@param arguments Attributes of the project
'''
def getOptionsWithValue(self, arguments):
def get_options_with_value(self, arguments):
ret_arguments = dict()
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
@@ -329,7 +362,7 @@ class GitLabProject(object):
@param project Project Object
@param arguments Attributes of the project
'''
def updateProject(self, project, arguments):
def update_project(self, project, arguments):
changed = False
for arg_key, arg_value in arguments.items():
@@ -340,11 +373,11 @@ class GitLabProject(object):
return (changed, project)
def deleteProject(self):
def delete_project(self):
if self._module.check_mode:
return True
project = self.projectObject
project = self.project_object
return project.delete()
@@ -352,24 +385,25 @@ class GitLabProject(object):
@param namespace User/Group object
@param name Name of the project
'''
def existsProject(self, namespace, path):
# When project exists, object will be stored in self.projectObject.
project = findProject(self._gitlab, namespace.full_path + '/' + path)
def exists_project(self, namespace, path):
# When project exists, object will be stored in self.project_object.
project = find_project(self._gitlab, namespace.full_path + '/' + path)
if project:
self.projectObject = project
self.project_object = project
return True
return False
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
group=dict(type='str'),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
initialize_with_readme=dict(type='bool', default=False),
default_branch=dict(type='str'),
issues_enabled=dict(type='bool', default=True),
merge_requests_enabled=dict(type='bool', default=True),
merge_method=dict(type='str', default='merge', choices=["merge", "rebase_merge", "ff"]),
@@ -388,20 +422,24 @@ def main():
squash_option=dict(type='str', choices=['never', 'always', 'default_off', 'default_on']),
ci_config_path=dict(type='str'),
shared_runners_enabled=dict(type='bool'),
avatar_path=dict(type='path'),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
['group', 'username'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True,
)
@@ -429,11 +467,16 @@ def main():
squash_option = module.params['squash_option']
ci_config_path = module.params['ci_config_path']
shared_runners_enabled = module.params['shared_runners_enabled']
avatar_path = module.params['avatar_path']
default_branch = module.params['default_branch']
if default_branch and not initialize_with_readme:
module.fail_json(msg="Param default_branch need param initialize_with_readme set to true")
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
# Set project_path to project_name if it is empty.
if project_path is None:
@@ -444,7 +487,7 @@ def main():
namespace = None
namespace_id = None
if group_identifier:
group = findGroup(gitlab_instance, group_identifier)
group = find_group(gitlab_instance, group_identifier)
if group is None:
module.fail_json(msg="Failed to create project: group %s doesn't exists" % group_identifier)
@@ -466,40 +509,42 @@ def main():
if not namespace:
module.fail_json(msg="Failed to find the namespace for the project")
project_exists = gitlab_project.existsProject(namespace, project_path)
project_exists = gitlab_project.exists_project(namespace, project_path)
if state == 'absent':
if project_exists:
gitlab_project.deleteProject()
gitlab_project.delete_project()
module.exit_json(changed=True, msg="Successfully deleted project %s" % project_name)
module.exit_json(changed=False, msg="Project deleted or does not exists")
if state == 'present':
if gitlab_project.createOrUpdateProject(project_name, namespace, {
"path": project_path,
"description": project_description,
"initialize_with_readme": initialize_with_readme,
"issues_enabled": issues_enabled,
"merge_requests_enabled": merge_requests_enabled,
"merge_method": merge_method,
"wiki_enabled": wiki_enabled,
"snippets_enabled": snippets_enabled,
"visibility": visibility,
"import_url": import_url,
"lfs_enabled": lfs_enabled,
"allow_merge_on_skipped_pipeline": allow_merge_on_skipped_pipeline,
"only_allow_merge_if_all_discussions_are_resolved": only_allow_merge_if_all_discussions_are_resolved,
"only_allow_merge_if_pipeline_succeeds": only_allow_merge_if_pipeline_succeeds,
"packages_enabled": packages_enabled,
"remove_source_branch_after_merge": remove_source_branch_after_merge,
"squash_option": squash_option,
"ci_config_path": ci_config_path,
"shared_runners_enabled": shared_runners_enabled,
}):
if gitlab_project.create_or_update_project(project_name, namespace, {
"path": project_path,
"description": project_description,
"initialize_with_readme": initialize_with_readme,
"default_branch": default_branch,
"issues_enabled": issues_enabled,
"merge_requests_enabled": merge_requests_enabled,
"merge_method": merge_method,
"wiki_enabled": wiki_enabled,
"snippets_enabled": snippets_enabled,
"visibility": visibility,
"import_url": import_url,
"lfs_enabled": lfs_enabled,
"allow_merge_on_skipped_pipeline": allow_merge_on_skipped_pipeline,
"only_allow_merge_if_all_discussions_are_resolved": only_allow_merge_if_all_discussions_are_resolved,
"only_allow_merge_if_pipeline_succeeds": only_allow_merge_if_pipeline_succeeds,
"packages_enabled": packages_enabled,
"remove_source_branch_after_merge": remove_source_branch_after_merge,
"squash_option": squash_option,
"ci_config_path": ci_config_path,
"shared_runners_enabled": shared_runners_enabled,
"avatar_path": avatar_path,
}):
module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.projectObject._attrs)
module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.projectObject._attrs)
module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.project_object._attrs)
module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.project_object._attrs)
if __name__ == '__main__':


@@ -14,95 +14,75 @@ module: gitlab_project_members
short_description: Manage project members on GitLab Server
version_added: 2.2.0
description:
  - This module allows to add and remove members to/from a project, or change a member's access level in a project on GitLab.
author:
  - Sergey Mikhaltsov (@metanovii)
  - Zainab Alsaffar (@zanssa)
requirements:
  - python-gitlab python module <= 1.15.0
  - owner or maintainer rights to project on the GitLab server
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
  api_token:
    description:
      - A personal access token to authenticate with the GitLab API.
  validate_certs:
    description:
      - Whether or not to validate TLS/SSL certificates when supplying a HTTPS endpoint.
      - Should only be set to C(false) if you can guarantee that you are talking to the correct server
        and no man-in-the-middle attack can happen.
    default: true
    type: bool
  api_username:
    description:
      - The username to use for authentication against the API.
    type: str
  api_password:
    description:
      - The password to use for authentication against the API.
    type: str
  api_url:
    description:
      - The resolvable endpoint for the API.
    type: str
  project:
    description:
      - The name (or full path) of the GitLab project the member is added to/removed from.
    required: true
    type: str
  gitlab_user:
    description:
      - A username or a list of usernames to add to/remove from the GitLab project.
      - Mutually exclusive with I(gitlab_users_access).
    type: list
    elements: str
  access_level:
    description:
      - The access level for the user.
      - Required if I(state=present), user state is set to present.
    type: str
    choices: ['guest', 'reporter', 'developer', 'maintainer']
  gitlab_users_access:
    description:
      - Provide a list of user to access level mappings.
      - Every dictionary in this list specifies a user (by username) and the access level the user should have.
      - Mutually exclusive with I(gitlab_user) and I(access_level).
      - Use together with I(purge_users) to remove all users not specified here from the project.
    type: list
    elements: dict
    suboptions:
      name:
        description: A username or a list of usernames to add to/remove from the GitLab project.
        type: str
        required: true
      access_level:
        description:
          - The access level for the user.
          - Required if I(state=present), user state is set to present.
        type: str
        choices: ['guest', 'reporter', 'developer', 'maintainer']
        required: true
    version_added: 3.7.0
  state:
    description:
      - State of the member in the project.
      - On C(present), it adds a user to a GitLab project.
      - On C(absent), it removes a user from a GitLab project.
    choices: ['present', 'absent']
    default: 'present'
    type: str
  purge_users:
    description:
      - Adds/remove users of the given access_level to match the given I(gitlab_user)/I(gitlab_users_access) list.
        If omitted do not purge orphaned members.
      - Is only used when I(state=present).
    type: list
    elements: str
    choices: ['guest', 'reporter', 'developer', 'maintainer']
    version_added: 3.7.0
notes:
  - Supports C(check_mode).
'''
EXAMPLES = r'''
@@ -176,7 +156,7 @@ RETURN = r''' # '''
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible_collections.community.general.plugins.module_utils.gitlab import gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
import traceback
@@ -257,8 +237,8 @@ class GitLabProjectMembers(object):
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', required=True, no_log=True),
project=dict(type='str', required=True),
gitlab_user=dict(type='list', elements='str'),
state=dict(type='str', default='present', choices=['present', 'absent']),
@@ -280,7 +260,10 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
['gitlab_user', 'gitlab_users_access'],
['access_level', 'gitlab_users_access'],
],
@@ -289,7 +272,7 @@ def main():
['gitlab_user', 'access_level'],
],
required_one_of=[
['api_username', 'api_token'],
['api_username', 'api_token', 'api_oauth_token', 'api_job_token'],
['gitlab_user', 'gitlab_users_access'],
],
required_if=[
@@ -317,7 +300,7 @@ def main():
purge_users = [access_level_int[level] for level in purge_users]
# connect to gitlab server
gl = gitlabAuthentication(module)
gl = gitlab_authentication(module)
project = GitLabProjectMembers(module, gl)
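The `access_level_int` mapping used above translates option choices into python-gitlab's access level constants. A sketch with GitLab's documented numeric values spelled out (10 through 50 are the same integers python-gitlab exposes as `GUEST_ACCESS` through `OWNER_ACCESS`; `resolve_levels` is a hypothetical name for the list comprehension applied to `purge_users`):

```python
# GitLab's numeric access levels, matching python-gitlab's GUEST_ACCESS,
# REPORTER_ACCESS, DEVELOPER_ACCESS, MAINTAINER_ACCESS, and OWNER_ACCESS.
ACCESS_LEVELS = {
    "guest": 10,
    "reporter": 20,
    "developer": 30,
    "maintainer": 40,
    "owner": 50,
}


def resolve_levels(names):
    """Map access level names (e.g. the purge_users list) to their integers."""
    return [ACCESS_LEVELS[name] for name in names]
```

Note that project members top out at `maintainer` in this module's choices, while group members also accept `owner`.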


@@ -20,7 +20,8 @@ requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- community.general.auth_basic
- community.general.gitlab
options:
state:
@@ -30,11 +31,6 @@ options:
default: present
type: str
choices: ["present", "absent"]
api_token:
description:
- GitLab access token with API permissions.
required: true
type: str
project:
description:
- The path and name of the project.
@@ -143,7 +139,7 @@ except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible_collections.community.general.plugins.module_utils.gitlab import gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
class GitlabProjectVariables(object):
@@ -172,7 +168,7 @@ class GitlabProjectVariables(object):
var = {
"key": key, "value": value,
"masked": masked, "protected": protected,
"variable_type": variable_type
"variable_type": variable_type,
}
if environment_scope is not None:
var["environment_scope"] = environment_scope
@@ -270,8 +266,8 @@ def native_python_main(this_gitlab, purge, var_list, state, module):
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(
api_token=dict(type='str', required=True, no_log=True),
project=dict(type='str', required=True),
purge=dict(type='bool', required=False, default=False),
vars=dict(type='dict', required=False, default=dict(), no_log=True),
@@ -282,13 +278,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True
)
@@ -300,7 +299,7 @@ def main():
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
this_gitlab = GitlabProjectVariables(module=module, gitlab_instance=gitlab_instance)


@@ -18,7 +18,8 @@ requirements:
- python >= 2.7
- python-gitlab >= 2.3.0
extends_documentation_fragment:
- community.general.auth_basic
- community.general.auth_basic
- community.general.gitlab
options:
state:
@@ -27,11 +28,6 @@ options:
default: present
type: str
choices: ["present", "absent"]
api_token:
description:
- GitLab access token with API permissions.
required: true
type: str
project:
description:
- The path and name of the project.
@@ -87,7 +83,7 @@ except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible_collections.community.general.plugins.module_utils.gitlab import gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
class GitlabProtectedBranch(object):
@@ -141,8 +137,8 @@ class GitlabProtectedBranch(object):
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(
api_token=dict(type='str', required=True, no_log=True),
project=dict(type='str', required=True),
name=dict(type='str', required=True),
merge_access_levels=dict(type='str', default="maintainer", choices=["maintainer", "developer", "nobody"]),
@@ -154,13 +150,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True
)
@@ -179,7 +178,7 @@ def main():
module.fail_json(msg="community.general.gitlab_protected_branch requires python-gitlab Python module >= 2.3.0 (installed version: [%s])."
" Please upgrade python-gitlab to version 2.3.0 or above." % gitlab_version)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
this_gitlab = GitlabProtectedBranch(module=module, project=project, gitlab_instance=gitlab_instance)
p_branch = this_gitlab.protected_branch_exist(name=name)


@@ -32,13 +32,10 @@ requirements:
- python >= 2.7
- python-gitlab >= 1.5.0
extends_documentation_fragment:
- community.general.auth_basic
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- Your private token to interact with the GitLab API.
type: str
project:
description:
- ID or full path of the project in the form of group/name.
@@ -186,7 +183,7 @@ from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.gitlab import gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, gitlab_authentication
try:
cmp
@@ -203,32 +200,34 @@ class GitLabRunner(object):
# See https://gitlab.com/gitlab-org/gitlab-ce/issues/60774
# for group runner token access
self._runners_endpoint = project.runners if project else gitlab_instance.runners
self.runnerObject = None
self.runner_object = None
def createOrUpdateRunner(self, description, options):
def create_or_update_runner(self, description, options):
changed = False
# Because exists_runner() has already been called in main()
if self.runnerObject is None:
runner = self.createRunner({
if self.runner_object is None:
runner = self.create_runner({
'description': description,
'active': options['active'],
'token': options['registration_token'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'tag_list': options['tag_list']})
'tag_list': options['tag_list'],
})
changed = True
else:
changed, runner = self.updateRunner(self.runnerObject, {
changed, runner = self.update_runner(self.runner_object, {
'active': options['active'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'access_level': options['access_level'],
'tag_list': options['tag_list']})
'tag_list': options['tag_list'],
})
self.runnerObject = runner
self.runner_object = runner
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the runner %s" % description)
@@ -244,7 +243,7 @@ class GitLabRunner(object):
'''
@param arguments Attributes of the runner
'''
def createRunner(self, arguments):
def create_runner(self, arguments):
if self._module.check_mode:
return True
@@ -259,7 +258,7 @@ class GitLabRunner(object):
@param runner Runner object
@param arguments Attributes of the runner
'''
def updateRunner(self, runner, arguments):
def update_runner(self, runner, arguments):
changed = False
for arg_key, arg_value in arguments.items():
@@ -282,7 +281,7 @@ class GitLabRunner(object):
'''
@param description Description of the runner
'''
def findRunner(self, description, owned=False):
def find_runner(self, description, owned=False):
if owned:
runners = self._runners_endpoint.list(as_list=False)
else:
@@ -301,28 +300,28 @@ class GitLabRunner(object):
'''
@param description Description of the runner
'''
def existsRunner(self, description, owned=False):
# When runner exists, object will be stored in self.runnerObject.
runner = self.findRunner(description, owned)
def exists_runner(self, description, owned=False):
# When runner exists, object will be stored in self.runner_object.
runner = self.find_runner(description, owned)
if runner:
self.runnerObject = runner
self.runner_object = runner
return True
return False
def deleteRunner(self):
def delete_runner(self):
if self._module.check_mode:
return True
runner = self.runnerObject
runner = self.runner_object
return runner.delete()
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
description=dict(type='str', required=True, aliases=["name"]),
active=dict(type='bool', default=True),
owned=dict(type='bool', default=False),
@@ -340,13 +339,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token'],
['api_username', 'api_token', 'api_oauth_token', 'api_job_token'],
],
required_if=[
('state', 'present', ['registration_token']),
@@ -369,7 +371,7 @@ def main():
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
gitlab_project = None
if project:
try:
@@ -378,28 +380,29 @@ def main():
module.fail_json(msg='No such a project %s' % project, exception=to_native(e))
gitlab_runner = GitLabRunner(module, gitlab_instance, gitlab_project)
runner_exists = gitlab_runner.existsRunner(runner_description, owned)
runner_exists = gitlab_runner.exists_runner(runner_description, owned)
if state == 'absent':
if runner_exists:
gitlab_runner.deleteRunner()
gitlab_runner.delete_runner()
module.exit_json(changed=True, msg="Successfully deleted runner %s" % runner_description)
else:
module.exit_json(changed=False, msg="Runner deleted or does not exists")
if state == 'present':
if gitlab_runner.createOrUpdateRunner(runner_description, {
"active": runner_active,
"tag_list": tag_list,
"run_untagged": run_untagged,
"locked": runner_locked,
"access_level": access_level,
"maximum_timeout": maximum_timeout,
"registration_token": registration_token}):
module.exit_json(changed=True, runner=gitlab_runner.runnerObject._attrs,
if gitlab_runner.create_or_update_runner(runner_description, {
"active": runner_active,
"tag_list": tag_list,
"run_untagged": run_untagged,
"locked": runner_locked,
"access_level": access_level,
"maximum_timeout": maximum_timeout,
"registration_token": registration_token,
}):
module.exit_json(changed=True, runner=gitlab_runner.runner_object._attrs,
msg="Successfully created or updated the runner %s" % runner_description)
else:
module.exit_json(changed=False, runner=gitlab_runner.runnerObject._attrs,
module.exit_json(changed=False, runner=gitlab_runner.runner_object._attrs,
msg="No need to update the runner %s" % runner_description)
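The hunks in this file mechanically rename camelCase methods and attributes (`createRunner`, `existsRunner`, `runnerObject`, ...) to snake_case, updating every call site in the same commit, so no compatibility alias is needed. Purely as an illustrative sketch: a renamed module_utils helper that had to stay importable under its old name could keep a hypothetical deprecation shim like this (not part of the actual diff):

```python
import warnings


def gitlab_authentication(module):
    """New snake_case entry point (body elided in this sketch)."""
    ...


def gitlabAuthentication(module):  # hypothetical shim, not in the actual diff
    # Forward old camelCase callers to the new name with a warning.
    warnings.warn(
        'gitlabAuthentication is deprecated, use gitlab_authentication',
        DeprecationWarning,
        stacklevel=2,
    )
    return gitlab_authentication(module)
```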


@@ -30,13 +30,10 @@ requirements:
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- community.general.auth_basic
- community.general.auth_basic
- community.general.gitlab
options:
api_token:
description:
- GitLab token for logging in.
type: str
name:
description:
- Name of the user you want to create.
@@ -238,33 +235,34 @@ from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.gitlab import findGroup, gitlabAuthentication
from ansible_collections.community.general.plugins.module_utils.gitlab import auth_argument_spec, find_group, gitlab_authentication
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
self.user_object = None
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
'owner': gitlab.OWNER_ACCESS,
}
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
def create_or_update_user(self, username, options):
changed = False
potentionally_changed = False
# Because exists_user() has already been called in main()
if self.userObject is None:
user = self.createUser({
if self.user_object is None:
user = self.create_user({
'name': options['name'],
'username': username,
'password': options['password'],
@@ -277,8 +275,8 @@ class GitLabUser(object):
})
changed = True
else:
changed, user = self.updateUser(
self.userObject, {
changed, user = self.update_user(
self.user_object, {
# add "normal" parameters here, put uncheckable
# params in the dict below
'name': {'value': options['name']},
@@ -313,7 +311,7 @@ class GitLabUser(object):
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
key_changed = self.addSshKeyToUser(user, {
key_changed = self.add_ssh_key_to_user(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file'],
'expires_at': options['sshkey_expires_at']})
@@ -321,10 +319,10 @@ class GitLabUser(object):
# Assign group
if options['group_path']:
group_changed = self.assignUserToGroup(user, options['group_path'], options['access_level'])
group_changed = self.assign_user_to_group(user, options['group_path'], options['access_level'])
changed = changed or group_changed
self.userObject = user
self.user_object = user
if (changed or potentionally_changed) and not self._module.check_mode:
try:
user.save()
@@ -341,7 +339,7 @@ class GitLabUser(object):
'''
@param group User object
'''
def getUserId(self, user):
def get_user_id(self, user):
if user is not None:
return user.id
return None
@@ -350,7 +348,7 @@ class GitLabUser(object):
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
def ssh_key_exists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
@@ -359,8 +357,8 @@ class GitLabUser(object):
@param user User object
@param sshkey Dict containing sshkey infos {"name": "", "file": "", "expires_at": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
def add_ssh_key_to_user(self, user, sshkey):
if not self.ssh_key_exists(user, sshkey['name']):
if self._module.check_mode:
return True
@@ -381,7 +379,7 @@ class GitLabUser(object):
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
def find_member(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError:
@@ -392,8 +390,8 @@ class GitLabUser(object):
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
def member_exists(self, group, user_id):
member = self.find_member(group, user_id)
return member is not None
@@ -402,8 +400,8 @@ class GitLabUser(object):
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
def memberAsGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
def member_as_good_access_level(self, group, user_id, access_level):
member = self.find_member(group, user_id)
return member.access_level == access_level
@@ -412,8 +410,8 @@ class GitLabUser(object):
@param group_path Complete path of the Group including parent group path. <parent_path>/<group_path>
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
def assign_user_to_group(self, user, group_identifier, access_level):
group = find_group(self._gitlab, group_identifier)
if self._module.check_mode:
return True
@@ -421,16 +419,16 @@ class GitLabUser(object):
if group is None:
return False
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
if self.member_exists(group, self.get_user_id(user)):
member = self.find_member(group, self.get_user_id(user))
if not self.member_as_good_access_level(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'user_id': self.get_user_id(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
@@ -441,7 +439,7 @@ class GitLabUser(object):
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments, uncheckable_args):
def update_user(self, user, arguments, uncheckable_args):
changed = False
for arg_key, arg_value in arguments.items():
@@ -449,7 +447,7 @@ class GitLabUser(object):
if av is not None:
if arg_key == "identities":
changed = self.addIdentities(user, av, uncheckable_args['overwrite_identities']['value'])
changed = self.add_identities(user, av, uncheckable_args['overwrite_identities']['value'])
elif getattr(user, arg_key) != av:
setattr(user, arg_value.get('setter', arg_key), av)
@@ -466,7 +464,7 @@ class GitLabUser(object):
'''
@param arguments User attributes
'''
def createUser(self, arguments):
def create_user(self, arguments):
if self._module.check_mode:
return True
@@ -478,7 +476,7 @@ class GitLabUser(object):
try:
user = self._gitlab.users.create(arguments)
if identities:
self.addIdentities(user, identities)
self.add_identities(user, identities)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
@@ -490,10 +488,10 @@ class GitLabUser(object):
@param identites List of identities to be added/updated
@param overwrite_identities Overwrite user identities with identities passed to this module
'''
def addIdentities(self, user, identities, overwrite_identities=False):
def add_identities(self, user, identities, overwrite_identities=False):
changed = False
if overwrite_identities:
changed = self.deleteIdentities(user, identities)
changed = self.delete_identities(user, identities)
for identity in identities:
if identity not in user.identities:
@@ -508,7 +506,7 @@ class GitLabUser(object):
@param user User object
@param identites List of identities to be added/updated
'''
def deleteIdentities(self, user, identities):
def delete_identities(self, user, identities):
changed = False
for identity in user.identities:
if identity not in identities:
@@ -520,7 +518,7 @@ class GitLabUser(object):
'''
@param username Username of the user
'''
def findUser(self, username):
def find_user(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if (user.username == username):
@@ -529,42 +527,42 @@ class GitLabUser(object):
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
def exists_user(self, username):
# When user exists, object will be stored in self.user_object.
user = self.find_user(username)
if user:
self.userObject = user
self.user_object = user
return True
return False
'''
@param username Username of the user
'''
def isActive(self, username):
user = self.findUser(username)
def is_active(self, username):
user = self.find_user(username)
return user.attributes['state'] == 'active'
def deleteUser(self):
def delete_user(self):
if self._module.check_mode:
return True
user = self.userObject
user = self.user_object
return user.delete()
def blockUser(self):
def block_user(self):
if self._module.check_mode:
return True
user = self.userObject
user = self.user_object
return user.block()
def unblockUser(self):
def unblock_user(self):
if self._module.check_mode:
return True
user = self.userObject
user = self.user_object
return user.unblock()
@@ -578,8 +576,8 @@ def sanitize_arguments(arguments):
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(auth_argument_spec())
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
name=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present", "blocked", "unblocked"]),
username=dict(type='str', required=True),
@@ -602,13 +600,16 @@ def main():
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
['api_username', 'api_oauth_token'],
['api_username', 'api_job_token'],
['api_token', 'api_oauth_token'],
['api_token', 'api_job_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
['api_username', 'api_token', 'api_oauth_token', 'api_job_token']
],
supports_check_mode=True,
required_if=(
@@ -636,55 +637,56 @@ def main():
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
gitlab_instance = gitlabAuthentication(module)
gitlab_instance = gitlab_authentication(module)
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
user_exists = gitlab_user.exists_user(user_username)
if user_exists:
user_is_active = gitlab_user.isActive(user_username)
user_is_active = gitlab_user.is_active(user_username)
else:
user_is_active = False
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
gitlab_user.delete_user()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exists")
if state == 'blocked':
if user_exists and user_is_active:
gitlab_user.blockUser()
gitlab_user.block_user()
module.exit_json(changed=True, msg="Successfully blocked user %s" % user_username)
else:
module.exit_json(changed=False, msg="User already blocked or does not exists")
if state == 'unblocked':
if user_exists and not user_is_active:
gitlab_user.unblockUser()
gitlab_user.unblock_user()
module.exit_json(changed=True, msg="Successfully unblocked user %s" % user_username)
else:
module.exit_json(changed=False, msg="User is not blocked or does not exists")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"reset_password": user_reset_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"sshkey_expires_at": user_sshkey_expires_at,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external,
"identities": user_identities,
"overwrite_identities": overwrite_identities}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
if gitlab_user.create_or_update_user(user_username, {
"name": user_name,
"password": user_password,
"reset_password": user_reset_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"sshkey_expires_at": user_sshkey_expires_at,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external,
"identities": user_identities,
"overwrite_identities": overwrite_identities,
}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.user_object._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.user_object._attrs)
if __name__ == '__main__':


@@ -183,7 +183,7 @@ def _fs_exists(module, filesystem):
:return: True or False.
"""
lsfs_cmd = module.get_bin_path('lsfs', True)
rc, lsfs_out, err = module.run_command("%s -l %s" % (lsfs_cmd, filesystem))
rc, lsfs_out, err = module.run_command([lsfs_cmd, "-l", filesystem])
if rc == 1:
if re.findall("No record matching", err):
return False
@@ -206,8 +206,7 @@ def _check_nfs_device(module, nfs_host, device):
:return: True or False.
"""
showmount_cmd = module.get_bin_path('showmount', True)
rc, showmount_out, err = module.run_command(
"%s -a %s" % (showmount_cmd, nfs_host))
rc, showmount_out, err = module.run_command([showmount_cmd, "-a", nfs_host])
if rc != 0:
module.fail_json(msg="Failed to run showmount. Error message: %s" % err)
else:
@@ -229,11 +228,11 @@ def _validate_vg(module, vg):
None (VG does not exist), message.
"""
lsvg_cmd = module.get_bin_path('lsvg', True)
rc, current_active_vgs, err = module.run_command("%s -o" % lsvg_cmd)
rc, current_active_vgs, err = module.run_command([lsvg_cmd, "-o"])
if rc != 0:
module.fail_json(msg="Failed executing %s command." % lsvg_cmd)
rc, current_all_vgs, err = module.run_command("%s" % lsvg_cmd)
rc, current_all_vgs, err = module.run_command([lsvg_cmd])
if rc != 0:
module.fail_json(msg="Failed executing %s command." % lsvg_cmd)
@@ -253,7 +252,7 @@ def resize_fs(module, filesystem, size):
chfs_cmd = module.get_bin_path('chfs', True)
if not module.check_mode:
rc, chfs_out, err = module.run_command('%s -a size="%s" %s' % (chfs_cmd, size, filesystem))
rc, chfs_out, err = module.run_command([chfs_cmd, "-a", "size=%s" % size, filesystem])
if rc == 28:
changed = False
@@ -338,8 +337,7 @@ def create_fs(
# Creates a NFS file system.
mknfsmnt_cmd = module.get_bin_path('mknfsmnt', True)
if not module.check_mode:
rc, mknfsmnt_out, err = module.run_command('%s -f "%s" %s -h "%s" -t "%s" "%s" -w "bg"' % (
mknfsmnt_cmd, filesystem, device, nfs_server, permissions, auto_mount))
rc, mknfsmnt_out, err = module.run_command([mknfsmnt_cmd, "-f", filesystem, device, "-h", nfs_server, "-t", permissions, auto_mount, "-w", "bg"])
if rc != 0:
module.fail_json(msg="Failed to run mknfsmnt. Error message: %s" % err)
else:
@@ -357,8 +355,7 @@ def create_fs(
# Creates a LVM file system.
crfs_cmd = module.get_bin_path('crfs', True)
if not module.check_mode:
cmd = "%s -v %s -m %s %s %s %s %s %s -p %s %s -a %s" % (
crfs_cmd, fs_type, filesystem, vg, device, mount_group, auto_mount, account_subsystem, permissions, size, attributes)
cmd = [crfs_cmd, "-v", fs_type, "-m", filesystem, vg, device, mount_group, auto_mount, account_subsystem, "-p", permissions, size, "-a", attributes]
rc, crfs_out, err = module.run_command(cmd)
if rc == 10:
@@ -392,7 +389,7 @@ def remove_fs(module, filesystem, rm_mount_point):
rmfs_cmd = module.get_bin_path('rmfs', True)
if not module.check_mode:
cmd = "%s -r %s %s" % (rmfs_cmd, rm_mount_point, filesystem)
cmd = [rmfs_cmd, "-r", rm_mount_point, filesystem]
rc, rmfs_out, err = module.run_command(cmd)
if rc != 0:
module.fail_json(msg="Failed to run %s. Error message: %s" % (cmd, err))
@@ -415,8 +412,7 @@ def mount_fs(module, filesystem):
mount_cmd = module.get_bin_path('mount', True)
if not module.check_mode:
rc, mount_out, err = module.run_command(
"%s %s" % (mount_cmd, filesystem))
rc, mount_out, err = module.run_command([mount_cmd, filesystem])
if rc != 0:
module.fail_json(msg="Failed to run mount. Error message: %s" % err)
else:
@@ -436,7 +432,7 @@ def unmount_fs(module, filesystem):
unmount_cmd = module.get_bin_path('unmount', True)
if not module.check_mode:
rc, unmount_out, err = module.run_command("%s %s" % (unmount_cmd, filesystem))
rc, unmount_out, err = module.run_command([unmount_cmd, filesystem])
if rc != 0:
module.fail_json(msg="Failed to run unmount. Error message: %s" % err)
else:
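Every `run_command` call in this file moves from a `%`-formatted string to an argv list. With the string form the command line is re-split on whitespace, so an argument containing a space or a shell metacharacter breaks; the list form hands each element to the child process as exactly one argument. A small illustration using plain `subprocess` rather than Ansible's `run_command` wrapper:

```python
import subprocess
import sys


def run_list(argv):
    # List form: no shell, no word-splitting; each list element arrives
    # in the child process as exactly one argv entry.
    return subprocess.run(argv, capture_output=True, text=True)


# An argument containing a space survives as a single argv entry.
# (sys.executable is used instead of an AIX tool so the sketch is runnable.)
result = run_list([sys.executable, '-c',
                   'import sys; print(sys.argv[1])', 'my file'])
```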


@@ -97,7 +97,7 @@ def _validate_pv(module, vg, pvs):
"""
lspv_cmd = module.get_bin_path('lspv', True)
rc, current_lspv, stderr = module.run_command("%s" % lspv_cmd)
rc, current_lspv, stderr = module.run_command([lspv_cmd])
if rc != 0:
module.fail_json(msg="Failed executing 'lspv' command.", rc=rc, stdout=current_lspv, stderr=stderr)
@@ -116,7 +116,7 @@ def _validate_pv(module, vg, pvs):
# Disk None, looks free.
# Check if PV is not already in use by Oracle ASM.
lquerypv_cmd = module.get_bin_path('lquerypv', True)
rc, current_lquerypv, stderr = module.run_command("%s -h /dev/%s 20 10" % (lquerypv_cmd, pv))
rc, current_lquerypv, stderr = module.run_command([lquerypv_cmd, "-h", "/dev/%s" % pv, "20", "10"])
if rc != 0:
module.fail_json(msg="Failed executing lquerypv command.", rc=rc, stdout=current_lquerypv, stderr=stderr)
@@ -144,11 +144,11 @@ def _validate_vg(module, vg):
None (VG does not exist), message.
"""
lsvg_cmd = module.get_bin_path('lsvg', True)
rc, current_active_vgs, err = module.run_command("%s -o" % lsvg_cmd)
rc, current_active_vgs, err = module.run_command([lsvg_cmd, "-o"])
if rc != 0:
module.fail_json(msg="Failed executing '%s' command." % lsvg_cmd)
rc, current_all_vgs, err = module.run_command("%s" % lsvg_cmd)
rc, current_all_vgs, err = module.run_command([lsvg_cmd])
if rc != 0:
module.fail_json(msg="Failed executing '%s' command." % lsvg_cmd)
@@ -197,7 +197,7 @@ def create_extend_vg(module, vg, pvs, pp_size, vg_type, force, vg_validation):
if not module.check_mode:
extendvg_cmd = module.get_bin_path('extendvg', True)
rc, output, err = module.run_command("%s %s %s" % (extendvg_cmd, vg, ' '.join(pvs)))
rc, output, err = module.run_command([extendvg_cmd, vg] + pvs)
if rc != 0:
changed = False
msg = "Extending volume group '%s' has failed." % vg
@@ -213,7 +213,7 @@ def create_extend_vg(module, vg, pvs, pp_size, vg_type, force, vg_validation):
if not module.check_mode:
mkvg_cmd = module.get_bin_path('mkvg', True)
rc, output, err = module.run_command("%s %s %s %s -y %s %s" % (mkvg_cmd, vg_opt[vg_type], pp_size, force_opt[force], vg, ' '.join(pvs)))
rc, output, err = module.run_command([mkvg_cmd, vg_opt[vg_type], pp_size, force_opt[force], "-y", vg] + pvs)
if rc != 0:
changed = False
msg = "Creating volume group '%s' failed." % vg
@@ -239,7 +239,7 @@ def reduce_vg(module, vg, pvs, vg_validation):
# Remove VG if no PVs are informed.
# Remark: AIX will permit the removal only if the VG has no LVs.
lsvg_cmd = module.get_bin_path('lsvg', True)
rc, current_pvs, err = module.run_command("%s -p %s" % (lsvg_cmd, vg))
rc, current_pvs, err = module.run_command([lsvg_cmd, "-p", vg])
if rc != 0:
module.fail_json(msg="Failing to execute '%s' command." % lsvg_cmd)
@@ -263,7 +263,7 @@ def reduce_vg(module, vg, pvs, vg_validation):
if not module.check_mode:
reducevg_cmd = module.get_bin_path('reducevg', True)
rc, stdout, stderr = module.run_command("%s -df %s %s" % (reducevg_cmd, vg, ' '.join(pvs_to_remove)))
rc, stdout, stderr = module.run_command([reducevg_cmd, "-df", vg] + pvs_to_remove)
if rc != 0:
module.fail_json(msg="Unable to remove '%s'." % vg, rc=rc, stdout=stdout, stderr=stderr)
@@ -286,7 +286,7 @@ def state_vg(module, vg, state, vg_validation):
msg = ''
if not module.check_mode:
varyonvg_cmd = module.get_bin_path('varyonvg', True)
rc, varyonvg_out, err = module.run_command("%s %s" % (varyonvg_cmd, vg))
rc, varyonvg_out, err = module.run_command([varyonvg_cmd, vg])
if rc != 0:
module.fail_json(msg="Command 'varyonvg' failed.", rc=rc, err=err)
@@ -303,7 +303,7 @@ def state_vg(module, vg, state, vg_validation):
if not module.check_mode:
varyonvg_cmd = module.get_bin_path('varyoffvg', True)
rc, varyonvg_out, stderr = module.run_command("%s %s" % (varyonvg_cmd, vg))
rc, varyonvg_out, stderr = module.run_command([varyonvg_cmd, vg])
if rc != 0:
module.fail_json(msg="Command 'varyoffvg' failed.", rc=rc, stdout=varyonvg_out, stderr=stderr)


@@ -253,7 +253,7 @@ def set_interface_option(module, lines, iface, option, raw_value, state, address
last_line_dict = iface_lines[-1]
return add_option_after_line(option, value, iface, lines, last_line_dict, iface_options, address_family)
if option in ["pre-up", "up", "down", "post-up"] and all(ito for ito in target_options if ito['value'] != value):
if option in ["pre-up", "up", "down", "post-up"] and all(ito['value'] != value for ito in target_options):
return add_option_after_line(option, value, iface, lines, target_options[-1], iface_options, address_family)
# if more than one option found edit the last one
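The one-line fix above corrects an inverted comprehension. The old expression filters the options first and then tests the truthiness of the surviving dicts, so when every existing option already equals `value` the generator is empty and `all()` is vacuously `True`; the new expression evaluates the intended predicate for every option. A minimal demonstration:

```python
target_options = [{'value': 'route add default gw 192.168.0.1'}]
value = 'route add default gw 192.168.0.1'

# Old form: the matching dict is filtered out, the generator is empty,
# and all() of an empty iterable is True (the wrong answer here).
old_result = all(ito for ito in target_options if ito['value'] != value)

# Fixed form: the predicate is evaluated for every option.
new_result = all(ito['value'] != value for ito in target_options)
```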


@@ -439,7 +439,7 @@ def delete_cert(module, executable, keystore_path, keystore_pass, alias, keystor
def test_keytool(module, executable):
''' Test if keytool is actually executable or not '''
module.run_command("%s" % executable, check_rc=True)
module.run_command([executable], check_rc=True)
def test_keystore(module, keystore_path):


@@ -13,11 +13,25 @@ module: listen_ports_facts
author:
- Nathan Davison (@ndavison)
description:
- Gather facts on processes listening on TCP and UDP ports using netstat command.
- Gather facts on processes listening on TCP and UDP ports using the C(netstat) or C(ss) commands.
- This module currently supports Linux only.
requirements:
- netstat
- netstat or ss
short_description: Gather facts on processes listening on TCP and UDP ports.
notes:
- |
C(ss) returns all processes for each listen address and port.
This plugin will return each of them, so multiple entries for the same listen address and port are likely in results.
options:
command:
description:
- Override which command to use for fetching listen ports.
- 'By default, the module will use the first supported command found on the system (in alphanumerical order).'
type: str
choices:
- netstat
- ss
version_added: 4.1.0
'''
EXAMPLES = r'''
@@ -181,10 +195,87 @@ def netStatParse(raw):
return results
def main():
def ss_parse(raw):
results = list()
regex_conns = re.compile(pattern=r'\[?(.+?)\]?:([0-9]+)')
regex_pid = re.compile(pattern=r'"(.*?)",pid=(\d+)')
lines = raw.splitlines()
if len(lines) == 0 or not lines[0].startswith('Netid '):
# unexpected stdout from ss
raise EnvironmentError('Unknown stdout format of `ss`: {0}'.format(raw))
# skip headers (-H arg is not present on e.g. Ubuntu 16)
lines = lines[1:]
for line in lines:
cells = line.split(None, 6)
try:
if len(cells) == 6:
# no process column, e.g. due to unprivileged user
process = str()
protocol, state, recv_q, send_q, local_addr_port, peer_addr_port = cells
else:
protocol, state, recv_q, send_q, local_addr_port, peer_addr_port, process = cells
except ValueError:
# unexpected stdout from ss
raise EnvironmentError(
'Expected `ss` table layout "Netid, State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port" and optionally "Process", \
but got something else: {0}'.format(line)
)
conns = regex_conns.search(local_addr_port)
pids = regex_pid.findall(process)
if conns is None:
# no parsable address/port in this row, so skip it
continue
if not pids:
# findall returns a list, never None; empty means no process info,
# likely an unprivileged user, so add empty name & pid
# as we do in netstat logic to be consistent with output
pids = [(str(), 0)]
address = conns.group(1)
port = conns.group(2)
for name, pid in pids:
result = {
'pid': int(pid),
'address': address,
'port': int(port),
'protocol': protocol,
'name': name
}
results.append(result)
return results
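The two regular expressions compiled at the top of ss_parse can be exercised in isolation; here is a sketch against a hypothetical ss row (the process column only appears for privileged users):

```python
import re

# The address/port and pid patterns from ss_parse, as added in this hunk.
regex_conns = re.compile(r'\[?(.+?)\]?:([0-9]+)')
regex_pid = re.compile(r'"(.*?)",pid=(\d+)')

local_addr_port = '127.0.0.1:53'                  # hypothetical sample cell
process = 'users:(("dnsmasq",pid=1234,fd=5))'     # hypothetical sample cell

conns = regex_conns.search(local_addr_port)
print(conns.group(1), conns.group(2))  # 127.0.0.1 53
print(regex_pid.findall(process))      # [('dnsmasq', '1234')]
```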
def main():
commands_map = {
'netstat': {
'args': [
'-p',
'-l',
'-u',
'-n',
'-t',
],
'parse_func': netStatParse
},
'ss': {
'args': [
'-p',
'-l',
'-u',
'-n',
'-t',
],
'parse_func': ss_parse
},
}
module = AnsibleModule(
argument_spec={},
argument_spec=dict(
command=dict(type='str', choices=list(sorted(commands_map)))
),
supports_check_mode=True,
)
@@ -220,18 +311,34 @@ def main():
}
try:
netstat_cmd = module.get_bin_path('netstat', True)
command = None
bin_path = None
if module.params['command'] is not None:
command = module.params['command']
bin_path = module.get_bin_path(command, required=True)
else:
for c in sorted(commands_map):
bin_path = module.get_bin_path(c, required=False)
if bin_path is not None:
command = c
break
if bin_path is None:
raise EnvironmentError('Unable to find any of the supported commands in PATH: {0}'.format(", ".join(sorted(commands_map))))
# which ports are listening for connections?
rc, stdout, stderr = module.run_command([netstat_cmd, '-plunt'])
args = commands_map[command]['args']
rc, stdout, stderr = module.run_command([bin_path] + args)
if rc == 0:
netstatOut = netStatParse(stdout)
for p in netstatOut:
parse_func = commands_map[command]['parse_func']
results = parse_func(stdout)
for p in results:
p['stime'] = getPidSTime(p['pid'])
p['user'] = getPidUser(p['pid'])
if p['protocol'] == 'tcp':
if p['protocol'].startswith('tcp'):
result['ansible_facts']['tcp_listen'].append(p)
elif p['protocol'] == 'udp':
elif p['protocol'].startswith('udp'):
result['ansible_facts']['udp_listen'].append(p)
except (KeyError, EnvironmentError) as e:
module.fail_json(msg=to_native(e))


@@ -45,7 +45,7 @@ options:
required: false
type: bool
description:
- Enable or disable the service according to local preferences in *.preset files.
- Enable or disable the service according to local preferences in C(*.preset) files.
Mutually exclusive with I(enabled). Only has an effect if set to true. Will take
effect prior to I(state=reset).
user:


@@ -86,6 +86,14 @@ options:
- Whether the list of nodes in the persistent iSCSI database should be returned by the module.
type: bool
default: false
rescan:
description:
- Rescan an established session for discovering new targets.
- When I(target) is omitted, will rescan all sessions.
type: bool
default: false
version_added: 4.1.0
'''
EXAMPLES = r'''
@@ -124,6 +132,11 @@ EXAMPLES = r'''
portal: 10.1.1.250
auto_portal_startup: false
target: iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
- name: Rescan one or all established sessions to discover new targets (omit target for all sessions)
community.general.open_iscsi:
rescan: true
target: iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
'''
import glob
@@ -179,6 +192,15 @@ def iscsi_discover(module, portal, port):
module.run_command(cmd, check_rc=True)
def iscsi_rescan(module, target=None):
if target is None:
cmd = [iscsiadm_cmd, '--mode', 'session', '--rescan']
else:
cmd = [iscsiadm_cmd, '--mode', 'node', '--rescan', '-T', target]
rc, out, err = module.run_command(cmd)
return out
def target_loggedon(module, target, portal=None, port=None):
cmd = [iscsiadm_cmd, '--mode', 'session']
rc, out, err = module.run_command(cmd)
@@ -305,6 +327,7 @@ def main():
auto_portal_startup=dict(type='bool'),
discover=dict(type='bool', default=False),
show_nodes=dict(type='bool', default=False),
rescan=dict(type='bool', default=False),
),
required_together=[['node_user', 'node_pass'], ['node_user_in', 'node_pass_in']],
@@ -330,6 +353,7 @@ def main():
automatic_portal = module.params['auto_portal_startup']
discover = module.params['discover']
show_nodes = module.params['show_nodes']
rescan = module.params['rescan']
check = module.check_mode
@@ -421,6 +445,10 @@ def main():
result['changed'] |= True
result['automatic_portal_changed'] = True
if rescan is not False:
result['changed'] = True
result['sessions'] = iscsi_rescan(module, target)
module.exit_json(**result)


@@ -10,8 +10,8 @@ DOCUMENTATION = '''
module: python_requirements_info
short_description: Show python path and assert dependency versions
description:
- Get info about available Python requirements on the target host, including listing required libraries and gathering versions.
- This module was called C(python_requirements_facts) before Ansible 2.9. The usage did not change.
- Get info about available Python requirements on the target host, including listing required libraries and gathering versions.
- This module was called C(python_requirements_facts) before Ansible 2.9. The usage did not change.
options:
dependencies:
type: list
@@ -21,9 +21,10 @@ options:
Supported operators: <, >, <=, >=, or ==. The bare module name like
I(ansible), the module with a specific version like I(boto3==1.6.1), or a
partial version like I(requests>2) are all valid specifications.
default: []
author:
- Will Thames (@willthames)
- Ryan Scott Brown (@ryansb)
- Will Thames (@willthames)
- Ryan Scott Brown (@ryansb)
'''
EXAMPLES = '''
@@ -33,8 +34,8 @@ EXAMPLES = '''
- name: Check for modern boto3 and botocore versions
community.general.python_requirements_info:
dependencies:
- boto3>1.6
- botocore<2
- boto3>1.6
- botocore<2
'''
RETURN = '''
@@ -48,14 +49,44 @@ python_version:
returned: always
type: str
sample: "2.7.15 (default, May 1 2018, 16:44:08)\n[GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]"
python_version_info:
description: breakdown version of python
returned: always
type: dict
contains:
major:
description: The C(major) component of the python interpreter version.
returned: always
type: int
sample: 3
minor:
description: The C(minor) component of the python interpreter version.
returned: always
type: int
sample: 8
micro:
description: The C(micro) component of the python interpreter version.
returned: always
type: int
sample: 10
releaselevel:
description: The C(releaselevel) component of the python interpreter version.
returned: always
type: str
sample: final
serial:
description: The C(serial) component of the python interpreter version.
returned: always
type: int
sample: 0
version_added: 4.2.0
python_system_path:
description: List of paths python is looking for modules in
returned: always
type: list
sample:
- /usr/local/opt/python@2/site-packages/
- /usr/lib/python/site-packages/
- /usr/lib/python/site-packages/
- /usr/local/opt/python@2/site-packages/
- /usr/lib/python/site-packages/
valid:
description: A dictionary of dependencies that matched their desired versions. If no version was specified, then I(desired) will be null
returned: always
@@ -80,8 +111,8 @@ not_found:
returned: always
type: list
sample:
- boto4
- requests
- boto4
- requests
'''
import re
@@ -106,11 +137,19 @@ operations = {
'==': operator.eq,
}
python_version_info = dict(
major=sys.version_info[0],
minor=sys.version_info[1],
micro=sys.version_info[2],
releaselevel=sys.version_info[3],
serial=sys.version_info[4],
)
def main():
module = AnsibleModule(
argument_spec=dict(
dependencies=dict(type='list', elements='str')
dependencies=dict(type='list', elements='str', default=[])
),
supports_check_mode=True,
)
@@ -119,9 +158,10 @@ def main():
msg='Could not import "distutils" and "pkg_resources" libraries to introspect python environment.',
python=sys.executable,
python_version=sys.version,
python_version_info=python_version_info,
python_system_path=sys.path,
)
pkg_dep_re = re.compile(r'(^[a-zA-Z][a-zA-Z0-9_-]+)(==|[><]=?)?([0-9.]+)?$')
pkg_dep_re = re.compile(r'(^[a-zA-Z][a-zA-Z0-9_-]+)(?:(==|[><]=?)([0-9.]+))?$')
results = dict(
not_found=[],
@@ -129,9 +169,9 @@ def main():
valid={},
)
for dep in (module.params.get('dependencies') or []):
for dep in module.params['dependencies']:
match = pkg_dep_re.match(dep)
if match is None:
if not match:
module.fail_json(msg="Failed to parse version requirement '{0}'. Must be formatted like 'ansible>2.6'".format(dep))
pkg, op, version = match.groups()
if op is not None and op not in operations:
@@ -161,6 +201,7 @@ def main():
module.exit_json(
python=sys.executable,
python_version=sys.version,
python_version_info=python_version_info,
python_system_path=sys.path,
**results
)
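The pkg_dep_re change in this file ties the operator and version together with a non-capturing group: under the old pattern a dangling operator without a version still matched, while the new pattern rejects it. A quick comparison (requirement strings hypothetical):

```python
import re

# Old and new patterns from the diff above.
old_re = re.compile(r'(^[a-zA-Z][a-zA-Z0-9_-]+)(==|[><]=?)?([0-9.]+)?$')
new_re = re.compile(r'(^[a-zA-Z][a-zA-Z0-9_-]+)(?:(==|[><]=?)([0-9.]+))?$')

for dep in ('boto3', 'boto3==1.6.1', 'boto3=='):
    print(dep, bool(old_re.match(dep)), bool(new_re.match(dep)))
# boto3 True True
# boto3==1.6.1 True True
# boto3== True False   <- a dangling operator is now rejected
```

Both patterns still yield the same `(pkg, op, version)` groups for well-formed inputs, so the tightening only affects malformed requirement strings.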


@@ -172,7 +172,7 @@ class Svc(object):
self.execute_command([self.svc_cmd, '-dx', src_log])
def get_status(self):
(rc, out, err) = self.execute_command([self.svstat_cmd, self.svc_full])
rc, out, err = self.execute_command([self.svstat_cmd, self.svc_full])
if err is not None and err:
self.full_state = self.state = err
@@ -223,7 +223,7 @@ class Svc(object):
def execute_command(self, cmd):
try:
(rc, out, err) = self.module.run_command(' '.join(cmd))
rc, out, err = self.module.run_command(cmd)
except Exception as e:
self.module.fail_json(msg="failed to execute: %s" % to_native(e), exception=traceback.format_exc())
return (rc, out, err)


@@ -19,6 +19,8 @@ description:
For Linux it can use C(timedatectl) or edit C(/etc/sysconfig/clock) or C(/etc/timezone) and C(hwclock).
On SmartOS, C(sm-set-timezone), for macOS, C(systemsetup), for BSD, C(/etc/localtime) is modified.
On AIX, C(chtz) is used.
- Make sure that the zoneinfo files are installed with the appropriate OS package, like C(tzdata) (usually already
installed, unless you are using a minimal installation such as Alpine Linux).
- As of Ansible 2.3 support was added for SmartOS and BSDs.
- As of Ansible 2.4 support was added for macOS.
- As of Ansible 2.9 support was added for AIX 6.1+


@@ -128,7 +128,7 @@ RETURN = '''
'''
from ansible_collections.community.general.plugins.module_utils.module_helper import (
ModuleHelper, CmdMixin, StateMixin, ArgFormat, ModuleHelperException
CmdStateModuleHelper, ArgFormat, ModuleHelperException
)
@@ -151,7 +151,7 @@ class XFConfException(Exception):
pass
class XFConfProperty(CmdMixin, StateMixin, ModuleHelper):
class XFConfProperty(CmdStateModuleHelper):
change_params = 'value',
diff_params = 'value',
output_params = ('property', 'channel', 'value')


@@ -36,15 +36,22 @@ options:
username:
type: str
required: true
description:
- The username to log-in with.
- Must be used with I(password). Mutually exclusive with I(token).
password:
type: str
required: true
description:
- The password to log-in with.
- Must be used with I(username). Mutually exclusive with I(token).
token:
type: str
description:
- The personal access token to log-in with.
- Mutually exclusive with I(username) and I(password).
version_added: 4.2.0
project:
type: str
@@ -206,7 +213,7 @@ options:
done.
notes:
- "Currently this only works with basic-auth."
- "Currently this only works with basic-auth or tokens."
- "To use with JIRA Cloud, pass the login e-mail as the I(username) and the API token as I(password)."
author:
@@ -408,8 +415,9 @@ class JIRA(StateModuleHelper):
choices=['attach', 'create', 'comment', 'edit', 'update', 'fetch', 'transition', 'link', 'search'],
aliases=['command'], required=True
),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
username=dict(type='str'),
password=dict(type='str', no_log=True),
token=dict(type='str', no_log=True),
project=dict(type='str', ),
summary=dict(type='str', ),
description=dict(type='str', ),
@@ -432,6 +440,17 @@ class JIRA(StateModuleHelper):
validate_certs=dict(default=True, type='bool'),
account_id=dict(type='str'),
),
mutually_exclusive=[
['username', 'token'],
['password', 'token'],
['assignee', 'account_id'],
],
required_together=[
['username', 'password'],
],
required_one_of=[
['username', 'token'],
],
required_if=(
('operation', 'attach', ['issue', 'attachment']),
('operation', 'create', ['project', 'issuetype', 'summary']),
@@ -441,7 +460,6 @@ class JIRA(StateModuleHelper):
('operation', 'link', ['linktype', 'inwardissue', 'outwardissue']),
('operation', 'search', ['jql']),
),
mutually_exclusive=[('assignee', 'account_id')],
supports_check_mode=False
)
@@ -642,23 +660,30 @@ class JIRA(StateModuleHelper):
if data and content_type == 'application/json':
data = json.dumps(data)
headers = {}
if isinstance(additional_headers, dict):
headers = additional_headers.copy()
# NOTE: fetch_url uses a password manager, which follows the
# standard request-then-challenge basic-auth semantics. However as
# JIRA allows some unauthorised operations it doesn't necessarily
# send the challenge, so the request occurs as the anonymous user,
# resulting in unexpected results. To work around this we manually
# inject the basic-auth header up-front to ensure that JIRA treats
# inject the auth header up-front to ensure that JIRA treats
# the requests as authorized for this user.
auth = to_text(base64.b64encode(to_bytes('{0}:{1}'.format(self.vars.username, self.vars.password),
errors='surrogate_or_strict')))
headers = {}
if isinstance(additional_headers, dict):
headers = additional_headers.copy()
headers.update({
"Content-Type": content_type,
"Authorization": "Basic %s" % auth,
})
if self.vars.token is not None:
headers.update({
"Content-Type": content_type,
"Authorization": "Bearer %s" % self.vars.token,
})
else:
auth = to_text(base64.b64encode(to_bytes('{0}:{1}'.format(self.vars.username, self.vars.password),
errors='surrogate_or_strict')))
headers.update({
"Content-Type": content_type,
"Authorization": "Basic %s" % auth,
})
response, info = fetch_url(
self.module, url, data=data, method=method, timeout=self.vars.timeout, headers=headers
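The branch added above now emits one of two Authorization header shapes. A self-contained sketch of that logic (the helper name is hypothetical; only the header construction mirrors the module):

```python
import base64

def auth_header(username=None, password=None, token=None):
    # Hypothetical helper mirroring the module's token-vs-basic branch.
    if token is not None:
        return "Bearer %s" % token
    # Basic auth: base64 of "user:pass", injected up-front so JIRA never
    # treats the request as anonymous.
    creds = "{0}:{1}".format(username, password)
    return "Basic %s" % base64.b64encode(creds.encode("utf-8")).decode("ascii")

print(auth_header(username="admin", password="secret"))  # Basic YWRtaW46c2VjcmV0
print(auth_header(token="abc123"))                       # Bearer abc123
```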
@@ -669,7 +694,14 @@ class JIRA(StateModuleHelper):
try:
error = json.loads(info['body'])
except Exception:
self.module.fail_json(msg=to_native(info['body']), exception=traceback.format_exc())
msg = 'The request "{method} {url}" returned the unexpected status code {status} {msg}\n{body}'.format(
status=info['status'],
msg=info['msg'],
body=info.get('body'),
url=url,
method=method,
)
self.module.fail_json(msg=to_native(msg), exception=traceback.format_exc())
if error:
msg = []
for key in ('errorMessages', 'errors'):


@@ -6,3 +6,4 @@ skip/osx
skip/rhel8.2
skip/rhel8.3
skip/rhel8.4
skip/rhel8.5


@@ -0,0 +1,2 @@
shippable/posix/group1
disabled


@@ -0,0 +1,2 @@
gitlab_branch: ansible_test_branch
gitlab_project_name: ansible_test_project


@@ -0,0 +1,64 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Install required libs
pip:
name: python-gitlab
state: present
- name: Create {{ gitlab_project_name }}
gitlab_project:
server_url: "{{ gitlab_host }}"
validate_certs: False
login_token: "{{ gitlab_login_token }}"
name: "{{ gitlab_project_name }}"
initialize_with_readme: True
state: present
- name: Create branch {{ gitlab_branch }}
community.general.gitlab_branch:
api_url: https://gitlab.com
api_token: secret_access_token
project: "{{ gitlab_project_name }}"
branch: "{{ gitlab_branch }}"
ref_branch: main
state: present
- name: Create branch {{ gitlab_branch }} (idempotency test)
community.general.gitlab_branch:
api_url: https://gitlab.com
api_token: secret_access_token
project: "{{ gitlab_project_name }}"
branch: "{{ gitlab_branch }}"
ref_branch: main
state: present
register: create_branch
- name: Test module is idempotent
assert:
that:
- create_branch is not changed
- name: Cleanup branch {{ gitlab_branch }}
community.general.gitlab_branch:
api_url: https://gitlab.com
api_token: secret_access_token
project: "{{ gitlab_project_name }}"
branch: "{{ gitlab_branch }}"
state: absent
register: delete_branch
- name: Test branch deletion reported a change
assert:
that:
- delete_branch is changed
- name: Clean up {{ gitlab_project_name }}
gitlab_project:
server_url: "{{ gitlab_host }}"
validate_certs: False
login_token: "{{ gitlab_login_token }}"
name: "{{ gitlab_project_name }}"
state: absent


@@ -0,0 +1 @@
unsupported


@@ -0,0 +1,48 @@
- name: Set NTP Servers
ilo_redfish_config:
category: Manager
command: SetNTPServers
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
attribute_name: StaticNTPServers
attribute_value: 1.2.3.4
- name: Set DNS Server
ilo_redfish_config:
category: Manager
command: SetDNSserver
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
attribute_name: DNSServers
attribute_value: 192.168.1.1
- name: Set Domain name
ilo_redfish_config:
category: Manager
command: SetDomainName
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
attribute_name: DomainName
attribute_value: tst.sgp.hp.mfg
- name: Disable WINS Reg
ilo_redfish_config:
category: Manager
command: SetWINSReg
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
attribute_name: WINSRegistration
- name: Set TimeZone
ilo_redfish_config:
category: Manager
command: SetTimeZone
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
attribute_name: TimeZone
attribute_value: Chennai


@@ -0,0 +1 @@
unsupported

Some files were not shown because too many files have changed in this diff.