Compare commits


66 Commits
4.6.1 ... 4.8.0

Author SHA1 Message Date
Felix Fontein
96f609d1f2 Release 4.8.0. 2022-04-26 11:42:20 +02:00
patchback[bot]
03b128aeff Add 'state' parameter for alternatives (#4557) (#4576)
* Add 'activate' parameter for alternatives

Allow alternatives to be installed without being set as the current
selection.

* add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* rename 'activate' -> 'selected'

* rework 'selected' parameter -> 'state'

* handle unsetting of currently selected alternative

* add integration tests for 'state' parameter

* fix linting issues

* fix for Python 2.7 compatibility

* Remove alternatives file.

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 29c49febd9)

Co-authored-by: Tanner Prestegard <tprestegard@users.noreply.github.com>
2022-04-26 06:41:04 +00:00
patchback[bot]
ab9a4cb58a New module alerta_customer (#4554) (#4575)
* first draft of alerta_customer

* Update BOTMETA.yml

* update after review

* fix pagination and state description

* remove whitespace

(cherry picked from commit d7e5e85f3e)

Co-authored-by: CWollinger <CWollinger@web.de>
2022-04-26 08:20:33 +02:00
patchback[bot]
6b21599def New Module: LXD Projects (#4521) (#4573)
* add lxd_project module

* documentation improvement and version_added entry

* improve documentation

* use os.path.expanduser

* exclude from use-argspec-type-path test

* improve documentation

(cherry picked from commit 1d3506490f)

Co-authored-by: Raymond Chang <xrayjemmy@gmail.com>
2022-04-25 22:34:27 +02:00
patchback[bot]
ca93145e76 Parse lxc key from api data for lxc containers (#4555) (#4574)
* Parse lxc key from api data for lxc containers

When configuring containers in the `/etc/pve/lxc/` file, the API
adds a 'lxc' key that caused the plugin to crash as it tried to
split a list on ','.

This commit introduces logic to convert the list of lists in the
returned data to a dict as with the other keys.

```
'lxc': [['lxc.apparmor.profile', 'unconfined'],
	['lxc.cgroup.devices.allow', 'a']]
```

becomes

```
"proxmox_lxc": {
	"apparmor.profile": "unconfined",
	"cap.drop": "",
	"cgroup.devices.allow": "a"
}
```

* Add changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Philippe Pepos Petitclerc <peposp@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 346bfba9c5)

Co-authored-by: Philippe Pépos-Petitclerc <ppepos@users.noreply.github.com>
2022-04-25 22:32:46 +02:00
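The list-of-lists-to-dict conversion described in the commit above can be sketched roughly as follows (the helper name and the prefix handling are illustrative assumptions, not the plugin's actual code):

```python
# Hypothetical sketch: flatten the 'lxc' list-of-lists returned by the
# Proxmox API into a dict, stripping the leading "lxc." key prefix.
# (Merging with the other config keys, e.g. "cap.drop", happens elsewhere.)
def parse_lxc_key(lxc_pairs):
    return {key.replace("lxc.", "", 1): value for key, value in lxc_pairs}

pairs = [["lxc.apparmor.profile", "unconfined"],
         ["lxc.cgroup.devices.allow", "a"]]
print(parse_lxc_key(pairs))
# → {'apparmor.profile': 'unconfined', 'cgroup.devices.allow': 'a'}
```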
patchback[bot]
a163ec3afa Command Runner (#4476) (#4572)
* initial commit, passing unit tests

* passing one very silly integration test

* multiple changes:

- updated copyright year
- cmd_runner
  - added fmt_optval
  - created specific exceptions
  - fixed bug in context class where values from module params were not
    being used for resolving cmd arguments
  - changed order of class declaration for readability purposes
- tests
  - minor improvements in integration test code
  - removed some extraneous code in msimple.yml
  - minor improvements in unit tests
  - added few missing cases to unit test

* multiple changes

cmd_runner.py

- renamed InvalidParameterName to MissingArgumentFormat
  - improved exception parameters
- added repr and str to all exceptions
- added unpacking decorator for fmt functions
- CmdRunner
  - improved parameter validation
- _CmdRunnerContext
  - Context runs must now pass named arguments
  - Simplified passing of additional arguments to module.run_command()
  - Provided multiple context variables with info about the run

Integration tests

- rename msimple.py to cmd_echo.py for clarity
- added more test cases

* cmd_runner: env update can be passed to runner

* adding runner context info to output

* added comment on OrderedDict

* wrong variable

* refactored all fmt functions into static methods of a class

Imports should be simpler now: only one object, fmt, with attribute access to all callables

* added unit tests for CmdRunner

* fixed sanity checks

* fixed mock imports

* added more unit tests for CmdRunner

* terminology consistency

* multiple adjustments:

- remove extraneous imports
- renamed some variables
- added wrapper around arg formatters to handle individual arg ignore_none behaviour

* removed old code commented out in test

* multiple changes:

- ensure fmt functions return list of strings
- renamed fmt parameter from `option` to `args`
- renamed fmt.mapped to fmt.as_map
- simplified fmt.as_map
- added tests for fmt.as_fixed

* more improvements in formats

* fixed sanity

* args_order can be a string (to be split())

and improved integration test

* simplified integration test

* removed overkill str() on values - run_command does that for us

* as_list makes more sense than as_str in that context

* added changelog fragment

* Update plugins/module_utils/cmd_runner.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* adjusted __repr__ output for the exceptions

* added superclass object to classes

* added additional comment on the testcase sample/example

* suggestion from PR

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f5b1b3c6f0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-25 22:26:36 +02:00
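The arg-formatter idea running through the cmd_runner commit above can be illustrated in miniature (this is a simplified stand-in for the concept, not the collection's actual CmdRunner/fmt API):

```python
# Toy sketch: small formatter callables turn a module parameter value
# into a list of CLI argument strings, and an args_order drives assembly.
def as_bool(flag):
    # emit the flag only when the value is truthy
    def fmt(value):
        return [flag] if value else []
    return fmt

def as_opt_val(opt):
    # emit "--opt value" pairs
    def fmt(value):
        return [opt, str(value)]
    return fmt

formatters = {"force": as_bool("--force"), "name": as_opt_val("--name")}
params = {"force": True, "name": "web"}

cmd = ["mytool"]                 # hypothetical executable
for arg in ["name", "force"]:    # args_order
    cmd.extend(formatters[arg](params[arg]))

print(cmd)
# → ['mytool', '--name', 'web', '--force']
```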
patchback[bot]
868a6303be Allow Proxmox Snapshot Restoring (#4377) (#4571)
* Allow restoring of snapshots

* Fix formatting

* Add documentation for new feature

* Revert unrelated reformatting

* Add documentation for snapshot change

* Remove redundant multiple call to status API

* Remove unnecessary indent

* Add documentation for timeout fix

* Update changelog fragment to reflect real changes

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelog fragment to reflect real changes

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add Tests for Snapshot rollback

* Update tests/unit/plugins/modules/cloud/misc/test_proxmox_snap.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/4377-allow-proxmox-snapshot-restoring.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/misc/proxmox_snap.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit dbad1e0f11)

Co-authored-by: Timon Michel <ich.bin@ein.dev>
2022-04-25 06:54:22 +02:00
patchback[bot]
759e82d403 Proxmox inventory: implement API token auth (#4540) (#4570)
* Proxmox inventory: implement api token auth

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix linter errors

* add changelog fragment

* add examples

* fix a typo and break long lines

* Update changelogs/fragments/4540-proxmox-inventory-token-auth.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c8c2636676)

Co-authored-by: Daniel <mail@h3po.de>
2022-04-24 16:06:19 +02:00
patchback[bot]
ed0c768aaf Removed 'default=None' in a batch of modules (#4556) (#4568)
* removed default=None

* removed default=None

* removed default=None

* removed default=None

* added changelog fragment

(cherry picked from commit b916cb369b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-24 10:49:45 +02:00
patchback[bot]
e933ed782f Removed 'default=None' in a batch of modules 2 (#4567) (#4569)
* removed default=None

* added changelog fragment

(cherry picked from commit 3b103f905e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-24 10:49:29 +02:00
patchback[bot]
69e5a0dbf1 Fix keycloak realm parameters types (#4526) (#4560)
* Fix keycloak realm parameters types

* Add changelog fragment

* Update changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0620cd2e74)

Co-authored-by: Alexandr <36310479+Vespand@users.noreply.github.com>
2022-04-23 08:49:44 +02:00
patchback[bot]
c4d166d3bc nmcli: Change hairpin default mode (#4334) (#4558)
* nmcli: Deprecate default hairpin mode

Deprecate the default hairpin mode for a bridge.
Plain nmcli/bridge tools default to no, but for some reason ansible
defaults to yes.

We deprecate the default value so we can switch to default 'no' in
ansible 6.0.0

* Code review fixes

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix comments

* Update changelogs/fragments/4320-nmcli-hairpin.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/4320-nmcli-hairpin.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 53f6c68026)

Co-authored-by: dupondje <jean-louis@dupond.be>
2022-04-23 08:49:33 +02:00
Felix Fontein
9ae8e544cb Prepare 4.8.0 release. 2022-04-22 23:31:28 +02:00
Felix Fontein
94aef4526d Fix filename. 2022-04-22 23:29:56 +02:00
patchback[bot]
aeece5a107 Add project support for lxd_container and lxd_profile module (#4479) (#4561)
* add project support for lxd modules

* fix lxd_container yaml format error

* add changelog fragment, add version_added entry

* fix LXD spelling

* complete lxd_profile example

(cherry picked from commit 552db0d353)

Co-authored-by: Raymond Chang <xrayjemmy@gmail.com>
2022-04-22 22:49:29 +02:00
patchback[bot]
bdc4ee496f Fix import. (#4550) (#4552)
(cherry picked from commit 2f980e89fe)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-21 14:41:25 +02:00
patchback[bot]
5f59ec2d01 Implement constructable support for opennebula inventory plugin: keyed… (#4524) (#4549)
* Implement constructable support for opennebula inventory plugin: keyed_groups, compose, groups

* Fixed templating mock issues in unit tests, corrected some linting errors

* trying to make the linter happy

* Now trying to make python2.7 happy

* Added changelog fragment

* changelog fragment needs pluralization

* Update changelogs/fragments/4524-update-opennebula-inventory-plugin-to-match-documentation.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8e72e98adb)

Co-authored-by: Bill Sanders <billysanders@gmail.com>
2022-04-21 14:03:37 +02:00
patchback[bot]
a25e4f679e Remove distutils from unit tests. (#4545) (#4547)
(cherry picked from commit d9ba598938)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-21 11:28:39 +02:00
patchback[bot]
3876df9052 nmap inventory plugin: Add sudo nmap (#4506) (#4544)
* nmap.py: Add sudo nmap

* Update plugins/inventory/nmap.py

Change description of new plugin option adding version_added

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/inventory/nmap.py

Change boolean values of sudo option in example

Co-authored-by: Felix Fontein <felix@fontein.de>

* Create 4506-sudo-in-nmap-inv-plugin.yaml

* Fix typo in yaml format

* Update changelogs/fragments/4506-sudo-in-nmap-inv-plugin.yaml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update changelogs/fragments/4506-sudo-in-nmap-inv-plugin.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Document default as false.

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 3cce1217db)

Co-authored-by: ottobits <vindemaio@gmail.com>
2022-04-21 10:10:56 +02:00
patchback[bot]
12f2ba251b Add Lowess as maintainer of pritunl module utils. (#4539) (#4542)
(cherry picked from commit 405284b513)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-20 22:00:09 +02:00
patchback[bot]
e43a9b6974 xfconf: added missing value types (#4534) (#4541)
* xfconf: added missing value types

* added changelog fragment

* Update plugins/modules/system/xfconf.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit a2bfb96213)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-20 21:33:05 +02:00
patchback[bot]
9e2cb4363c [pritunl] removed unnecessary data from auth string (#4530) (#4538)
* removed unnecessary data from auth string

* add changelog

Co-authored-by: vadim <vadim>
(cherry picked from commit 51a68517ce)

Co-authored-by: vvatlin <vvvvatlin@gmail.com>
2022-04-20 09:33:57 +02:00
patchback[bot]
b61cb29023 xfconf: improve docs (#4533) (#4536)
(cherry picked from commit 3c6cb547f3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-20 08:43:17 +02:00
patchback[bot]
90d31b9403 remove deprecated branch.unprotect() method in community.general.gitlab_branch (#4496) (#4528)
* remove deprecated branch.protect method

* add changelog fragment

* Update changelogs/fragments/4496-remove-deprecated-method-in-gitlab-branch-module.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit a8abb1a5bf)

Co-authored-by: York Wong <eth2net@gmail.com>
2022-04-19 20:04:51 +02:00
patchback[bot]
4d22d0790d Correctly handle exception when no VM name returned by proxmox (#4508) (#4529)
(cherry picked from commit 8076f16aa9)

Co-authored-by: Marcin <stolarek.marcin@gmail.com>
2022-04-19 20:04:43 +02:00
patchback[bot]
bffe4c2a3b Bump version numbers for deprecation and removal since we didn't deprecate this in 4.0.0. (#4515) (#4519)
(cherry picked from commit 9e537d4a6b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-16 21:46:03 +02:00
patchback[bot]
dfdb0a6fe6 CI: remove FreeBSD 12.0 and 12.2, re-enable pkgng tests (#4511) (#4513)
* Remove FreeBSD 12.0 and 12.2 from CI.

* Revert "Temporarily disable the pkgng tests. (#4493)"

This reverts commit 5ecac692de.

(cherry picked from commit 26cebb9c30)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-16 12:44:57 +02:00
patchback[bot]
dd04e11094 Remove no longer true statement. (#4505) (#4510)
(cherry picked from commit efbf02f284)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-15 15:53:15 +02:00
patchback[bot]
5b029c66c5 Terraform init -upgrade flag (#4455) (#4502)
* Adds optional `-upgrade` flag to terraform init.

This allows Terraform to install provider dependencies into an existing project when the provider constraints change.

* fix transposed documentation keys

* Add integration tests for terraform init

* Revert to validate_certs: yes for general public testing

* skip integration tests on irrelevant platforms

* skip legacy Python versions from CI tests

* add changelog fragment

* Update plugins/modules/cloud/misc/terraform.py

Adds version_added metadata to the new module option.

Co-authored-by: Felix Fontein <felix@fontein.de>

* Change terraform_arch constant to Ansible fact mapping

* correct var typo, clarify task purpose

* Squashed some logic bugs, added override for local Terraform

If `existing_terraform_path` is provided, the playbook will not download Terraform or check its version.

I also tested this on a local system with Terraform installed, and squashed some bugs related to using an
existing binary.

* revert to previous test behavior for TF install

* readability cleanup

* Update plugins/modules/cloud/misc/terraform.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e4a25beedc)

Co-authored-by: Kamil Markowicz <geekifier@users.noreply.github.com>
2022-04-13 19:22:08 +02:00
patchback[bot]
760843b9e5 pacman: Fix removing locally installed packages (#4464) (#4504)
* pacman: Fix removing locally installed packages

Without this, using `absent` state for a locally installed package (for example from AUR, or from a package that was dropped from repositories) would report that the package was already removed, despite it remaining installed

* Undo unwanted whitespace removal

* Add changelog fragment

* Update changelogs/fragments/4464-pacman-fix-local-remove.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add test.

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3c515dd221)

Co-authored-by: Martin <spleefer90@gmail.com>
2022-04-13 19:21:46 +02:00
patchback[bot]
19ba15a783 gitlab: Use all=True in most list() calls (#4491) (#4503)
If `all=True` is not set, then by default only 20 records will be
returned when calling `list()`. Use `all=True` so that all records
will be returned.

For the `list()` calls where we do not want to retrieve all entries,
use `all=False` to show explicitly that we don't want to get all of
the entries.

Fixes: #3729
Fixes: #4460
(cherry picked from commit fe4bbc5de3)

Co-authored-by: John Villalovos <john@sodarock.com>
2022-04-13 13:43:21 +02:00
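The pagination behaviour behind this fix can be modelled with a toy sketch (this mimics python-gitlab's default page size of 20 records; it is not python-gitlab itself):

```python
# Illustrative model: why a default page size silently truncates results,
# and why passing all=True matters.
RECORDS = list(range(100))  # pretend the server holds 100 records

def list_records(all=False, per_page=20):
    # mimic an API client that returns only one page unless all=True
    return list(RECORDS) if all else RECORDS[:per_page]

assert len(list_records()) == 20          # default: only the first page
assert len(list_records(all=True)) == 100 # all=True fetches everything
```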
patchback[bot]
70a3dae965 dnsmadeeasy: only get monitor if it is not null api response (#4459) (#4500)
* Only get monitor if it is not null api response

* Add changelog fragment

* Update changelogs/fragments/4459-only-get-monitor-if-it-is-not-null-api-response.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/dnsmadeeasy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: drevai <revai.dominik@gravityrd.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 06675034fe)

Co-authored-by: drevai753 <86595897+drevai753@users.noreply.github.com>
2022-04-13 11:16:39 +00:00
patchback[bot]
26d5409a87 Implement btrfs resize support (#4465) (#4498)
* Implement btrfs resize support

* Add changelog fragment for btrfs resize support

Co-authored-by: Fabian Klemp <fabian.klemp@frequentis.com>
(cherry picked from commit 8ccc4d1fbb)

Co-authored-by: elara-leitstellentechnik <elara-leitstellentechnik@users.noreply.github.com>
2022-04-13 11:16:27 +00:00
patchback[bot]
2f3a7a981d Temporarily disable the pkgng tests. (#4493) (#4495)
(cherry picked from commit 5ecac692de)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-11 20:33:00 +02:00
patchback[bot]
6a74c46e1c Redfish: Added IndicatorLED commands to the Systems category (#4458) (#4494)
* Redfish: Added IndicatorLED commands to the Systems category

Signed-off-by: Mike Raineri <michael.raineri@dell.com>

* Method call typo fix

Signed-off-by: Mike Raineri <michael.raineri@dell.com>

* Update 4084-add-redfish-system-indicator-led.yml

* Backwards compatibility suggestion

Signed-off-by: Mike Raineri <michael.raineri@dell.com>
(cherry picked from commit a9125c02e7)

Co-authored-by: Mike Raineri <michael.raineri@dell.com>
2022-04-11 20:22:58 +02:00
patchback[bot]
bec382df87 add support for datadog monitors of type event-v2 (#4457) (#4490)
* add support for datadog monitors of type event-v2

See https://docs.datadoghq.com/events/guides/migrating_to_new_events_features/

* add changelog fragment for PR

* typos

* add link to PR

* minor_feature, not bugfix

* add to description when we added event-v2 type

* Update changelogs/fragments/4457-support-datadog-monitors-type event-v2.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6edc176143)

Co-authored-by: ermeaney <ermeaney@gmail.com>
2022-04-11 08:01:41 +02:00
patchback[bot]
78f69224be modules/xbps: fix error message (#4438) (#4489)
The previous error message was not giving the full or even correct
information to the user.

(cherry picked from commit d3adde4739)

Co-authored-by: Cameron Nemo <CameronNemo@users.noreply.github.com>
2022-04-11 08:01:32 +02:00
patchback[bot]
34682addb8 seport: minor refactor (#4471) (#4485)
* seport: minor refactor

* added changelog fragment

* Update plugins/modules/system/seport.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/system/seport.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7e6a2453d0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-10 18:48:44 +02:00
patchback[bot]
2c106d66a4 Switch from antsibull to antsibull-docs. (#4480) (#4483)
(cherry picked from commit aa27f2152e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-10 11:08:50 +02:00
patchback[bot]
9c4fd63a4d Deprecate want_proxmox_nodes_ansible_host option's default value. (#4466) (#4478)
(cherry picked from commit 865d7ac698)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-10 08:59:25 +02:00
patchback[bot]
d04c18ffce Add discord integration tests (#4463) (#4477)
* add discord integration tests

* fix: var name in readme

(cherry picked from commit aa045d2655)

Co-authored-by: CWollinger <CWollinger@web.de>
2022-04-10 08:59:16 +02:00
patchback[bot]
41fe6663d9 Fix documentation for sudoers module (#4469) (#4474)
* Fix documentation for sudoers module

* Update plugins/modules/system/sudoers.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit fa65b9d1f0)

Co-authored-by: Ulf Tigerstedt <tigerstedt@iki.fi>
2022-04-10 08:41:23 +02:00
Felix Fontein
9f8612f34e Next expected release is 4.7.0. 2022-04-05 16:49:15 +02:00
Felix Fontein
22b72e6684 Release 4.7.0. 2022-04-05 14:02:29 +02:00
patchback[bot]
8e7bee4217 Fix small typo (#4452) (#4454)
(cherry picked from commit 380de2d0c1)

Co-authored-by: Wouter Schoot <wouter@schoot.org>
2022-04-05 14:00:35 +02:00
patchback[bot]
cef6b81e5b Bug fix: Warns user if incorrect SDK version is installed (#4422) (#4450)
* Add error handling to check correct SDK version installed

* Fix CI errors

* Added changelog fragment

* Changed exeption type

* Update changelogs fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e7ffa76db6)

Co-authored-by: Ricky White <rickywhite@outlook.com>
2022-04-05 07:49:30 +02:00
patchback[bot]
182c365d87 nmcli: suggest new routes4 and routes6 format (#4328) (#4447)
* suggest new routes4 and routes6 format

* make new options instead of modifying existing ones

* fix docs and some small errors

* fixing docs

(cherry picked from commit feb0fffd58)

Co-authored-by: Alex Groshev <38885591+haddystuff@users.noreply.github.com>
2022-04-05 07:12:38 +02:00
patchback[bot]
587cdc82e7 Keycloak client, Add always_display_in_console option (#4429) (#4448)
* Keycloak client, Add always_display_in_console option

* Add 4429-keycloak-client-add-always-display-in-console.yml fragment.

* Update changelogs/fragments/4429-keycloak-client-add-always-display-in-console.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/identity/keycloak/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/identity/keycloak/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Michal Vasko <mvasko@cloudwerkstatt.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 79256b2bd2)

Co-authored-by: whoamiUNIX <40315055+whoamiUNIX@users.noreply.github.com>
2022-04-05 07:09:20 +02:00
Felix Fontein
cb1a50a273 Prepare 4.7.0 release. 2022-04-05 07:05:07 +02:00
patchback[bot]
f0df50e665 Bugfix: zypper issue with specified package versions (#4421) (#4446)
* fixed issue with specified package versions

zypper.py was doing nothing on state=present when ALL requested/checked packages had a specific version stated. This was caused by get_installed_state() being called with an empty package list, which in this case returns information about ALL installed packages. This led to an excessive prerun_state filter list, essentially removing from the request list all packages that are installed in ANY version on the target system.

* Create 4421-zypper_package_version_handling_fix

added changelog fragment for https://github.com/ansible-collections/community.general/pull/4421

* Delete 4421-zypper_package_version_handling_fix

* Create 4421-zypper_package_version_handling_fix.yml

(cherry picked from commit bbe231e261)

Co-authored-by: tover99 <101673769+tover99@users.noreply.github.com>
2022-04-05 06:28:15 +02:00
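The failure mode described in the zypper commit above can be modelled with a minimal sketch (function names and data are invented for illustration; this is not zypper.py's actual code):

```python
# An empty query returns the WHOLE system state, so callers must guard
# against passing an empty package list.
def get_installed_state(packages):
    # stand-in for the real query: empty input means "everything"
    system_state = {"vim": "8.2", "zsh": "5.8"}
    if not packages:
        return dict(system_state)
    return {p: v for p, v in system_state.items() if p in packages}

def safe_installed_state(packages):
    # the fix in spirit: never issue the query with an empty list
    return get_installed_state(packages) if packages else {}

assert safe_installed_state([]) == {}
assert safe_installed_state(["vim"]) == {"vim": "8.2"}
```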
patchback[bot]
47aa93d970 cronvar: ensure creation of /etc/cron.d in test (#4440) (#4444)
* ensure creation of /etc/cron.d in test

* fixed typo

(cherry picked from commit 9e0ff8ba4b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-03 10:54:44 +02:00
patchback[bot]
e89648a114 Remove OpenSuSE Python 2 from devel CI. (#4442) (#4443)
(cherry picked from commit bd83490b45)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-02 18:31:40 +02:00
patchback[bot]
6f1bdb3e49 pids: re-enabled tests on Alpine Linux (#4405) (#4439)
* [WIP] pids: re-enabled tests on Alpine Linux

* trying to compile a simple-faked sleep command

* make FreeBSD happy

* remove the block testing for Alpine Linux

* simpler version of sleeper.c

* simpler version of sleeper.c, part II

* Update tests/integration/targets/pids/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/integration/targets/pids/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* added license to sleeper.c file

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 21ee4c84b7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-02 08:45:53 +02:00
patchback[bot]
fbf11668f4 CI: Remove 'warn:' that's removed in ansible-core 2.14 (#4434) (#4437)
* Remove 'warn:' that's removed in ansible-core 2.14.

* Install virtualenv when needed.

(cherry picked from commit 24ca69aa05)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-01 23:11:39 +02:00
patchback[bot]
3376442aa2 Proxmox Inventory: Add support for templating in inventory file (#4418) (#4435)
* added templating to the url, user, and password

* added changelog fragment

* typo in description for url, and password

* clarify in the changelog what can you change

* update documentation and added an example

* missing quote from examples

* Apply suggestions from code review

Changed to I for option names

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/inventory/proxmox.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 13d18c9aa8)

Co-authored-by: Ilija Matoski <ilijamt@gmail.com>
2022-04-01 23:07:42 +02:00
patchback[bot]
868edfa664 ipa_service: Add skip_host_check option (#4417) (#4436)
* ipa_service: Add `skip_host_check` option

* Update plugins/modules/identity/ipa/ipa_service.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/identity/ipa/ipa_service.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/identity/ipa/ipa_service.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* changelogs/fragments: Add 4417-ipa_service-add-skip_host_check.yml

Co-authored-by: sodd <4178855+sodd@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1b357bade7)

Co-authored-by: sodd <sodd@users.noreply.github.com>
2022-04-01 23:07:23 +02:00
patchback[bot]
2fcb77f7fb Replace antsibull-lint collection-docs with antsibull-docs lint-collection-docs. (#4423) (#4426)
(cherry picked from commit 668bbed602)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-03-30 08:17:38 +02:00
patchback[bot]
17135dd082 Add stable-2.13 to CI, thin out older version matrix (#4413) (#4414)
* Add stable-2.13 to CI, thin out older version matrix.

* Thin out matrix more.

* And a bit more.

(cherry picked from commit caedcc3075)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-03-29 07:52:32 +02:00
patchback[bot]
7516018cfb keycloak: add missing validate_certs parameters for open_url calls (#4382) (#4410)
* fix: missing `validate_certs` parameters for `open_url` calls

As stated in the documentation, the `validate_certs` parameter can be
used to verify (or not) the TLS certificates. But, for some modules (at
least for the `keycloak_authentication` module), this parameter is not
used with the `open_url` function.

* add changelog fragment

* Update changelogs/fragments/4382-keycloak-add-missing-validate_certs-parameters.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Laurent Meunier <lme@atolcd.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 34420e143e)

Co-authored-by: Laurent Meunier <laurent@deltalima.net>
2022-03-28 22:25:14 +02:00
patchback[bot]
58df1df107 keycloak_client: add default_client_scopes and optional_client_scopes (#4385) (#4409)
* keycloak_client: add default_client_scopes and optional_client_scopes

* Changelog fragment for #4385

* Update changelogs/fragments/4385-keycloak-client-default-optional-scopes.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/identity/keycloak/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/identity/keycloak/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 30c65cd84c)

Co-authored-by: Alex Lubbock <alex@lubbock.uk>
2022-03-28 22:25:00 +02:00
patchback[bot]
e9b3705809 feat: sudoers module supports runas parameter with default of root (#4380) (#4399)
* feat: sudoers module supports runas parameter with default of root

* fix: sudoers tests now pass

* chore: add changelog fragment for 4380

* fix: runas feature now a non-breaking change with no default

* fix: no trailing space in sudoers.py

* Update plugins/modules/system/sudoers.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 17fe813c18)

Co-authored-by: doubletwist13 <doubletwist@fearthepenguin.net>
2022-03-24 06:44:48 +00:00
patchback[bot]
743e9c851f ldap: added documentation as requested (#4389) (#4398)
* added documentation as requested

* Update plugins/doc_fragments/ldap.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8515c03dc7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-03-24 06:44:30 +00:00
patchback[bot]
a7883ee489 Fixed typo in keycloak_client_rolemapping examples (#4393) (#4401)
* Fixed typo in keycloak_client_rolemapping examples

* Add a changelog fragment.

* Removed changelogs fragment for docs-only change.

Co-authored-by: shnee <shnee@shnee.net>
(cherry picked from commit cb30eb2d30)

Co-authored-by: shnee <CurtyD13@gmail.com>
2022-03-24 06:44:18 +00:00
patchback[bot]
518af70b77 Proxmox inventory plugin - Fix tags parsing (#4378) (#4402)
* Proxmox inventory plugin - Fix tags parsing

  * In some cases the Proxmox API returns a tags string that consists of
    a single space. The Proxmox inventory plugin parsed that into a
    single, empty tag. Stripping the initial string then checking
    whether it actually contains something fixes that.
  * Do not call `_to_safe` on the concatenation of a known safe string
    and a string that was already made safe.

* Changelog fragment for Proxmox inventory plugin tags fix

* Proxmox inventory plugin - Include link to PR in fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 622895fb55)

Co-authored-by: Emmanuel Benoît <tseeker@nocternity.net>
2022-03-24 06:44:02 +00:00
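The strip-then-check fix described above amounts to roughly this (the helper name and the ',' separator are assumptions for illustration, not the plugin's actual code):

```python
# A tags string consisting of a single space must yield no tags,
# not one empty tag.
def parse_tags(raw_tags):
    stripped = raw_tags.strip()
    if not stripped:
        return []
    return stripped.split(",")  # separator assumed for illustration

assert parse_tags(" ") == []
assert parse_tags("web,db") == ["web", "db"]
```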
patchback[bot]
ce7d98aa6f Add collection links file. (#4384) (#4386)
(cherry picked from commit eb4495b716)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-03-22 07:21:28 +01:00
Felix Fontein
9f91f4b5cd Next expected release is 4.7.0. 2022-03-16 19:12:20 +01:00
124 changed files with 4371 additions and 540 deletions

@@ -69,6 +69,19 @@ stages:
- test: 3
- test: 4
- test: extra
- stage: Sanity_2_13
displayName: Sanity 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: 2.13/sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- test: 4
- stage: Sanity_2_12
displayName: Sanity 2.12
dependsOn: []
@@ -138,6 +151,19 @@ stages:
- test: 3.8
- test: 3.9
- test: '3.10'
- stage: Units_2_13
displayName: Units 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.13/units/{0}/1
targets:
- test: 2.7
- test: 3.6
- test: 3.8
- test: 3.9
- stage: Units_2_12
displayName: Units 2.12
dependsOn: []
@@ -148,12 +174,8 @@ stages:
testFormat: 2.12/units/{0}/1
targets:
- test: 2.6
- test: 2.7
- test: 3.5
- test: 3.6
- test: 3.7
- test: 3.8
- test: '3.10'
- stage: Units_2_11
displayName: Units 2.11
dependsOn: []
@@ -166,9 +188,6 @@ stages:
- test: 2.6
- test: 2.7
- test: 3.5
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- stage: Units_2_10
displayName: Units 2.10
@@ -191,11 +210,7 @@ stages:
testFormat: 2.9/units/{0}/1
targets:
- test: 2.6
- test: 2.7
- test: 3.5
- test: 3.6
- test: 3.7
- test: 3.8
## Remote
- stage: Remote_devel
@@ -220,6 +235,22 @@ stages:
- 1
- 2
- 3
- stage: Remote_2_13
displayName: Remote 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.13/{0}
targets:
- name: macOS 12.0
test: macos/12.0
- name: RHEL 8.5
test: rhel/8.5
groups:
- 1
- 2
- 3
- stage: Remote_2_12
displayName: Remote 2.12
dependsOn: []
@@ -249,8 +280,8 @@ stages:
test: rhel/7.9
- name: RHEL 8.3
test: rhel/8.3
- name: FreeBSD 12.2
test: freebsd/12.2
#- name: FreeBSD 12.2
# test: freebsd/12.2
groups:
- 1
- 2
@@ -281,8 +312,8 @@ stages:
test: rhel/8.2
- name: RHEL 7.8
test: rhel/7.8
- name: FreeBSD 12.0
test: freebsd/12.0
#- name: FreeBSD 12.0
# test: freebsd/12.0
groups:
- 1
- 2
@@ -302,9 +333,7 @@ stages:
test: fedora34
- name: Fedora 35
test: fedora35
- name: openSUSE 15 py2
test: opensuse15py2
- name: openSUSE 15 py3
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 18.04
test: ubuntu1804
@@ -316,6 +345,24 @@ stages:
- 1
- 2
- 3
- stage: Docker_2_13
displayName: Docker 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.13/linux/{0}
targets:
- name: Fedora 35
test: fedora35
- name: openSUSE 15 py2
test: opensuse15py2
- name: Alpine 3
test: alpine3
groups:
- 1
- 2
- 3
- stage: Docker_2_12
displayName: Docker 2.12
dependsOn: []
@@ -328,8 +375,6 @@ stages:
test: centos6
- name: Fedora 34
test: fedora34
- name: openSUSE 15 py3
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
groups:
@@ -344,12 +389,8 @@ stages:
parameters:
testFormat: 2.11/linux/{0}
targets:
- name: CentOS 7
test: centos7
- name: Fedora 33
test: fedora33
- name: openSUSE 15 py2
test: opensuse15py2
- name: Alpine 3
test: alpine3
groups:
@@ -380,8 +421,6 @@ stages:
targets:
- name: Fedora 31
test: fedora31
- name: openSUSE 15 py3
test: opensuse15
groups:
- 2
- 3
@@ -417,6 +456,16 @@ stages:
testFormat: devel/cloud/{0}/1
targets:
- test: 2.7
- test: '3.10'
- stage: Cloud_2_13
displayName: Cloud 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.13/cloud/{0}/1
targets:
- test: 3.9
- stage: Cloud_2_12
displayName: Cloud 2.12
@@ -466,26 +515,31 @@ stages:
- Sanity_2_10
- Sanity_2_11
- Sanity_2_12
- Sanity_2_13
- Units_devel
- Units_2_9
- Units_2_10
- Units_2_11
- Units_2_12
- Units_2_13
- Remote_devel
- Remote_2_9
- Remote_2_10
- Remote_2_11
- Remote_2_12
- Remote_2_13
- Docker_devel
- Docker_2_9
- Docker_2_10
- Docker_2_11
- Docker_2_12
- Docker_2_13
- Docker_community_devel
- Cloud_devel
- Cloud_2_9
- Cloud_2_10
- Cloud_2_11
- Cloud_2_12
- Cloud_2_13
jobs:
- template: templates/coverage.yml

.github/BOTMETA.yml

@@ -260,6 +260,8 @@ files:
$module_utils/module_helper.py:
maintainers: russoz
labels: module_helper
$module_utils/net_tools/pritunl/:
maintainers: Lowess
$module_utils/oracle/oci_utils.py:
maintainers: $team_oracle
labels: cloud
@@ -310,6 +312,8 @@ files:
ignore: hnakamur
$modules/cloud/lxd/lxd_profile.py:
maintainers: conloos
$modules/cloud/lxd/lxd_project.py:
maintainers: we10710aa
$modules/cloud/memset/:
maintainers: glitchcrab
$modules/cloud/misc/cloud_init_data_facts.py:
@@ -558,6 +562,8 @@ files:
maintainers: phumpal
labels: airbrake_deployment
ignore: bpennypacker
$modules/monitoring/alerta_customer.py:
maintainers: cwollinger
$modules/monitoring/bigpanda.py:
maintainers: hkariti
$modules/monitoring/circonus_annotation.py:


@@ -6,6 +6,118 @@ Community General Release Notes
This changelog describes changes after version 3.0.0.
v4.8.0
======
Release Summary
---------------
Regular feature and bugfix release. Please note that this is the last minor 4.x.0 release. Further releases with major version 4 will be bugfix releases 4.8.y.
Minor Changes
-------------
- alternatives - add ``state`` parameter, which provides control over whether the alternative should be set as the active selection for its alternatives group (https://github.com/ansible-collections/community.general/issues/4543, https://github.com/ansible-collections/community.general/pull/4557).
- atomic_container - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- clc_alert_policy - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_loadbalancer - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_server - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- cmd_runner module util - reusable command runner with consistent argument formatting and sensible defaults (https://github.com/ansible-collections/community.general/pull/4476).
- datadog_monitor - support new datadog event monitor of type ``event-v2 alert`` (https://github.com/ansible-collections/community.general/pull/4457).
- filesystem - add support for resizing btrfs (https://github.com/ansible-collections/community.general/issues/4465).
- lxd_container - adds ``project`` option to allow selecting project for LXD instance (https://github.com/ansible-collections/community.general/pull/4479).
- lxd_profile - adds ``project`` option to allow selecting project for LXD profile (https://github.com/ansible-collections/community.general/pull/4479).
- nmap inventory plugin - add ``sudo`` option in plugin in order to execute ``sudo nmap`` so that ``nmap`` runs with elevated privileges (https://github.com/ansible-collections/community.general/pull/4506).
- nomad_job - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- nomad_job_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_device - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_sshkey - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_volume - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- profitbricks - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox inventory plugin - add token authentication as an alternative to username/password (https://github.com/ansible-collections/community.general/pull/4540).
- proxmox inventory plugin - parse LXC configs returned by the proxmox API (https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_snap - add restore snapshot option (https://github.com/ansible-collections/community.general/pull/4377).
- proxmox_snap - fixed timeout value to correctly reflect time in seconds. The timeout was off by one second (https://github.com/ansible-collections/community.general/pull/4377).
- redfish_command - add ``IndicatorLedOn``, ``IndicatorLedOff``, and ``IndicatorLedBlink`` commands to the Systems category for controlling system LEDs (https://github.com/ansible-collections/community.general/issues/4084).
- seport - minor refactoring (https://github.com/ansible-collections/community.general/pull/4471).
- smartos_image_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- terraform - adds ``terraform_upgrade`` parameter which allows ``terraform init`` to satisfy new provider constraints in an existing Terraform project (https://github.com/ansible-collections/community.general/issues/4333).
- udm_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- udm_share - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- vmadm - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_app - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_db - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- xfconf - added missing value types ``char``, ``uchar``, ``int64`` and ``uint64`` (https://github.com/ansible-collections/community.general/pull/4534).
Deprecated Features
-------------------
- nmcli - deprecate default hairpin mode for a bridge. This is so that we can change it to ``false`` in community.general 7.0.0, as this is also the default in ``nmcli`` (https://github.com/ansible-collections/community.general/pull/4334).
- proxmox inventory plugin - the current default ``true`` of the ``want_proxmox_nodes_ansible_host`` option has been deprecated. The default will change to ``false`` in community.general 6.0.0. To keep the current behavior, explicitly set ``want_proxmox_nodes_ansible_host`` to ``true`` in your inventory configuration. We suggest switching to the new behavior now by explicitly setting it to ``false``, and by using ``compose:`` to set ``ansible_host`` to the correct value. See the examples in the plugin documentation for details (https://github.com/ansible-collections/community.general/pull/4466).
Bugfixes
--------
- dnsmadeeasy - fix failure on deleting DNS entries when API response does not contain monitor value (https://github.com/ansible-collections/community.general/issues/3620).
- git_branch - remove deprecated and unnecessary branch ``unprotect`` method (https://github.com/ansible-collections/community.general/pull/4496).
- gitlab_group - improve searching for projects inside group on deletion (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_group_members - handle more than 20 groups when finding a group (https://github.com/ansible-collections/community.general/pull/4491, https://github.com/ansible-collections/community.general/issues/4460, https://github.com/ansible-collections/community.general/issues/3729).
- gitlab_hook - handle more than 20 hooks when finding a hook (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_project - handle more than 20 namespaces when finding a namespace (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_project_members - handle more than 20 projects and users when finding a project resp. user (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_user - handle more than 20 users and SSH keys when finding a user resp. SSH key (https://github.com/ansible-collections/community.general/pull/4491).
- keycloak - fix parameters types for ``defaultDefaultClientScopes`` and ``defaultOptionalClientScopes`` from list of dictionaries to list of strings (https://github.com/ansible-collections/community.general/pull/4526).
- opennebula inventory plugin - complete the implementation of ``constructable`` for opennebula inventory plugin. Now ``keyed_groups``, ``compose``, ``groups`` actually work (https://github.com/ansible-collections/community.general/issues/4497).
- pacman - fixed bug where ``absent`` state did not work for locally installed packages (https://github.com/ansible-collections/community.general/pull/4464).
- pritunl - fixed bug where the pritunl plugin API added unneeded data to the ``auth_string`` parameter (https://github.com/ansible-collections/community.general/issues/4527).
- proxmox inventory plugin - fix error when parsing container with LXC configs (https://github.com/ansible-collections/community.general/issues/4472, https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_kvm - fix a bug where getting the state of a VM without a name would fail (https://github.com/ansible-collections/community.general/pull/4508).
- xbps - fix error message that is reported when installing packages fails (https://github.com/ansible-collections/community.general/pull/4438).
New Modules
-----------
Cloud
~~~~~
lxd
^^^
- lxd_project - Manage LXD projects
Monitoring
~~~~~~~~~~
- alerta_customer - Manage customers in Alerta
v4.7.0
======
Release Summary
---------------
Regular bugfix and feature release.
Minor Changes
-------------
- ipa_service - add ``skip_host_check`` parameter. (https://github.com/ansible-collections/community.general/pull/4417).
- keycloak_client - add ``always_display_in_console`` parameter (https://github.com/ansible-collections/community.general/issues/4390).
- keycloak_client - add ``default_client_scopes`` and ``optional_client_scopes`` parameters. (https://github.com/ansible-collections/community.general/pull/4385).
- proxmox inventory plugin - add support for templating the ``url``, ``user``, and ``password`` options (https://github.com/ansible-collections/community.general/pull/4418).
- sudoers - add support for ``runas`` parameter (https://github.com/ansible-collections/community.general/issues/4379).
Bugfixes
--------
- dsv lookup plugin - raise an Ansible error if the wrong ``python-dsv-sdk`` version is installed (https://github.com/ansible-collections/community.general/pull/4422).
- keycloak_* - the documented ``validate_certs`` parameter was not taken into account when calling the ``open_url`` function in some cases, thus enforcing certificate validation even when ``validate_certs`` was set to ``false``. (https://github.com/ansible-collections/community.general/pull/4382)
- nmcli - fix returning "changed" when routes parameters set, also suggest new routes4 and routes6 format (https://github.com/ansible-collections/community.general/issues/4131).
- proxmox inventory plugin - fixed the ``tags_parsed`` field when Proxmox returns a single space for the ``tags`` entry (https://github.com/ansible-collections/community.general/pull/4378).
- zypper - fixed bug that caused zypper to always report [ok] and do nothing on ``state=present`` when all packages in ``name`` had a version specification (https://github.com/ansible-collections/community.general/issues/4371, https://github.com/ansible-collections/community.general/pull/4421).
v4.6.1
======


@@ -17,7 +17,7 @@ If you encounter abusive behavior violating the [Ansible Code of Conduct](https:
## Tested with Ansible
Tested with the current Ansible 2.9, ansible-base 2.10, ansible-core 2.11, ansible-core 2.12 releases and the current development version of ansible-core. Ansible versions before 2.9.10 are not supported.
Tested with the current Ansible 2.9, ansible-base 2.10, ansible-core 2.11, ansible-core 2.12, ansible-core 2.13 releases and the current development version of ansible-core. Ansible versions before 2.9.10 are not supported.
## External requirements


@@ -1603,3 +1603,182 @@ releases:
- 4351-inventory-lxd-handling_metadata_wo_os_and_release.yml
- 4368-reverts-4281.yml
release_date: '2022-03-16'
4.7.0:
changes:
bugfixes:
- dsv lookup plugin - raise an Ansible error if the wrong ``python-dsv-sdk``
version is installed (https://github.com/ansible-collections/community.general/pull/4422).
- keycloak_* - the documented ``validate_certs`` parameter was not taken into
account when calling the ``open_url`` function in some cases, thus enforcing
certificate validation even when ``validate_certs`` was set to ``false``.
(https://github.com/ansible-collections/community.general/pull/4382)
- nmcli - fix returning "changed" when routes parameters set, also suggest new
routes4 and routes6 format (https://github.com/ansible-collections/community.general/issues/4131).
- proxmox inventory plugin - fixed the ``tags_parsed`` field when Proxmox returns
a single space for the ``tags`` entry (https://github.com/ansible-collections/community.general/pull/4378).
- zypper - fixed bug that caused zypper to always report [ok] and do nothing
on ``state=present`` when all packages in ``name`` had a version specification
(https://github.com/ansible-collections/community.general/issues/4371, https://github.com/ansible-collections/community.general/pull/4421).
minor_changes:
- ipa_service - add ``skip_host_check`` parameter. (https://github.com/ansible-collections/community.general/pull/4417).
- keycloak_client - add ``always_display_in_console`` parameter (https://github.com/ansible-collections/community.general/issues/4390).
- keycloak_client - add ``default_client_scopes`` and ``optional_client_scopes``
parameters. (https://github.com/ansible-collections/community.general/pull/4385).
- proxmox inventory plugin - add support for templating the ``url``, ``user``,
and ``password`` options (https://github.com/ansible-collections/community.general/pull/4418).
- sudoers - add support for ``runas`` parameter (https://github.com/ansible-collections/community.general/issues/4379).
release_summary: Regular bugfix and feature release.
fragments:
- 4.7.0.yml
- 4131-nmcli_fix_reports_changed_for_routes4_parameter.yml
- 4378-proxmox-inventory-tags.yml
- 4380-sudoers-runas-parameter.yml
- 4382-keycloak-add-missing-validate_certs-parameters.yml
- 4385-keycloak-client-default-optional-scopes.yml
- 4386-proxmox-support-templating-in-inventory-file.yml
- 4417-ipa_service-add-skip_host_check.yml
- 4421-zypper_package_version_handling_fix.yml
- 4422-warn-user-if-incorrect-SDK-version-is-installed.yaml
- 4429-keycloak-client-add-always-display-in-console.yml
release_date: '2022-04-05'
4.8.0:
changes:
bugfixes:
- dnsmadeeasy - fix failure on deleting DNS entries when API response does not
contain monitor value (https://github.com/ansible-collections/community.general/issues/3620).
- git_branch - remove deprecated and unnecessary branch ``unprotect`` method
(https://github.com/ansible-collections/community.general/pull/4496).
- 'gitlab_group - improve searching for projects inside group on deletion (https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_group_members - handle more than 20 groups when finding a group (https://github.com/ansible-collections/community.general/pull/4491,
https://github.com/ansible-collections/community.general/issues/4460, https://github.com/ansible-collections/community.general/issues/3729).
'
- 'gitlab_hook - handle more than 20 hooks when finding a hook (https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_project - handle more than 20 namespaces when finding a namespace
(https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_project_members - handle more than 20 projects and users when finding
a project resp. user (https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_user - handle more than 20 users and SSH keys when finding a user
resp. SSH key (https://github.com/ansible-collections/community.general/pull/4491).
'
- keycloak - fix parameters types for ``defaultDefaultClientScopes`` and ``defaultOptionalClientScopes``
from list of dictionaries to list of strings (https://github.com/ansible-collections/community.general/pull/4526).
- opennebula inventory plugin - complete the implementation of ``constructable``
for opennebula inventory plugin. Now ``keyed_groups``, ``compose``, ``groups``
actually work (https://github.com/ansible-collections/community.general/issues/4497).
- pacman - fixed bug where ``absent`` state did not work for locally installed
packages (https://github.com/ansible-collections/community.general/pull/4464).
- pritunl - fixed bug where the pritunl plugin API added unneeded data to the ``auth_string``
parameter (https://github.com/ansible-collections/community.general/issues/4527).
- proxmox inventory plugin - fix error when parsing container with LXC configs
(https://github.com/ansible-collections/community.general/issues/4472, https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_kvm - fix a bug where getting the state of a VM without a name would fail
(https://github.com/ansible-collections/community.general/pull/4508).
- xbps - fix error message that is reported when installing packages fails (https://github.com/ansible-collections/community.general/pull/4438).
deprecated_features:
- nmcli - deprecate default hairpin mode for a bridge. This is so that we can change
it to ``false`` in community.general 7.0.0, as this is also the default in
``nmcli`` (https://github.com/ansible-collections/community.general/pull/4334).
- proxmox inventory plugin - the current default ``true`` of the ``want_proxmox_nodes_ansible_host``
option has been deprecated. The default will change to ``false`` in community.general
6.0.0. To keep the current behavior, explicitly set ``want_proxmox_nodes_ansible_host``
to ``true`` in your inventory configuration. We suggest switching to the new
behavior now by explicitly setting it to ``false``, and by using ``compose:``
to set ``ansible_host`` to the correct value. See the examples in the plugin
documentation for details (https://github.com/ansible-collections/community.general/pull/4466).
minor_changes:
- alternatives - add ``state`` parameter, which provides control over whether
the alternative should be set as the active selection for its alternatives
group (https://github.com/ansible-collections/community.general/issues/4543,
https://github.com/ansible-collections/community.general/pull/4557).
- atomic_container - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- clc_alert_policy - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_loadbalancer - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_server - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- cmd_runner module util - reusable command runner with consistent argument
formatting and sensible defaults (https://github.com/ansible-collections/community.general/pull/4476).
- datadog_monitor - support new datadog event monitor of type ``event-v2 alert``
(https://github.com/ansible-collections/community.general/pull/4457).
- filesystem - add support for resizing btrfs (https://github.com/ansible-collections/community.general/issues/4465).
- lxd_container - adds ``project`` option to allow selecting project for LXD
instance (https://github.com/ansible-collections/community.general/pull/4479).
- lxd_profile - adds ``project`` option to allow selecting project for LXD profile
(https://github.com/ansible-collections/community.general/pull/4479).
- nmap inventory plugin - add ``sudo`` option in plugin in order to execute
``sudo nmap`` so that ``nmap`` runs with elevated privileges (https://github.com/ansible-collections/community.general/pull/4506).
- nomad_job - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- nomad_job_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_device - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_sshkey - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_volume - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- profitbricks - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox inventory plugin - add token authentication as an alternative to username/password
(https://github.com/ansible-collections/community.general/pull/4540).
- proxmox inventory plugin - parse LXC configs returned by the proxmox API (https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_snap - add restore snapshot option (https://github.com/ansible-collections/community.general/pull/4377).
- proxmox_snap - fixed timeout value to correctly reflect time in seconds. The
timeout was off by one second (https://github.com/ansible-collections/community.general/pull/4377).
- redfish_command - add ``IndicatorLedOn``, ``IndicatorLedOff``, and ``IndicatorLedBlink``
commands to the Systems category for controlling system LEDs (https://github.com/ansible-collections/community.general/issues/4084).
- seport - minor refactoring (https://github.com/ansible-collections/community.general/pull/4471).
- smartos_image_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- terraform - adds ``terraform_upgrade`` parameter which allows ``terraform
init`` to satisfy new provider constraints in an existing Terraform project
(https://github.com/ansible-collections/community.general/issues/4333).
- udm_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- udm_share - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- vmadm - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_app - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_db - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- xfconf - added missing value types ``char``, ``uchar``, ``int64`` and ``uint64``
(https://github.com/ansible-collections/community.general/pull/4534).
release_summary: Regular feature and bugfix release. Please note that this is
the last minor 4.x.0 release. Further releases with major version 4 will be
bugfix releases 4.8.y.
fragments:
- 4.8.0.yml
- 4084-add-redfish-system-indicator-led.yml
- 4320-nmcli-hairpin.yml
- 4377-allow-proxmox-snapshot-restoring.yml
- 4438-fix-error-message.yaml
- 4455-terraform-provider-upgrade.yml
- 4457-support-datadog-monitors-type-event-v2.yaml
- 4459-only-get-monitor-if-it-is-not-null-api-response.yaml
- 4464-pacman-fix-local-remove.yaml
- 4465-btrfs-resize.yml
- 4466-proxmox-ansible_host-deprecation.yml
- 4471-seport-refactor.yaml
- 4476-cmd_runner.yml
- 4479-add-project-support-for-lxd_container-and-lxd_profile.yml
- 4491-specify_all_in_list_calls.yaml
- 4492-proxmox_kvm_fix_vm_without_name.yaml
- 4496-remove-deprecated-method-in-gitlab-branch-module.yml
- 4506-sudo-in-nmap-inv-plugin.yaml
- 4524-update-opennebula-inventory-plugin-to-match-documentation.yaml
- 4526-keycloak-realm-types.yaml
- 4530-fix-unauthorized-pritunl-request.yaml
- 4534-xfconf-added-value-types.yaml
- 4540-proxmox-inventory-token-auth.yml
- 4555-proxmox-lxc-key.yml
- 4556-remove-default-none-1.yml
- 4557-alternatives-add-state-parameter.yml
- 4567-remove-default-none-2.yml
modules:
- description: Manage customers in Alerta
name: alerta_customer
namespace: monitoring
- description: Manage LXD projects
name: lxd_project
namespace: cloud.lxd
release_date: '2022-04-26'

docs/docsite/links.yml

@@ -0,0 +1,23 @@
---
edit_on_github:
repository: ansible-collections/community.general
branch: main
path_prefix: ''
extra_links:
- description: Submit a bug report
url: https://github.com/ansible-collections/community.general/issues/new?assignees=&labels=&template=bug_report.yml
- description: Request a feature
url: https://github.com/ansible-collections/community.general/issues/new?assignees=&labels=&template=feature_request.yml
communication:
matrix_rooms:
- topic: General usage and support questions
room: '#users:ansible.im'
irc_channels:
- topic: General usage and support questions
network: Libera
channel: '#ansible'
mailing_lists:
- topic: Ansible Project List
url: https://groups.google.com/g/ansible-project


@@ -1,6 +1,6 @@
namespace: community
name: general
version: 4.6.1
version: 4.8.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -38,8 +38,10 @@ options:
version_added: 2.0.0
server_uri:
description:
- A URI to the LDAP server.
- The I(server_uri) parameter may be a comma- or whitespace-separated list of URIs containing only the schema, the host, and the port fields.
- The default value lets the underlying LDAP client library look for a UNIX domain socket in its default location.
- Note that when using multiple URIs you cannot determine to which URI your client gets connected.
- For URIs containing additional fields, particularly when using commas, behavior is undefined.
type: str
default: ldapi:///
start_tls:
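The multi-URI rule documented above (comma- or whitespace-separated URIs carrying only scheme, host, and port) can be sketched with hypothetical helpers; the actual plugin passes the string through to the LDAP client library:

```python
import re
from urllib.parse import urlsplit

def split_server_uris(server_uri):
    """Split a comma- or whitespace-separated server_uri value into URIs."""
    return [u for u in re.split(r'[,\s]+', server_uri.strip()) if u]

def has_only_scheme_host_port(uri):
    """Check that a URI carries nothing beyond scheme, host, and port --
    the condition under which a multi-URI list is well-defined."""
    parts = urlsplit(uri)
    return not (parts.path or parts.query or parts.fragment
                or parts.username or parts.password)
```

A URI with extra fields (for example a search base in the path) would fail the second check, matching the "behavior is undefined" caveat.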


@@ -21,6 +21,11 @@ DOCUMENTATION = '''
description: token that ensures this is a source file for the 'nmap' plugin.
required: True
choices: ['nmap', 'community.general.nmap']
sudo:
description: Set to C(true) to execute a C(sudo nmap) plugin scan.
version_added: 4.8.0
default: false
type: boolean
address:
description: Network IP or range of IPs to scan, you can use a simple range (10.2.2.15-25) or CIDR notation.
required: True
@@ -49,6 +54,13 @@ EXAMPLES = '''
plugin: community.general.nmap
strict: False
address: 192.168.0.0/24
# A sudo nmap scan to make full use of nmap's scanning power.
plugin: community.general.nmap
sudo: true
strict: False
address: 192.168.0.0/24
'''
import os
@@ -135,6 +147,10 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
if not user_cache_setting or cache_needs_update:
# setup command
cmd = [self._nmap]
if self._options['sudo']:
cmd.insert(0, 'sudo')
if not self._options['ports']:
cmd.append('-sP')
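The command assembly shown in the hunk above can be sketched as a self-contained helper (hypothetical name and simplified arguments; the real plugin builds a longer command line from its options):

```python
def build_nmap_command(nmap_path, sudo=False, ports=True,
                       address='192.168.0.0/24'):
    """Assemble the nmap command line the way the inventory plugin does:
    optionally prefix with sudo, and fall back to a ping scan (-sP)
    when port scanning is disabled."""
    cmd = [nmap_path]
    if sudo:
        cmd.insert(0, 'sudo')  # run nmap with elevated privileges
    if not ports:
        cmd.append('-sP')
    cmd.append(address)
    return cmd
```

Inserting `sudo` at index 0 keeps the rest of the argument list untouched, which is why the plugin can add the option with a single `insert` call.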


@@ -206,28 +206,40 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
def _populate(self):
hostname_preference = self.get_option('hostname')
group_by_labels = self.get_option('group_by_labels')
strict = self.get_option('strict')
# Add a top group 'all'
self.inventory.add_group(group='all')
filter_by_label = self.get_option('filter_by_label')
for server in self._retrieve_servers(filter_by_label):
servers = self._retrieve_servers(filter_by_label)
for server in servers:
hostname = server['name']
# check for labels
if group_by_labels and server['LABELS']:
for label in server['LABELS']:
self.inventory.add_group(group=label)
self.inventory.add_host(host=server['name'], group=label)
self.inventory.add_host(host=hostname, group=label)
self.inventory.add_host(host=server['name'], group='all')
self.inventory.add_host(host=hostname, group='all')
for attribute, value in server.items():
self.inventory.set_variable(server['name'], attribute, value)
self.inventory.set_variable(hostname, attribute, value)
if hostname_preference != 'name':
self.inventory.set_variable(server['name'], 'ansible_host', server[hostname_preference])
self.inventory.set_variable(hostname, 'ansible_host', server[hostname_preference])
if server.get('SSH_PORT'):
self.inventory.set_variable(server['name'], 'ansible_port', server['SSH_PORT'])
self.inventory.set_variable(hostname, 'ansible_port', server['SSH_PORT'])
# handle constructable implementation: get composed variables if any
self._set_composite_vars(self.get_option('compose'), server, hostname, strict=strict)
# groups based on jinja conditionals get added to specific groups
self._add_host_to_composed_groups(self.get_option('groups'), server, hostname, strict=strict)
# groups based on variables associated with them in the inventory
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), server, hostname, strict=strict)
def parse(self, inventory, loader, path, cache=True):
if not HAS_PYONE:

View File

@@ -3,6 +3,7 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
@@ -31,6 +32,7 @@ DOCUMENTATION = '''
description:
- URL to Proxmox cluster.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_URL) will be used instead.
- Since community.general 4.7.0 you can also use templating to specify the value of the I(url).
default: 'http://localhost:8006'
type: str
env:
@@ -40,6 +42,7 @@ DOCUMENTATION = '''
description:
- Proxmox authentication user.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_USER) will be used instead.
- Since community.general 4.7.0 you can also use templating to specify the value of the I(user).
required: yes
type: str
env:
@@ -49,11 +52,33 @@ DOCUMENTATION = '''
description:
- Proxmox authentication password.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_PASSWORD) will be used instead.
required: yes
- Since community.general 4.7.0 you can also use templating to specify the value of the I(password).
- If you do not specify a password, you must set I(token_id) and I(token_secret) instead.
type: str
env:
- name: PROXMOX_PASSWORD
version_added: 2.0.0
token_id:
description:
- Proxmox authentication token ID.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_TOKEN_ID) will be used instead.
- To use token authentication, you must also specify I(token_secret). If you do not specify I(token_id) and I(token_secret),
you must set a password instead.
- Make sure to grant explicit pve permissions to the token or disable 'privilege separation' to use the user's privileges instead.
version_added: 4.8.0
type: str
env:
- name: PROXMOX_TOKEN_ID
token_secret:
description:
- Proxmox authentication token secret.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_TOKEN_SECRET) will be used instead.
- To use token authentication, you must also specify I(token_id). If you do not specify I(token_id) and I(token_secret),
you must set a password instead.
version_added: 4.8.0
type: str
env:
- name: PROXMOX_TOKEN_SECRET
validate_certs:
description: Verify SSL certificate if using HTTPS.
type: boolean
@@ -75,7 +100,9 @@ DOCUMENTATION = '''
description:
- Whether to set C(ansible_host) for proxmox nodes.
- When set to C(true) (default), will use the first available interface. This can be different from what you expect.
default: true
- This currently defaults to C(true), but the default is deprecated since community.general 4.8.0.
The default will change to C(false) in community.general 6.0.0. To avoid a deprecation warning, please
set this parameter explicitly.
type: bool
filters:
version_added: 4.6.0
@@ -100,6 +127,25 @@ EXAMPLES = '''
plugin: community.general.proxmox
user: ansible@pve
password: secure
# Note that this can easily give you wrong values for ansible_host. See further below for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
# Instead of logging in with a password, Proxmox supports API token authentication since release 6.2.
plugin: community.general.proxmox
user: ci@pve
token_id: gitlab-1
token_secret: fa256e9c-26ab-41ec-82da-707a2c079829
# The secret can also be a vault string or passed via the environment variable TOKEN_SECRET.
token_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62353634333163633336343265623632626339313032653563653165313262343931643431656138
6134333736323265656466646539663134306166666237630a653363623262636663333762316136
34616361326263383766366663393837626437316462313332663736623066656237386531663731
3037646432383064630a663165303564623338666131353366373630656661333437393937343331
32643131386134396336623736393634373936356332623632306561356361323737313663633633
6231313333666361656537343562333337323030623732323833
# More complete example demonstrating the use of 'want_facts' and the constructed options
# Note that using facts returned by 'want_facts' in constructed options requires 'want_facts=true'
@@ -120,6 +166,9 @@ groups:
mailservers: "'mail' in (proxmox_tags_parsed|list)"
compose:
ansible_port: 2222
# Note that this can easily give you wrong values for ansible_host. See further below for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
# Using the inventory to allow ansible to connect via the first IP address of the VM / Container
# (Default is connection by name of QEMU/LXC guests)
@@ -131,11 +180,23 @@ user: ansible@pve
password: secure
validate_certs: false
want_facts: true
want_proxmox_nodes_ansible_host: false
compose:
ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ipaddr('address')
my_inv_var_1: "'my_var1_value'"
my_inv_var_2: >
"my_var_2_value"
# Specify the url, user and password using templating
# my.proxmox.yml
plugin: community.general.proxmox
url: "{{ lookup('ansible.builtin.ini', 'url', section='proxmox', file='file.ini') }}"
user: "{{ lookup('ansible.builtin.env','PM_USER') | default('ansible@pve') }}"
password: "{{ lookup('community.general.random_string', base64=True) }}"
# Note that this can easily give you wrong values for ansible_host. See further up for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
'''
import itertools
@@ -146,8 +207,10 @@ from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.errors import AnsibleError
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.utils.display import Display
from ansible.template import Templar
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
@@ -198,15 +261,24 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
def _get_auth(self):
credentials = urlencode({'username': self.proxmox_user, 'password': self.proxmox_password, })
a = self._get_session()
ret = a.post('%s/api2/json/access/ticket' % self.proxmox_url, data=credentials)
if self.proxmox_password:
json = ret.json()
credentials = urlencode({'username': self.proxmox_user, 'password': self.proxmox_password, })
self.credentials = {
'ticket': json['data']['ticket'],
'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
}
a = self._get_session()
ret = a.post('%s/api2/json/access/ticket' % self.proxmox_url, data=credentials)
json = ret.json()
self.headers = {
# only required for POST/PUT/DELETE methods, which we are not using currently
# 'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
'Cookie': 'PVEAuthCookie={0}'.format(json['data']['ticket'])
}
else:
self.headers = {'Authorization': 'PVEAPIToken={0}!{1}={2}'.format(self.proxmox_user, self.proxmox_token_id, self.proxmox_token_secret)}
def _get_json(self, url, ignore_errors=None):
@@ -218,8 +290,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
data = []
s = self._get_session()
while True:
headers = {'Cookie': 'PVEAuthCookie={0}'.format(self.credentials['ticket'])}
ret = s.get(url, headers=headers)
ret = s.get(url, headers=self.headers)
if ignore_errors and ret.status_code in ignore_errors:
break
ret.raise_for_status()
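The token branch above replaces the ticket/cookie handshake with a single static Authorization header. A minimal sketch of the header format, using the user, token ID, and secret from the example inventory further up (they are illustrative placeholders, not real credentials):

```python
def pve_token_header(user, token_id, token_secret):
    # Header format accepted by the Proxmox VE API: USER@REALM!TOKENID=SECRET
    return {'Authorization': 'PVEAPIToken={0}!{1}={2}'.format(user, token_id, token_secret)}

headers = pve_token_header('ci@pve', 'gitlab-1', 'fa256e9c-26ab-41ec-82da-707a2c079829')
```

Unlike the ticket, this header needs no prior POST to `/access/ticket`, which is why `_get_auth()` can build it directly.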
@@ -323,8 +394,10 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
# Additional field containing parsed tags as list
if config == 'tags':
parsed_key = self.to_safe('%s%s' % (key, "_parsed"))
properties[parsed_key] = [tag.strip() for tag in value.split(",")]
stripped_value = value.strip()
if stripped_value:
parsed_key = key + "_parsed"
properties[parsed_key] = [tag.strip() for tag in stripped_value.split(",")]
# The first field in the agent string tells you whether the agent is enabled
# the rest of the comma separated string is extra config for the agent
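The `value.strip()` guard added above matters because splitting an empty string still yields one (empty) element; a standalone sketch of the same logic, with illustrative tag values:

```python
def parse_tags(value):
    # Mirrors the guard above: "".split(",") returns [""] (one empty tag),
    # so a whitespace-only value must be skipped before splitting.
    stripped = value.strip()
    if not stripped:
        return None  # caller leaves the *_parsed property unset
    return [tag.strip() for tag in stripped.split(",")]

empty = parse_tags("   ")
tags = parse_tags("web, db ,monitoring")
```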
@@ -334,7 +407,16 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
agent_iface_key = self.to_safe('%s%s' % (key, "_interfaces"))
properties[agent_iface_key] = agent_iface_value
if config not in plaintext_configs and not isinstance(value, int) and all("=" in v for v in value.split(",")):
if config == 'lxc':
out_val = {}
for k, v in value:
if k.startswith('lxc.'):
k = k[len('lxc.'):]
out_val[k] = v
value = out_val
if config not in plaintext_configs and isinstance(value, string_types) \
and all("=" in v for v in value.split(",")):
# split off strings with commas to a dict
# skip over any keys that cannot be processed
try:
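The commit message explains that the API returns the `lxc` key as a list of `[key, value]` pairs rather than a comma-separated string, which crashed the old `split(',')` path. The conversion above, run on illustrative data (the config keys are made up for the example):

```python
# Shaped like the Proxmox API's 'lxc' key: a list of [key, value] pairs.
value = [['lxc.cgroup2.devices.allow', 'a'], ['lxc.mount.auto', 'proc:rw']]

out_val = {}
for k, v in value:
    if k.startswith('lxc.'):
        k = k[len('lxc.'):]  # drop the 'lxc.' prefix, as in the plugin
    out_val[k] = v
```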
@@ -453,6 +535,16 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
nodes_group = self._group('nodes')
self.inventory.add_group(nodes_group)
want_proxmox_nodes_ansible_host = self.get_option("want_proxmox_nodes_ansible_host")
if want_proxmox_nodes_ansible_host is None:
display.deprecated(
'The want_proxmox_nodes_ansible_host option of the community.general.proxmox inventory plugin'
' currently defaults to `true`, but this default has been deprecated and will change to `false`'
' in community.general 6.0.0. To keep the current behavior and remove this deprecation warning,'
' explicitly set `want_proxmox_nodes_ansible_host` to `true` in your inventory configuration',
version='6.0.0', collection_name='community.general')
want_proxmox_nodes_ansible_host = True
# gather vm's on nodes
self._get_auth()
hosts = []
@@ -468,7 +560,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
continue
# get node IP address
if self.get_option("want_proxmox_nodes_ansible_host"):
if want_proxmox_nodes_ansible_host:
ip = self._get_node_ip(node['node'])
self.inventory.set_variable(node['node'], 'ansible_host', ip)
@@ -498,10 +590,37 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
# read config from file, this sets 'options'
self._read_config_data(path)
t = Templar(loader=loader)
# read options
self.proxmox_url = self.get_option('url').rstrip('/')
self.proxmox_user = self.get_option('user')
self.proxmox_password = self.get_option('password')
proxmox_url = self.get_option('url')
if t.is_template(proxmox_url):
proxmox_url = t.template(variable=proxmox_url, disable_lookups=False)
self.proxmox_url = proxmox_url.rstrip('/')
proxmox_user = self.get_option('user')
if t.is_template(proxmox_user):
proxmox_user = t.template(variable=proxmox_user, disable_lookups=False)
self.proxmox_user = proxmox_user
proxmox_password = self.get_option('password')
if t.is_template(proxmox_password):
proxmox_password = t.template(variable=proxmox_password, disable_lookups=False)
self.proxmox_password = proxmox_password
proxmox_token_id = self.get_option('token_id')
if t.is_template(proxmox_token_id):
proxmox_token_id = t.template(variable=proxmox_token_id, disable_lookups=False)
self.proxmox_token_id = proxmox_token_id
proxmox_token_secret = self.get_option('token_secret')
if t.is_template(proxmox_token_secret):
proxmox_token_secret = t.template(variable=proxmox_token_secret, disable_lookups=False)
self.proxmox_token_secret = proxmox_token_secret
if proxmox_password is None and (proxmox_token_id is None or proxmox_token_secret is None):
raise AnsibleError('You must specify either a password or both token_id and token_secret.')
self.cache_key = self.get_cache_key(path)
self.use_cache = cache and self.get_option('cache')
self.host_filters = self.get_option('filters')
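The repeated get-option/is-template/template pattern above can be read as one helper. A simplified stand-in (the `'{{'` check here only approximates `Templar.is_template`, and the toy renderer stands in for `Templar.template`; neither is the real Ansible implementation):

```python
def resolve_option(value, render):
    # Template the option only when it looks like a Jinja2 expression,
    # otherwise pass the literal value through unchanged.
    if value is not None and '{{' in value:
        return render(value)
    return value

render = lambda s: s.replace('{{ pm_user }}', 'ansible@pve')  # toy renderer
user = resolve_option('{{ pm_user }}', render)
plain = resolve_option('root@pam', render)
```

This is why a plain `user: root@pam` keeps working while `user: "{{ lookup(...) }}"` is evaluated.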

View File

@@ -105,11 +105,15 @@ display = Display()
class LookupModule(LookupBase):
@staticmethod
def Client(vault_parameters):
return SecretsVault(**vault_parameters)
try:
vault = SecretsVault(**vault_parameters)
return vault
except TypeError:
raise AnsibleError("python-dsv-sdk==0.0.1 must be installed to use this plugin")
def run(self, terms, variables, **kwargs):
if sdk_is_missing:
raise AnsibleError("python-dsv-sdk must be installed to use this plugin")
raise AnsibleError("python-dsv-sdk==0.0.1 must be installed to use this plugin")
self.set_options(var_options=variables, direct=kwargs)

View File

@@ -0,0 +1,291 @@
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky <russoz@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from functools import wraps
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.six import iteritems
def _ensure_list(value):
return list(value) if is_sequence(value) else [value]
def _process_as_is(rc, out, err):
return rc, out, err
class CmdRunnerException(Exception):
pass
class MissingArgumentFormat(CmdRunnerException):
def __init__(self, arg, args_order, args_formats):
self.args_order = args_order
self.arg = arg
self.args_formats = args_formats
def __repr__(self):
return "MissingArgumentFormat({0!r}, {1!r}, {2!r})".format(
self.arg,
self.args_order,
self.args_formats,
)
def __str__(self):
return "Cannot find format for parameter {0} {1} in: {2}".format(
self.arg,
self.args_order,
self.args_formats,
)
class MissingArgumentValue(CmdRunnerException):
def __init__(self, args_order, arg):
self.args_order = args_order
self.arg = arg
def __repr__(self):
return "MissingArgumentValue({0!r}, {1!r})".format(
self.args_order,
self.arg,
)
def __str__(self):
return "Cannot find value for parameter {0} in {1}".format(
self.arg,
self.args_order,
)
class FormatError(CmdRunnerException):
def __init__(self, name, value, args_formats, exc):
self.name = name
self.value = value
self.args_formats = args_formats
self.exc = exc
super(FormatError, self).__init__()
def __repr__(self):
return "FormatError({0!r}, {1!r}, {2!r}, {3!r})".format(
self.name,
self.value,
self.args_formats,
self.exc,
)
def __str__(self):
return "Failed to format parameter {0} with value {1}: {2}".format(
self.name,
self.value,
self.exc,
)
class _ArgFormat(object):
def __init__(self, func, ignore_none=None):
self.func = func
self.ignore_none = ignore_none
def __call__(self, value, ctx_ignore_none):
ignore_none = self.ignore_none if self.ignore_none is not None else ctx_ignore_none
if value is None and ignore_none:
return []
f = self.func
return [str(x) for x in f(value)]
class _Format(object):
@staticmethod
def as_bool(args):
return _ArgFormat(lambda value: _ensure_list(args) if value else [])
@staticmethod
def as_bool_not(args):
return _ArgFormat(lambda value: [] if value else _ensure_list(args), ignore_none=False)
@staticmethod
def as_optval(arg, ignore_none=None):
return _ArgFormat(lambda value: ["{0}{1}".format(arg, value)], ignore_none=ignore_none)
@staticmethod
def as_opt_val(arg, ignore_none=None):
return _ArgFormat(lambda value: [arg, value], ignore_none=ignore_none)
@staticmethod
def as_opt_eq_val(arg, ignore_none=None):
return _ArgFormat(lambda value: ["{0}={1}".format(arg, value)], ignore_none=ignore_none)
@staticmethod
def as_list(ignore_none=None):
return _ArgFormat(_ensure_list, ignore_none=ignore_none)
@staticmethod
def as_fixed(args):
return _ArgFormat(lambda value: _ensure_list(args), ignore_none=False)
@staticmethod
def as_func(func, ignore_none=None):
return _ArgFormat(func, ignore_none=ignore_none)
@staticmethod
def as_map(_map, default=None, ignore_none=None):
return _ArgFormat(lambda value: _ensure_list(_map.get(value, default)), ignore_none=ignore_none)
@staticmethod
def as_default_type(_type, arg="", ignore_none=None):
fmt = _Format
if _type == "dict":
return fmt.as_func(lambda d: ["--{0}={1}".format(*a) for a in iteritems(d)],
ignore_none=ignore_none)
if _type == "list":
return fmt.as_func(lambda value: ["--{0}".format(x) for x in value], ignore_none=ignore_none)
if _type == "bool":
return fmt.as_bool("--{0}".format(arg))
return fmt.as_opt_val("--{0}".format(arg), ignore_none=ignore_none)
@staticmethod
def unpack_args(func):
@wraps(func)
def wrapper(v):
return func(*v)
return wrapper
@staticmethod
def unpack_kwargs(func):
@wraps(func)
def wrapper(v):
return func(**v)
return wrapper
class CmdRunner(object):
"""
Wrapper for ``AnsibleModule.run_command()``.
It aims to provide a reusable runner with consistent argument formatting
and sensible defaults.
"""
@staticmethod
def _prepare_args_order(order):
return tuple(order) if is_sequence(order) else tuple(order.split())
def __init__(self, module, command, arg_formats=None, default_args_order=(),
check_rc=False, force_lang="C", path_prefix=None, environ_update=None):
self.module = module
self.command = _ensure_list(command)
self.default_args_order = self._prepare_args_order(default_args_order)
if arg_formats is None:
arg_formats = {}
self.arg_formats = dict(arg_formats)
self.check_rc = check_rc
self.force_lang = force_lang
self.path_prefix = path_prefix
if environ_update is None:
environ_update = {}
self.environ_update = environ_update
self.command[0] = module.get_bin_path(command[0], opt_dirs=path_prefix, required=True)
for mod_param_name, spec in iteritems(module.argument_spec):
if mod_param_name not in self.arg_formats:
self.arg_formats[mod_param_name] = _Format.as_default_type(spec['type'], mod_param_name)
def context(self, args_order=None, output_process=None, ignore_value_none=True, **kwargs):
if output_process is None:
output_process = _process_as_is
if args_order is None:
args_order = self.default_args_order
args_order = self._prepare_args_order(args_order)
for p in args_order:
if p not in self.arg_formats:
raise MissingArgumentFormat(p, args_order, tuple(self.arg_formats.keys()))
return _CmdRunnerContext(runner=self,
args_order=args_order,
output_process=output_process,
ignore_value_none=ignore_value_none, **kwargs)
def has_arg_format(self, arg):
return arg in self.arg_formats
class _CmdRunnerContext(object):
def __init__(self, runner, args_order, output_process, ignore_value_none, **kwargs):
self.runner = runner
self.args_order = tuple(args_order)
self.output_process = output_process
self.ignore_value_none = ignore_value_none
self.run_command_args = dict(kwargs)
self.environ_update = runner.environ_update
self.environ_update.update(self.run_command_args.get('environ_update', {}))
if runner.force_lang:
self.environ_update.update({
'LANGUAGE': runner.force_lang,
'LC_ALL': runner.force_lang,
})
self.run_command_args['environ_update'] = self.environ_update
if 'check_rc' not in self.run_command_args:
self.run_command_args['check_rc'] = runner.check_rc
self.check_rc = self.run_command_args['check_rc']
self.cmd = None
self.results_rc = None
self.results_out = None
self.results_err = None
self.results_processed = None
def run(self, **kwargs):
runner = self.runner
module = self.runner.module
self.cmd = list(runner.command)
self.context_run_args = dict(kwargs)
named_args = dict(module.params)
named_args.update(kwargs)
for arg_name in self.args_order:
value = None
try:
value = named_args[arg_name]
self.cmd.extend(runner.arg_formats[arg_name](value, ctx_ignore_none=self.ignore_value_none))
except KeyError:
raise MissingArgumentValue(self.args_order, arg_name)
except Exception as e:
raise FormatError(arg_name, value, runner.arg_formats[arg_name], e)
results = module.run_command(self.cmd, **self.run_command_args)
self.results_rc, self.results_out, self.results_err = results
self.results_processed = self.output_process(*results)
return self.results_processed
@property
def run_info(self):
return dict(
ignore_value_none=self.ignore_value_none,
check_rc=self.check_rc,
environ_update=self.environ_update,
args_order=self.args_order,
cmd=self.cmd,
run_command_args=self.run_command_args,
context_run_args=self.context_run_args,
results_rc=self.results_rc,
results_out=self.results_out,
results_err=self.results_err,
results_processed=self.results_processed,
)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
fmt = _Format()
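A real `CmdRunner` needs an `AnsibleModule`, so here is a standalone sketch of just the argument-formatting layer it builds on. `_ensure_list` and `_ArgFormat` are restated so the sketch runs on its own; the tool path, option names, and values are illustrative:

```python
def _ensure_list(value):
    return list(value) if isinstance(value, (list, tuple)) else [value]

class _ArgFormat(object):
    def __init__(self, func, ignore_none=None):
        self.func = func
        self.ignore_none = ignore_none

    def __call__(self, value, ctx_ignore_none=True):
        ignore_none = self.ignore_none if self.ignore_none is not None else ctx_ignore_none
        if value is None and ignore_none:
            return []  # omit the argument entirely for unset options
        return [str(x) for x in self.func(value)]

as_bool = lambda args: _ArgFormat(lambda value: _ensure_list(args) if value else [])
as_opt_eq_val = lambda arg: _ArgFormat(lambda value: ["{0}={1}".format(arg, value)])

# Build an argv the way context().run() does: walk args_order, format each.
fmts = {'force': as_bool('--force'), 'name': as_opt_eq_val('--name')}
params = {'force': True, 'name': 'web01'}
cmd = ['/usr/bin/mytool']
for arg in ('force', 'name'):
    cmd.extend(fmts[arg](params[arg]))
```

In a module, `fmt.as_bool` / `fmt.as_opt_eq_val` play these roles and `module.run_command(cmd)` executes the result.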

View File

@@ -1237,7 +1237,7 @@ class KeycloakAPI(object):
authentication_flow = {}
# Check if the authentication flow exists on the Keycloak server
authentications = json.load(open_url(URL_AUTHENTICATION_FLOWS.format(url=self.baseurl, realm=realm), method='GET',
headers=self.restheaders, timeout=self.connection_timeout))
headers=self.restheaders, timeout=self.connection_timeout, validate_certs=self.validate_certs))
for authentication in authentications:
if authentication["alias"] == alias:
authentication_flow = authentication
@@ -1281,14 +1281,16 @@ class KeycloakAPI(object):
method='POST',
headers=self.restheaders,
data=json.dumps(new_name),
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
flow_list = json.load(
open_url(
URL_AUTHENTICATION_FLOWS.format(url=self.baseurl,
realm=realm),
method='GET',
headers=self.restheaders,
timeout=self.connection_timeout))
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for flow in flow_list:
if flow["alias"] == config["alias"]:
return flow
@@ -1318,7 +1320,8 @@ class KeycloakAPI(object):
method='POST',
headers=self.restheaders,
data=json.dumps(new_flow),
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
flow_list = json.load(
open_url(
URL_AUTHENTICATION_FLOWS.format(
@@ -1326,7 +1329,8 @@ class KeycloakAPI(object):
realm=realm),
method='GET',
headers=self.restheaders,
timeout=self.connection_timeout))
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for flow in flow_list:
if flow["alias"] == config["alias"]:
return flow
@@ -1351,7 +1355,8 @@ class KeycloakAPI(object):
method='PUT',
headers=self.restheaders,
data=json.dumps(updatedExec),
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Unable to update executions %s: %s" % (updatedExec, str(e)))
@@ -1371,7 +1376,8 @@ class KeycloakAPI(object):
method='POST',
headers=self.restheaders,
data=json.dumps(authenticationConfig),
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Unable to add authenticationConfig %s: %s" % (executionId, str(e)))
@@ -1395,7 +1401,8 @@ class KeycloakAPI(object):
method='POST',
headers=self.restheaders,
data=json.dumps(newSubFlow),
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Unable to create new subflow %s: %s" % (subflowName, str(e)))
@@ -1418,7 +1425,8 @@ class KeycloakAPI(object):
method='POST',
headers=self.restheaders,
data=json.dumps(newExec),
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Unable to create new execution %s: %s" % (execution["provider"], str(e)))
@@ -1440,7 +1448,8 @@ class KeycloakAPI(object):
id=executionId),
method='POST',
headers=self.restheaders,
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
elif diff < 0:
for i in range(-diff):
open_url(
@@ -1450,7 +1459,8 @@ class KeycloakAPI(object):
id=executionId),
method='POST',
headers=self.restheaders,
timeout=self.connection_timeout)
timeout=self.connection_timeout,
validate_certs=self.validate_certs)
except Exception as e:
self.module.fail_json(msg="Unable to change execution priority %s: %s" % (executionId, str(e)))
@@ -1471,7 +1481,8 @@ class KeycloakAPI(object):
flowalias=quote(config["alias"])),
method='GET',
headers=self.restheaders,
timeout=self.connection_timeout))
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
for execution in executions:
if "authenticationConfig" in execution:
execConfigId = execution["authenticationConfig"]
@@ -1483,7 +1494,8 @@ class KeycloakAPI(object):
id=execConfigId),
method='GET',
headers=self.restheaders,
timeout=self.connection_timeout))
timeout=self.connection_timeout,
validate_certs=self.validate_certs))
execution["authenticationConfig"] = execConfig
return executions
except Exception as e:
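The fix above threads `validate_certs` through every `open_url` call individually. One way to make such a flag impossible to drop is to build the shared kwargs once; an illustrative sketch, not the module's actual structure (`fake_open_url` stands in for `ansible.module_utils.urls.open_url`):

```python
calls = []

def fake_open_url(url, **kwargs):
    # Records calls so we can inspect the kwargs actually sent.
    calls.append((url, kwargs))

class Api(object):
    def __init__(self, headers, timeout, validate_certs):
        # Shared kwargs are assembled once; no individual request can
        # silently omit validate_certs again.
        self._common = dict(headers=headers, timeout=timeout,
                            validate_certs=validate_certs)

    def get(self, url):
        return fake_open_url(url, method='GET', **self._common)

api = Api(headers={'Authorization': 'Bearer x'}, timeout=10, validate_certs=False)
api.get('https://keycloak.example.com/auth/flows')
```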

View File

@@ -337,7 +337,6 @@ def pritunl_auth_request(
auth_string = "&".join(
[api_token, auth_timestamp, auth_nonce, method.upper(), path]
+ ([data] if data else [])
)
auth_signature = base64.b64encode(

View File

@@ -732,14 +732,22 @@ class RedfishUtils(object):
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def manage_indicator_led(self, command):
def manage_system_indicator_led(self, command):
return self.manage_indicator_led(command, self.systems_uri)
def manage_chassis_indicator_led(self, command):
return self.manage_indicator_led(command, self.chassis_uri)
def manage_indicator_led(self, command, resource_uri=None):
result = {}
key = 'IndicatorLED'
if resource_uri is None:
resource_uri = self.chassis_uri
payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', "IndicatorLedBlink": 'Blinking'}
result = {}
response = self.get_request(self.root_uri + self.chassis_uri)
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
@@ -749,7 +757,7 @@ class RedfishUtils(object):
if command in payloads.keys():
payload = {'IndicatorLED': payloads[command]}
response = self.patch_request(self.root_uri + self.chassis_uri, payload)
response = self.patch_request(self.root_uri + resource_uri, payload)
if response['ret'] is False:
return response
else:
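The refactor above keeps a single implementation and adds thin wrappers that select the resource URI, with `None` falling back to the chassis URI so existing callers keep the old behaviour. The shape, with illustrative URIs:

```python
class Utils(object):
    def __init__(self):
        self.systems_uri = '/redfish/v1/Systems/1'   # illustrative
        self.chassis_uri = '/redfish/v1/Chassis/1'   # illustrative

    def manage_system_indicator_led(self, command):
        return self.manage_indicator_led(command, self.systems_uri)

    def manage_chassis_indicator_led(self, command):
        return self.manage_indicator_led(command, self.chassis_uri)

    def manage_indicator_led(self, command, resource_uri=None):
        if resource_uri is None:  # preserves the old chassis-only default
            resource_uri = self.chassis_uri
        return (command, resource_uri)  # real code issues GET/PATCH here

u = Utils()
```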

View File

@@ -0,0 +1 @@
./monitoring/alerta_customer.py

View File

@@ -182,10 +182,10 @@ def core(module):
def main():
module = AnsibleModule(
argument_spec=dict(
mode=dict(default=None, choices=['user', 'system']),
mode=dict(choices=['user', 'system']),
name=dict(required=True),
image=dict(required=True),
rootfs=dict(default=None),
rootfs=dict(),
state=dict(default='latest', choices=['present', 'absent', 'latest', 'rollback']),
backend=dict(required=True, choices=['docker', 'ostree']),
values=dict(type='list', default=[], elements='str'),
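This and the following hunks are behavioural no-ops: `AnsibleModule` treats a missing `default` key as `None`, so `dict()` and `dict(default=None)` describe the same option. The equivalence at the spec-dict level:

```python
# An argument_spec entry with no 'default' key yields None on lookup,
# exactly like spelling out default=None.
old_spec = dict(mode=dict(default=None, choices=['user', 'system']))
new_spec = dict(mode=dict(choices=['user', 'system']))

old_default = old_spec['mode'].get('default')
new_default = new_spec['mode'].get('default')
```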

View File

@@ -228,8 +228,7 @@ class ClcAlertPolicy:
choices=[
'cpu',
'memory',
'disk'],
default=None),
'disk']),
duration=dict(type='str'),
threshold=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])

View File

@@ -297,9 +297,9 @@ class ClcGroup(object):
"""
argument_spec = dict(
name=dict(required=True),
description=dict(default=None),
parent=dict(default=None),
location=dict(default=None),
description=dict(),
parent=dict(),
location=dict(),
state=dict(default='present', choices=['present', 'absent']),
wait=dict(type='bool', default=True))

View File

@@ -865,7 +865,7 @@ class ClcLoadBalancer:
"""
argument_spec = dict(
name=dict(required=True),
description=dict(default=None),
description=dict(),
location=dict(required=True),
alias=dict(required=True),
port=dict(choices=[80, 443]),

View File

@@ -567,31 +567,31 @@ class ClcServer:
template=dict(),
group=dict(default='Default Group'),
network_id=dict(),
location=dict(default=None),
location=dict(),
cpu=dict(default=1, type='int'),
memory=dict(default=1, type='int'),
alias=dict(default=None),
password=dict(default=None, no_log=True),
ip_address=dict(default=None),
alias=dict(),
password=dict(no_log=True),
ip_address=dict(),
storage_type=dict(
default='standard',
choices=[
'standard',
'hyperscale']),
type=dict(default='standard', choices=['standard', 'hyperscale', 'bareMetal']),
primary_dns=dict(default=None),
secondary_dns=dict(default=None),
primary_dns=dict(),
secondary_dns=dict(),
additional_disks=dict(type='list', default=[], elements='dict'),
custom_fields=dict(type='list', default=[], elements='dict'),
ttl=dict(default=None),
ttl=dict(),
managed_os=dict(type='bool', default=False),
description=dict(default=None),
source_server_password=dict(default=None, no_log=True),
cpu_autoscale_policy_id=dict(default=None),
anti_affinity_policy_id=dict(default=None),
anti_affinity_policy_name=dict(default=None),
alert_policy_id=dict(default=None),
alert_policy_name=dict(default=None),
description=dict(),
source_server_password=dict(no_log=True),
cpu_autoscale_policy_id=dict(),
anti_affinity_policy_id=dict(),
anti_affinity_policy_name=dict(),
alert_policy_id=dict(),
alert_policy_name=dict(),
packages=dict(type='list', default=[], elements='dict'),
state=dict(
default='present',
@@ -601,7 +601,7 @@ class ClcServer:
'started',
'stopped']),
count=dict(type='int', default=1),
exact_count=dict(type='int', default=None),
exact_count=dict(type='int', ),
count_group=dict(),
server_ids=dict(type='list', default=[], elements='str'),
add_public_ip=dict(type='bool', default=False),
@@ -612,14 +612,13 @@ class ClcServer:
'UDP',
'ICMP']),
public_ip_ports=dict(type='list', default=[], elements='dict'),
configuration_id=dict(default=None),
os_type=dict(default=None,
choices=[
'redHat6_64Bit',
'centOS6_64Bit',
'windows2012R2Standard_64Bit',
'ubuntu14_64Bit'
]),
configuration_id=dict(),
os_type=dict(choices=[
'redHat6_64Bit',
'centOS6_64Bit',
'windows2012R2Standard_64Bit',
'ubuntu14_64Bit'
]),
wait=dict(type='bool', default=True))
mutually_exclusive = [

View File

@@ -21,6 +21,13 @@ options:
- Name of an instance.
type: str
required: true
project:
description:
- 'Project of an instance.
See U(https://github.com/lxc/lxd/blob/master/doc/projects.md).'
required: false
type: str
version_added: 4.8.0
architecture:
description:
- 'The architecture for the instance (for example C(x86_64) or C(i686)).
@@ -248,6 +255,26 @@ EXAMPLES = '''
wait_for_ipv4_addresses: true
timeout: 600
# An example for creating a container in a project other than the default one
- hosts: localhost
connection: local
tasks:
- name: Create a started container in project mytestproject
community.general.lxd_container:
name: mycontainer
project: mytestproject
ignore_volatile_options: true
state: started
source:
protocol: simplestreams
type: image
mode: pull
server: https://images.linuxcontainers.org
alias: ubuntu/20.04/cloud
profiles: ["default"]
wait_for_ipv4_addresses: true
timeout: 600
# An example for deleting a container
- hosts: localhost
connection: local
@@ -412,6 +439,7 @@ class LXDContainerManagement(object):
"""
self.module = module
self.name = self.module.params['name']
self.project = self.module.params['project']
self._build_config()
self.state = self.module.params['state']
@@ -468,16 +496,16 @@ class LXDContainerManagement(object):
self.config[attr] = param_val
def _get_instance_json(self):
return self.client.do(
'GET', '{0}/{1}'.format(self.api_endpoint, self.name),
ok_error_codes=[404]
)
url = '{0}/{1}'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
return self.client.do('GET', url, ok_error_codes=[404])
def _get_instance_state_json(self):
return self.client.do(
'GET', '{0}/{1}/state'.format(self.api_endpoint, self.name),
ok_error_codes=[404]
)
url = '{0}/{1}/state'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
return self.client.do('GET', url, ok_error_codes=[404])
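Each endpoint above gains the same query-string suffix when a project is set. The composition on its own, with the endpoint and values taken as illustrative (the module derives the endpoint from the instance type):

```python
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

api_endpoint = '/1.0/instances'          # illustrative endpoint
name = 'mycontainer'
project = 'mytestproject'

url = '{0}/{1}'.format(api_endpoint, name)
if project:
    # urlencode handles quoting, so project names are URL-safe.
    url = '{0}?{1}'.format(url, urlencode(dict(project=project)))
```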
@staticmethod
def _instance_json_to_module_state(resp_json):
@@ -486,18 +514,26 @@ class LXDContainerManagement(object):
return ANSIBLE_LXD_STATES[resp_json['metadata']['status']]
def _change_state(self, action, force_stop=False):
url = '{0}/{1}/state'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
body_json = {'action': action, 'timeout': self.timeout}
if force_stop:
body_json['force'] = True
return self.client.do('PUT', '{0}/{1}/state'.format(self.api_endpoint, self.name), body_json=body_json)
return self.client.do('PUT', url, body_json=body_json)
def _create_instance(self):
url = self.api_endpoint
url_params = dict()
if self.target:
url_params['target'] = self.target
if self.project:
url_params['project'] = self.project
if url_params:
url = '{0}?{1}'.format(url, urlencode(url_params))
config = self.config.copy()
config['name'] = self.name
if self.target:
self.client.do('POST', '{0}?{1}'.format(self.api_endpoint, urlencode(dict(target=self.target))), config, wait_for_container=self.wait_for_container)
else:
self.client.do('POST', self.api_endpoint, config, wait_for_container=self.wait_for_container)
self.client.do('POST', url, config, wait_for_container=self.wait_for_container)
self.actions.append('create')
def _start_instance(self):
@@ -513,7 +549,10 @@ class LXDContainerManagement(object):
self.actions.append('restart')
def _delete_instance(self):
self.client.do('DELETE', '{0}/{1}'.format(self.api_endpoint, self.name))
url = '{0}/{1}'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('DELETE', url)
self.actions.append('delete')
def _freeze_instance(self):
@@ -666,7 +705,10 @@ class LXDContainerManagement(object):
if self._needs_to_change_instance_config('profiles'):
body_json['profiles'] = self.config['profiles']
self.client.do('PUT', '{0}/{1}'.format(self.api_endpoint, self.name), body_json=body_json)
url = '{0}/{1}'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('PUT', url, body_json=body_json)
self.actions.append('apply_instance_configs')
def run(self):
@@ -715,6 +757,9 @@ def main():
type='str',
required=True
),
project=dict(
type='str',
),
architecture=dict(
type='str',
),

View File

@@ -21,6 +21,13 @@ options:
- Name of a profile.
required: true
type: str
project:
description:
- 'Project of a profile.
See U(https://github.com/lxc/lxd/blob/master/doc/projects.md).'
type: str
required: false
version_added: 4.8.0
description:
description:
- Description of the profile.
@@ -129,6 +136,19 @@ EXAMPLES = '''
parent: br0
type: nic
# An example for creating a profile in project mytestproject
- hosts: localhost
connection: local
tasks:
- name: Create a profile
community.general.lxd_profile:
name: testprofile
project: mytestproject
state: present
config: {}
description: test profile in project mytestproject
devices: {}
# An example for creating a profile via http connection
- hosts: localhost
connection: local
@@ -208,6 +228,7 @@ actions:
import os
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.lxd import LXDClient, LXDClientException
from ansible.module_utils.six.moves.urllib.parse import urlencode
# ANSIBLE_LXD_DEFAULT_URL is a default value of the lxd endpoint
ANSIBLE_LXD_DEFAULT_URL = 'unix:/var/lib/lxd/unix.socket'
@@ -232,6 +253,7 @@ class LXDProfileManagement(object):
"""
self.module = module
self.name = self.module.params['name']
self.project = self.module.params['project']
self._build_config()
self.state = self.module.params['state']
self.new_name = self.module.params.get('new_name', None)
@@ -272,10 +294,10 @@ class LXDProfileManagement(object):
self.config[attr] = param_val
def _get_profile_json(self):
return self.client.do(
'GET', '/1.0/profiles/{0}'.format(self.name),
ok_error_codes=[404]
)
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
return self.client.do('GET', url, ok_error_codes=[404])
@staticmethod
def _profile_json_to_module_state(resp_json):
@@ -307,14 +329,20 @@ class LXDProfileManagement(object):
changed=False)
def _create_profile(self):
url = '/1.0/profiles'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
config = self.config.copy()
config['name'] = self.name
self.client.do('POST', '/1.0/profiles', config)
self.client.do('POST', url, config)
self.actions.append('create')
def _rename_profile(self):
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
config = {'name': self.new_name}
self.client.do('POST', '/1.0/profiles/{0}'.format(self.name), config)
self.client.do('POST', url, config)
self.actions.append('rename')
self.name = self.new_name
@@ -421,11 +449,17 @@ class LXDProfileManagement(object):
config = self._generate_new_config(config)
# upload config to lxd
self.client.do('PUT', '/1.0/profiles/{0}'.format(self.name), config)
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('PUT', url, config)
self.actions.append('apply_profile_configs')
def _delete_profile(self):
self.client.do('DELETE', '/1.0/profiles/{0}'.format(self.name))
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('DELETE', url)
self.actions.append('delete')
def run(self):
@@ -469,6 +503,9 @@ def main():
type='str',
required=True
),
project=dict(
type='str',
),
new_name=dict(
type='str',
),

View File

@@ -0,0 +1,451 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: lxd_project
short_description: Manage LXD projects
version_added: 4.8.0
description:
- Management of LXD projects.
author: "Raymond Chang (@we10710aa)"
options:
name:
description:
- Name of the project.
required: true
type: str
description:
description:
- Description of the project.
type: str
config:
description:
- 'The config for the project (for example C({"features.profiles": "true"})).
See U(https://linuxcontainers.org/lxd/docs/master/projects/).'
- If the project already exists and its C(config) value in the metadata
obtained from
C(GET /1.0/projects/<name>)
U(https://linuxcontainers.org/lxd/docs/master/api/#/projects/project_get)
is different, then this module tries to apply the configurations.
type: dict
new_name:
description:
- A new name for the project.
- If this parameter is specified, the project will be renamed to this name.
See U(https://linuxcontainers.org/lxd/docs/master/api/#/projects/project_post).
required: false
type: str
merge_project:
description:
- Merge the configuration of the present project with the new desired configuration,
instead of replacing it. If the configuration is the same after merging, no change will be made.
required: false
default: false
type: bool
state:
choices:
- present
- absent
description:
- Define the state of a project.
required: false
default: present
type: str
url:
description:
- The Unix domain socket path or the https URL for the LXD server.
required: false
default: unix:/var/lib/lxd/unix.socket
type: str
snap_url:
description:
- The Unix domain socket path when LXD is installed by the snap package manager.
required: false
default: unix:/var/snap/lxd/common/lxd/unix.socket
type: str
client_key:
description:
- The client certificate key file path.
- If not specified, it defaults to C($HOME/.config/lxc/client.key).
required: false
aliases: [ key_file ]
type: path
client_cert:
description:
- The client certificate file path.
- If not specified, it defaults to C($HOME/.config/lxc/client.crt).
required: false
aliases: [ cert_file ]
type: path
trust_password:
description:
- The client trusted password.
- 'You need to set this password on the LXD server before
running this module using the following command:
C(lxc config set core.trust_password <some random password>)
See U(https://www.stgraber.org/2016/04/18/lxd-api-direct-interaction/).'
- If I(trust_password) is set, this module sends a request for
authentication before sending any requests.
required: false
type: str
notes:
- Projects must have a unique name. If you attempt to create a project
with a name that already exists in the user's namespace, the module will
simply return as "unchanged".
'''
EXAMPLES = '''
# An example for creating a project
- hosts: localhost
connection: local
tasks:
- name: Create a project
community.general.lxd_project:
name: ansible-test-project
state: present
config: {}
description: my new project
# An example for renaming a project
- hosts: localhost
connection: local
tasks:
- name: Rename ansible-test-project to ansible-test-project-new-name
community.general.lxd_project:
name: ansible-test-project
new_name: ansible-test-project-new-name
state: present
config: {}
description: my new project
'''
RETURN = '''
old_state:
description: The old state of the project.
returned: success
type: str
sample: "absent"
logs:
description: The logs of requests and responses.
returned: when ansible-playbook is invoked with -vvvv.
type: list
elements: dict
contains:
type:
description: Type of actions performed, currently only C(sent request).
type: str
sample: "sent request"
request:
description: HTTP request sent to LXD server.
type: dict
contains:
method:
description: Method of HTTP request.
type: str
sample: "GET"
url:
description: URL path of HTTP request.
type: str
sample: "/1.0/projects/test-project"
json:
description: JSON body of HTTP request.
type: str
sample: "(too long to be placed here)"
timeout:
description: Timeout of HTTP request, C(null) if unset.
type: int
sample: null
response:
description: HTTP response received from LXD server.
type: dict
contains:
json:
description: JSON of HTTP response.
type: str
sample: "(too long to be placed here)"
actions:
description: List of actions performed for the project.
returned: success
type: list
elements: str
sample: ["create"]
'''
from ansible_collections.community.general.plugins.module_utils.lxd import LXDClient, LXDClientException
from ansible.module_utils.basic import AnsibleModule
import os
# ANSIBLE_LXD_DEFAULT_URL is a default value of the lxd endpoint
ANSIBLE_LXD_DEFAULT_URL = 'unix:/var/lib/lxd/unix.socket'
# PROJECTS_STATES is a list for states supported
PROJECTS_STATES = [
'present', 'absent'
]
# CONFIG_PARAMS is a list of config attribute names.
CONFIG_PARAMS = [
'config', 'description'
]
class LXDProjectManagement(object):
def __init__(self, module):
"""Management of LXC projects via Ansible.
:param module: Processed Ansible Module.
:type module: ``object``
"""
self.module = module
self.name = self.module.params['name']
self._build_config()
self.state = self.module.params['state']
self.new_name = self.module.params.get('new_name', None)
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = os.path.expanduser('~/.config/lxc/client.key')
self.cert_file = self.module.params.get('client_cert')
if self.cert_file is None:
self.cert_file = os.path.expanduser('~/.config/lxc/client.crt')
self.debug = self.module._verbosity >= 4
try:
if self.module.params['url'] != ANSIBLE_LXD_DEFAULT_URL:
self.url = self.module.params['url']
elif os.path.exists(self.module.params['snap_url'].replace('unix:', '')):
self.url = self.module.params['snap_url']
else:
self.url = self.module.params['url']
except Exception as e:
self.module.fail_json(msg=e.msg)
try:
self.client = LXDClient(
self.url, key_file=self.key_file, cert_file=self.cert_file,
debug=self.debug
)
except LXDClientException as e:
self.module.fail_json(msg=e.msg)
self.trust_password = self.module.params.get('trust_password', None)
self.actions = []
def _build_config(self):
self.config = {}
for attr in CONFIG_PARAMS:
param_val = self.module.params.get(attr, None)
if param_val is not None:
self.config[attr] = param_val
def _get_project_json(self):
return self.client.do(
'GET', '/1.0/projects/{0}'.format(self.name),
ok_error_codes=[404]
)
@staticmethod
def _project_json_to_module_state(resp_json):
if resp_json['type'] == 'error':
return 'absent'
return 'present'
def _update_project(self):
if self.state == 'present':
if self.old_state == 'absent':
if self.new_name is None:
self._create_project()
else:
self.module.fail_json(
msg='new_name must not be set when the project does not exist and the state is present',
changed=False)
else:
if self.new_name is not None and self.new_name != self.name:
self._rename_project()
if self._needs_to_apply_project_configs():
self._apply_project_configs()
elif self.state == 'absent':
if self.old_state == 'present':
if self.new_name is None:
self._delete_project()
else:
self.module.fail_json(
msg='new_name must not be set when the project exists and the specified state is absent',
changed=False)
def _create_project(self):
config = self.config.copy()
config['name'] = self.name
self.client.do('POST', '/1.0/projects', config)
self.actions.append('create')
def _rename_project(self):
config = {'name': self.new_name}
self.client.do('POST', '/1.0/projects/{0}'.format(self.name), config)
self.actions.append('rename')
self.name = self.new_name
def _needs_to_change_project_config(self, key):
if key not in self.config:
return False
old_configs = self.old_project_json['metadata'].get(key, None)
return self.config[key] != old_configs
def _needs_to_apply_project_configs(self):
return (
self._needs_to_change_project_config('config') or
self._needs_to_change_project_config('description')
)
def _merge_dicts(self, source, destination):
""" Return a new dict taht merge two dict,
with values in source dict overwrite destination dict
Args:
dict(source): source dict
dict(destination): destination dict
Kwargs:
None
Raises:
None
Returns:
dict(destination): merged dict"""
result = destination.copy()
for key, value in source.items():
if isinstance(value, dict):
# get node or create one
node = result.setdefault(key, {})
# _merge_dicts returns a merged copy, so the result must be stored back
result[key] = self._merge_dicts(value, node)
else:
result[key] = value
return result
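The recursive merge keeps nested keys from both sides, with the source winning on conflicts. A standalone sketch with example values (the sample configs are illustrative, not taken from the module's tests):

```python
def merge_dicts(source, destination):
    """Recursively merge two dicts; values in source win on conflicts."""
    result = destination.copy()
    for key, value in source.items():
        if isinstance(value, dict):
            # merge into a copy of the existing nested dict (or a new one)
            node = result.setdefault(key, {})
            result[key] = merge_dicts(value, node)
        else:
            result[key] = value
    return result


old = {'config': {'features.profiles': 'true'}, 'description': 'old'}
new = {'config': {'features.images': 'false'}, 'description': 'new'}
merged = merge_dicts(new, old)
# merged['config'] now holds both keys; 'description' comes from new
```

Because the function works on copies and reassigns the recursive result, neither input dict is mutated, which is what lets the module compare the merged config against the old one before deciding to call the API.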
def _apply_project_configs(self):
""" Selection of the procedure: rebuild or merge
The standard behavior is that all information not contained
in the play is discarded.
If "merge_project" is provides in the play and "True", then existing
configurations from the project and new ones defined are merged.
Args:
None
Kwargs:
None
Raises:
None
Returns:
None"""
old_config = dict()
old_metadata = self.old_project_json['metadata'].copy()
for attr in CONFIG_PARAMS:
old_config[attr] = old_metadata[attr]
if self.module.params['merge_project']:
config = self._merge_dicts(self.config, old_config)
if config == old_config:
# no need to call api if merged config is the same
# as old config
return
else:
config = self.config.copy()
# upload config to lxd
self.client.do('PUT', '/1.0/projects/{0}'.format(self.name), config)
self.actions.append('apply_projects_configs')
def _delete_project(self):
self.client.do('DELETE', '/1.0/projects/{0}'.format(self.name))
self.actions.append('delete')
def run(self):
"""Run the main method."""
try:
if self.trust_password is not None:
self.client.authenticate(self.trust_password)
self.old_project_json = self._get_project_json()
self.old_state = self._project_json_to_module_state(
self.old_project_json)
self._update_project()
state_changed = len(self.actions) > 0
result_json = {
'changed': state_changed,
'old_state': self.old_state,
'actions': self.actions
}
if self.client.debug:
result_json['logs'] = self.client.logs
self.module.exit_json(**result_json)
except LXDClientException as e:
state_changed = len(self.actions) > 0
fail_params = {
'msg': e.msg,
'changed': state_changed,
'actions': self.actions
}
if self.client.debug:
fail_params['logs'] = e.kwargs['logs']
self.module.fail_json(**fail_params)
def main():
"""Ansible Main module."""
module = AnsibleModule(
argument_spec=dict(
name=dict(
type='str',
required=True
),
new_name=dict(
type='str',
),
config=dict(
type='dict',
),
description=dict(
type='str',
),
merge_project=dict(
type='bool',
default=False
),
state=dict(
choices=PROJECTS_STATES,
default='present'
),
url=dict(
type='str',
default=ANSIBLE_LXD_DEFAULT_URL
),
snap_url=dict(
type='str',
default='unix:/var/snap/lxd/common/lxd/unix.socket'
),
client_key=dict(
type='path',
aliases=['key_file']
),
client_cert=dict(
type='path',
aliases=['cert_file']
),
trust_password=dict(type='str', no_log=True)
),
supports_check_mode=False,
)
lxd_manage = LXDProjectManagement(module=module)
lxd_manage.run()
if __name__ == '__main__':
main()

View File

@@ -564,7 +564,7 @@ def main():
force=dict(type='bool', default=False),
purge=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted']),
pubkey=dict(type='str', default=None),
pubkey=dict(type='str'),
unprivileged=dict(type='bool', default=False),
description=dict(type='str'),
hookscript=dict(type='str'),

View File

@@ -1397,7 +1397,7 @@ def main():
module.fail_json(msg='VM with name = %s does not exist in cluster' % name)
vm = proxmox.get_vm(vmid)
if not name:
name = vm['name']
name = vm.get('name', '(unnamed)')
current = proxmox.proxmox_api.nodes(vm['node']).qemu(vmid).status.current.get()['status']
status['status'] = current
if status:

View File

@@ -13,7 +13,7 @@ module: proxmox_snap
short_description: Snapshot management of instances in Proxmox VE cluster
version_added: 2.0.0
description:
- Allows you to create/delete snapshots from instances in Proxmox VE cluster.
- Allows you to create/delete/restore snapshots from instances in Proxmox VE cluster.
- Supports both KVM and LXC, OpenVZ has not been tested, as it is no longer supported on Proxmox VE.
options:
hostname:
@@ -28,7 +28,8 @@ options:
state:
description:
- Indicate desired state of the instance snapshot.
choices: ['present', 'absent']
- The C(rollback) value was added in community.general 4.8.0.
choices: ['present', 'absent', 'rollback']
default: present
type: str
force:
@@ -53,7 +54,7 @@ options:
type: int
snapname:
description:
- Name of the snapshot that has to be created.
- Name of the snapshot that has to be created/deleted/restored.
default: 'ansible_snap'
type: str
@@ -84,6 +85,15 @@ EXAMPLES = r'''
vmid: 100
state: absent
snapname: pre-updates
- name: Rollback container snapshot
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: rollback
snapname: pre-updates
'''
RETURN = r'''#'''
@@ -109,15 +119,15 @@ class ProxmoxSnapAnsible(ProxmoxAnsible):
else:
taskid = self.snapshot(vm, vmid).post(snapname=snapname, description=description, vmstate=int(vmstate))
while timeout:
if (self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['status'] == 'stopped' and
self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
status_data = self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()
if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
return True
timeout -= 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for creating VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
def snapshot_remove(self, vm, vmid, timeout, snapname, force):
@@ -126,15 +136,32 @@ class ProxmoxSnapAnsible(ProxmoxAnsible):
taskid = self.snapshot(vm, vmid).delete(snapname, force=int(force))
while timeout:
if (self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['status'] == 'stopped' and
self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
status_data = self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()
if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
return True
timeout -= 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for removing VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
def snapshot_rollback(self, vm, vmid, timeout, snapname):
if self.module.check_mode:
return True
taskid = self.snapshot(vm, vmid)(snapname).post("rollback")
while timeout:
status_data = self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()
if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
return True
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for rolling back VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
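The three wait loops (create, remove, rollback) share one shape: poll the task status once per interval until it reports C(stopped) with exit status C(OK), or give up after the timeout. A generic sketch of that loop (the helper name is illustrative; `timeout` counts attempts, one per `interval` seconds, as in the module):

```python
import time


def wait_for_task(get_status, timeout=30, interval=1):
    """Poll get_status() until it reports success or the timeout expires.

    get_status must return a dict like {'status': ..., 'exitstatus': ...},
    mirroring the Proxmox task status endpoint.
    """
    while timeout > 0:
        status = get_status()
        if status['status'] == 'stopped' and status['exitstatus'] == 'OK':
            return True
        time.sleep(interval)
        timeout -= 1
    return False
```

Fetching the status into a local variable also means one API call per iteration instead of two, which is the same consolidation the refactored loops above make with `status_data`.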
@@ -144,7 +171,7 @@ def main():
vmid=dict(required=False),
hostname=dict(),
timeout=dict(type='int', default=30),
state=dict(default='present', choices=['present', 'absent']),
state=dict(default='present', choices=['present', 'absent', 'rollback']),
description=dict(type='str'),
snapname=dict(type='str', default='ansible_snap'),
force=dict(type='bool', default='no'),
@@ -211,6 +238,25 @@ def main():
except Exception as e:
module.fail_json(msg="Removing snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
elif state == 'rollback':
try:
snap_exist = False
for i in proxmox.snapshot(vm, vmid).get():
if i['name'] == snapname:
snap_exist = True
continue
if not snap_exist:
module.exit_json(changed=False, msg="Snapshot %s does not exist" % snapname)
if proxmox.snapshot_rollback(vm, vmid, timeout, snapname):
if module.check_mode:
module.exit_json(changed=True, msg="Snapshot %s would be rolled back" % snapname)
else:
module.exit_json(changed=True, msg="Snapshot %s rolled back" % snapname)
except Exception as e:
module.fail_json(msg="Rollback of snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
if __name__ == '__main__':

View File

@@ -124,6 +124,12 @@ options:
type: list
elements: path
version_added: '0.2.0'
provider_upgrade:
description:
- Allows Terraform init to upgrade providers to versions specified in the project's version constraints.
default: false
type: bool
version_added: 4.8.0
init_reconfigure:
description:
- Forces backend reconfiguration during init.
@@ -266,7 +272,7 @@ def _state_args(state_file):
return []
def init_plugins(bin_path, project_path, backend_config, backend_config_files, init_reconfigure, plugin_paths):
def init_plugins(bin_path, project_path, backend_config, backend_config_files, init_reconfigure, provider_upgrade, plugin_paths):
command = [bin_path, 'init', '-input=false']
if backend_config:
for key, val in backend_config.items():
@@ -279,6 +285,8 @@ def init_plugins(bin_path, project_path, backend_config, backend_config_files, i
command.extend(['-backend-config', f])
if init_reconfigure:
command.extend(['-reconfigure'])
if provider_upgrade:
command.extend(['-upgrade'])
if plugin_paths:
for plugin_path in plugin_paths:
command.extend(['-plugin-dir', plugin_path])
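With C(provider_upgrade=true), the assembled init command gains C(-upgrade) alongside the existing flags. A standalone sketch of the flag assembly under the same logic (the function name is illustrative; backend-config handling is omitted for brevity):

```python
def build_init_command(bin_path, init_reconfigure=False, provider_upgrade=False, plugin_paths=None):
    """Assemble the argument list for `terraform init` as the module does."""
    command = [bin_path, 'init', '-input=false']
    if init_reconfigure:
        command.append('-reconfigure')
    if provider_upgrade:
        command.append('-upgrade')
    for plugin_path in plugin_paths or []:
        command.extend(['-plugin-dir', plugin_path])
    return command


print(build_init_command('terraform', provider_upgrade=True))
# -> ['terraform', 'init', '-input=false', '-upgrade']
```

Building the command as a list (rather than a shell string) is what lets the module pass it straight to `run_command` without quoting concerns.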
@@ -384,6 +392,7 @@ def main():
overwrite_init=dict(type='bool', default=True),
check_destroy=dict(type='bool', default=False),
parallelism=dict(type='int'),
provider_upgrade=dict(type='bool', default=False),
),
required_if=[('state', 'planned', ['plan_file'])],
supports_check_mode=True,
@@ -405,6 +414,7 @@ def main():
init_reconfigure = module.params.get('init_reconfigure')
overwrite_init = module.params.get('overwrite_init')
check_destroy = module.params.get('check_destroy')
provider_upgrade = module.params.get('provider_upgrade')
if bin_path is not None:
command = [bin_path]
@@ -422,7 +432,7 @@ def main():
if force_init:
if overwrite_init or not os.path.isfile(os.path.join(project_path, ".terraform", "terraform.tfstate")):
init_plugins(command[0], project_path, backend_config, backend_config_files, init_reconfigure, plugin_paths)
init_plugins(command[0], project_path, backend_config, backend_config_files, init_reconfigure, provider_upgrade, plugin_paths)
workspace_ctx = get_workspace_context(command[0], project_path)
if workspace_ctx["current"] != workspace:

View File

@@ -630,7 +630,7 @@ def main():
plan=dict(),
project_id=dict(required=True),
state=dict(choices=ALLOWED_STATES, default='present'),
user_data=dict(default=None),
user_data=dict(),
wait_for_public_IPv=dict(type='int', choices=[4, 6]),
wait_timeout=dict(type='int', default=900),
ipxe_script_url=dict(default=''),

View File

@@ -226,11 +226,11 @@ def main():
state=dict(choices=['present', 'absent'], default='present'),
auth_token=dict(default=os.environ.get(PACKET_API_TOKEN_ENV_VAR),
no_log=True),
label=dict(type='str', aliases=['name'], default=None),
id=dict(type='str', default=None),
fingerprint=dict(type='str', default=None),
key=dict(type='str', default=None, no_log=True),
key_file=dict(type='path', default=None),
label=dict(type='str', aliases=['name']),
id=dict(type='str'),
fingerprint=dict(type='str'),
key=dict(type='str', no_log=True),
key_file=dict(type='path'),
),
mutually_exclusive=[
('label', 'id'),

View File

@@ -263,9 +263,9 @@ def act_on_volume(target_state, module, packet_conn):
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str', default=None),
description=dict(type="str", default=None),
name=dict(type='str', default=None),
id=dict(type='str'),
description=dict(type="str"),
name=dict(type='str'),
state=dict(choices=VOLUME_STATES, default="present"),
auth_token=dict(
type='str',
@@ -277,7 +277,7 @@ def main():
facility=dict(type="str"),
size=dict(type="int"),
locked=dict(type="bool", default=False),
snapshot_policy=dict(type='dict', default=None),
snapshot_policy=dict(type='dict'),
billing_cycle=dict(type='str', choices=BILLING, default="hourly"),
),
supports_check_mode=True,

View File

@@ -583,7 +583,7 @@ def main():
default='AMD_OPTERON'),
volume_size=dict(type='int', default=10),
disk_type=dict(choices=['HDD', 'SSD'], default='HDD'),
image_password=dict(default=None, no_log=True),
image_password=dict(no_log=True),
ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
bus=dict(choices=['VIRTIO', 'IDE'], default='VIRTIO'),
lan=dict(type='int', default=1),

View File

@@ -95,7 +95,7 @@ class ImageFacts(object):
def main():
module = AnsibleModule(
argument_spec=dict(
filters=dict(default=None),
filters=dict(),
),
supports_check_mode=True,
)

View File

@@ -684,7 +684,7 @@ def main():
choices=['present', 'running', 'absent', 'deleted', 'stopped', 'created', 'restarted', 'rebooted']
),
name=dict(
default=None, type='str',
type='str',
aliases=['alias']
),
brand=dict(
@@ -709,7 +709,7 @@ def main():
# Add our 'simple' options to options dict.
for type in properties:
for p in properties[type]:
option = dict(default=None, type=type)
option = dict(type=type)
options[p] = option
module = AnsibleModule(

View File

@@ -95,8 +95,7 @@ def main():
argument_spec=dict(
name=dict(required=True,
type='str'),
description=dict(default=None,
type='str'),
description=dict(type='str'),
position=dict(default='',
type='str'),
ou=dict(default='',

View File

@@ -354,12 +354,10 @@ def main():
default='0'),
group=dict(type='str',
default='0'),
path=dict(type='path',
default=None),
path=dict(type='path'),
directorymode=dict(type='str',
default='00755'),
host=dict(type='str',
default=None),
host=dict(type='str'),
root_squash=dict(type='bool',
default=True),
subtree_checking=dict(type='bool',
@@ -369,8 +367,7 @@ def main():
writeable=dict(type='bool',
default=True),
sambaBlockSize=dict(type='str',
aliases=['samba_block_size'],
default=None),
aliases=['samba_block_size']),
sambaBlockingLocks=dict(type='bool',
aliases=['samba_blocking_locks'],
default=True),
@@ -408,17 +405,14 @@ def main():
aliases=['samba_force_directory_security_mode'],
default=False),
sambaForceGroup=dict(type='str',
aliases=['samba_force_group'],
default=None),
aliases=['samba_force_group']),
sambaForceSecurityMode=dict(type='bool',
aliases=['samba_force_security_mode'],
default=False),
sambaForceUser=dict(type='str',
aliases=['samba_force_user'],
default=None),
aliases=['samba_force_user']),
sambaHideFiles=dict(type='str',
aliases=['samba_hide_files'],
default=None),
aliases=['samba_hide_files']),
sambaHideUnreadable=dict(type='bool',
aliases=['samba_hide_unreadable'],
default=False),
@@ -438,8 +432,7 @@ def main():
aliases=['samba_inherit_permissions'],
default=False),
sambaInvalidUsers=dict(type='str',
aliases=['samba_invalid_users'],
default=None),
aliases=['samba_invalid_users']),
sambaLevel2Oplocks=dict(type='bool',
aliases=['samba_level_2_oplocks'],
default=True),
@@ -450,8 +443,7 @@ def main():
aliases=['samba_msdfs_root'],
default=False),
sambaName=dict(type='str',
aliases=['samba_name'],
default=None),
aliases=['samba_name']),
sambaNtAclSupport=dict(type='bool',
aliases=['samba_nt_acl_support'],
default=True),
@@ -459,11 +451,9 @@ def main():
aliases=['samba_oplocks'],
default=True),
sambaPostexec=dict(type='str',
aliases=['samba_postexec'],
default=None),
aliases=['samba_postexec']),
sambaPreexec=dict(type='str',
aliases=['samba_preexec'],
default=None),
aliases=['samba_preexec']),
sambaPublic=dict(type='bool',
aliases=['samba_public'],
default=False),
@@ -474,14 +464,11 @@ def main():
aliases=['samba_strict_locking'],
default='Auto'),
sambaVFSObjects=dict(type='str',
aliases=['samba_vfs_objects'],
default=None),
aliases=['samba_vfs_objects']),
sambaValidUsers=dict(type='str',
aliases=['samba_valid_users'],
default=None),
aliases=['samba_valid_users']),
sambaWriteList=dict(type='str',
aliases=['samba_write_list'],
default=None),
aliases=['samba_write_list']),
sambaWriteable=dict(type='bool',
aliases=['samba_writeable'],
default=True),

View File

@@ -110,14 +110,14 @@ def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
state=dict(choices=['present', 'absent'], default='present'),
type=dict(required=True),
autostart=dict(required=False, type='bool', default=False),
extra_info=dict(required=False, default=""),
port_open=dict(required=False, type='bool', default=False),
autostart=dict(type='bool', default=False),
extra_info=dict(default=""),
port_open=dict(type='bool', default=False),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
machine=dict(required=False, default=None),
machine=dict(),
),
supports_check_mode=True
)

View File

@@ -101,13 +101,13 @@ def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
state=dict(choices=['present', 'absent'], default='present'),
# You can specify an IP address or hostname.
type=dict(required=True, choices=['mysql', 'postgresql']),
password=dict(required=False, default=None, no_log=True),
password=dict(no_log=True),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
machine=dict(required=False, default=None),
machine=dict(),
),
supports_check_mode=True
)

View File

@@ -102,14 +102,14 @@ def run():
use_ssl=dict(type='bool', default=True),
timeout=dict(type='int', default=5),
validate_certs=dict(type='bool', default=True),
client_cert=dict(type='path', default=None),
client_key=dict(type='path', default=None),
namespace=dict(type='str', default=None),
name=dict(type='str', default=None),
client_cert=dict(type='path'),
client_key=dict(type='path'),
namespace=dict(type='str'),
name=dict(type='str'),
content_format=dict(choices=['hcl', 'json'], default='hcl'),
content=dict(type='str', default=None),
content=dict(type='str'),
force_start=dict(type='bool', default=False),
token=dict(type='str', default=None, no_log=True)
token=dict(type='str', no_log=True)
),
supports_check_mode=True,
mutually_exclusive=[

View File

@@ -287,11 +287,11 @@ def run():
use_ssl=dict(type='bool', default=True),
timeout=dict(type='int', default=5),
validate_certs=dict(type='bool', default=True),
client_cert=dict(type='path', default=None),
client_key=dict(type='path', default=None),
namespace=dict(type='str', default=None),
name=dict(type='str', default=None),
token=dict(type='str', default=None, no_log=True)
client_cert=dict(type='path'),
client_key=dict(type='path'),
namespace=dict(type='str'),
name=dict(type='str'),
token=dict(type='str', no_log=True)
),
supports_check_mode=True
)

View File

@@ -32,6 +32,14 @@ options:
- Force principal name even if host is not in DNS.
required: false
type: bool
skip_host_check:
description:
- Force the service to be created even when the host object managing it does not exist.
- This is only used on creation, not for updating existing services.
required: false
type: bool
default: false
version_added: 4.7.0
state:
description: State to ensure.
required: false
@@ -111,17 +119,19 @@ class ServiceIPAClient(IPAClient):
return self._post_json(method='service_remove_host', name=name, item={'host': item})
def get_service_dict(force=None, krbcanonicalname=None):
def get_service_dict(force=None, krbcanonicalname=None, skip_host_check=None):
data = {}
if force is not None:
data['force'] = force
if krbcanonicalname is not None:
data['krbcanonicalname'] = krbcanonicalname
if skip_host_check is not None:
data['skip_host_check'] = skip_host_check
return data
def get_service_diff(client, ipa_host, module_service):
non_updateable_keys = ['force', 'krbcanonicalname']
non_updateable_keys = ['force', 'krbcanonicalname', 'skip_host_check']
for key in non_updateable_keys:
if key in module_service:
del module_service[key]
@@ -135,7 +145,7 @@ def ensure(module, client):
hosts = module.params['hosts']
ipa_service = client.service_find(name=name)
module_service = get_service_dict(force=module.params['force'])
module_service = get_service_dict(force=module.params['force'], skip_host_check=module.params['skip_host_check'])
changed = False
if state in ['present', 'enabled', 'disabled']:
if not ipa_service:
@@ -183,6 +193,7 @@ def main():
argument_spec.update(
krbcanonicalname=dict(type='str', required=True, aliases=['name']),
force=dict(type='bool', required=False),
skip_host_check=dict(type='bool', default=False, required=False),
hosts=dict(type='list', required=False, elements='str'),
state=dict(type='str', required=False, default='present',
choices=['present', 'absent']))

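The ipa_service hunk above follows a common pattern: optional module parameters are only added to the API payload when explicitly set, and create-only keys are stripped before computing an update diff. A condensed, standalone restatement of that pattern:

```python
# Standalone sketch of the get_service_dict/get_service_diff logic from the
# ipa_service hunk above: optional parameters are included only when set,
# and create-only keys are removed before diffing against the server state.
def get_service_dict(force=None, krbcanonicalname=None, skip_host_check=None):
    data = {}
    if force is not None:
        data['force'] = force
    if krbcanonicalname is not None:
        data['krbcanonicalname'] = krbcanonicalname
    if skip_host_check is not None:
        data['skip_host_check'] = skip_host_check
    return data


def strip_non_updateable(module_service):
    # 'force', 'krbcanonicalname', and 'skip_host_check' only apply at
    # creation time, so they must not appear in an update diff.
    for key in ('force', 'krbcanonicalname', 'skip_host_check'):
        module_service.pop(key, None)
    return module_service
```

This is why `skip_host_check` is both added to `get_service_dict()` and to `non_updateable_keys` in the same commit: it is sent on `service_add`, but never on `service_mod`.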

@@ -301,6 +301,15 @@ options:
- useTemplateMappers
type: bool
always_display_in_console:
description:
- Whether or not to display this client in the account console, even if the
user does not have an active session.
aliases:
- alwaysDisplayInConsole
type: bool
version_added: 4.7.0
surrogate_auth_required:
description:
- Whether or not surrogate auth is required.
@@ -326,6 +335,24 @@ options:
- authenticationFlowBindingOverrides
version_added: 3.4.0
default_client_scopes:
description:
- List of default client scopes.
aliases:
- defaultClientScopes
type: list
elements: str
version_added: 4.7.0
optional_client_scopes:
description:
- List of optional client scopes.
aliases:
- optionalClientScopes
type: list
elements: str
version_added: 4.7.0
protocol_mappers:
description:
- a list of dicts defining protocol mappers for this client.
@@ -593,6 +620,7 @@ EXAMPLES = '''
use_template_config: False
use_template_scope: false
use_template_mappers: no
always_display_in_console: true
registered_nodes:
node01.example.com: 1507828202
registration_access_token: eyJWT_TOKEN
@@ -786,9 +814,12 @@ def main():
use_template_config=dict(type='bool', aliases=['useTemplateConfig']),
use_template_scope=dict(type='bool', aliases=['useTemplateScope']),
use_template_mappers=dict(type='bool', aliases=['useTemplateMappers']),
always_display_in_console=dict(type='bool', aliases=['alwaysDisplayInConsole']),
authentication_flow_binding_overrides=dict(type='dict', aliases=['authenticationFlowBindingOverrides']),
protocol_mappers=dict(type='list', elements='dict', options=protmapper_spec, aliases=['protocolMappers']),
authorization_settings=dict(type='dict', aliases=['authorizationSettings']),
default_client_scopes=dict(type='list', elements='str', aliases=['defaultClientScopes']),
optional_client_scopes=dict(type='list', elements='str', aliases=['optionalClientScopes']),
)
argument_spec.update(meta_args)


@@ -104,7 +104,7 @@ author:
EXAMPLES = '''
- name: Map a client role to a group, authentication with credentials
community.general.keycloak_client_rolemappings:
community.general.keycloak_client_rolemapping:
realm: MyCustomRealm
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
@@ -122,7 +122,7 @@ EXAMPLES = '''
delegate_to: localhost
- name: Map a client role to a group, authentication with token
community.general.keycloak_client_rolemappings:
community.general.keycloak_client_rolemapping:
realm: MyCustomRealm
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
@@ -138,7 +138,7 @@ EXAMPLES = '''
delegate_to: localhost
- name: Unmap client role from a group
community.general.keycloak_client_rolemappings:
community.general.keycloak_client_rolemapping:
realm: MyCustomRealm
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth


@@ -156,7 +156,7 @@ options:
aliases:
- defaultDefaultClientScopes
type: list
elements: dict
elements: str
default_groups:
description:
- The realm default groups.
@@ -176,7 +176,7 @@ options:
aliases:
- defaultOptionalClientScopes
type: list
elements: dict
elements: str
default_roles:
description:
- The realm default roles.
@@ -621,10 +621,10 @@ def main():
brute_force_protected=dict(type='bool', aliases=['bruteForceProtected']),
client_authentication_flow=dict(type='str', aliases=['clientAuthenticationFlow']),
client_scope_mappings=dict(type='dict', aliases=['clientScopeMappings']),
default_default_client_scopes=dict(type='list', elements='dict', aliases=['defaultDefaultClientScopes']),
default_default_client_scopes=dict(type='list', elements='str', aliases=['defaultDefaultClientScopes']),
default_groups=dict(type='list', elements='dict', aliases=['defaultGroups']),
default_locale=dict(type='str', aliases=['defaultLocale']),
default_optional_client_scopes=dict(type='list', elements='dict', aliases=['defaultOptionalClientScopes']),
default_optional_client_scopes=dict(type='list', elements='str', aliases=['defaultOptionalClientScopes']),
default_roles=dict(type='list', elements='dict', aliases=['defaultRoles']),
default_signature_algorithm=dict(type='str', aliases=['defaultSignatureAlgorithm']),
direct_grant_flow=dict(type='str', aliases=['directGrantFlow']),


@@ -294,7 +294,7 @@ class OnePasswordInfo(object):
except AnsibleModuleError as e:
module.fail_json(msg="Failed to perform initial sign in to 1Password: %s" % to_native(e))
else:
module.fail_json(msg="Unable to perform an initial sign in to 1Password. Please run '%s sigin' "
module.fail_json(msg="Unable to perform an initial sign in to 1Password. Please run '%s signin' "
"or define credentials in 'auto_login'. See the module documentation for details." % self.cli_path)
def get_token(self):


@@ -0,0 +1 @@
./cloud/lxd/lxd_project.py


@@ -0,0 +1,199 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2022, Christian Wollinger <@cwollinger>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: alerta_customer
short_description: Manage customers in Alerta
version_added: 4.8.0
description:
- Create or delete customers in Alerta with the REST API.
author: Christian Wollinger (@cwollinger)
seealso:
- name: API documentation
description: Documentation for Alerta API
link: https://docs.alerta.io/api/reference.html#customers
options:
customer:
description:
- Name of the customer.
required: true
type: str
match:
description:
- The matching logged in user for the customer.
required: true
type: str
alerta_url:
description:
- The Alerta API endpoint.
required: true
type: str
api_username:
description:
- The username for the API using basic auth.
type: str
api_password:
description:
- The password for the API using basic auth.
type: str
api_key:
description:
- The access token for the API.
type: str
state:
description:
- Whether the customer should exist or not.
- Both I(customer) and I(match) identify a customer that should be added or removed.
type: str
choices: [ absent, present ]
default: present
'''
EXAMPLES = """
- name: Create customer
community.general.alerta_customer:
alerta_url: https://alerta.example.com
api_username: admin@example.com
api_password: password
customer: Developer
match: dev@example.com
- name: Delete customer
community.general.alerta_customer:
alerta_url: https://alerta.example.com
api_username: admin@example.com
api_password: password
customer: Developer
match: dev@example.com
state: absent
"""
RETURN = """
msg:
description:
- Success or failure message.
returned: always
type: str
sample: Customer customer1 created
response:
description:
- The response from the API.
returned: always
type: dict
"""
from ansible.module_utils.urls import fetch_url, basic_auth_header
from ansible.module_utils.basic import AnsibleModule
class AlertaInterface(object):
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.customer = module.params['customer']
self.match = module.params['match']
self.alerta_url = module.params['alerta_url']
self.headers = {"Content-Type": "application/json"}
if module.params.get('api_key', None):
self.headers["Authorization"] = "Key %s" % module.params['api_key']
else:
self.headers["Authorization"] = basic_auth_header(module.params['api_username'], module.params['api_password'])
def send_request(self, url, data=None, method="GET"):
response, info = fetch_url(self.module, url, data=data, headers=self.headers, method=method)
status_code = info["status"]
if status_code == 401:
self.module.fail_json(failed=True, response=info, msg="Unauthorized to request '%s' on '%s'" % (method, url))
elif status_code == 403:
self.module.fail_json(failed=True, response=info, msg="Permission Denied for '%s' on '%s'" % (method, url))
elif status_code == 404:
self.module.fail_json(failed=True, response=info, msg="Not found for request '%s' on '%s'" % (method, url))
elif status_code in (200, 201):
return self.module.from_json(response.read())
self.module.fail_json(failed=True, response=info, msg="Alerta API error with HTTP %d for %s" % (status_code, url))
def get_customers(self):
url = "%s/api/customers" % self.alerta_url
response = self.send_request(url)
pages = response["pages"]
if pages > 1:
for page in range(2, pages + 1):
page_url = url + '?page=' + str(page)
new_results = self.send_request(page_url)
response.update(new_results)
return response
def create_customer(self):
url = "%s/api/customer" % self.alerta_url
payload = {
'customer': self.customer,
'match': self.match,
}
payload = self.module.jsonify(payload)
response = self.send_request(url, payload, 'POST')
return response
def delete_customer(self, id):
url = "%s/api/customer/%s" % (self.alerta_url, id)
response = self.send_request(url, None, 'DELETE')
return response
def find_customer_id(self, customer):
for i in customer['customers']:
if self.customer == i['customer'] and self.match == i['match']:
return i['id']
return None
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(choices=['present', 'absent'], default='present'),
customer=dict(type='str', required=True),
match=dict(type='str', required=True),
alerta_url=dict(type='str', required=True),
api_username=dict(type='str'),
api_password=dict(type='str', no_log=True),
api_key=dict(type='str', no_log=True),
),
required_together=[['api_username', 'api_password']],
mutually_exclusive=[['api_username', 'api_key']],
supports_check_mode=True
)
alerta_iface = AlertaInterface(module)
if alerta_iface.state == 'present':
response = alerta_iface.get_customers()
if alerta_iface.find_customer_id(response):
module.exit_json(changed=False, response=response, msg="Customer %s already exists" % alerta_iface.customer)
else:
if not module.check_mode:
response = alerta_iface.create_customer()
module.exit_json(changed=True, response=response, msg="Customer %s created" % alerta_iface.customer)
else:
response = alerta_iface.get_customers()
id = alerta_iface.find_customer_id(response)
if id:
if not module.check_mode:
alerta_iface.delete_customer(id)
module.exit_json(changed=True, response=response, msg="Customer %s with id %s deleted" % (alerta_iface.customer, id))
else:
module.exit_json(changed=False, response=response, msg="Customer %s does not exist" % alerta_iface.customer)
if __name__ == "__main__":
main()

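The `AlertaInterface.__init__` above selects the `Authorization` header depending on which credentials were supplied: an API key takes precedence, otherwise HTTP basic auth is built from username and password. A standalone sketch of that selection (with `basic_auth_header()` from `ansible.module_utils.urls` approximated inline so the snippet needs no Ansible install):

```python
import base64

# Sketch of the auth-header selection in AlertaInterface above. The inline
# base64 encoding approximates ansible.module_utils.urls.basic_auth_header;
# the real helper may return bytes rather than str depending on version.
def build_auth_header(api_key=None, api_username=None, api_password=None):
    if api_key:
        return "Key %s" % api_key
    credentials = "%s:%s" % (api_username, api_password)
    return "Basic %s" % base64.b64encode(credentials.encode('utf-8')).decode('ascii')
```

The module's `mutually_exclusive=[['api_username', 'api_key']]` guarantees only one branch can apply, and `required_together` ensures a username never arrives without its password.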

@@ -15,6 +15,7 @@ short_description: Manages Datadog monitors
description:
- Manages monitors within Datadog.
- Options as described on https://docs.datadoghq.com/api/.
- The type C(event-v2) was added in community.general 4.8.0.
author: Sebastian Kornehl (@skornehl)
requirements: [datadog]
options:
@@ -56,6 +57,7 @@ options:
- metric alert
- service check
- event alert
- event-v2 alert
- process alert
- log alert
- query alert
@@ -222,7 +224,7 @@ def main():
api_host=dict(),
app_key=dict(required=True, no_log=True),
state=dict(required=True, choices=['present', 'absent', 'mute', 'unmute']),
type=dict(choices=['metric alert', 'service check', 'event alert', 'process alert',
type=dict(choices=['metric alert', 'service check', 'event alert', 'event-v2 alert', 'process alert',
'log alert', 'query alert', 'trace-analytics alert',
'rum alert', 'composite']),
name=dict(required=True),


@@ -623,7 +623,7 @@ def main():
# Fetch existing monitor if the A record indicates it should exist and build the new monitor
current_monitor = dict()
new_monitor = dict()
if current_record and current_record['type'] == 'A':
if current_record and current_record['type'] == 'A' and current_record.get('monitor'):
current_monitor = DME.getMonitor(current_record['id'])
# Build the new monitor


@@ -90,11 +90,53 @@ options:
version_added: 3.2.0
routes4:
description:
- The list of ipv4 routes.
- Use the format '192.0.3.0/24 192.0.2.1'
- The list of IPv4 routes.
- Use the format C(192.0.3.0/24 192.0.2.1).
- To specify more complex routes, use the I(routes4_extended) option.
type: list
elements: str
version_added: 2.0.0
routes4_extended:
description:
- The list of IPv4 routes.
type: list
elements: dict
suboptions:
ip:
description:
- IP or prefix of route.
- Use the format C(192.0.3.0/24).
type: str
required: true
next_hop:
description:
- Use the format C(192.0.2.1).
type: str
metric:
description:
- Route metric.
type: int
table:
description:
- The table to add this route to.
- The default depends on C(ipv4.route-table).
type: int
cwnd:
description:
- The clamp for congestion window.
type: int
mtu:
description:
- If non-zero, only transmit packets of the specified size or smaller.
type: int
onlink:
description:
- Pretend that the nexthop is directly attached to this link, even if it does not match any interface prefix.
type: bool
tos:
description:
- The Type Of Service.
type: int
route_metric4:
description:
- Set metric level of ipv4 routes configured on interface.
@@ -165,9 +207,47 @@ options:
description:
- The list of IPv6 routes.
- Use the format C(fd12:3456:789a:1::/64 2001:dead:beef::1).
- To specify more complex routes, use the I(routes6_extended) option.
type: list
elements: str
version_added: 4.4.0
routes6_extended:
description:
- The list of IPv6 routes, with additional route parameters.
type: list
elements: dict
suboptions:
ip:
description:
- IP or prefix of route.
- Use the format C(fd12:3456:789a:1::/64).
type: str
required: true
next_hop:
description:
- Use the format C(2001:dead:beef::1).
type: str
metric:
description:
- Route metric.
type: int
table:
description:
- The table to add this route to.
- The default depends on C(ipv6.route-table).
type: int
cwnd:
description:
- The clamp for congestion window.
type: int
mtu:
description:
- If non-zero, only transmit packets of the specified size or smaller.
type: int
onlink:
description:
- Pretend that the nexthop is directly attached to this link, even if it does not match any interface prefix.
type: bool
route_metric6:
description:
- Set metric level of IPv6 routes configured on interface.
@@ -294,8 +374,9 @@ options:
description:
- This is only used with 'bridge-slave' - 'hairpin mode' for the slave, which allows frames to be sent back out through the slave the
frame was received on.
- The default value is C(true), but that is being deprecated
and it will be changed to C(false) in community.general 7.0.0.
type: bool
default: yes
runner:
description:
- This is the type of device or network connection that you wish to create for a team.
@@ -1260,6 +1341,7 @@ class Nmcli(object):
self.gw4 = module.params['gw4']
self.gw4_ignore_auto = module.params['gw4_ignore_auto']
self.routes4 = module.params['routes4']
self.routes4_extended = module.params['routes4_extended']
self.route_metric4 = module.params['route_metric4']
self.routing_rules4 = module.params['routing_rules4']
self.never_default4 = module.params['never_default4']
@@ -1272,6 +1354,7 @@ class Nmcli(object):
self.gw6 = module.params['gw6']
self.gw6_ignore_auto = module.params['gw6_ignore_auto']
self.routes6 = module.params['routes6']
self.routes6_extended = module.params['routes6_extended']
self.route_metric6 = module.params['route_metric6']
self.dns6 = module.params['dns6']
self.dns6_search = module.params['dns6_search']
@@ -1294,7 +1377,8 @@ class Nmcli(object):
self.hellotime = module.params['hellotime']
self.maxage = module.params['maxage']
self.ageingtime = module.params['ageingtime']
self.hairpin = module.params['hairpin']
# hairpin should be back to normal in 7.0.0
self._hairpin = module.params['hairpin']
self.path_cost = module.params['path_cost']
self.mac = module.params['mac']
self.runner = module.params['runner']
@@ -1341,6 +1425,18 @@ class Nmcli(object):
self.edit_commands = []
@property
def hairpin(self):
if self._hairpin is None:
self.module.deprecate(
"Parameter 'hairpin' default value will change from true to false in community.general 7.0.0. "
"Set the value explicitly to supress this warning.",
version='7.0.0', collection_name='community.general',
)
# Should be False in 7.0.0 but then that should be in argument_specs
self._hairpin = True
return self._hairpin
def execute_command(self, cmd, use_unsafe_shell=False, data=None):
if isinstance(cmd, list):
cmd = [to_text(item) for item in cmd]
@@ -1371,7 +1467,7 @@ class Nmcli(object):
'ipv4.ignore-auto-dns': self.dns4_ignore_auto,
'ipv4.gateway': self.gw4,
'ipv4.ignore-auto-routes': self.gw4_ignore_auto,
'ipv4.routes': self.routes4,
'ipv4.routes': self.enforce_routes_format(self.routes4, self.routes4_extended),
'ipv4.route-metric': self.route_metric4,
'ipv4.routing-rules': self.routing_rules4,
'ipv4.never-default': self.never_default4,
@@ -1383,7 +1479,7 @@ class Nmcli(object):
'ipv6.ignore-auto-dns': self.dns6_ignore_auto,
'ipv6.gateway': self.gw6,
'ipv6.ignore-auto-routes': self.gw6_ignore_auto,
'ipv6.routes': self.routes6,
'ipv6.routes': self.enforce_routes_format(self.routes6, self.routes6_extended),
'ipv6.route-metric': self.route_metric6,
'ipv6.method': self.ipv6_method,
'ipv6.ip6-privacy': self.ip_privacy6,
@@ -1614,6 +1710,29 @@ class Nmcli(object):
return None
return [address if '/' in address else address + '/128' for address in ip6_addresses]
def enforce_routes_format(self, routes, routes_extended):
if routes is not None:
return routes
elif routes_extended is not None:
return [self.route_to_string(route) for route in routes_extended]
else:
return None
@staticmethod
def route_to_string(route):
result_str = ''
result_str += route['ip']
if route.get('next_hop') is not None:
result_str += ' ' + route['next_hop']
if route.get('metric') is not None:
result_str += ' ' + str(route['metric'])
for attribute, value in sorted(route.items()):
if attribute not in ('ip', 'next_hop', 'metric') and value is not None:
result_str += ' {0}={1}'.format(attribute, str(value).lower())
return result_str
@staticmethod
def bool_to_string(boolean):
if boolean:
@@ -1657,6 +1776,20 @@ class Nmcli(object):
return list
return str
def get_route_params(self, raw_values):
routes_params = []
for raw_value in raw_values:
route_params = {}
for parameter, value in re.findall(r'([\w-]*)\s?=\s?([^\s,}]*)', raw_value):
if parameter == 'nh':
route_params['next_hop'] = value
elif parameter == 'mt':
route_params['metric'] = value
else:
route_params[parameter] = value
routes_params.append(route_params)
return [self.route_to_string(route_params) for route_params in routes_params]
def list_connection_info(self):
cmd = [self.nmcli_bin, '--fields', 'name', '--terse', 'con', 'show']
(rc, out, err) = self.execute_command(cmd)
@@ -1852,13 +1985,7 @@ class Nmcli(object):
if key in conn_info:
current_value = conn_info[key]
if key in ('ipv4.routes', 'ipv6.routes') and current_value is not None:
# ipv4.routes and ipv6.routes do not have same options and show_connection() format
# options: ['10.11.0.0/24 10.10.0.2', '10.12.0.0/24 10.10.0.2 200']
# show_connection(): ['{ ip = 10.11.0.0/24, nh = 10.10.0.2 }', '{ ip = 10.12.0.0/24, nh = 10.10.0.2, mt = 200 }']
# Need to convert in order to compare both
current_value = [re.sub(r'^{\s*ip\s*=\s*([^, ]+),\s*nh\s*=\s*([^} ]+),\s*mt\s*=\s*([^} ]+)\s*}', r'\1 \2 \3',
route) for route in current_value]
current_value = [re.sub(r'^{\s*ip\s*=\s*([^, ]+),\s*nh\s*=\s*([^} ]+)\s*}', r'\1 \2', route) for route in current_value]
current_value = self.get_route_params(current_value)
if key == self.mac_setting:
# MAC addresses are case insensitive, nmcli always reports them in uppercase
value = value.upper()
@@ -1942,6 +2069,18 @@ def main():
gw4=dict(type='str'),
gw4_ignore_auto=dict(type='bool', default=False),
routes4=dict(type='list', elements='str'),
routes4_extended=dict(type='list',
elements='dict',
options=dict(
ip=dict(type='str', required=True),
next_hop=dict(type='str'),
metric=dict(type='int'),
table=dict(type='int'),
tos=dict(type='int'),
cwnd=dict(type='int'),
mtu=dict(type='int'),
onlink=dict(type='bool')
)),
route_metric4=dict(type='int'),
routing_rules4=dict(type='list', elements='str'),
never_default4=dict(type='bool', default=False),
@@ -1958,6 +2097,17 @@ def main():
dns6_search=dict(type='list', elements='str'),
dns6_ignore_auto=dict(type='bool', default=False),
routes6=dict(type='list', elements='str'),
routes6_extended=dict(type='list',
elements='dict',
options=dict(
ip=dict(type='str', required=True),
next_hop=dict(type='str'),
metric=dict(type='int'),
table=dict(type='int'),
cwnd=dict(type='int'),
mtu=dict(type='int'),
onlink=dict(type='bool')
)),
route_metric6=dict(type='int'),
method6=dict(type='str', choices=['ignore', 'auto', 'dhcp', 'link-local', 'manual', 'shared', 'disabled']),
ip_privacy6=dict(type='str', choices=['disabled', 'prefer-public-addr', 'prefer-temp-addr', 'unknown']),
@@ -1983,7 +2133,7 @@ def main():
hellotime=dict(type='int', default=2),
maxage=dict(type='int', default=20),
ageingtime=dict(type='int', default=300),
hairpin=dict(type='bool', default=True),
hairpin=dict(type='bool'),
path_cost=dict(type='int', default=100),
# team specific vars
runner=dict(type='str', default='roundrobin',
@@ -2014,7 +2164,9 @@ def main():
gsm=dict(type='dict'),
wireguard=dict(type='dict'),
),
mutually_exclusive=[['never_default4', 'gw4']],
mutually_exclusive=[['never_default4', 'gw4'],
['routes4_extended', 'routes4'],
['routes6_extended', 'routes6']],
required_if=[("type", "wifi", [("ssid")])],
supports_check_mode=True,
)

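The new `routes4_extended`/`routes6_extended` suboptions are flattened back into nmcli's plain `ipv4.routes` string format by `route_to_string()` in the hunk above. A standalone copy shows the resulting format, with `ip`, `next_hop`, and `metric` positional and all remaining attributes appended as sorted `key=value` pairs:

```python
# Standalone copy of Nmcli.route_to_string from the nmcli hunk above,
# flattening a routes4_extended entry into the ipv4.routes string format.
def route_to_string(route):
    result_str = route['ip']
    if route.get('next_hop') is not None:
        result_str += ' ' + route['next_hop']
    if route.get('metric') is not None:
        result_str += ' ' + str(route['metric'])
    # Remaining attributes (table, cwnd, mtu, onlink, tos, ...) become
    # sorted key=value pairs; booleans are lowercased for nmcli.
    for attribute, value in sorted(route.items()):
        if attribute not in ('ip', 'next_hop', 'metric') and value is not None:
            result_str += ' {0}={1}'.format(attribute, str(value).lower())
    return result_str

print(route_to_string({'ip': '192.0.3.0/24', 'next_hop': '192.0.2.1',
                       'metric': 200, 'onlink': True}))
# 192.0.3.0/24 192.0.2.1 200 onlink=true
```

The companion `get_route_params()` change exists because `nmcli con show` reports routes in a different `{ ip = ..., nh = ..., mt = ... }` syntax, so both sides must be normalized to this string form before idempotency comparison.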
View File

@@ -72,8 +72,8 @@ options:
default: false
description:
- Avoid loading any C(.gemrc) file. Ignored for RubyGems prior to 2.5.2.
- "The current default value will be deprecated in community.general 4.0.0: if the value is not explicitly specified, a deprecation message will be shown."
- From community.general 5.0.0 on, the default will be changed to C(true).
- "The current default value will be deprecated in community.general 5.0.0: if the value is not explicitly specified, a deprecation message will be shown."
- From community.general 6.0.0 on, the default will be changed to C(true).
version_added: 3.3.0
env_shebang:
description:


@@ -610,8 +610,9 @@ class Pacman(object):
# Expand group members
for group_member in self.inventory["available_groups"][pkg]:
pkg_list.append(Package(name=group_member, source=group_member))
elif pkg in self.inventory["available_pkgs"]:
# just a regular pkg
elif pkg in self.inventory["available_pkgs"] or pkg in self.inventory["installed_pkgs"]:
# Just a regular pkg, either available in the repositories,
# or locally installed, which we need to know for absent state
pkg_list.append(Package(name=pkg, source=pkg))
else:
# Last resort, call out to pacman to extract the info,


@@ -230,7 +230,9 @@ def install_packages(module, xbps_path, state, packages):
module.params['upgrade_xbps'] = False
install_packages(module, xbps_path, state, packages)
elif rc != 0 and not (state == 'latest' and rc == 17):
module.fail_json(msg="failed to install %s" % (package))
module.fail_json(msg="failed to install %s package(s)"
% (len(toInstall)),
packages=toInstall)
module.exit_json(changed=True, msg="installed %s package(s)"
% (len(toInstall)),


@@ -427,7 +427,9 @@ def package_present(m, name, want_latest):
# if a version is given leave the package in to let zypper handle the version
# resolution
packageswithoutversion = [p for p in packages if not p.version]
prerun_state = get_installed_state(m, packageswithoutversion)
prerun_state = {}
if packageswithoutversion:
prerun_state = get_installed_state(m, packageswithoutversion)
# generate lists of packages to install or remove
packages = [p for p in packages if p.shouldinstall != (p.name in prerun_state)]


@@ -318,6 +318,14 @@ EXAMPLES = '''
category: Systems
command: DisableBootOverride
- name: Set system indicator LED to blink using security token for auth
community.general.redfish_command:
category: Systems
command: IndicatorLedBlink
resource_id: 437XR1138R2
baseuri: "{{ baseuri }}"
auth_token: "{{ result.session.token }}"
- name: Add user
community.general.redfish_command:
category: Accounts
@@ -583,7 +591,8 @@ from ansible.module_utils.common.text.converters import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["PowerOn", "PowerForceOff", "PowerForceRestart", "PowerGracefulRestart",
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride"],
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride",
"IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Chassis": ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",
"UpdateUserRole", "UpdateUserPassword", "UpdateUserName",
@@ -754,6 +763,8 @@ def main():
elif command == "DisableBootOverride":
boot_opts['override_enabled'] = 'Disabled'
result = rf_utils.set_boot_override(boot_opts)
elif command.startswith('IndicatorLed'):
result = rf_utils.manage_system_indicator_led(command)
elif category == "Chassis":
result = rf_utils._find_chassis_resource()
@@ -769,7 +780,7 @@ def main():
else:
for command in command_list:
if command in led_commands:
result = rf_utils.manage_indicator_led(command)
result = rf_utils.manage_chassis_indicator_led(command)
elif category == "Sessions":
# execute only if we find SessionService resources


@@ -110,7 +110,6 @@ class GitlabBranch(object):
return self.project.branches.create({'branch': branch, 'ref': ref_branch})
def delete_branch(self, branch):
branch.unprotect()
return branch.delete()


@@ -279,7 +279,7 @@ class GitLabGroup(object):
def delete_group(self):
group = self.group_object
if len(group.projects.list()) >= 1:
if len(group.projects.list(all=False)) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These need to be moved or deleted before this group can be removed.")
else:


@@ -172,13 +172,13 @@ class GitLabGroup(object):
# get user id if the user exists
def get_user_id(self, gitlab_user):
user_exists = self._gitlab.users.list(username=gitlab_user)
user_exists = self._gitlab.users.list(username=gitlab_user, all=True)
if user_exists:
return user_exists[0].id
# get group id if group exists
def get_group_id(self, gitlab_group):
groups = self._gitlab.groups.list(search=gitlab_group)
groups = self._gitlab.groups.list(search=gitlab_group, all=True)
for group in groups:
if group.full_path == gitlab_group:
return group.id


@@ -268,7 +268,7 @@ class GitLabHook(object):
@param hook_url Url to call on event
'''
def find_hook(self, project, hook_url):
hooks = project.hooks.list()
hooks = project.hooks.list(all=True)
for hook in hooks:
if (hook.url == hook_url):
return hook


@@ -494,9 +494,9 @@ def main():
namespace_id = group.id
else:
if username:
namespace = gitlab_instance.namespaces.list(search=username)[0]
namespace = gitlab_instance.namespaces.list(search=username, all=False)[0]
else:
namespace = gitlab_instance.namespaces.list(search=gitlab_instance.user.username)[0]
namespace = gitlab_instance.namespaces.list(search=gitlab_instance.user.username, all=False)[0]
namespace_id = namespace.id
if not namespace_id:


@@ -178,12 +178,12 @@ class GitLabProjectMembers(object):
project_exists = self._gitlab.projects.get(project_name)
return project_exists.id
except gitlab.exceptions.GitlabGetError as e:
project_exists = self._gitlab.projects.list(search=project_name)
project_exists = self._gitlab.projects.list(search=project_name, all=False)
if project_exists:
return project_exists[0].id
def get_user_id(self, gitlab_user):
user_exists = self._gitlab.users.list(username=gitlab_user)
user_exists = self._gitlab.users.list(username=gitlab_user, all=False)
if user_exists:
return user_exists[0].id


@@ -48,7 +48,6 @@ options:
description:
- The password of the user.
- GitLab server enforces minimum password length to 8, set this value with 8 or more characters.
- Required only if C(state) is set to C(present).
type: str
reset_password:
description:
@@ -349,7 +348,7 @@ class GitLabUser(object):
@param sshkey_name Name of the ssh key
'''
def ssh_key_exists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
keyList = map(lambda k: k.title, user.keys.list(all=True))
return sshkey_name in keyList
@@ -519,7 +518,7 @@ class GitLabUser(object):
@param username Username of the user
'''
def find_user(self, username):
users = self._gitlab.users.list(search=username)
users = self._gitlab.users.list(search=username, all=True)
for user in users:
if (user.username == username):
return user

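The `all=True`/`all=False` additions across the GitLab hunks above all address the same pitfall: python-gitlab's `list()` returns only the first page of results by default, so an exact-match scan over the returned list can silently miss the target entry. A self-contained simulation of that failure mode (the fake two-page API below is hypothetical, standing in for a GitLab server):

```python
# Simulated paginated API: two pages of results, as a GitLab server would
# return them. Demonstrates why users.list(username=...) needs all=True
# before scanning for an exact match.
PAGES = [['alice', 'bob'], ['carol', 'dave']]

def list_items(page=1):
    return PAGES[page - 1]

def find_user(username, all_pages):
    items = []
    page = 1
    while True:
        items.extend(list_items(page=page))
        page += 1
        if not all_pages or page > len(PAGES):
            break
    return username if username in items else None

print(find_user('dave', all_pages=False))  # None: 'dave' is on page 2
print(find_user('dave', all_pages=True))   # dave
```

Conversely, the `all=False` additions (e.g. in `delete_group` and the namespace lookups) are the cases where only the first page, or first match, is needed, and fetching everything would be wasted round trips.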

@@ -41,6 +41,16 @@ options:
- The priority of the alternative.
type: int
default: 50
state:
description:
- C(present) - install the alternative (if not already installed), but do
not set it as the currently selected alternative for the group.
- C(selected) - install the alternative (if not already installed), and
set it as the currently selected alternative for the group.
choices: [ present, selected ]
default: selected
type: str
version_added: 4.8.0
requirements: [ update-alternatives ]
'''
@@ -61,6 +71,13 @@ EXAMPLES = r'''
name: java
path: /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
priority: -10
- name: Install Python 3.5 but do not select it
community.general.alternatives:
name: python
path: /usr/bin/python3.5
link: /usr/bin/python
state: present
'''
import os
@@ -70,6 +87,15 @@ import subprocess
from ansible.module_utils.basic import AnsibleModule
class AlternativeState:
PRESENT = "present"
SELECTED = "selected"
@classmethod
def to_list(cls):
return [cls.PRESENT, cls.SELECTED]
def main():
module = AnsibleModule(
@@ -78,6 +104,11 @@ def main():
path=dict(type='path', required=True),
link=dict(type='path'),
priority=dict(type='int', default=50),
state=dict(
type='str',
choices=AlternativeState.to_list(),
default=AlternativeState.SELECTED,
),
),
supports_check_mode=True,
)
@@ -87,6 +118,7 @@ def main():
path = params['path']
link = params['link']
priority = params['priority']
state = params['state']
UPDATE_ALTERNATIVES = module.get_bin_path('update-alternatives', True)
@@ -126,9 +158,20 @@ def main():
link = line.split()[1]
break
changed = False
if current_path != path:
# Check mode: expect a change if this alternative is not already
# installed, or if it is to be set as the current selection.
if module.check_mode:
module.exit_json(changed=True, current_path=current_path)
module.exit_json(
changed=(
path not in all_alternatives or
state == AlternativeState.SELECTED
),
current_path=current_path,
)
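The check-mode branch above no longer reports a change unconditionally; it predicts one only when the alternative is missing or must become the current selection. As a standalone sketch (function and argument names here are illustrative, not the module's own):

```python
def predict_changed(path, installed_alternatives, state):
    """Mirror of the check-mode decision: a change is expected when the
    alternative is not yet installed, or when it must become the selection."""
    return path not in installed_alternatives or state == "selected"

print(predict_changed("/usr/bin/python3.5", [], "present"))                      # True: not installed yet
print(predict_changed("/usr/bin/python3.5", ["/usr/bin/python3.5"], "present"))  # False: installed, 'present' only
print(predict_changed("/usr/bin/python3.5", ["/usr/bin/python3.5"], "selected")) # True: must become selection
```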
try:
# install the requested path if necessary
if path not in all_alternatives:
@@ -141,18 +184,34 @@ def main():
[UPDATE_ALTERNATIVES, '--install', link, name, path, str(priority)],
check_rc=True
)
changed = True
# select the requested path
module.run_command(
[UPDATE_ALTERNATIVES, '--set', name, path],
check_rc=True
)
# set the current selection to this path (if requested)
if state == AlternativeState.SELECTED:
module.run_command(
[UPDATE_ALTERNATIVES, '--set', name, path],
check_rc=True
)
changed = True
module.exit_json(changed=True)
except subprocess.CalledProcessError as cpe:
module.fail_json(msg=str(dir(cpe)))
else:
module.exit_json(changed=False)
elif current_path == path and state == AlternativeState.PRESENT:
# Case where alternative is currently selected, but state is set
# to 'present'. In this case, we set to auto mode.
if module.check_mode:
module.exit_json(changed=True, current_path=current_path)
changed = True
try:
module.run_command(
[UPDATE_ALTERNATIVES, '--auto', name],
check_rc=True,
)
except subprocess.CalledProcessError as cpe:
module.fail_json(msg=str(dir(cpe)))
module.exit_json(changed=changed)
if __name__ == '__main__':

View File

@@ -59,7 +59,7 @@ options:
resizefs:
description:
- If C(yes), if the block device and filesystem size differ, grow the filesystem into the space.
- Supported for C(ext2), C(ext3), C(ext4), C(ext4dev), C(f2fs), C(lvm), C(xfs), C(ufs) and C(vfat) filesystems.
- Supported for C(btrfs), C(ext2), C(ext3), C(ext4), C(ext4dev), C(f2fs), C(lvm), C(xfs), C(ufs) and C(vfat) filesystems.
Attempts to resize other filesystem types will fail.
- XFS will only grow if mounted. Currently, the module is based on commands
from C(util-linux) package to perform operations, so resizing of XFS is
@@ -331,6 +331,10 @@ class Reiserfs(Filesystem):
class Btrfs(Filesystem):
MKFS = 'mkfs.btrfs'
INFO = 'btrfs'
GROW = 'btrfs'
GROW_MAX_SPACE_FLAGS = ['filesystem', 'resize', 'max']
GROW_MOUNTPOINT_ONLY = True
def __init__(self, module):
super(Btrfs, self).__init__(module)
@@ -349,6 +353,19 @@ class Btrfs(Filesystem):
self.MKFS_FORCE_FLAGS = ['-f']
self.module.warn('Unable to identify mkfs.btrfs version (%r, %r)' % (stdout, stderr))
def get_fs_size(self, dev):
"""Return size in bytes of filesystem on device (integer)."""
mountpoint = dev.get_mountpoint()
if not mountpoint:
self.module.fail_json(msg="%s needs to be mounted for %s operations" % (dev, self.fstype))
dummy, stdout, dummy = self.module.run_command([self.module.get_bin_path(self.INFO),
'filesystem', 'usage', '-b', mountpoint], check_rc=True)
for line in stdout.splitlines():
if "Device size" in line:
return int(line.split()[-1])
raise ValueError(stdout)
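The size detection above scans the output of `btrfs filesystem usage -b` for the `Device size` line and takes its last whitespace-separated field. A minimal standalone sketch of that parsing, run against abbreviated sample output (the sample is hypothetical):

```python
def parse_device_size(usage_output):
    """Return the device size in bytes from `btrfs filesystem usage -b` output."""
    for line in usage_output.splitlines():
        if "Device size" in line:
            return int(line.split()[-1])
    raise ValueError(usage_output)

sample = """Overall:
    Device size:                  10737418240
    Device allocated:              2172649472
"""
print(parse_device_size(sample))  # 10737418240
```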
class Ocfs2(Filesystem):
MKFS = 'mkfs.ocfs2'

View File

@@ -113,7 +113,7 @@ from ansible.module_utils.common.text.converters import to_native
def get_runtime_status(ignore_selinux_state=False):
return True if ignore_selinux_state is True else selinux.is_selinux_enabled()
return ignore_selinux_state or selinux.is_selinux_enabled()
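The rewrite replaces the ternary with an equivalent boolean expression. For boolean inputs the two forms agree on every combination, which an exhaustive sketch can confirm:

```python
import itertools

def old_form(ignore, enabled):
    # original: True if ignore is True else enabled
    return True if ignore is True else enabled

def new_form(ignore, enabled):
    # rewritten: short-circuit or
    return ignore or enabled

# exhaustive check over all boolean combinations
for ignore, enabled in itertools.product([True, False], repeat=2):
    assert old_form(ignore, enabled) == new_form(ignore, enabled)
print("equivalent")
```

Note the equivalence holds for booleans; `selinux.is_selinux_enabled()` may return an int, in which case `or` propagates that int rather than a strict `True`/`False`.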
def semanage_port_get_ports(seport, setype, proto):
@@ -161,10 +161,7 @@ def semanage_port_get_type(seport, port, proto):
key = (int(ports[0]), int(ports[1]), proto)
records = seport.get_all()
if key in records:
return records[key]
else:
return None
return records.get(key)
def semanage_port_add(module, ports, proto, setype, do_reload, serange='s0', sestore=''):
@@ -194,19 +191,23 @@ def semanage_port_add(module, ports, proto, setype, do_reload, serange='s0', ses
:rtype: bool
:return: True if the policy was changed, otherwise False
"""
change = False
try:
seport = seobject.portRecords(sestore)
seport.set_reload(do_reload)
change = False
ports_by_type = semanage_port_get_ports(seport, setype, proto)
for port in ports:
if port not in ports_by_type:
change = True
port_type = semanage_port_get_type(seport, port, proto)
if port_type is None and not module.check_mode:
seport.add(port, proto, serange, setype)
elif port_type is not None and not module.check_mode:
seport.modify(port, proto, serange, setype)
if port in ports_by_type:
continue
change = True
if module.check_mode:
continue
port_type = semanage_port_get_type(seport, port, proto)
if port_type is None:
seport.add(port, proto, serange, setype)
else:
seport.modify(port, proto, serange, setype)
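The refactored loop flattens the nesting with `continue`: ports already mapped to the requested type are no-ops, check mode records the change without touching the policy, and otherwise the port is added or modified depending on whether it already has a type. A sketch of that decision (names are illustrative):

```python
def plan_port_change(port, ports_by_type, existing_type, check_mode):
    """Mirror of the refactored loop body: returns (changed, action) where
    action is None, 'add' or 'modify'."""
    if port in ports_by_type:
        return False, None          # already under the requested type
    if check_mode:
        return True, None           # report the change, run nothing
    action = 'add' if existing_type is None else 'modify'
    return True, action

print(plan_port_change('8080', ['8080'], None, False))    # (False, None)
print(plan_port_change('8080', [], None, False))          # (True, 'add')
print(plan_port_change('8080', [], 'http_port_t', False)) # (True, 'modify')
```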
except (ValueError, IOError, KeyError, OSError, RuntimeError) as e:
module.fail_json(msg="%s: %s\n" % (e.__class__.__name__, to_native(e)), exception=traceback.format_exc())
@@ -238,10 +239,10 @@ def semanage_port_del(module, ports, proto, setype, do_reload, sestore=''):
:rtype: bool
:return: True if the policy was changed, otherwise False
"""
change = False
try:
seport = seobject.portRecords(sestore)
seport.set_reload(do_reload)
change = False
ports_by_type = semanage_port_get_ports(seport, setype, proto)
for port in ports:
if port in ports_by_type:

View File

@@ -23,6 +23,7 @@ options:
description:
- The commands allowed by the sudoers rule.
- Multiple can be added by passing a list of commands.
- Use C(ALL) for all commands.
type: list
elements: str
group:
@@ -41,6 +42,11 @@ options:
- Whether a password will be required to run the sudo'd command.
default: true
type: bool
runas:
description:
- Specify the target user the command(s) will run as.
type: str
version_added: 4.7.0
sudoers_path:
description:
- The path which sudoers config files will be managed in.
@@ -69,6 +75,14 @@ EXAMPLES = '''
user: backup
commands: /usr/local/bin/backup
- name: Allow the bob user to run any commands as alice with sudo -u alice
community.general.sudoers:
name: bob-do-as-alice
state: present
user: bob
runas: alice
commands: ALL
- name: >-
Allow the monitoring group to run sudo /usr/local/bin/gather-app-metrics
without requiring a password
@@ -108,6 +122,7 @@ class Sudoers(object):
self.group = module.params['group']
self.state = module.params['state']
self.nopassword = module.params['nopassword']
self.runas = module.params['runas']
self.sudoers_path = module.params['sudoers_path']
self.file = os.path.join(self.sudoers_path, self.name)
self.commands = module.params['commands']
@@ -140,7 +155,8 @@ class Sudoers(object):
commands_str = ', '.join(self.commands)
nopasswd_str = 'NOPASSWD:' if self.nopassword else ''
return "{owner} ALL={nopasswd} {commands}\n".format(owner=owner, nopasswd=nopasswd_str, commands=commands_str)
runas_str = '({runas})'.format(runas=self.runas) if self.runas is not None else ''
return "{owner} ALL={runas}{nopasswd} {commands}\n".format(owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)
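The rendering above can be exercised outside the module; a standalone sketch of the same format string shows what the new `runas` option produces with the module's default of `nopassword: true` (the helper name is illustrative):

```python
def render_rule(owner, commands, nopassword=True, runas=None):
    """Standalone sketch of the sudoers content rendering above."""
    commands_str = ', '.join(commands)
    nopasswd_str = 'NOPASSWD:' if nopassword else ''
    runas_str = '({runas})'.format(runas=runas) if runas is not None else ''
    return "{owner} ALL={runas}{nopasswd} {commands}\n".format(
        owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)

print(render_rule('bob', ['ALL'], runas='alice'), end='')
# bob ALL=(alice)NOPASSWD: ALL
```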
def run(self):
if self.state == 'absent' and self.exists():
@@ -168,6 +184,10 @@ def main():
'type': 'bool',
'default': True,
},
'runas': {
'type': 'str',
'default': None,
},
'sudoers_path': {
'type': 'str',
'default': '/etc/sudoers.d',

View File

@@ -10,58 +10,75 @@ __metaclass__ = type
DOCUMENTATION = '''
module: xfconf
author:
- "Joseph Benden (@jbenden)"
- "Alexei Znamensky (@russoz)"
- "Joseph Benden (@jbenden)"
- "Alexei Znamensky (@russoz)"
short_description: Edit XFCE4 Configurations
description:
- This module allows for the manipulation of Xfce 4 Configuration with the help of
xfconf-query. Please see the xfconf-query(1) man pages for more details.
seealso:
- name: C(xfconf-query) man page
description: Manual page of the C(xfconf-query) tool at the XFCE documentation site.
link: 'https://docs.xfce.org/xfce/xfconf/xfconf-query'
- name: xfconf - Configuration Storage System
description: XFCE documentation for the Xfconf configuration system.
link: 'https://docs.xfce.org/xfce/xfconf/start'
options:
channel:
description:
- A Xfconf preference channel is a top-level tree key, inside of the
Xfconf repository that corresponds to the location for which all
application properties/keys are stored. See man xfconf-query(1)
required: yes
- A Xfconf preference channel is a top-level tree key, inside of the
Xfconf repository that corresponds to the location for which all
application properties/keys are stored. See man xfconf-query(1)
required: true
type: str
property:
description:
- A Xfce preference key is an element in the Xfconf repository
that corresponds to an application preference. See man xfconf-query(1)
required: yes
- A Xfce preference key is an element in the Xfconf repository
that corresponds to an application preference. See man xfconf-query(1)
required: true
type: str
value:
description:
- Preference properties typically have simple values such as strings,
integers, or lists of strings and integers. This is ignored if the state
is "get". For array mode, use a list of values. See man xfconf-query(1)
- Preference properties typically have simple values such as strings,
integers, or lists of strings and integers. This is ignored if the state
is "get". For array mode, use a list of values. See man xfconf-query(1)
type: list
elements: raw
value_type:
description:
- The type of value being set. This is ignored if the state is "get".
For array mode, use a list of types.
- The type of value being set. This is ignored if the state is "get".
- When providing more than one I(value_type), the length of the list must
be equal to the length of I(value).
- If only one I(value_type) is provided, but I(value) contains more than
one element, that I(value_type) will be applied to all elements of I(value).
- If the I(property) being set is an array and it can possibly have only one
element in the array, then I(force_array=true) must be used to ensure
that C(xfconf-query) will interpret the value as an array rather than a
scalar.
- Support for C(uchar), C(char), C(uint64), and C(int64) has been added in community.general 4.8.0.
type: list
elements: str
choices: [ int, uint, bool, float, double, string ]
choices: [ string, int, double, bool, uint, uchar, char, uint64, int64, float ]
state:
type: str
description:
- The action to take upon the property/value.
- State C(get) is deprecated and will be removed in community.general 5.0.0. Please use the module M(community.general.xfconf_info) instead.
- The action to take upon the property/value.
- State C(get) is deprecated and will be removed in community.general 5.0.0. Please use the module M(community.general.xfconf_info) instead.
choices: [ get, present, absent ]
default: "present"
force_array:
description:
- Force array even if only one element
- Force array even if only one element
type: bool
default: 'no'
aliases: ['array']
version_added: 1.0.0
disable_facts:
description:
- The value C(false) is no longer allowed since community.general 4.0.0.
- This option will be deprecated in a future version, and eventually be removed.
- The value C(false) is no longer allowed since community.general 4.0.0.
- This option will be deprecated in a future version, and eventually be removed.
type: bool
default: true
version_added: 2.1.0
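The length rules for I(value_type) described above amount to a simple broadcast: a single type applies to every element of I(value), while longer lists must match element for element. A hypothetical sketch of that rule:

```python
def broadcast_value_types(values, value_types):
    """Apply a single value_type to every value, or require matching lengths."""
    if len(value_types) == 1:
        return value_types * len(values)
    if len(value_types) != len(values):
        raise ValueError("value_type list must have length 1 or match value")
    return value_types

print(broadcast_value_types(['red', 'blue', 'green'], ['string']))
# ['string', 'string', 'string']
```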
@@ -88,7 +105,7 @@ EXAMPLES = """
property: /general/workspace_names
value_type: string
value: ['Main']
force_array: yes
force_array: true
"""
RETURN = '''
@@ -104,27 +121,27 @@ RETURN = '''
sample: "/Xft/DPI"
value_type:
description:
- The type of the value that was changed (C(none) for C(get) and C(reset)
state). Either a single string value or a list of strings for array
types.
- This is a string or a list of strings.
- The type of the value that was changed (C(none) for C(get) and C(reset)
state). Either a single string value or a list of strings for array
types.
- This is a string or a list of strings.
returned: success
type: any
sample: '"int" or ["str", "str", "str"]'
value:
description:
- The value of the preference key after executing the module. Either a
single string value or a list of strings for array types.
- This is a string or a list of strings.
- The value of the preference key after executing the module. Either a
single string value or a list of strings for array types.
- This is a string or a list of strings.
returned: success
type: any
sample: '"192" or ["orange", "yellow", "violet"]'
previous_value:
description:
- The value of the preference key before executing the module (C(none) for
C(get) state). Either a single string value or a list of strings for array
types.
- This is a string or a list of strings.
- The value of the preference key before executing the module (C(none) for
C(get) state). Either a single string value or a list of strings for array
types.
- This is a string or a list of strings.
returned: success
type: any
sample: '"96" or ["red", "blue", "green"]'
@@ -161,15 +178,13 @@ class XFConfProperty(CmdStateModuleHelper):
facts_params = ('property', 'channel', 'value')
module = dict(
argument_spec=dict(
state=dict(default="present",
choices=("present", "get", "absent"),
type='str'),
channel=dict(required=True, type='str'),
property=dict(required=True, type='str'),
value_type=dict(required=False, type='list',
elements='str', choices=('int', 'uint', 'bool', 'float', 'double', 'string')),
value=dict(required=False, type='list', elements='raw'),
force_array=dict(default=False, type='bool', aliases=['array']),
state=dict(type='str', choices=("present", "get", "absent"), default="present"),
channel=dict(type='str', required=True),
property=dict(type='str', required=True),
value_type=dict(type='list', elements='str',
choices=('string', 'int', 'double', 'bool', 'uint', 'uchar', 'char', 'uint64', 'int64', 'float')),
value=dict(type='list', elements='raw'),
force_array=dict(type='bool', default=False, aliases=['array']),
disable_facts=dict(type='bool', default=True),
),
required_if=[('state', 'present', ['value', 'value_type'])],

View File

@@ -0,0 +1,2 @@
shippable/posix/group1
disabled

View File

@@ -0,0 +1,4 @@
alerta_url: http://localhost:8080/
alerta_user: admin@example.com
alerta_password: password
alerta_key: demo-key

View File

@@ -0,0 +1,151 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Create customer (check mode)
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
check_mode: true
register: result
- name: Check result (check mode)
assert:
that:
- result is changed
- name: Create customer
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
register: result
- name: Check customer creation
assert:
that:
- result is changed
- name: Test customer creation idempotency
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
register: result
- name: Check customer creation idempotency
assert:
that:
- result is not changed
- name: Delete customer (check mode)
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
check_mode: true
register: result
- name: Check customer deletion (check mode)
assert:
that:
- result is changed
- name: Delete customer
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
register: result
- name: Check customer deletion
assert:
that:
- result is changed
- name: Test customer deletion idempotency
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
register: result
- name: Check customer deletion idempotency
assert:
that:
- result is not changed
- name: Delete non-existing customer (check mode)
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
check_mode: true
register: result
- name: Check non-existing customer deletion (check mode)
assert:
that:
- result is not changed
- name: Create customer with api key
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_key: "{{ alerta_key }}"
customer: customer1
match: admin@admin.admin
register: result
- name: Check customer creation with api key
assert:
that:
- result is changed
- name: Delete customer with api key
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_key: "{{ alerta_key }}"
customer: customer1
match: admin@admin.admin
state: absent
register: result
- name: Check customer deletion with api key
assert:
that:
- result is changed
- name: Use wrong api key
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_key: wrong_key
customer: customer1
match: admin@admin.admin
register: result
ignore_errors: true
- name: Check customer creation with api key
assert:
that:
- result is not changed
- result is failed

View File

@@ -49,6 +49,12 @@
# Test that path is checked: alternatives must fail when path is nonexistent
- import_tasks: path_is_checked.yml
# Test operation of the 'state' parameter
- block:
- include_tasks: remove_links.yml
- include_tasks: tests_state.yml
# Cleanup
always:
- include_tasks: remove_links.yml
@@ -62,6 +68,7 @@
path: '/usr/bin/dummy{{ item }}'
state: absent
with_sequence: start=1 end=4
# *Disable tests on Fedora 24*
# Shippable Fedora 24 image provides chkconfig-1.7-2.fc24.x86_64 but not the
# latest available version (chkconfig-1.8-1.fc24.x86_64). update-alternatives

View File

@@ -3,5 +3,6 @@
path: '{{ item }}'
state: absent
with_items:
- "{{ alternatives_dir }}/dummy"
- /etc/alternatives/dummy
- /usr/bin/dummy

View File

@@ -0,0 +1,71 @@
# Add a few dummy alternatives with state = present and make sure that the
# group is in 'auto' mode and the highest priority alternative is selected.
- name: Add some dummy alternatives with state = present
alternatives:
name: dummy
path: "/usr/bin/dummy{{ item.n }}"
link: /usr/bin/dummy
priority: "{{ item.priority }}"
state: present
loop:
- { n: 1, priority: 50 }
- { n: 2, priority: 70 }
- { n: 3, priority: 25 }
- name: Ensure that the link group is in auto mode
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^auto$"'
# Execute current selected 'dummy' and ensure it's the alternative we expect
- name: Execute the current dummy command
shell: dummy
register: cmd
- name: Ensure that the expected command was executed
assert:
that:
- cmd.stdout == "dummy2"
# Add another alternative with state = 'selected' and make sure that
# this change results in the group being set to manual mode, and the
# new alternative being the selected one.
- name: Add another dummy alternative with state = selected
alternatives:
name: dummy
path: /usr/bin/dummy4
link: /usr/bin/dummy
priority: 10
state: selected
- name: Ensure that the link group is in manual mode
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^manual$"'
- name: Execute the current dummy command
shell: dummy
register: cmd
- name: Ensure that the expected command was executed
assert:
that:
- cmd.stdout == "dummy4"
# Set the currently selected alternative to state = 'present' (was previously
# selected), and ensure that this results in the group being set to 'auto'
# mode, and the highest priority alternative is selected.
- name: Set current selected dummy to state = present
alternatives:
name: dummy
path: /usr/bin/dummy4
link: /usr/bin/dummy
state: present
- name: Ensure that the link group is in auto mode
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^auto$"'
- name: Execute the current dummy command
shell: dummy
register: cmd
- name: Ensure that the expected command was executed
assert:
that:
- cmd.stdout == "dummy2"

View File

@@ -0,0 +1 @@
shippable/posix/group2

View File

@@ -0,0 +1,78 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky <russoz@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
DOCUMENTATION = '''
module: cmd_echo
author: "Alexei Znamensky (@russoz)"
short_description: Simple module for testing
description:
- Simple module test description.
options:
command:
description: aaa
type: list
elements: str
required: true
arg_formats:
description: bbb
type: dict
required: true
arg_order:
description: ccc
type: raw
required: true
arg_values:
description: ddd
type: list
required: true
aa:
description: eee
type: raw
'''
EXAMPLES = ""
RETURN = ""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, fmt
def main():
module = AnsibleModule(
argument_spec=dict(
arg_formats=dict(type="dict", default={}),
arg_order=dict(type="raw", required=True),
arg_values=dict(type="dict", default={}),
aa=dict(type="raw"),
),
)
p = module.params
arg_formats = {}
for arg, fmt_spec in p['arg_formats'].items():
func = getattr(fmt, fmt_spec['func'])
args = fmt_spec.get("args", [])
arg_formats[arg] = func(*args)
runner = CmdRunner(module, ['echo', '--'], arg_formats=arg_formats)
info = None
with runner.context(p['arg_order']) as ctx:
result = ctx.run(**p['arg_values'])
info = ctx.run_info
rc, out, err = result
module.exit_json(rc=rc, out=out, err=err, info=info)
if __name__ == '__main__':
main()
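The `arg_formats` resolution in this test module looks format helpers up by name and binds their positional arguments before handing them to the runner. The same pattern can be sketched with a plain registry of stand-in helpers instead of cmd_runner's `fmt` (registry contents here are hypothetical):

```python
def resolve_formats(spec, registry):
    """Build arg -> formatter callables from {'arg': {'func': name, 'args': [...]}}."""
    return {
        arg: registry[entry['func']](*entry.get('args', []))
        for arg, entry in spec.items()
    }

# hypothetical stand-ins for cmd_runner's format helpers
registry = {
    'as_bool': lambda flag: (lambda value: [flag] if value else []),
    'as_opt_eq_val': lambda opt: (lambda value: ['{0}={1}'.format(opt, value)]),
}
formats = resolve_formats(
    {'bb': {'func': 'as_bool', 'args': ['--bb-here']},
     'aa': {'func': 'as_opt_eq_val', 'args': ['--answer']}},
    registry,
)
print(formats['bb'](True), formats['aa'](11))  # ['--bb-here'] ['--answer=11']
```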

View File

@@ -0,0 +1,7 @@
# (c) 2022, Alexei Znamensky
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: parameterized test cmd_echo
ansible.builtin.include_tasks:
file: test_cmd_echo.yml
loop: "{{ cmd_echo_tests }}"

View File

@@ -0,0 +1,13 @@
---
- name: test cmd_echo [{{ item.name }}]
cmd_echo:
arg_formats: "{{ item.arg_formats|default(omit) }}"
arg_order: "{{ item.arg_order }}"
arg_values: "{{ item.arg_values|default(omit) }}"
aa: "{{ item.aa|default(omit) }}"
register: test_result
ignore_errors: "{{ item.expect_error|default(omit) }}"
- name: check results [{{ item.name }}]
assert:
that: "{{ item.assertions }}"

View File

@@ -0,0 +1,84 @@
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
cmd_echo_tests:
- name: set aa and bb value
arg_formats:
aa:
func: as_opt_eq_val
args: [--answer]
bb:
func: as_bool
args: [--bb-here]
arg_order: 'aa bb'
arg_values:
bb: true
aa: 11
assertions:
- test_result.rc == 0
- test_result.out == "-- --answer=11 --bb-here\n"
- test_result.err == ""
- name: default aa value
arg_formats:
aa:
func: as_opt_eq_val
args: [--answer]
bb:
func: as_bool
args: [--bb-here]
arg_order: ['aa', 'bb']
arg_values:
aa: 43
bb: true
assertions:
- test_result.rc == 0
- test_result.out == "-- --answer=43 --bb-here\n"
- test_result.err == ""
- name: implicit aa format
arg_formats:
bb:
func: as_bool
args: [--bb-here]
arg_order: ['aa', 'bb']
arg_values:
bb: true
aa: 1984
assertions:
- test_result.rc == 0
- test_result.out == "-- --aa 1984 --bb-here\n"
- test_result.err == ""
- name: missing bb format
arg_order: ['aa', 'bb']
arg_values:
bb: true
aa: 1984
expect_error: true
assertions:
- test_result is failed
- test_result.rc == 1
- '"out" not in test_result'
- '"err" not in test_result'
- >-
"MissingArgumentFormat: Cannot find format for parameter bb"
in test_result.module_stderr
- name: missing bb value
arg_formats:
bb:
func: as_bool
args: [--bb-here]
arg_order: 'aa bb'
aa: 1984
expect_error: true
assertions:
- test_result is failed
- test_result.rc == 1
- '"out" not in test_result'
- '"err" not in test_result'
- >-
"MissingArgumentValue: Cannot find value for parameter bb"
in test_result.module_stderr

View File

@@ -3,115 +3,117 @@
# and should not be used as examples of how to write Ansible roles #
####################################################################
- when:
- not (ansible_os_family == 'Alpine' and ansible_distribution_version is version('3.15', '<')) # TODO
block:
- name: Create EMAIL cron var
cronvar:
name: EMAIL
value: doug@ansibmod.con.com
register: create_cronvar1
- name: Ensure /etc/cron.d directory exists
file:
path: /etc/cron.d
state: directory
- name: Create EMAIL cron var again
cronvar:
name: EMAIL
value: doug@ansibmod.con.com
register: create_cronvar2
- name: Create EMAIL cron var
cronvar:
name: EMAIL
value: doug@ansibmod.con.com
register: create_cronvar1
- name: Check cron var value
shell: crontab -l -u root | grep -c EMAIL=doug@ansibmod.con.com
register: varcheck1
- name: Create EMAIL cron var again
cronvar:
name: EMAIL
value: doug@ansibmod.con.com
register: create_cronvar2
- name: Modify EMAIL cron var
cronvar:
name: EMAIL
value: jane@ansibmod.con.com
register: create_cronvar3
- name: Check cron var value
shell: crontab -l -u root | grep -c EMAIL=doug@ansibmod.con.com
register: varcheck1
- name: Check cron var value again
shell: crontab -l -u root | grep -c EMAIL=jane@ansibmod.con.com
register: varcheck2
- name: Modify EMAIL cron var
cronvar:
name: EMAIL
value: jane@ansibmod.con.com
register: create_cronvar3
- name: Remove EMAIL cron var
cronvar:
name: EMAIL
state: absent
register: remove_cronvar1
- name: Check cron var value again
shell: crontab -l -u root | grep -c EMAIL=jane@ansibmod.con.com
register: varcheck2
- name: Remove EMAIL cron var again
cronvar:
name: EMAIL
state: absent
register: remove_cronvar2
- name: Remove EMAIL cron var
cronvar:
name: EMAIL
state: absent
register: remove_cronvar1
- name: Check cron var value again
shell: crontab -l -u root | grep -c EMAIL
register: varcheck3
failed_when: varcheck3.rc == 0
- name: Remove EMAIL cron var again
cronvar:
name: EMAIL
state: absent
register: remove_cronvar2
- name: Add cron var to custom file
cronvar:
name: TESTVAR
value: somevalue
cron_file: cronvar_test
register: custom_cronfile1
- name: Check cron var value again
shell: crontab -l -u root | grep -c EMAIL
register: varcheck3
failed_when: varcheck3.rc == 0
- name: Add cron var to custom file again
cronvar:
name: TESTVAR
value: somevalue
cron_file: cronvar_test
register: custom_cronfile2
- name: Add cron var to custom file
cronvar:
name: TESTVAR
value: somevalue
cron_file: cronvar_test
register: custom_cronfile1
- name: Check cron var value in custom file
command: grep -c TESTVAR=somevalue {{ cron_config_path }}/cronvar_test
register: custom_varcheck1
- name: Add cron var to custom file again
cronvar:
name: TESTVAR
value: somevalue
cron_file: cronvar_test
register: custom_cronfile2
- name: Change cron var in custom file
cronvar:
name: TESTVAR
value: newvalue
cron_file: cronvar_test
register: custom_cronfile3
- name: Check cron var value in custom file
command: grep -c TESTVAR=somevalue {{ cron_config_path }}/cronvar_test
register: custom_varcheck1
- name: Check cron var value in custom file
command: grep -c TESTVAR=newvalue {{ cron_config_path }}/cronvar_test
register: custom_varcheck2
- name: Change cron var in custom file
cronvar:
name: TESTVAR
value: newvalue
cron_file: cronvar_test
register: custom_cronfile3
- name: Remove cron var from custom file
cronvar:
name: TESTVAR
value: newvalue
cron_file: cronvar_test
state: absent
register: custom_remove_cronvar1
- name: Check cron var value in custom file
command: grep -c TESTVAR=newvalue {{ cron_config_path }}/cronvar_test
register: custom_varcheck2
- name: Remove cron var from custom file again
cronvar:
name: TESTVAR
value: newvalue
cron_file: cronvar_test
state: absent
register: custom_remove_cronvar2
- name: Remove cron var from custom file
cronvar:
name: TESTVAR
value: newvalue
cron_file: cronvar_test
state: absent
register: custom_remove_cronvar1
- name: Check cron var value
command: grep -c TESTVAR=newvalue {{ cron_config_path }}/cronvar_test
register: custom_varcheck3
failed_when: custom_varcheck3.rc == 0
- name: Remove cron var from custom file again
cronvar:
name: TESTVAR
value: newvalue
cron_file: cronvar_test
state: absent
register: custom_remove_cronvar2
- name: Esure cronvar tasks did the right thing
assert:
that:
- create_cronvar1 is changed
- create_cronvar2 is not changed
- create_cronvar3 is changed
- remove_cronvar1 is changed
- remove_cronvar2 is not changed
- varcheck1.stdout == '1'
- varcheck2.stdout == '1'
- varcheck3.stdout == '0'
- custom_remove_cronvar1 is changed
- custom_remove_cronvar2 is not changed
- custom_varcheck1.stdout == '1'
- custom_varcheck2.stdout == '1'
- custom_varcheck3.stdout == '0'
- name: Check cron var value
command: grep -c TESTVAR=newvalue {{ cron_config_path }}/cronvar_test
register: custom_varcheck3
failed_when: custom_varcheck3.rc == 0
- name: Ensure cronvar tasks did the right thing
assert:
that:
- create_cronvar1 is changed
- create_cronvar2 is not changed
- create_cronvar3 is changed
- remove_cronvar1 is changed
- remove_cronvar2 is not changed
- varcheck1.stdout == '1'
- varcheck2.stdout == '1'
- varcheck3.stdout == '0'
- custom_remove_cronvar1 is changed
- custom_remove_cronvar2 is not changed
- custom_varcheck1.stdout == '1'
- custom_varcheck2.stdout == '1'
- custom_varcheck3.stdout == '0'

View File

@@ -0,0 +1,14 @@
The integration tests can be executed locally:
1. Create or use an existing discord server
2. Open `Server Settings` and navigate to `Integrations` tab
3. Click `Create Webhook` to create a new webhook
4. Click `Copy Webhook URL` and extract the webhook_id + webhook_token
Example: https://discord.com/api/webhooks/`webhook_id`/`webhook_token`
5. Replace the variables `discord_id` and `discord_token` in the var file
6. Run the integration test
```
ansible-test integration -v --color yes discord --allow-unsupported
```

View File

@@ -0,0 +1 @@
unsupported

View File

@@ -0,0 +1,2 @@
discord_id: 000
discord_token: xxx

View File

@@ -0,0 +1,64 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Send basic message
community.general.discord:
webhook_id: "{{ discord_id }}"
webhook_token: "{{ discord_token }}"
content: "Messages from ansible-test"
register: result
- name: Check result
assert:
that:
- result is changed
- result.http_code == 204
- name: Send embeds
community.general.discord:
webhook_id: "{{ discord_id }}"
webhook_token: "{{ discord_token }}"
embeds:
- title: "Title of embed message 1"
description: "Description embed message 1"
footer:
text: "author ansible-test"
image:
url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
- title: "Title of embed message 2"
description: "Description embed message 2"
footer:
text: "author ansible-test"
icon_url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
fields:
- name: "Field 1"
value: 1
- name: "Field 2"
value: "Text"
timestamp: "{{ ansible_date_time.iso8601 }}"
username: Ansible Test
avatar_url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
register: result
- name: Check result
assert:
that:
- result is changed
- result.http_code == 204
- name: Use a wrong token
community.general.discord:
webhook_id: "{{ discord_id }}"
webhook_token: "wrong_token"
content: "Messages from ansible-test"
register: result
ignore_errors: true
- name: Check result
assert:
that:
- result is not changed
- result.http_code == 401
- result.response.message == "Invalid Webhook Token"

View File

@@ -37,8 +37,6 @@
command: >-
dnf update -y
--setopt=obsoletes=0 {{ _packages | join(' ') }}
args:
warn: false
register: update_locked_packages
changed_when: '"Nothing to do" not in update_locked_packages.stdout'

View File

@@ -16,7 +16,7 @@ tested_filesystems:
ext3: {fssize: 10, grow: True}
ext2: {fssize: 10, grow: True}
xfs: {fssize: 20, grow: False} # grow requires a mounted filesystem
btrfs: {fssize: 150, grow: False} # grow not implemented
btrfs: {fssize: 150, grow: False} # grow requires a mounted filesystem
reiserfs: {fssize: 33, grow: False} # grow not implemented
vfat: {fssize: 20, grow: True}
ocfs2: {fssize: '{{ ocfs2_fssize }}', grow: False} # grow not implemented

View File

@@ -0,0 +1 @@
unsupported

View File

@@ -0,0 +1,140 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: Clean up test project
lxd_project:
name: ansible-test-project
state: absent
- name: Clean up test project
lxd_project:
name: ansible-test-project-renamed
state: absent
- name: Create test project
lxd_project:
name: ansible-test-project
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check project has been created correctly
assert:
that:
- results is changed
- results.actions is defined
- "'create' in results.actions"
- name: Create test project again with merge_project set to true
lxd_project:
name: ansible-test-project
merge_project: true
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check state is not changed
assert:
that:
- results is not changed
- "{{ results.actions | length }} == 0"
- name: Create test project again with merge_project set to false
lxd_project:
name: ansible-test-project
merge_project: false
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check state is not changed
assert:
that:
- results is changed
- "'apply_projects_configs' in results.actions"
- name: Update project test => update description
lxd_project:
name: ansible-test-project
merge_project: false
description: "ansible test project"
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'apply_projects_configs' in results.actions"
- name: Update project test => update project config
lxd_project:
name: ansible-test-project
merge_project: false
description: "ansible test project"
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "4"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'apply_projects_configs' in results.actions"
- name: Rename project test
lxd_project:
name: ansible-test-project
new_name: ansible-test-project-renamed
merge_project: true
description: "ansible test project"
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "4"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'rename' in results.actions"
- name: Clean up test project
lxd_project:
name: ansible-test-project-renamed
state: absent
register: results
- name: Check project is deleted
assert:
that:
- results is changed
- "'delete' in results.actions"
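The `merge_project` cases above hinge on how the requested config is combined with the project's existing one: with `merge_project: true` the requested keys are overlaid on the existing config (so resubmitting an identical config is a no-op), while `merge_project: false` replaces the config wholesale and therefore reports a change. A rough sketch of that decision, with illustrative names rather than the module's actual implementation:

```python
def desired_config(existing, requested, merge_project):
    """Compute the config the module would apply, per the merge_project flag."""
    if merge_project:
        # Overlay requested keys; keys only present in `existing` survive.
        merged = dict(existing)
        merged.update(requested)
        return merged
    # Replace mode: the requested config becomes the whole config.
    return dict(requested)

existing = {"features.images": "false", "limits.cpu": "3", "user.note": "keep-me"}
requested = {"features.images": "false", "limits.cpu": "3"}

merged = desired_config(existing, requested, merge_project=True)
replaced = desired_config(existing, requested, merge_project=False)
changed_when_merged = merged != existing      # no-op, as the test expects
changed_when_replaced = replaced != existing  # change, as the test expects
```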

View File

@@ -1,7 +1,5 @@
- name: "{{ reason }} ('up')"
command: "curl -sf http://localhost:8082/hello"
args:
warn: false
when: service_state == 'up'
register: curl_result
until: not curl_result.failed
@@ -10,8 +8,6 @@
- name: "{{ reason }} ('down')"
command: "curl -sf http://localhost:8082/hello"
args:
warn: false
register: curl_result
failed_when: curl_result.rc == 0
when: service_state == 'down'

View File

@@ -43,6 +43,18 @@
src: httpd_echo.py
dest: "{{ process_file }}"
- name: Install virtualenv
package:
name: virtualenv
state: present
when: ansible_distribution == 'CentOS' and ansible_distribution_major_version == '8'
- name: Install virtualenv
package:
name: python-virtualenv
state: present
when: ansible_os_family == 'Archlinux'
- name: install dependencies
pip:
name: "{{ item }}"

View File

@@ -5,3 +5,4 @@ skip/freebsd
skip/osx
skip/macos
skip/rhel
needs/root

View File

@@ -0,0 +1,81 @@
---
- vars:
package_name: ansible-test-foo
username: ansible-regular-user
block:
- name: Install fakeroot
pacman:
state: present
name:
- fakeroot
- name: Create user
user:
name: '{{ username }}'
home: '/home/{{ username }}'
create_home: true
- name: Create directory
file:
path: '/home/{{ username }}/{{ package_name }}'
state: directory
owner: '{{ username }}'
- name: Create PKGBUILD
copy:
dest: '/home/{{ username }}/{{ package_name }}/PKGBUILD'
content: |
pkgname=('{{ package_name }}')
pkgver=1.0.0
pkgrel=1
pkgdesc="Test removing a local package not in the repositories"
arch=('any')
license=('GPL v3+')
owner: '{{ username }}'
- name: Build package
command:
cmd: su {{ username }} -c "makepkg -srf"
chdir: '/home/{{ username }}/{{ package_name }}'
- name: Install package
pacman:
state: present
name:
- '/home/{{ username }}/{{ package_name }}/{{ package_name }}-1.0.0-1-any.pkg.tar.zst'
- name: Remove package (check mode)
pacman:
state: absent
name:
- '{{ package_name }}'
check_mode: true
register: remove_1
- name: Remove package
pacman:
state: absent
name:
- '{{ package_name }}'
register: remove_2
- name: Remove package (idempotent)
pacman:
state: absent
name:
- '{{ package_name }}'
register: remove_3
- name: Check conditions
assert:
that:
- remove_1 is changed
- remove_2 is changed
- remove_3 is not changed
always:
- name: Remove directory
file:
path: '/home/{{ username }}/{{ package_name }}'
state: absent
become: true
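The check-mode/real/repeat sequence above asserts `changed`, `changed`, `not changed`: check mode must report the removal without applying it, and removing an already-absent package must be a no-op. A toy model of that contract (names are illustrative, not pacman module internals):

```python
def remove_package(installed, name, check_mode=False):
    """Return (changed, new_installed_set); check_mode reports without applying."""
    changed = name in installed
    if changed and not check_mode:
        installed = installed - {name}
    return changed, installed

installed = {"ansible-test-foo", "fakeroot"}
r1, installed = remove_package(installed, "ansible-test-foo", check_mode=True)
r2, installed = remove_package(installed, "ansible-test-foo")
r3, installed = remove_package(installed, "ansible-test-foo")
```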

View File

@@ -11,3 +11,4 @@
- include: 'package_urls.yml'
- include: 'remove_nosave.yml'
- include: 'update_cache.yml'
- include: 'locally_installed_package.yml'

View File

@@ -1,3 +1,3 @@
#!/usr/bin/env bash
"$1" 100 &
"$1" 100 &
echo "$!" > "$2"

View File

@@ -0,0 +1,12 @@
/*
* (c) 2022, Alexei Znamensky <russoz@gmail.com>
* GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
*/
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char **argv) {
int delay = atoi(argv[1]);
sleep(delay);
}

View File

@@ -7,112 +7,105 @@
# Copyright: (c) 2019, Saranya Sridharan
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- when:
- not (ansible_os_family == 'Alpine') # TODO
block:
- name: Attempt installation of latest 'psutil' version
pip:
name: psutil
ignore_errors: true
register: psutil_latest_install
- name: Attempt installation of latest 'psutil' version
pip:
name: psutil
ignore_errors: true
register: psutil_latest_install
- name: Install greatest 'psutil' version which will work with all pip versions
pip:
name: psutil < 5.7.0
when: psutil_latest_install is failed
- name: Install greatest 'psutil' version which will work with all pip versions
pip:
name: psutil < 5.7.0
when: psutil_latest_install is failed
- name: "Checking the empty result"
pids:
name: "blahblah"
register: emptypids
- name: "Checking the empty result"
pids:
name: "blahblah"
register: emptypids
- name: "Verify that the list of Process IDs (PIDs) returned is empty"
assert:
that:
- emptypids is not changed
- emptypids.pids == []
- name: "Verify that the list of Process IDs (PIDs) returned is empty"
assert:
that:
- emptypids is not changed
- emptypids.pids == []
- name: "Picking a random process name"
set_fact:
random_name: some-random-long-name-{{ 99999999 | random }}
- name: "Picking a random process name"
set_fact:
random_name: some-random-long-name-{{ 99999999 | random }}
- name: Copy the fake 'sleep' source code
copy:
src: sleeper.c
dest: "{{ remote_tmp_dir }}/sleeper.c"
mode: 0644
- name: "finding the 'sleep' binary"
command: which sleep
register: find_sleep
- name: Compile fake 'sleep' binary
command: cc {{ remote_tmp_dir }}/sleeper.c -o {{ remote_tmp_dir }}/{{ random_name }}
- name: "Copying 'sleep' binary"
command: cp {{ find_sleep.stdout }} {{ remote_tmp_dir }}/{{ random_name }}
# The following does not work on macOS 11.1 (it uses shutil.copystat, and that will die with a PermissionError):
# copy:
# src: "{{ find_sleep.stdout }}"
# dest: "{{ remote_tmp_dir }}/{{ random_name }}"
# mode: "0777"
# remote_src: true
- name: Copy helper script
copy:
src: obtainpid.sh
dest: "{{ remote_tmp_dir }}/obtainpid.sh"
mode: 0755
- name: Copy helper script
copy:
src: obtainpid.sh
dest: "{{ remote_tmp_dir }}/obtainpid.sh"
- name: "Run the fake 'sleep' binary"
command: "sh {{ remote_tmp_dir }}/obtainpid.sh '{{ remote_tmp_dir }}/{{ random_name }}' '{{ remote_tmp_dir }}/obtainpid.txt'"
- name: "Running the copy of 'sleep' binary"
command: "sh {{ remote_tmp_dir }}/obtainpid.sh '{{ remote_tmp_dir }}/{{ random_name }}' '{{ remote_tmp_dir }}/obtainpid.txt'"
async: 100
poll: 0
async: 100
poll: 0
- name: "Wait for one second to make sure that the fake 'sleep' binary has actually been started"
pause:
seconds: 1
- name: "Wait for one second to make sure that the sleep copy has actually been started"
pause:
seconds: 1
- name: "Checking the process IDs (PIDs) of fake 'sleep' binary"
pids:
name: "{{ random_name }}"
register: pids
- name: "Checking the process IDs (PIDs) of sleep binary"
pids:
name: "{{ random_name }}"
register: pids
- name: "Checking that exact non-substring matches are required"
pids:
name: "{{ random_name[0:5] }}"
register: exactpidmatch
- name: "Checking that exact non-substring matches are required"
pids:
name: "{{ random_name[0:5] }}"
register: exactpidmatch
- name: "Checking that patterns can be used with the pattern option"
pids:
pattern: "{{ random_name[0:5] }}"
register: pattern_pid_match
- name: "Checking that patterns can be used with the pattern option"
pids:
pattern: "{{ random_name[0:5] }}"
register: pattern_pid_match
- name: "Checking that case-insensitive patterns can be used with the pattern option"
pids:
pattern: "{{ random_name[0:5] | upper }}"
ignore_case: true
register: caseinsensitive_pattern_pid_match
- name: "Checking that case-insensitive patterns can be used with the pattern option"
pids:
pattern: "{{ random_name[0:5] | upper }}"
ignore_case: true
register: caseinsensitive_pattern_pid_match
- name: "Checking that .* includes test pid"
pids:
pattern: .*
register: match_all
- name: "Checking that .* includes test pid"
pids:
pattern: .*
register: match_all
- name: "Reading pid from the file"
slurp:
src: "{{ remote_tmp_dir }}/obtainpid.txt"
register: newpid
- name: "Reading pid from the file"
slurp:
src: "{{ remote_tmp_dir }}/obtainpid.txt"
register: newpid
- name: "Verify that the Process IDs (PIDs) returned is not empty and also equal to the PIDs obtained in console"
assert:
that:
- "pids.pids | join(' ') == newpid.content | b64decode | trim"
- "pids.pids | length > 0"
- "exactpidmatch.pids == []"
- "pattern_pid_match.pids | join(' ') == newpid.content | b64decode | trim"
- "caseinsensitive_pattern_pid_match.pids | join(' ') == newpid.content | b64decode | trim"
- newpid.content | b64decode | trim | int in match_all.pids
- name: "Verify that the Process IDs (PIDs) returned is not empty and also equal to the PIDs obtained in console"
assert:
that:
- "pids.pids | join(' ') == newpid.content | b64decode | trim"
- "pids.pids | length > 0"
- "exactpidmatch.pids == []"
- "pattern_pid_match.pids | join(' ') == newpid.content | b64decode | trim"
- "caseinsensitive_pattern_pid_match.pids | join(' ') == newpid.content | b64decode | trim"
- newpid.content | b64decode | trim | int in match_all.pids
- name: "Register output of bad input pattern"
pids:
pattern: (unterminated
register: bad_pattern_result
ignore_errors: true
- name: "Register output of bad input pattern"
pids:
pattern: (unterminated
register: bad_pattern_result
ignore_errors: true
- name: "Verify that bad input pattern result is failed"
assert:
that:
- bad_pattern_result is failed
- name: "Verify that bad input pattern result is failed"
assert:
that:
- bad_pattern_result is failed
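Taken together, the asserts above pin down the `pids` matching semantics: `name` requires an exact (non-substring) process-name match, `pattern` is a regex search (optionally case-insensitive via `ignore_case`), `.*` matches every process, and an invalid pattern fails the task. An illustrative re-implementation of those rules over `(pid, name)` pairs (the real module enumerates processes via psutil):

```python
import re

def match_pids(processes, name=None, pattern=None, ignore_case=False):
    """Filter (pid, procname) pairs the way the tests expect `pids` to behave."""
    if name is not None:
        # Exact match only; substrings must not match.
        return [pid for pid, pname in processes if pname == name]
    flags = re.IGNORECASE if ignore_case else 0
    regex = re.compile(pattern, flags)  # raises re.error on a bad pattern
    return [pid for pid, pname in processes if regex.search(pname)]

procs = [(100, "some-random-long-name-1234"), (200, "bash")]
exact = match_pids(procs, name="some-")                         # substring: no match
by_pattern = match_pids(procs, pattern="some-")                 # regex search: match
by_case = match_pids(procs, pattern="SOME-", ignore_case=True)  # case-insensitive
everything = match_pids(procs, pattern=".*")                    # matches all
try:
    match_pids(procs, pattern="(unterminated")
    bad_pattern_failed = False
except re.error:
    bad_pattern_failed = True
```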

View File

@@ -21,7 +21,6 @@
- '{{ pkgng_test_outofdate_pkg_tempdir.path }}'
- '--manifest'
- '{{ pkgng_test_outofdate_pkg_tempdir.path }}/MANIFEST'
warn: no
# pkg switched from .txz to .pkg in version 1.17.0
# Might as well look for all valid pkg extensions.

View File

@@ -32,8 +32,6 @@
until: faketime_package_installed is success
- name: Find libfaketime path
shell: '{{ list_pkg_files }} {{ faketime_pkg }} | grep -F libfaketime.so.1'
args:
warn: false
register: libfaketime_path
- when: ansible_service_mgr == 'systemd'
block:

View File

@@ -165,8 +165,6 @@
- name: Reinstall internationalization files
shell: yum -y reinstall glibc-common || yum -y install glibc-common
args:
warn: false
when: locale_present is failed
- name: Generate locale (RedHat)

View File

@@ -102,6 +102,21 @@
src: "{{ alt_sudoers_path }}/my-sudo-rule-5"
register: rule_5_contents
- name: Create rule to runas another user
community.general.sudoers:
name: my-sudo-rule-6
state: present
user: alice
commands: /usr/local/bin/command
runas: bob
sudoers_path: "{{ sudoers_path }}"
register: rule_6
- name: Grab contents of my-sudo-rule-6 (in alternative directory)
ansible.builtin.slurp:
src: "{{ sudoers_path }}/my-sudo-rule-6"
register: rule_6_contents
- name: Revoke rule 1
community.general.sudoers:
@@ -133,6 +148,7 @@
- "rule_3_contents['content'] | b64decode == 'alice ALL= /usr/local/bin/command\n'"
- "rule_4_contents['content'] | b64decode == '%students ALL=NOPASSWD: /usr/local/bin/command\n'"
- "rule_5_contents['content'] | b64decode == 'alice ALL=NOPASSWD: /usr/local/bin/command\n'"
- "rule_6_contents['content'] | b64decode == 'alice ALL=(bob)NOPASSWD: /usr/local/bin/command\n'"
- name: Check stats
ansible.builtin.assert:

Some files were not shown because too many files have changed in this diff.