Compare commits

43 Commits
4.7.0 ... 4.8.0

Author SHA1 Message Date
Felix Fontein
96f609d1f2 Release 4.8.0. 2022-04-26 11:42:20 +02:00
patchback[bot]
03b128aeff Add 'state' parameter for alternatives (#4557) (#4576)
* Add 'activate' parameter for alternatives

Allow alternatives to be installed without being set as the current
selection.

* add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* rename 'activate' -> 'selected'

* rework 'selected' parameter -> 'state'

* handle unsetting of currently selected alternative

* add integration tests for 'state' parameter

* fix linting issues

* fix for Python 2.7 compatibility

* Remove alternatives file.

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 29c49febd9)

Co-authored-by: Tanner Prestegard <tprestegard@users.noreply.github.com>
2022-04-26 06:41:04 +00:00
patchback[bot]
ab9a4cb58a New module alerta_customer (#4554) (#4575)
* first draft of alerta_customer

* Update BOTMETA.yml

* update after review

* fix pagination and state description

* remove whitespace

(cherry picked from commit d7e5e85f3e)

Co-authored-by: CWollinger <CWollinger@web.de>
2022-04-26 08:20:33 +02:00
patchback[bot]
6b21599def New Module: LXD Projects (#4521) (#4573)
* add lxd_project module

* documentation improvement and version_added entry

* improve documentation

* use os.path.expanduser

* exclude from use-argspec-type-path test

* improve documentation

(cherry picked from commit 1d3506490f)

Co-authored-by: Raymond Chang <xrayjemmy@gmail.com>
2022-04-25 22:34:27 +02:00
patchback[bot]
ca93145e76 Parse lxc key from api data for lxc containers (#4555) (#4574)
* Parse lxc key from api data for lxc containers

When configuring containers in the `/etc/pve/lxc/` file, the API
adds a 'lxc' key that caused the plugin to crash as it tried to
split a list on ','.

This commit introduces logic to convert the list of lists in the
returned data to a dict as with the other keys.

```
'lxc': [['lxc.apparmor.profile', 'unconfined'],
	['lxc.cgroup.devices.allow', 'a']]
```

becomes

```
"proxmox_lxc": {
	"apparmor.profile": "unconfined",
	"cap.drop": "",
	"cgroup.devices.allow": "a"
}
```

* Add changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Philippe Pepos Petitclerc <peposp@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 346bfba9c5)

Co-authored-by: Philippe Pépos-Petitclerc <ppepos@users.noreply.github.com>
2022-04-25 22:32:46 +02:00
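The conversion this commit describes — flattening the API's list-of-lists `lxc` key into a dict — can be sketched as follows. The function name is hypothetical, for illustration only; it is not the inventory plugin's actual code.

```python
def parse_lxc_key(lxc_pairs):
    """Convert the list-of-lists 'lxc' key returned by the Proxmox API
    into a flat dict, stripping the leading 'lxc.' prefix from keys."""
    parsed = {}
    for key, value in lxc_pairs:
        # 'lxc.apparmor.profile' -> 'apparmor.profile'
        parsed[key.split("lxc.", 1)[-1]] = value
    return parsed


config = [["lxc.apparmor.profile", "unconfined"],
          ["lxc.cgroup.devices.allow", "a"]]
print(parse_lxc_key(config))
```

The real plugin merges this result with the other container keys (hence the extra entries like `cap.drop` in the commit's example output).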
patchback[bot]
a163ec3afa Command Runner (#4476) (#4572)
* initial commit, passing unit tests

* passing one very silly integration test

* multiple changes:

- updated copyright year
- cmd_runner
  - added fmt_optval
  - created specific exceptions
  - fixed bug in context class where values from module params were not
    being used for resolving cmd arguments
  - changed order of class declaration for readability purpose
- tests
  - minor improvements in integration test code
  - removed some extraneous code in msimple.yml
  - minor improvements in unit tests
  - added few missing cases to unit test

* multiple changes

cmd_runner.py

- renamed InvalidParameterName to MissingArgumentFormat
  - improved exception parameters
- added repr and str to all exceptions
- added unpacking decorator for fmt functions
- CmdRunner
  - improved parameter validation
- _CmdRunnerContext
  - Context runs must now pass named arguments
  - Simplified passing of additional arguments to module.run_command()
  - Provided multiple context variables with info about the run

Integration tests

- rename msimple.py to cmd_echo.py for clarity
- added more test cases

* cmd_runner: env update can be passed to runner

* adding runner context info to output

* added comment on OrderedDict

* wrong variable

* refactored all fmt functions into static methods of a class

Imports should be simpler now, only one object fmt, with attr access to all callables

* added unit tests for CmdRunner

* fixed sanity checks

* fixed mock imports

* added more unit tests for CmdRunner

* terminology consistency

* multiple adjustments:

- remove extraneous imports
- renamed some variables
- added wrapper around arg formatters to handle individual arg ignore_none behaviour

* removed old code commented out in test

* multiple changes:

- ensure fmt functions return list of strings
- renamed fmt parameter from `option` to `args`
- renamed fmt.mapped to fmt.as_map
- simplified fmt.as_map
- added tests for fmt.as_fixed

* more improvements in formats

* fixed sanity

* args_order can be a string (to be split())

and improved integration test

* simplified integration test

* removed overkill str() on values - run_command does that for us

* as_list makes more sense than as_str in that context

* added changelog fragment

* Update plugins/module_utils/cmd_runner.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* adjusted __repr__ output for the exceptions

* added superclass object to classes

* added additional comment on the testcase sample/example

* suggestion from PR

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f5b1b3c6f0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-25 22:26:36 +02:00
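The idea behind this command runner — per-argument formatter functions plus a declared argument order assembling the final argv — can be reduced to a miniature sketch. All names below are hypothetical illustrations, not the actual `cmd_runner` module utils API.

```python
# Named formatters turn a module parameter value into CLI fragments.
def as_bool_flag(flag):
    return lambda value: [flag] if value else []

def as_opt_val(opt):
    return lambda value: [opt, str(value)]

class MiniRunner:
    """Toy command runner: maps parameter names to formatters and
    builds argv in a declared order."""
    def __init__(self, command, arg_formats):
        self.command = [command]
        self.arg_formats = arg_formats

    def build(self, args_order, **params):
        argv = list(self.command)
        # args_order can be a space-separated string, mirroring the
        # "args_order can be a string (to be split())" commit above.
        for name in args_order.split():
            argv += self.arg_formats[name](params[name])
        return argv

runner = MiniRunner("xfconf-query", {
    "channel": as_opt_val("--channel"),
    "create": as_bool_flag("--create"),
})
print(runner.build("channel create", channel="xsettings", create=True))
```

The benefit, as in the real utility, is that each module declares formatting once and every invocation stays consistent.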
patchback[bot]
868a6303be Allow Proxmox Snapshot Restoring (#4377) (#4571)
* Allow restoring of snapshots

* Fix formatting

* Add documentation for new feature

* Revert unrelated reformatting

* Add documentation for snapshot change

* Remove redundant multiple call to status API

* Remove unnecessary indent

* Add documentation for timeout fix

* Update changelog fragment to reflect real changes

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelog fragment to reflect real changes

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add Tests for Snapshot rollback

* Update tests/unit/plugins/modules/cloud/misc/test_proxmox_snap.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/4377-allow-proxmox-snapshot-restoring.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/misc/proxmox_snap.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit dbad1e0f11)

Co-authored-by: Timon Michel <ich.bin@ein.dev>
2022-04-25 06:54:22 +02:00
patchback[bot]
759e82d403 Proxmox inventory: implement API token auth (#4540) (#4570)
* Proxmox inventory: implement api token auth

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix linter errors

* add changelog fragment

* add examples

* fix a typo and break long lines

* Update changelogs/fragments/4540-proxmox-inventory-token-auth.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c8c2636676)

Co-authored-by: Daniel <mail@h3po.de>
2022-04-24 16:06:19 +02:00
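A hedged sketch of what token-based inventory configuration might look like after this change; the option names are an assumption based on PR #4540 and should be verified against the plugin documentation.

```yaml
# proxmox.yml — API token authentication instead of a password.
# Option names are assumptions from PR #4540; check the
# community.general.proxmox inventory plugin docs before use.
plugin: community.general.proxmox
url: https://proxmox.example.com:8006
user: ansible@pve
token_id: inventory
token_secret: "{{ vault_proxmox_token_secret }}"
```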
patchback[bot]
ed0c768aaf Removed 'default=None' in a batch of modules (#4556) (#4568)
* removed default=None

* removed default=None

* removed default=None

* removed default=None

* added changelog fragment

(cherry picked from commit b916cb369b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-24 10:49:45 +02:00
patchback[bot]
e933ed782f Removed 'default=None' in a batch of modules 2 (#4567) (#4569)
* removed default=None

* added changelog fragment

(cherry picked from commit 3b103f905e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-24 10:49:29 +02:00
patchback[bot]
69e5a0dbf1 Fix keycloak realm parameters types (#4526) (#4560)
* Fix keycloak realm parameters types

* Add changelog fragment

* Update changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0620cd2e74)

Co-authored-by: Alexandr <36310479+Vespand@users.noreply.github.com>
2022-04-23 08:49:44 +02:00
patchback[bot]
c4d166d3bc nmcli: Change hairpin default mode (#4334) (#4558)
* nmcli: Deprecate default hairpin mode

Deprecate the default hairpin mode for a bridge.
Plain nmcli/bridge tools default to no, but for some reason ansible defaults to yes.

We deprecate the default value so we can switch to default 'no' in
ansible 6.0.0

* Code review fixes

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix comments

* Update changelogs/fragments/4320-nmcli-hairpin.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/4320-nmcli-hairpin.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 53f6c68026)

Co-authored-by: dupondje <jean-louis@dupond.be>
2022-04-23 08:49:33 +02:00
Felix Fontein
9ae8e544cb Prepare 4.8.0 release. 2022-04-22 23:31:28 +02:00
Felix Fontein
94aef4526d Fix filename. 2022-04-22 23:29:56 +02:00
patchback[bot]
aeece5a107 Add project support for lxd_container and lxd_profile module (#4479) (#4561)
* add project support for lxd modules

* fix lxd_container yaml format error

* add changelog fragment and version_added entry

* fix LXD spelling

* complete lxd_profile example

(cherry picked from commit 552db0d353)

Co-authored-by: Raymond Chang <xrayjemmy@gmail.com>
2022-04-22 22:49:29 +02:00
patchback[bot]
bdc4ee496f Fix import. (#4550) (#4552)
(cherry picked from commit 2f980e89fe)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-21 14:41:25 +02:00
patchback[bot]
5f59ec2d01 Implement constructable support for opennebula inventory plugin: keyed… (#4524) (#4549)
* Implement constructable support for opennebula inventory plugin: keyed_groups, compose, groups

* Fixed templating mock issues in unit tests, corrected some linting errors

* trying to make the linter happy

* Now trying to make python2.7 happy

* Added changelog fragment

* changelog fragment needs pluralization

* Update changelogs/fragments/4524-update-opennebula-inventory-plugin-to-match-documentation.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8e72e98adb)

Co-authored-by: Bill Sanders <billysanders@gmail.com>
2022-04-21 14:03:37 +02:00
patchback[bot]
a25e4f679e Remove distutils from unit tests. (#4545) (#4547)
(cherry picked from commit d9ba598938)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-21 11:28:39 +02:00
patchback[bot]
3876df9052 nmap inventory plugin: Add sudo nmap (#4506) (#4544)
* nmap.py: Add sudo nmap

* Update plugins/inventory/nmap.py

Change description of new plugin option adding version_added

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/inventory/nmap.py

Change boolean values of sudo option in example

Co-authored-by: Felix Fontein <felix@fontein.de>

* Create 4506-sudo-in-nmap-inv-plugin.yaml

* Fix typo in yaml format

* Update changelogs/fragments/4506-sudo-in-nmap-inv-plugin.yaml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update changelogs/fragments/4506-sudo-in-nmap-inv-plugin.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Document default as false.

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 3cce1217db)

Co-authored-by: ottobits <vindemaio@gmail.com>
2022-04-21 10:10:56 +02:00
patchback[bot]
12f2ba251b Add Lowess as maintainer of pritunl module utils. (#4539) (#4542)
(cherry picked from commit 405284b513)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-20 22:00:09 +02:00
patchback[bot]
e43a9b6974 xfconf: added missing value types (#4534) (#4541)
* xfconf: added missing value types

* added changelog fragment

* Update plugins/modules/system/xfconf.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit a2bfb96213)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-20 21:33:05 +02:00
patchback[bot]
9e2cb4363c [pritunl] removed unnecessary data from auth string (#4530) (#4538)
* removed unnecessary data from auth string

* add changelog

Co-authored-by: vadim <vadim>
(cherry picked from commit 51a68517ce)

Co-authored-by: vvatlin <vvvvatlin@gmail.com>
2022-04-20 09:33:57 +02:00
patchback[bot]
b61cb29023 xfconf: improve docs (#4533) (#4536)
(cherry picked from commit 3c6cb547f3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-20 08:43:17 +02:00
patchback[bot]
90d31b9403 remove deprecated branch.unprotect() method in community.general.gitlab_branch (#4496) (#4528)
* remove deprecated branch.unprotect method

* add changelog fragment

* Update changelogs/fragments/4496-remove-deprecated-method-in-gitlab-branch-module.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit a8abb1a5bf)

Co-authored-by: York Wong <eth2net@gmail.com>
2022-04-19 20:04:51 +02:00
patchback[bot]
4d22d0790d Correctly handle exception when no VM name returned by proxmox (#4508) (#4529)
(cherry picked from commit 8076f16aa9)

Co-authored-by: Marcin <stolarek.marcin@gmail.com>
2022-04-19 20:04:43 +02:00
patchback[bot]
bffe4c2a3b Bump version numbers for deprecation and removal since we didn't deprecate this in 4.0.0. (#4515) (#4519)
(cherry picked from commit 9e537d4a6b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-16 21:46:03 +02:00
patchback[bot]
dfdb0a6fe6 CI: remove FreeBSD 12.0 and 12.2, re-enable pkgng tests (#4511) (#4513)
* Remove FreeBSD 12.0 and 12.2 from CI.

* Revert "Temporarily disable the pkgng tests. (#4493)"

This reverts commit 5ecac692de.

(cherry picked from commit 26cebb9c30)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-16 12:44:57 +02:00
patchback[bot]
dd04e11094 Remove no longer true statement. (#4505) (#4510)
(cherry picked from commit efbf02f284)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-15 15:53:15 +02:00
patchback[bot]
5b029c66c5 Terraform init -upgrade flag (#4455) (#4502)
* Adds optional `-upgrade` flag to terraform init.

This allows Terraform to install provider dependencies into an existing project when the provider constraints change.

* fix transposed documentation keys

* Add integration tests for terraform init

* Revert to validate_certs: yes for general public testing

* skip integration tests on irrelevant platforms

* skip legacy Python versions from CI tests

* add changelog fragment

* Update plugins/modules/cloud/misc/terraform.py

Adds version_added metadata to the new module option.

Co-authored-by: Felix Fontein <felix@fontein.de>

* Change terraform_arch constant to Ansible fact mapping

* correct var typo, clarify task purpose

* Squashed some logic bugs, added override for local Terraform

If `existing_terraform_path` is provided, the playbook will not download Terraform or check its version.

I also tested this on a local system with Terraform installed, and squashed some bugs related to using an existing binary.

* revert to previous test behavior for TF install

* readability cleanup

* Update plugins/modules/cloud/misc/terraform.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e4a25beedc)

Co-authored-by: Kamil Markowicz <geekifier@users.noreply.github.com>
2022-04-13 19:22:08 +02:00
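The flag handling this PR adds can be sketched minimally: when an upgrade toggle is set, `-upgrade` is appended to the `terraform init` invocation so changed provider constraints can be satisfied. The helper below is illustrative, not the module's actual implementation.

```python
def build_init_command(terraform_bin, upgrade=False, backend_config=None):
    """Assemble a ``terraform init`` argv, appending -upgrade when the
    upgrade toggle (illustrative stand-in for the module option) is set."""
    cmd = [terraform_bin, "init", "-input=false", "-no-color"]
    if upgrade:
        # lets init install providers matching new version constraints
        cmd.append("-upgrade")
    for key, value in (backend_config or {}).items():
        cmd.append("-backend-config={0}={1}".format(key, value))
    return cmd


print(build_init_command("terraform", upgrade=True))
```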
patchback[bot]
760843b9e5 pacman: Fix removing locally installed packages (#4464) (#4504)
* pacman: Fix removing locally installed packages

Without this, using `absent` state for a locally installed package (for example from AUR, or from a package that was dropped from repositories) would report that the package is already removed, despite it remaining installed

* Undo unwanted whitespace removal

* Add changelog fragment

* Update changelogs/fragments/4464-pacman-fix-local-remove.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add test.

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3c515dd221)

Co-authored-by: Martin <spleefer90@gmail.com>
2022-04-13 19:21:46 +02:00
patchback[bot]
19ba15a783 gitlab: Use all=True in most list() calls (#4491) (#4503)
If `all=True` is not set, then by default only 20 records will be
returned when calling `list()`. Use `all=True` so that all records
will be returned.

For the `list()` calls where we do not want to retrieve all entries,
use `all=False` to show explicitly that we don't want all of
the entries.

Fixes: #3729
Fixes: #4460
(cherry picked from commit fe4bbc5de3)

Co-authored-by: John Villalovos <john@sodarock.com>
2022-04-13 13:43:21 +02:00
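The pagination pitfall this commit fixes can be illustrated with a stand-in object: python-gitlab's `list()` returns only the first page of records (20 by default) unless `all=True` is passed. `FakeManager` below mimics, but is not, a python-gitlab manager.

```python
class FakeManager:
    """Stand-in for a python-gitlab list manager, illustrating why
    lookups silently missed records beyond the first page."""
    PER_PAGE = 20

    def __init__(self, records):
        self._records = records

    def list(self, all=False):
        # default behavior: first page only, like the real client
        return self._records if all else self._records[: self.PER_PAGE]


projects = FakeManager(["proj-%d" % i for i in range(45)])
print(len(projects.list()))          # truncated to one page
print(len(projects.list(all=True)))  # every record
```

With only the first page returned, a project or user beyond record 20 was simply never found, which is the failure modes in #3729 and #4460.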
patchback[bot]
70a3dae965 dnsmadeeasy: only get monitor if it is not null api response (#4459) (#4500)
* Only get monitor if it is not null api response

* Add changelog fragment

* Update changelogs/fragments/4459-only-get-monitor-if-it-is-not-null-api-response.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/dnsmadeeasy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: drevai <revai.dominik@gravityrd.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 06675034fe)

Co-authored-by: drevai753 <86595897+drevai753@users.noreply.github.com>
2022-04-13 11:16:39 +00:00
patchback[bot]
26d5409a87 Implement btrfs resize support (#4465) (#4498)
* Implement btrfs resize support

* Add changelog fragment for btrfs resize support

Co-authored-by: Fabian Klemp <fabian.klemp@frequentis.com>
(cherry picked from commit 8ccc4d1fbb)

Co-authored-by: elara-leitstellentechnik <elara-leitstellentechnik@users.noreply.github.com>
2022-04-13 11:16:27 +00:00
patchback[bot]
2f3a7a981d Temporarily disable the pkgng tests. (#4493) (#4495)
(cherry picked from commit 5ecac692de)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-11 20:33:00 +02:00
patchback[bot]
6a74c46e1c Redfish: Added IndicatorLED commands to the Systems category (#4458) (#4494)
* Redfish: Added IndicatorLED commands to the Systems category

Signed-off-by: Mike Raineri <michael.raineri@dell.com>

* Method call typo fix

Signed-off-by: Mike Raineri <michael.raineri@dell.com>

* Update 4084-add-redfish-system-indicator-led.yml

* Backwards compatibility suggestion

Signed-off-by: Mike Raineri <michael.raineri@dell.com>
(cherry picked from commit a9125c02e7)

Co-authored-by: Mike Raineri <michael.raineri@dell.com>
2022-04-11 20:22:58 +02:00
patchback[bot]
bec382df87 add support for datadog monitors of type event-v2 (#4457) (#4490)
* add support for datadog monitors of type event-v2

See https://docs.datadoghq.com/events/guides/migrating_to_new_events_features/

* add changelog fragment for PR

* typos

* add link to PR

* minor_feature, not bugfix

* add to description when we added event-v2 type

* Update changelogs/fragments/4457-support-datadog-monitors-type event-v2.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6edc176143)

Co-authored-by: ermeaney <ermeaney@gmail.com>
2022-04-11 08:01:41 +02:00
patchback[bot]
78f69224be modules/xbps: fix error message (#4438) (#4489)
The previous error message was not giving the full or even correct
information to the user.

(cherry picked from commit d3adde4739)

Co-authored-by: Cameron Nemo <CameronNemo@users.noreply.github.com>
2022-04-11 08:01:32 +02:00
patchback[bot]
34682addb8 seport: minor refactor (#4471) (#4485)
* seport: minor refactor

* added changelog fragment

* Update plugins/modules/system/seport.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/system/seport.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7e6a2453d0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-04-10 18:48:44 +02:00
patchback[bot]
2c106d66a4 Switch from antsibull to antsibull-docs. (#4480) (#4483)
(cherry picked from commit aa27f2152e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-10 11:08:50 +02:00
patchback[bot]
9c4fd63a4d Deprecate want_proxmox_nodes_ansible_host option's default value. (#4466) (#4478)
(cherry picked from commit 865d7ac698)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-04-10 08:59:25 +02:00
patchback[bot]
d04c18ffce Add discord integration tests (#4463) (#4477)
* add discord integration tests

* fix: var name in readme

(cherry picked from commit aa045d2655)

Co-authored-by: CWollinger <CWollinger@web.de>
2022-04-10 08:59:16 +02:00
patchback[bot]
41fe6663d9 Fix documentation for sudoers module (#4469) (#4474)
* Fix documentation for sudoers module

* Update plugins/modules/system/sudoers.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit fa65b9d1f0)

Co-authored-by: Ulf Tigerstedt <tigerstedt@iki.fi>
2022-04-10 08:41:23 +02:00
Felix Fontein
9f8612f34e Next expected release is 4.7.0. 2022-04-05 16:49:15 +02:00
102 changed files with 3583 additions and 277 deletions


@@ -280,8 +280,8 @@ stages:
         test: rhel/7.9
       - name: RHEL 8.3
         test: rhel/8.3
-      - name: FreeBSD 12.2
-        test: freebsd/12.2
+      #- name: FreeBSD 12.2
+      #  test: freebsd/12.2
       groups:
       - 1
       - 2
@@ -312,8 +312,8 @@ stages:
         test: rhel/8.2
       - name: RHEL 7.8
         test: rhel/7.8
-      - name: FreeBSD 12.0
-        test: freebsd/12.0
+      #- name: FreeBSD 12.0
+      #  test: freebsd/12.0
       groups:
       - 1
       - 2

.github/BOTMETA.yml

@@ -260,6 +260,8 @@ files:
   $module_utils/module_helper.py:
     maintainers: russoz
     labels: module_helper
+  $module_utils/net_tools/pritunl/:
+    maintainers: Lowess
   $module_utils/oracle/oci_utils.py:
     maintainers: $team_oracle
     labels: cloud
@@ -310,6 +312,8 @@ files:
     ignore: hnakamur
   $modules/cloud/lxd/lxd_profile.py:
     maintainers: conloos
+  $modules/cloud/lxd/lxd_project.py:
+    maintainers: we10710aa
   $modules/cloud/memset/:
     maintainers: glitchcrab
   $modules/cloud/misc/cloud_init_data_facts.py:
@@ -558,6 +562,8 @@ files:
     maintainers: phumpal
     labels: airbrake_deployment
     ignore: bpennypacker
+  $modules/monitoring/alerta_customer.py:
+    maintainers: cwollinger
   $modules/monitoring/bigpanda.py:
     maintainers: hkariti
   $modules/monitoring/circonus_annotation.py:


@@ -6,6 +6,92 @@ Community General Release Notes
This changelog describes changes after version 3.0.0.
v4.8.0
======
Release Summary
---------------
Regular feature and bugfix release. Please note that this is the last minor 4.x.0 release. Further releases with major version 4 will be bugfix releases 4.8.y.
Minor Changes
-------------
- alternatives - add ``state`` parameter, which provides control over whether the alternative should be set as the active selection for its alternatives group (https://github.com/ansible-collections/community.general/issues/4543, https://github.com/ansible-collections/community.general/pull/4557).
- atomic_container - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- clc_alert_policy - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_loadbalancer - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_server - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- cmd_runner module util - reusable command runner with consistent argument formatting and sensible defaults (https://github.com/ansible-collections/community.general/pull/4476).
- datadog_monitor - support new datadog event monitor of type `event-v2 alert` (https://github.com/ansible-collections/community.general/pull/4457)
- filesystem - add support for resizing btrfs (https://github.com/ansible-collections/community.general/issues/4465).
- lxd_container - adds ``project`` option to allow selecting project for LXD instance (https://github.com/ansible-collections/community.general/pull/4479).
- lxd_profile - adds ``project`` option to allow selecting project for LXD profile (https://github.com/ansible-collections/community.general/pull/4479).
- nmap inventory plugin - add ``sudo`` option in plugin in order to execute ``sudo nmap`` so that ``nmap`` runs with elevated privileges (https://github.com/ansible-collections/community.general/pull/4506).
- nomad_job - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- nomad_job_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_device - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_sshkey - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_volume - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- profitbricks - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox inventory plugin - add token authentication as an alternative to username/password (https://github.com/ansible-collections/community.general/pull/4540).
- proxmox inventory plugin - parse LXC configs returned by the proxmox API (https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_snap - add restore snapshot option (https://github.com/ansible-collections/community.general/pull/4377).
- proxmox_snap - fixed timeout value to correctly reflect time in seconds. The timeout was off by one second (https://github.com/ansible-collections/community.general/pull/4377).
- redfish_command - add ``IndicatorLedOn``, ``IndicatorLedOff``, and ``IndicatorLedBlink`` commands to the Systems category for controlling system LEDs (https://github.com/ansible-collections/community.general/issues/4084).
- seport - minor refactoring (https://github.com/ansible-collections/community.general/pull/4471).
- smartos_image_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- terraform - adds ``terraform_upgrade`` parameter which allows ``terraform init`` to satisfy new provider constraints in an existing Terraform project (https://github.com/ansible-collections/community.general/issues/4333).
- udm_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- udm_share - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- vmadm - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_app - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_db - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- xfconf - added missing value types ``char``, ``uchar``, ``int64`` and ``uint64`` (https://github.com/ansible-collections/community.general/pull/4534).
Deprecated Features
-------------------
- nmcli - deprecate default hairpin mode for a bridge. This is so we can change it to ``false`` in community.general 7.0.0, as this is also the default in ``nmcli`` (https://github.com/ansible-collections/community.general/pull/4334).
- proxmox inventory plugin - the current default ``true`` of the ``want_proxmox_nodes_ansible_host`` option has been deprecated. The default will change to ``false`` in community.general 6.0.0. To keep the current behavior, explicitly set ``want_proxmox_nodes_ansible_host`` to ``true`` in your inventory configuration. We suggest to already switch to the new behavior by explicitly setting it to ``false``, and by using ``compose:`` to set ``ansible_host`` to the correct value. See the examples in the plugin documentation for details (https://github.com/ansible-collections/community.general/pull/4466).
Bugfixes
--------
- dnsmadeeasy - fix failure on deleting DNS entries when API response does not contain monitor value (https://github.com/ansible-collections/community.general/issues/3620).
- gitlab_branch - remove deprecated and unnecessary branch ``unprotect`` method (https://github.com/ansible-collections/community.general/pull/4496).
- gitlab_group - improve searching for projects inside group on deletion (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_group_members - handle more than 20 groups when finding a group (https://github.com/ansible-collections/community.general/pull/4491, https://github.com/ansible-collections/community.general/issues/4460, https://github.com/ansible-collections/community.general/issues/3729).
- gitlab_hook - handle more than 20 hooks when finding a hook (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_project - handle more than 20 namespaces when finding a namespace (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_project_members - handle more than 20 projects and users when finding a project resp. user (https://github.com/ansible-collections/community.general/pull/4491).
- gitlab_user - handle more than 20 users and SSH keys when finding a user resp. SSH key (https://github.com/ansible-collections/community.general/pull/4491).
- keycloak - fix parameters types for ``defaultDefaultClientScopes`` and ``defaultOptionalClientScopes`` from list of dictionaries to list of strings (https://github.com/ansible-collections/community.general/pull/4526).
- opennebula inventory plugin - complete the implementation of ``constructable`` for opennebula inventory plugin. Now ``keyed_groups``, ``compose``, ``groups`` actually work (https://github.com/ansible-collections/community.general/issues/4497).
- pacman - fixed bug where ``absent`` state did not work for locally installed packages (https://github.com/ansible-collections/community.general/pull/4464).
- pritunl - fixed bug where the pritunl API calls added unneeded data to the ``auth_string`` parameter (https://github.com/ansible-collections/community.general/issues/4527).
- proxmox inventory plugin - fix error when parsing container with LXC configs (https://github.com/ansible-collections/community.general/issues/4472, https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_kvm - fix a bug where getting the state of a VM without a name would fail (https://github.com/ansible-collections/community.general/pull/4508).
- xbps - fix error message that is reported when installing packages fails (https://github.com/ansible-collections/community.general/pull/4438).
New Modules
-----------
Cloud
~~~~~
lxd
^^^
- lxd_project - Manage LXD projects
Monitoring
~~~~~~~~~~
- alerta_customer - Manage customers in Alerta
v4.7.0
======


@@ -1641,3 +1641,144 @@ releases:
- 4422-warn-user-if-incorrect-SDK-version-is-installed.yaml
- 4429-keycloak-client-add-always-display-in-console.yml
release_date: '2022-04-05'
4.8.0:
changes:
bugfixes:
- dnsmadeeasy - fix failure on deleting DNS entries when API response does not
contain monitor value (https://github.com/ansible-collections/community.general/issues/3620).
- git_branch - remove deprecated and unnecessary branch ``unprotect`` method
(https://github.com/ansible-collections/community.general/pull/4496).
- 'gitlab_group - improve searching for projects inside group on deletion (https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_group_members - handle more than 20 groups when finding a group (https://github.com/ansible-collections/community.general/pull/4491,
https://github.com/ansible-collections/community.general/issues/4460, https://github.com/ansible-collections/community.general/issues/3729).
'
- 'gitlab_hook - handle more than 20 hooks when finding a hook (https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_project - handle more than 20 namespaces when finding a namespace
(https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_project_members - handle more than 20 projects and users when finding
a project resp. user (https://github.com/ansible-collections/community.general/pull/4491).
'
- 'gitlab_user - handle more than 20 users and SSH keys when finding a user
resp. SSH key (https://github.com/ansible-collections/community.general/pull/4491).
'
- keycloak - fix parameters types for ``defaultDefaultClientScopes`` and ``defaultOptionalClientScopes``
from list of dictionaries to list of strings (https://github.com/ansible-collections/community.general/pull/4526).
- opennebula inventory plugin - complete the implementation of ``constructable``
for opennebula inventory plugin. Now ``keyed_groups``, ``compose``, ``groups``
actually work (https://github.com/ansible-collections/community.general/issues/4497).
- pacman - fixed bug where ``absent`` state did not work for locally installed
packages (https://github.com/ansible-collections/community.general/pull/4464).
- pritunl - fixed a bug where the pritunl API client added unneeded data to the
``auth_string`` parameter (https://github.com/ansible-collections/community.general/issues/4527).
- proxmox inventory plugin - fix error when parsing container with LXC configs
(https://github.com/ansible-collections/community.general/issues/4472, https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_kvm - fix a bug where getting the state of a VM without a name would
fail (https://github.com/ansible-collections/community.general/pull/4508).
- xbps - fix error message that is reported when installing packages fails (https://github.com/ansible-collections/community.general/pull/4438).
deprecated_features:
- nmcli - deprecate default hairpin mode for a bridge. This is so we can change
it to ``false`` in community.general 7.0.0, as this is also the default in
``nmcli`` (https://github.com/ansible-collections/community.general/pull/4334).
- proxmox inventory plugin - the current default ``true`` of the ``want_proxmox_nodes_ansible_host``
option has been deprecated. The default will change to ``false`` in community.general
6.0.0. To keep the current behavior, explicitly set ``want_proxmox_nodes_ansible_host``
to ``true`` in your inventory configuration. We suggest switching
to the new behavior by explicitly setting it to ``false``, and by using ``compose:``
to set ``ansible_host`` to the correct value. See the examples in the plugin
documentation for details (https://github.com/ansible-collections/community.general/pull/4466).
minor_changes:
- alternatives - add ``state`` parameter, which provides control over whether
the alternative should be set as the active selection for its alternatives
group (https://github.com/ansible-collections/community.general/issues/4543,
https://github.com/ansible-collections/community.general/pull/4557).
- atomic_container - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- clc_alert_policy - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_loadbalancer - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- clc_server - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- cmd_runner module util - reusable command runner with consistent argument
formatting and sensible defaults (https://github.com/ansible-collections/community.general/pull/4476).
- datadog_monitor - support new datadog event monitor of type ``event-v2 alert``
(https://github.com/ansible-collections/community.general/pull/4457).
- filesystem - add support for resizing btrfs (https://github.com/ansible-collections/community.general/issues/4465).
- lxd_container - adds ``project`` option to allow selecting project for LXD
instance (https://github.com/ansible-collections/community.general/pull/4479).
- lxd_profile - adds ``project`` option to allow selecting project for LXD profile
(https://github.com/ansible-collections/community.general/pull/4479).
- nmap inventory plugin - add ``sudo`` option in plugin in order to execute
``sudo nmap`` so that ``nmap`` runs with elevated privileges (https://github.com/ansible-collections/community.general/pull/4506).
- nomad_job - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- nomad_job_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_device - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_sshkey - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- packet_volume - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- profitbricks - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- proxmox inventory plugin - add token authentication as an alternative to username/password
(https://github.com/ansible-collections/community.general/pull/4540).
- proxmox inventory plugin - parse LXC configs returned by the proxmox API (https://github.com/ansible-collections/community.general/pull/4472).
- proxmox_snap - add restore snapshot option (https://github.com/ansible-collections/community.general/pull/4377).
- proxmox_snap - fixed timeout value to correctly reflect time in seconds. The
timeout was off by one second (https://github.com/ansible-collections/community.general/pull/4377).
- redfish_command - add ``IndicatorLedOn``, ``IndicatorLedOff``, and ``IndicatorLedBlink``
commands to the Systems category for controlling system LEDs (https://github.com/ansible-collections/community.general/issues/4084).
- seport - minor refactoring (https://github.com/ansible-collections/community.general/pull/4471).
- smartos_image_info - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- terraform - adds ``terraform_upgrade`` parameter which allows ``terraform
init`` to satisfy new provider constraints in an existing Terraform project
(https://github.com/ansible-collections/community.general/issues/4333).
- udm_group - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- udm_share - minor refactoring (https://github.com/ansible-collections/community.general/pull/4556).
- vmadm - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_app - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- webfaction_db - minor refactoring (https://github.com/ansible-collections/community.general/pull/4567).
- xfconf - added missing value types ``char``, ``uchar``, ``int64`` and ``uint64``
(https://github.com/ansible-collections/community.general/pull/4534).
release_summary: Regular feature and bugfix release. Please note that this is
the last minor 4.x.0 release. Further releases with major version 4 will be
bugfix releases 4.8.y.
fragments:
- 4.8.0.yml
- 4084-add-redfish-system-indicator-led.yml
- 4320-nmcli-hairpin.yml
- 4377-allow-proxmox-snapshot-restoring.yml
- 4438-fix-error-message.yaml
- 4455-terraform-provider-upgrade.yml
- 4457-support-datadog-monitors-type-event-v2.yaml
- 4459-only-get-monitor-if-it-is-not-null-api-response.yaml
- 4464-pacman-fix-local-remove.yaml
- 4465-btrfs-resize.yml
- 4466-proxmox-ansible_host-deprecation.yml
- 4471-seport-refactor.yaml
- 4476-cmd_runner.yml
- 4479-add-project-support-for-lxd_container-and-lxd_profile.yml
- 4491-specify_all_in_list_calls.yaml
- 4492-proxmox_kvm_fix_vm_without_name.yaml
- 4496-remove-deprecated-method-in-gitlab-branch-module.yml
- 4506-sudo-in-nmap-inv-plugin.yaml
- 4524-update-opennebula-inventory-plugin-to-match-documentation.yaml
- 4526-keycloak-realm-types.yaml
- 4530-fix-unauthorized-pritunl-request.yaml
- 4534-xfconf-added-value-types.yaml
- 4540-proxmox-inventory-token-auth.yml
- 4555-proxmox-lxc-key.yml
- 4556-remove-default-none-1.yml
- 4557-alternatives-add-state-parameter.yml
- 4567-remove-default-none-2.yml
modules:
- description: Manage customers in Alerta
name: alerta_customer
namespace: monitoring
- description: Manage LXD projects
name: lxd_project
namespace: cloud.lxd
release_date: '2022-04-26'
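The new ``state`` parameter for the alternatives module described in the changelog above could be used from a playbook roughly as follows. This is a hypothetical task sketch; the choice name ``present`` (install without making the alternative the active selection) is assumed from the changelog entry and PR description, not verified against the module docs:

```yaml
# Hypothetical task: register vim as an 'editor' alternative without
# activating it ('present' is assumed to install without selecting).
- name: Add vim to the editor alternatives group without selecting it
  community.general.alternatives:
    name: editor
    path: /usr/bin/vim.basic
    state: present
```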


@@ -1,6 +1,6 @@
namespace: community
name: general
version: 4.7.0
version: 4.8.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -21,6 +21,11 @@ DOCUMENTATION = '''
description: token that ensures this is a source file for the 'nmap' plugin.
required: True
choices: ['nmap', 'community.general.nmap']
sudo:
description: Set to C(true) to execute a C(sudo nmap) plugin scan.
version_added: 4.8.0
default: false
type: boolean
address:
description: Network IP or range of IPs to scan, you can use a simple range (10.2.2.15-25) or CIDR notation.
required: True
@@ -49,6 +54,13 @@ EXAMPLES = '''
plugin: community.general.nmap
strict: False
address: 192.168.0.0/24
# A sudo nmap scan to make full use of nmap's scanning capabilities.
plugin: community.general.nmap
sudo: true
strict: False
address: 192.168.0.0/24
'''
import os
@@ -135,6 +147,10 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
if not user_cache_setting or cache_needs_update:
# setup command
cmd = [self._nmap]
if self._options['sudo']:
cmd.insert(0, 'sudo')
if not self._options['ports']:
cmd.append('-sP')


@@ -206,28 +206,40 @@ class InventoryModule(BaseInventoryPlugin, Constructable):
def _populate(self):
hostname_preference = self.get_option('hostname')
group_by_labels = self.get_option('group_by_labels')
strict = self.get_option('strict')
# Add a top group 'one'
self.inventory.add_group(group='all')
filter_by_label = self.get_option('filter_by_label')
for server in self._retrieve_servers(filter_by_label):
servers = self._retrieve_servers(filter_by_label)
for server in servers:
hostname = server['name']
# check for labels
if group_by_labels and server['LABELS']:
for label in server['LABELS']:
self.inventory.add_group(group=label)
self.inventory.add_host(host=server['name'], group=label)
self.inventory.add_host(host=hostname, group=label)
self.inventory.add_host(host=server['name'], group='all')
self.inventory.add_host(host=hostname, group='all')
for attribute, value in server.items():
self.inventory.set_variable(server['name'], attribute, value)
self.inventory.set_variable(hostname, attribute, value)
if hostname_preference != 'name':
self.inventory.set_variable(server['name'], 'ansible_host', server[hostname_preference])
self.inventory.set_variable(hostname, 'ansible_host', server[hostname_preference])
if server.get('SSH_PORT'):
self.inventory.set_variable(server['name'], 'ansible_port', server['SSH_PORT'])
self.inventory.set_variable(hostname, 'ansible_port', server['SSH_PORT'])
# handle constructable implementation: get composed variables if any
self._set_composite_vars(self.get_option('compose'), server, hostname, strict=strict)
# groups based on jinja conditionals get added to specific groups
self._add_host_to_composed_groups(self.get_option('groups'), server, hostname, strict=strict)
# groups based on variables associated with them in the inventory
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), server, hostname, strict=strict)
def parse(self, inventory, loader, path, cache=True):
if not HAS_PYONE:
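The constructed options wired up in ``_populate()`` above (``compose``, ``groups``, ``keyed_groups``) can then be used from an inventory source. A hypothetical configuration sketch, using only attributes visible in the diff (``name``, ``LABELS``, ``SSH_PORT``); other attribute names would depend on the server data:

```yaml
# one.yml — hypothetical inventory source exercising the constructed options.
plugin: community.general.opennebula
group_by_labels: true
strict: false
compose:
  # build a custom variable from attributes the plugin sets per host
  display_name: name ~ ':' ~ (SSH_PORT | default(22) | string)
keyed_groups:
  # one group per label, prefixed with 'label'
  - key: LABELS
    prefix: label
groups:
  # conditional group membership via a Jinja expression
  has_custom_ssh_port: SSH_PORT is defined
```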


@@ -3,6 +3,7 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
@@ -52,11 +53,32 @@ DOCUMENTATION = '''
- Proxmox authentication password.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_PASSWORD) will be used instead.
- Since community.general 4.7.0 you can also use templating to specify the value of the I(password).
required: yes
- If you do not specify a password, you must set I(token_id) and I(token_secret) instead.
type: str
env:
- name: PROXMOX_PASSWORD
version_added: 2.0.0
token_id:
description:
- Proxmox authentication token ID.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_TOKEN_ID) will be used instead.
- To use token authentication, you must also specify I(token_secret). If you do not specify I(token_id) and I(token_secret),
you must set a password instead.
- Make sure to grant explicit pve permissions to the token or disable 'privilege separation' to use the user's privileges instead.
version_added: 4.8.0
type: str
env:
- name: PROXMOX_TOKEN_ID
token_secret:
description:
- Proxmox authentication token secret.
- If the value is not specified in the inventory configuration, the value of environment variable C(PROXMOX_TOKEN_SECRET) will be used instead.
- To use token authentication, you must also specify I(token_id). If you do not specify I(token_id) and I(token_secret),
you must set a password instead.
version_added: 4.8.0
type: str
env:
- name: PROXMOX_TOKEN_SECRET
validate_certs:
description: Verify SSL certificate if using HTTPS.
type: boolean
@@ -78,7 +100,9 @@ DOCUMENTATION = '''
description:
- Whether to set C(ansible_host) for proxmox nodes.
- When set to C(true) (default), will use the first available interface. This can be different from what you expect.
default: true
- This currently defaults to C(true), but the default is deprecated since community.general 4.8.0.
The default will change to C(false) in community.general 6.0.0. To avoid a deprecation warning, please
set this parameter explicitly.
type: bool
filters:
version_added: 4.6.0
@@ -103,6 +127,25 @@ EXAMPLES = '''
plugin: community.general.proxmox
user: ansible@pve
password: secure
# Note that this can easily give you wrong values as ansible_host. See further below for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
# Instead of logging in with a password, Proxmox supports API token authentication since release 6.2.
plugin: community.general.proxmox
user: ci@pve
token_id: gitlab-1
token_secret: fa256e9c-26ab-41ec-82da-707a2c079829
# The secret can also be a vault string or passed via the environment variable PROXMOX_TOKEN_SECRET.
token_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62353634333163633336343265623632626339313032653563653165313262343931643431656138
6134333736323265656466646539663134306166666237630a653363623262636663333762316136
34616361326263383766366663393837626437316462313332663736623066656237386531663731
3037646432383064630a663165303564623338666131353366373630656661333437393937343331
32643131386134396336623736393634373936356332623632306561356361323737313663633633
6231313333666361656537343562333337323030623732323833
# More complete example demonstrating the use of 'want_facts' and the constructed options
# Note that using facts returned by 'want_facts' in constructed options requires 'want_facts=true'
@@ -123,6 +166,9 @@ groups:
mailservers: "'mail' in (proxmox_tags_parsed|list)"
compose:
ansible_port: 2222
# Note that this can easily give you wrong values as ansible_host. See further below for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
# Using the inventory to allow ansible to connect via the first IP address of the VM / Container
# (Default is connection by name of QEMU/LXC guests)
@@ -134,6 +180,7 @@ user: ansible@pve
password: secure
validate_certs: false
want_facts: true
want_proxmox_nodes_ansible_host: false
compose:
ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ipaddr('address')
my_inv_var_1: "'my_var1_value'"
@@ -146,6 +193,9 @@ plugin: community.general.proxmox
url: "{{ lookup('ansible.builtin.ini', 'url', section='proxmox', file='file.ini') }}"
user: "{{ lookup('ansible.builtin.env','PM_USER') | default('ansible@pve') }}"
password: "{{ lookup('community.general.random_string', base64=True) }}"
# Note that this can easily give you wrong values as ansible_host. See further up for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
'''
@@ -157,6 +207,7 @@ from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.errors import AnsibleError
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.utils.display import Display
from ansible.template import Templar
@@ -210,15 +261,24 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
def _get_auth(self):
credentials = urlencode({'username': self.proxmox_user, 'password': self.proxmox_password, })
a = self._get_session()
ret = a.post('%s/api2/json/access/ticket' % self.proxmox_url, data=credentials)
if self.proxmox_password:
json = ret.json()
credentials = urlencode({'username': self.proxmox_user, 'password': self.proxmox_password, })
self.credentials = {
'ticket': json['data']['ticket'],
'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
}
a = self._get_session()
ret = a.post('%s/api2/json/access/ticket' % self.proxmox_url, data=credentials)
json = ret.json()
self.headers = {
# only required for POST/PUT/DELETE methods, which we are not using currently
# 'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
'Cookie': 'PVEAuthCookie={0}'.format(json['data']['ticket'])
}
else:
self.headers = {'Authorization': 'PVEAPIToken={0}!{1}={2}'.format(self.proxmox_user, self.proxmox_token_id, self.proxmox_token_secret)}
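The token branch above builds a static ``Authorization`` header instead of requesting a ticket. A minimal standalone sketch of that header format (the helper name is invented for illustration; the format string mirrors the code above):

```python
# Proxmox API token authentication uses a single static header:
#   Authorization: PVEAPIToken=<user>@<realm>!<token_id>=<token_secret>
def pve_token_header(user, token_id, token_secret):
    return {'Authorization': 'PVEAPIToken={0}!{1}={2}'.format(user, token_id, token_secret)}

print(pve_token_header('ci@pve', 'gitlab-1', 'fa256e9c-26ab-41ec-82da-707a2c079829'))
# → {'Authorization': 'PVEAPIToken=ci@pve!gitlab-1=fa256e9c-26ab-41ec-82da-707a2c079829'}
```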
def _get_json(self, url, ignore_errors=None):
@@ -230,8 +290,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
data = []
s = self._get_session()
while True:
headers = {'Cookie': 'PVEAuthCookie={0}'.format(self.credentials['ticket'])}
ret = s.get(url, headers=headers)
ret = s.get(url, headers=self.headers)
if ignore_errors and ret.status_code in ignore_errors:
break
ret.raise_for_status()
@@ -348,7 +407,16 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
agent_iface_key = self.to_safe('%s%s' % (key, "_interfaces"))
properties[agent_iface_key] = agent_iface_value
if config not in plaintext_configs and not isinstance(value, int) and all("=" in v for v in value.split(",")):
if config == 'lxc':
out_val = {}
for k, v in value:
if k.startswith('lxc.'):
k = k[len('lxc.'):]
out_val[k] = v
value = out_val
if config not in plaintext_configs and isinstance(value, string_types) \
and all("=" in v for v in value.split(",")):
# split off strings with commas to a dict
# skip over any keys that cannot be processed
try:
@@ -467,6 +535,16 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
nodes_group = self._group('nodes')
self.inventory.add_group(nodes_group)
want_proxmox_nodes_ansible_host = self.get_option("want_proxmox_nodes_ansible_host")
if want_proxmox_nodes_ansible_host is None:
display.deprecated(
'The want_proxmox_nodes_ansible_host option of the community.general.proxmox inventory plugin'
' currently defaults to `true`, but this default has been deprecated and will change to `false`'
' in community.general 6.0.0. To keep the current behavior and remove this deprecation warning,'
' explicitly set `want_proxmox_nodes_ansible_host` to `true` in your inventory configuration',
version='6.0.0', collection_name='community.general')
want_proxmox_nodes_ansible_host = True
# gather vm's on nodes
self._get_auth()
hosts = []
@@ -482,7 +560,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
continue
# get node IP address
if self.get_option("want_proxmox_nodes_ansible_host"):
if want_proxmox_nodes_ansible_host:
ip = self._get_node_ip(node['node'])
self.inventory.set_variable(node['node'], 'ansible_host', ip)
@@ -530,6 +608,19 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
proxmox_password = t.template(variable=proxmox_password, disable_lookups=False)
self.proxmox_password = proxmox_password
proxmox_token_id = self.get_option('token_id')
if t.is_template(proxmox_token_id):
proxmox_token_id = t.template(variable=proxmox_token_id, disable_lookups=False)
self.proxmox_token_id = proxmox_token_id
proxmox_token_secret = self.get_option('token_secret')
if t.is_template(proxmox_token_secret):
proxmox_token_secret = t.template(variable=proxmox_token_secret, disable_lookups=False)
self.proxmox_token_secret = proxmox_token_secret
if proxmox_password is None and (proxmox_token_id is None or proxmox_token_secret is None):
raise AnsibleError('You must specify either a password or both token_id and token_secret.')
self.cache_key = self.get_cache_key(path)
self.use_cache = cache and self.get_option('cache')
self.host_filters = self.get_option('filters')
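The ``lxc`` key handling added earlier in this diff converts the list of ``[key, value]`` pairs that the Proxmox API returns into a dict, stripping the ``lxc.`` prefix so the result can be processed like the other config keys. A standalone sketch of that conversion (the function name is invented for illustration):

```python
# Convert the API's list-of-pairs form of the 'lxc' config key into a dict,
# dropping the 'lxc.' prefix from each key — mirrors the logic in the diff.
def parse_lxc_key(value):
    out_val = {}
    for k, v in value:
        if k.startswith('lxc.'):
            k = k[len('lxc.'):]
        out_val[k] = v
    return out_val

print(parse_lxc_key([['lxc.cgroup2.devices.allow', 'a'], ['lxc.mount.auto', 'proc:rw']]))
# → {'cgroup2.devices.allow': 'a', 'mount.auto': 'proc:rw'}
```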


@@ -0,0 +1,291 @@
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky <russoz@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from functools import wraps
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.six import iteritems
def _ensure_list(value):
return list(value) if is_sequence(value) else [value]
def _process_as_is(rc, out, err):
return rc, out, err
class CmdRunnerException(Exception):
pass
class MissingArgumentFormat(CmdRunnerException):
def __init__(self, arg, args_order, args_formats):
self.args_order = args_order
self.arg = arg
self.args_formats = args_formats
def __repr__(self):
return "MissingArgumentFormat({0!r}, {1!r}, {2!r})".format(
self.arg,
self.args_order,
self.args_formats,
)
def __str__(self):
return "Cannot find format for parameter {0} {1} in: {2}".format(
self.arg,
self.args_order,
self.args_formats,
)
class MissingArgumentValue(CmdRunnerException):
def __init__(self, args_order, arg):
self.args_order = args_order
self.arg = arg
def __repr__(self):
return "MissingArgumentValue({0!r}, {1!r})".format(
self.args_order,
self.arg,
)
def __str__(self):
return "Cannot find value for parameter {0} in {1}".format(
self.arg,
self.args_order,
)
class FormatError(CmdRunnerException):
def __init__(self, name, value, args_formats, exc):
self.name = name
self.value = value
self.args_formats = args_formats
self.exc = exc
super(FormatError, self).__init__()
def __repr__(self):
return "FormatError({0!r}, {1!r}, {2!r}, {3!r})".format(
self.name,
self.value,
self.args_formats,
self.exc,
)
def __str__(self):
return "Failed to format parameter {0} with value {1}: {2}".format(
self.name,
self.value,
self.exc,
)
class _ArgFormat(object):
def __init__(self, func, ignore_none=None):
self.func = func
self.ignore_none = ignore_none
def __call__(self, value, ctx_ignore_none):
ignore_none = self.ignore_none if self.ignore_none is not None else ctx_ignore_none
if value is None and ignore_none:
return []
f = self.func
return [str(x) for x in f(value)]
class _Format(object):
@staticmethod
def as_bool(args):
return _ArgFormat(lambda value: _ensure_list(args) if value else [])
@staticmethod
def as_bool_not(args):
return _ArgFormat(lambda value: [] if value else _ensure_list(args), ignore_none=False)
@staticmethod
def as_optval(arg, ignore_none=None):
return _ArgFormat(lambda value: ["{0}{1}".format(arg, value)], ignore_none=ignore_none)
@staticmethod
def as_opt_val(arg, ignore_none=None):
return _ArgFormat(lambda value: [arg, value], ignore_none=ignore_none)
@staticmethod
def as_opt_eq_val(arg, ignore_none=None):
return _ArgFormat(lambda value: ["{0}={1}".format(arg, value)], ignore_none=ignore_none)
@staticmethod
def as_list(ignore_none=None):
return _ArgFormat(_ensure_list, ignore_none=ignore_none)
@staticmethod
def as_fixed(args):
return _ArgFormat(lambda value: _ensure_list(args), ignore_none=False)
@staticmethod
def as_func(func, ignore_none=None):
return _ArgFormat(func, ignore_none=ignore_none)
@staticmethod
def as_map(_map, default=None, ignore_none=None):
return _ArgFormat(lambda value: _ensure_list(_map.get(value, default)), ignore_none=ignore_none)
@staticmethod
def as_default_type(_type, arg="", ignore_none=None):
fmt = _Format
if _type == "dict":
return fmt.as_func(lambda d: ["--{0}={1}".format(*a) for a in iteritems(d)],
ignore_none=ignore_none)
if _type == "list":
return fmt.as_func(lambda value: ["--{0}".format(x) for x in value], ignore_none=ignore_none)
if _type == "bool":
return fmt.as_bool("--{0}".format(arg))
return fmt.as_opt_val("--{0}".format(arg), ignore_none=ignore_none)
@staticmethod
def unpack_args(func):
@wraps(func)
def wrapper(v):
return func(*v)
return wrapper
@staticmethod
def unpack_kwargs(func):
@wraps(func)
def wrapper(v):
return func(**v)
return wrapper
class CmdRunner(object):
"""
Wrapper for ``AnsibleModule.run_command()``.
It aims to provide a reusable runner with consistent argument formatting
and sensible defaults.
"""
@staticmethod
def _prepare_args_order(order):
return tuple(order) if is_sequence(order) else tuple(order.split())
def __init__(self, module, command, arg_formats=None, default_args_order=(),
check_rc=False, force_lang="C", path_prefix=None, environ_update=None):
self.module = module
self.command = _ensure_list(command)
self.default_args_order = self._prepare_args_order(default_args_order)
if arg_formats is None:
arg_formats = {}
self.arg_formats = dict(arg_formats)
self.check_rc = check_rc
self.force_lang = force_lang
self.path_prefix = path_prefix
if environ_update is None:
environ_update = {}
self.environ_update = environ_update
self.command[0] = module.get_bin_path(command[0], opt_dirs=path_prefix, required=True)
for mod_param_name, spec in iteritems(module.argument_spec):
if mod_param_name not in self.arg_formats:
self.arg_formats[mod_param_name] = _Format.as_default_type(spec['type'], mod_param_name)
def context(self, args_order=None, output_process=None, ignore_value_none=True, **kwargs):
if output_process is None:
output_process = _process_as_is
if args_order is None:
args_order = self.default_args_order
args_order = self._prepare_args_order(args_order)
for p in args_order:
if p not in self.arg_formats:
raise MissingArgumentFormat(p, args_order, tuple(self.arg_formats.keys()))
return _CmdRunnerContext(runner=self,
args_order=args_order,
output_process=output_process,
ignore_value_none=ignore_value_none, **kwargs)
def has_arg_format(self, arg):
return arg in self.arg_formats
class _CmdRunnerContext(object):
def __init__(self, runner, args_order, output_process, ignore_value_none, **kwargs):
self.runner = runner
self.args_order = tuple(args_order)
self.output_process = output_process
self.ignore_value_none = ignore_value_none
self.run_command_args = dict(kwargs)
self.environ_update = runner.environ_update
self.environ_update.update(self.run_command_args.get('environ_update', {}))
if runner.force_lang:
self.environ_update.update({
'LANGUAGE': runner.force_lang,
'LC_ALL': runner.force_lang,
})
self.run_command_args['environ_update'] = self.environ_update
if 'check_rc' not in self.run_command_args:
self.run_command_args['check_rc'] = runner.check_rc
self.check_rc = self.run_command_args['check_rc']
self.cmd = None
self.results_rc = None
self.results_out = None
self.results_err = None
self.results_processed = None
def run(self, **kwargs):
runner = self.runner
module = self.runner.module
self.cmd = list(runner.command)
self.context_run_args = dict(kwargs)
named_args = dict(module.params)
named_args.update(kwargs)
for arg_name in self.args_order:
value = None
try:
value = named_args[arg_name]
self.cmd.extend(runner.arg_formats[arg_name](value, ctx_ignore_none=self.ignore_value_none))
except KeyError:
raise MissingArgumentValue(self.args_order, arg_name)
except Exception as e:
raise FormatError(arg_name, value, runner.arg_formats[arg_name], e)
results = module.run_command(self.cmd, **self.run_command_args)
self.results_rc, self.results_out, self.results_err = results
self.results_processed = self.output_process(*results)
return self.results_processed
@property
def run_info(self):
return dict(
ignore_value_none=self.ignore_value_none,
check_rc=self.check_rc,
environ_update=self.environ_update,
args_order=self.args_order,
cmd=self.cmd,
run_command_args=self.run_command_args,
context_run_args=self.context_run_args,
results_rc=self.results_rc,
results_out=self.results_out,
results_err=self.results_err,
results_processed=self.results_processed,
)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
fmt = _Format()
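As a rough illustration of the formatting idea behind ``CmdRunner``, here is a standalone sketch that re-implements just the ``as_bool`` / ``as_opt_val`` / ``as_opt_eq_val`` semantics without ``AnsibleModule``. The names ``build_cmd`` and the helper functions below are invented for this sketch and are not part of the module util's API:

```python
# Each parameter maps to a callable that renders its value as CLI tokens;
# the runner then concatenates tokens in a declared argument order.
def as_bool(flag):
    return lambda value: [flag] if value else []

def as_opt_val(flag):
    return lambda value: [] if value is None else [flag, str(value)]

def as_opt_eq_val(flag):
    return lambda value: [] if value is None else ['{0}={1}'.format(flag, value)]

def build_cmd(binary, arg_formats, args_order, params):
    cmd = [binary]
    for name in args_order:
        # None values render to [] here, akin to ignore_value_none=True
        cmd.extend(arg_formats[name](params.get(name)))
    return cmd

formats = {
    'force': as_bool('--force'),
    'name': as_opt_val('--name'),
    'mode': as_opt_eq_val('--mode'),
}
print(build_cmd('/usr/bin/tool', formats, ['force', 'name', 'mode'],
                {'force': True, 'name': 'demo', 'mode': None}))
# → ['/usr/bin/tool', '--force', '--name', 'demo']
```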


@@ -337,7 +337,6 @@ def pritunl_auth_request(
auth_string = "&".join(
[api_token, auth_timestamp, auth_nonce, method.upper(), path]
+ ([data] if data else [])
)
auth_signature = base64.b64encode(


@@ -732,14 +732,22 @@ class RedfishUtils(object):
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def manage_indicator_led(self, command):
def manage_system_indicator_led(self, command):
return self.manage_indicator_led(command, self.systems_uri)
def manage_chassis_indicator_led(self, command):
return self.manage_indicator_led(command, self.chassis_uri)
def manage_indicator_led(self, command, resource_uri=None):
result = {}
key = 'IndicatorLED'
if resource_uri is None:
resource_uri = self.chassis_uri
payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', "IndicatorLedBlink": 'Blinking'}
result = {}
response = self.get_request(self.root_uri + self.chassis_uri)
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
@@ -749,7 +757,7 @@ class RedfishUtils(object):
if command in payloads.keys():
payload = {'IndicatorLED': payloads[command]}
response = self.patch_request(self.root_uri + self.chassis_uri, payload)
response = self.patch_request(self.root_uri + resource_uri, payload)
if response['ret'] is False:
return response
else:
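The refactored ``manage_indicator_led`` above keys its PATCH payload off the command name. A standalone sketch of that mapping (the function name is invented for illustration; the dict mirrors the ``payloads`` in the diff):

```python
# Map each Ansible-facing command to the Redfish IndicatorLED value that
# gets PATCHed to the chassis or system resource.
PAYLOADS = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', 'IndicatorLedBlink': 'Blinking'}

def indicator_led_payload(command):
    if command not in PAYLOADS:
        raise ValueError('Invalid command: %s' % command)
    return {'IndicatorLED': PAYLOADS[command]}

print(indicator_led_payload('IndicatorLedBlink'))
# → {'IndicatorLED': 'Blinking'}
```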


@@ -0,0 +1 @@
./monitoring/alerta_customer.py


@@ -182,10 +182,10 @@ def core(module):
def main():
module = AnsibleModule(
argument_spec=dict(
mode=dict(default=None, choices=['user', 'system']),
mode=dict(choices=['user', 'system']),
name=dict(required=True),
image=dict(required=True),
rootfs=dict(default=None),
rootfs=dict(),
state=dict(default='latest', choices=['present', 'absent', 'latest', 'rollback']),
backend=dict(required=True, choices=['docker', 'ostree']),
values=dict(type='list', default=[], elements='str'),


@@ -228,8 +228,7 @@ class ClcAlertPolicy:
choices=[
'cpu',
'memory',
'disk'],
default=None),
'disk']),
duration=dict(type='str'),
threshold=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])


@@ -297,9 +297,9 @@ class ClcGroup(object):
"""
argument_spec = dict(
name=dict(required=True),
description=dict(default=None),
parent=dict(default=None),
location=dict(default=None),
description=dict(),
parent=dict(),
location=dict(),
state=dict(default='present', choices=['present', 'absent']),
wait=dict(type='bool', default=True))


@@ -865,7 +865,7 @@ class ClcLoadBalancer:
"""
argument_spec = dict(
name=dict(required=True),
description=dict(default=None),
description=dict(),
location=dict(required=True),
alias=dict(required=True),
port=dict(choices=[80, 443]),


@@ -567,31 +567,31 @@ class ClcServer:
template=dict(),
group=dict(default='Default Group'),
network_id=dict(),
location=dict(default=None),
location=dict(),
cpu=dict(default=1, type='int'),
memory=dict(default=1, type='int'),
alias=dict(default=None),
password=dict(default=None, no_log=True),
ip_address=dict(default=None),
alias=dict(),
password=dict(no_log=True),
ip_address=dict(),
storage_type=dict(
default='standard',
choices=[
'standard',
'hyperscale']),
type=dict(default='standard', choices=['standard', 'hyperscale', 'bareMetal']),
primary_dns=dict(default=None),
secondary_dns=dict(default=None),
primary_dns=dict(),
secondary_dns=dict(),
additional_disks=dict(type='list', default=[], elements='dict'),
custom_fields=dict(type='list', default=[], elements='dict'),
ttl=dict(default=None),
ttl=dict(),
managed_os=dict(type='bool', default=False),
description=dict(default=None),
source_server_password=dict(default=None, no_log=True),
cpu_autoscale_policy_id=dict(default=None),
anti_affinity_policy_id=dict(default=None),
anti_affinity_policy_name=dict(default=None),
alert_policy_id=dict(default=None),
alert_policy_name=dict(default=None),
description=dict(),
source_server_password=dict(no_log=True),
cpu_autoscale_policy_id=dict(),
anti_affinity_policy_id=dict(),
anti_affinity_policy_name=dict(),
alert_policy_id=dict(),
alert_policy_name=dict(),
packages=dict(type='list', default=[], elements='dict'),
state=dict(
default='present',
@@ -601,7 +601,7 @@ class ClcServer:
'started',
'stopped']),
count=dict(type='int', default=1),
exact_count=dict(type='int', default=None),
exact_count=dict(type='int'),
count_group=dict(),
server_ids=dict(type='list', default=[], elements='str'),
add_public_ip=dict(type='bool', default=False),
@@ -612,14 +612,13 @@ class ClcServer:
'UDP',
'ICMP']),
public_ip_ports=dict(type='list', default=[], elements='dict'),
configuration_id=dict(default=None),
os_type=dict(default=None,
choices=[
'redHat6_64Bit',
'centOS6_64Bit',
'windows2012R2Standard_64Bit',
'ubuntu14_64Bit'
]),
configuration_id=dict(),
os_type=dict(choices=[
'redHat6_64Bit',
'centOS6_64Bit',
'windows2012R2Standard_64Bit',
'ubuntu14_64Bit'
]),
wait=dict(type='bool', default=True))
mutually_exclusive = [


@@ -21,6 +21,13 @@ options:
- Name of an instance.
type: str
required: true
project:
description:
- 'Project of an instance.
See U(https://github.com/lxc/lxd/blob/master/doc/projects.md).'
required: false
type: str
version_added: 4.8.0
architecture:
description:
- 'The architecture for the instance (for example C(x86_64) or C(i686)).
@@ -248,6 +255,26 @@ EXAMPLES = '''
wait_for_ipv4_addresses: true
timeout: 600
# An example for creating a container in a project other than the default
- hosts: localhost
connection: local
tasks:
- name: Create a started container in project mytestproject
community.general.lxd_container:
name: mycontainer
project: mytestproject
ignore_volatile_options: true
state: started
source:
protocol: simplestreams
type: image
mode: pull
server: https://images.linuxcontainers.org
alias: ubuntu/20.04/cloud
profiles: ["default"]
wait_for_ipv4_addresses: true
timeout: 600
# An example for deleting a container
- hosts: localhost
connection: local
@@ -412,6 +439,7 @@ class LXDContainerManagement(object):
"""
self.module = module
self.name = self.module.params['name']
self.project = self.module.params['project']
self._build_config()
self.state = self.module.params['state']
@@ -468,16 +496,16 @@ class LXDContainerManagement(object):
self.config[attr] = param_val
def _get_instance_json(self):
return self.client.do(
'GET', '{0}/{1}'.format(self.api_endpoint, self.name),
ok_error_codes=[404]
)
url = '{0}/{1}'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
return self.client.do('GET', url, ok_error_codes=[404])
def _get_instance_state_json(self):
return self.client.do(
'GET', '{0}/{1}/state'.format(self.api_endpoint, self.name),
ok_error_codes=[404]
)
url = '{0}/{1}/state'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
return self.client.do('GET', url, ok_error_codes=[404])
@staticmethod
def _instance_json_to_module_state(resp_json):
@@ -486,18 +514,26 @@ class LXDContainerManagement(object):
return ANSIBLE_LXD_STATES[resp_json['metadata']['status']]
def _change_state(self, action, force_stop=False):
url = '{0}/{1}/state'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
body_json = {'action': action, 'timeout': self.timeout}
if force_stop:
body_json['force'] = True
return self.client.do('PUT', '{0}/{1}/state'.format(self.api_endpoint, self.name), body_json=body_json)
return self.client.do('PUT', url, body_json=body_json)
def _create_instance(self):
url = self.api_endpoint
url_params = dict()
if self.target:
url_params['target'] = self.target
if self.project:
url_params['project'] = self.project
if url_params:
url = '{0}?{1}'.format(url, urlencode(url_params))
config = self.config.copy()
config['name'] = self.name
if self.target:
self.client.do('POST', '{0}?{1}'.format(self.api_endpoint, urlencode(dict(target=self.target))), config, wait_for_container=self.wait_for_container)
else:
self.client.do('POST', self.api_endpoint, config, wait_for_container=self.wait_for_container)
self.client.do('POST', url, config, wait_for_container=self.wait_for_container)
self.actions.append('create')
def _start_instance(self):
@@ -513,7 +549,10 @@ class LXDContainerManagement(object):
self.actions.append('restart')
def _delete_instance(self):
self.client.do('DELETE', '{0}/{1}'.format(self.api_endpoint, self.name))
url = '{0}/{1}'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('DELETE', url)
self.actions.append('delete')
def _freeze_instance(self):
@@ -666,7 +705,10 @@ class LXDContainerManagement(object):
if self._needs_to_change_instance_config('profiles'):
body_json['profiles'] = self.config['profiles']
self.client.do('PUT', '{0}/{1}'.format(self.api_endpoint, self.name), body_json=body_json)
url = '{0}/{1}'.format(self.api_endpoint, self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('PUT', url, body_json=body_json)
self.actions.append('apply_instance_configs')
def run(self):
@@ -715,6 +757,9 @@ def main():
type='str',
required=True
),
project=dict(
type='str',
),
architecture=dict(
type='str',
),
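The lxd_container changes above repeat one pattern: whenever `project` is set, it is appended to the request URL as a query parameter via `urlencode`. A condensed sketch of that URL building, using the stdlib `urllib.parse` (the function name and `api_endpoint`/`suffix` parameters are illustrative stand-ins for the module's own attributes):

```python
from urllib.parse import urlencode

def instance_url(api_endpoint, name, project=None, suffix=''):
    # e.g. /1.0/instances/mycontainer/state?project=mytestproject
    url = '{0}/{1}{2}'.format(api_endpoint, name, suffix)
    if project:
        # Append the LXD project as a query parameter when one is given.
        url = '{0}?{1}'.format(url, urlencode(dict(project=project)))
    return url

print(instance_url('/1.0/instances', 'mycontainer', 'mytestproject', '/state'))
# /1.0/instances/mycontainer/state?project=mytestproject
```

The same shape recurs in `_get_instance_json`, `_change_state`, `_delete_instance`, and the config-apply path; the module inlines it at each call site rather than using a helper.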


@@ -21,6 +21,13 @@ options:
- Name of a profile.
required: true
type: str
project:
description:
- 'Project of a profile.
See U(https://github.com/lxc/lxd/blob/master/doc/projects.md).'
type: str
required: false
version_added: 4.8.0
description:
description:
- Description of the profile.
@@ -129,6 +136,19 @@ EXAMPLES = '''
parent: br0
type: nic
# An example for creating a profile in project mytestproject
- hosts: localhost
connection: local
tasks:
- name: Create a profile
community.general.lxd_profile:
name: testprofile
project: mytestproject
state: present
config: {}
description: test profile in project mytestproject
devices: {}
# An example for creating a profile via http connection
- hosts: localhost
connection: local
@@ -208,6 +228,7 @@ actions:
import os
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.lxd import LXDClient, LXDClientException
from ansible.module_utils.six.moves.urllib.parse import urlencode
# ANSIBLE_LXD_DEFAULT_URL is a default value of the lxd endpoint
ANSIBLE_LXD_DEFAULT_URL = 'unix:/var/lib/lxd/unix.socket'
@@ -232,6 +253,7 @@ class LXDProfileManagement(object):
"""
self.module = module
self.name = self.module.params['name']
self.project = self.module.params['project']
self._build_config()
self.state = self.module.params['state']
self.new_name = self.module.params.get('new_name', None)
@@ -272,10 +294,10 @@ class LXDProfileManagement(object):
self.config[attr] = param_val
def _get_profile_json(self):
return self.client.do(
'GET', '/1.0/profiles/{0}'.format(self.name),
ok_error_codes=[404]
)
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
return self.client.do('GET', url, ok_error_codes=[404])
@staticmethod
def _profile_json_to_module_state(resp_json):
@@ -307,14 +329,20 @@ class LXDProfileManagement(object):
changed=False)
def _create_profile(self):
url = '/1.0/profiles'
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
config = self.config.copy()
config['name'] = self.name
self.client.do('POST', '/1.0/profiles', config)
self.client.do('POST', url, config)
self.actions.append('create')
def _rename_profile(self):
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
config = {'name': self.new_name}
self.client.do('POST', '/1.0/profiles/{0}'.format(self.name), config)
self.client.do('POST', url, config)
self.actions.append('rename')
self.name = self.new_name
@@ -421,11 +449,17 @@ class LXDProfileManagement(object):
config = self._generate_new_config(config)
# upload config to lxd
self.client.do('PUT', '/1.0/profiles/{0}'.format(self.name), config)
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('PUT', url, config)
self.actions.append('apply_profile_configs')
def _delete_profile(self):
self.client.do('DELETE', '/1.0/profiles/{0}'.format(self.name))
url = '/1.0/profiles/{0}'.format(self.name)
if self.project:
url = '{0}?{1}'.format(url, urlencode(dict(project=self.project)))
self.client.do('DELETE', url)
self.actions.append('delete')
def run(self):
@@ -469,6 +503,9 @@ def main():
type='str',
required=True
),
project=dict(
type='str',
),
new_name=dict(
type='str',
),


@@ -0,0 +1,451 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: lxd_project
short_description: Manage LXD projects
version_added: 4.8.0
description:
- Management of LXD projects.
author: "Raymond Chang (@we10710aa)"
options:
name:
description:
- Name of the project.
required: true
type: str
description:
description:
- Description of the project.
type: str
config:
description:
- 'The config for the project (for example C({"features.profiles": "true"})).
See U(https://linuxcontainers.org/lxd/docs/master/projects/).'
- If the project already exists and its "config" value in metadata
obtained from
C(GET /1.0/projects/<name>)
U(https://linuxcontainers.org/lxd/docs/master/api/#/projects/project_get)
is different, then this module tries to apply the configurations.
type: dict
new_name:
description:
- A new name of a project.
- If this parameter is specified, a project will be renamed to this name.
See U(https://linuxcontainers.org/lxd/docs/master/api/#/projects/project_post).
required: false
type: str
merge_project:
description:
- Merge the configuration of the present project with the new desired configuration,
instead of replacing it. If the configuration is the same after merging, no change will be made.
required: false
default: false
type: bool
state:
choices:
- present
- absent
description:
- Define the state of a project.
required: false
default: present
type: str
url:
description:
- The Unix domain socket path or the https URL for the LXD server.
required: false
default: unix:/var/lib/lxd/unix.socket
type: str
snap_url:
description:
- The Unix domain socket path when LXD is installed by snap package manager.
required: false
default: unix:/var/snap/lxd/common/lxd/unix.socket
type: str
client_key:
description:
- The client certificate key file path.
- If not specified, it defaults to C($HOME/.config/lxc/client.key).
required: false
aliases: [ key_file ]
type: path
client_cert:
description:
- The client certificate file path.
- If not specified, it defaults to C($HOME/.config/lxc/client.crt).
required: false
aliases: [ cert_file ]
type: path
trust_password:
description:
- The client trusted password.
- 'You need to set this password on the LXD server before
running this module using the following command:
C(lxc config set core.trust_password <some random password>)
See U(https://www.stgraber.org/2016/04/18/lxd-api-direct-interaction/).'
- If I(trust_password) is set, this module sends a request for
authentication before sending any requests.
required: false
type: str
notes:
- Projects must have a unique name. If you attempt to create a project
with a name that already exists in the user's namespace, the module will
simply return as "unchanged".
'''
EXAMPLES = '''
# An example for creating a project
- hosts: localhost
connection: local
tasks:
- name: Create a project
community.general.lxd_project:
name: ansible-test-project
state: present
config: {}
description: my new project
# An example for renaming a project
- hosts: localhost
connection: local
tasks:
- name: Rename ansible-test-project to ansible-test-project-new-name
community.general.lxd_project:
name: ansible-test-project
new_name: ansible-test-project-new-name
state: present
config: {}
description: my new project
'''
RETURN = '''
old_state:
description: The old state of the project.
returned: success
type: str
sample: "absent"
logs:
description: The logs of requests and responses.
returned: when ansible-playbook is invoked with -vvvv.
type: list
elements: dict
contains:
type:
description: Type of actions performed, currently only C(sent request).
type: str
sample: "sent request"
request:
description: HTTP request sent to LXD server.
type: dict
contains:
method:
description: Method of HTTP request.
type: str
sample: "GET"
url:
description: URL path of HTTP request.
type: str
sample: "/1.0/projects/test-project"
json:
description: JSON body of HTTP request.
type: str
sample: "(too long to be placed here)"
timeout:
description: Timeout of HTTP request, C(null) if unset.
type: int
sample: null
response:
description: HTTP response received from LXD server.
type: dict
contains:
json:
description: JSON of HTTP response.
type: str
sample: "(too long to be placed here)"
actions:
description: List of actions performed for the project.
returned: success
type: list
elements: str
sample: ["create"]
'''
from ansible_collections.community.general.plugins.module_utils.lxd import LXDClient, LXDClientException
from ansible.module_utils.basic import AnsibleModule
import os
# ANSIBLE_LXD_DEFAULT_URL is a default value of the lxd endpoint
ANSIBLE_LXD_DEFAULT_URL = 'unix:/var/lib/lxd/unix.socket'
# PROJECTS_STATES is a list for states supported
PROJECTS_STATES = [
'present', 'absent'
]
# CONFIG_PARAMS is a list of config attribute names.
CONFIG_PARAMS = [
'config', 'description'
]
class LXDProjectManagement(object):
def __init__(self, module):
"""Management of LXC projects via Ansible.
:param module: Processed Ansible Module.
:type module: ``object``
"""
self.module = module
self.name = self.module.params['name']
self._build_config()
self.state = self.module.params['state']
self.new_name = self.module.params.get('new_name', None)
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = os.path.expanduser('~/.config/lxc/client.key')
self.cert_file = self.module.params.get('client_cert')
if self.cert_file is None:
self.cert_file = os.path.expanduser('~/.config/lxc/client.crt')
self.debug = self.module._verbosity >= 4
try:
if self.module.params['url'] != ANSIBLE_LXD_DEFAULT_URL:
self.url = self.module.params['url']
elif os.path.exists(self.module.params['snap_url'].replace('unix:', '')):
self.url = self.module.params['snap_url']
else:
self.url = self.module.params['url']
except Exception as e:
self.module.fail_json(msg=e.msg)
try:
self.client = LXDClient(
self.url, key_file=self.key_file, cert_file=self.cert_file,
debug=self.debug
)
except LXDClientException as e:
self.module.fail_json(msg=e.msg)
self.trust_password = self.module.params.get('trust_password', None)
self.actions = []
def _build_config(self):
self.config = {}
for attr in CONFIG_PARAMS:
param_val = self.module.params.get(attr, None)
if param_val is not None:
self.config[attr] = param_val
def _get_project_json(self):
return self.client.do(
'GET', '/1.0/projects/{0}'.format(self.name),
ok_error_codes=[404]
)
@staticmethod
def _project_json_to_module_state(resp_json):
if resp_json['type'] == 'error':
return 'absent'
return 'present'
def _update_project(self):
if self.state == 'present':
if self.old_state == 'absent':
if self.new_name is None:
self._create_project()
else:
self.module.fail_json(
msg='new_name must not be set when the project does not exist and the state is present',
changed=False)
else:
if self.new_name is not None and self.new_name != self.name:
self._rename_project()
if self._needs_to_apply_project_configs():
self._apply_project_configs()
elif self.state == 'absent':
if self.old_state == 'present':
if self.new_name is None:
self._delete_project()
else:
self.module.fail_json(
msg='new_name must not be set when the project exists and the specified state is absent',
changed=False)
def _create_project(self):
config = self.config.copy()
config['name'] = self.name
self.client.do('POST', '/1.0/projects', config)
self.actions.append('create')
def _rename_project(self):
config = {'name': self.new_name}
self.client.do('POST', '/1.0/projects/{0}'.format(self.name), config)
self.actions.append('rename')
self.name = self.new_name
def _needs_to_change_project_config(self, key):
if key not in self.config:
return False
old_configs = self.old_project_json['metadata'].get(key, None)
return self.config[key] != old_configs
def _needs_to_apply_project_configs(self):
return (
self._needs_to_change_project_config('config') or
self._needs_to_change_project_config('description')
)
def _merge_dicts(self, source, destination):
""" Return a new dict taht merge two dict,
with values in source dict overwrite destination dict
Args:
dict(source): source dict
dict(destination): destination dict
Kwargs:
None
Raises:
None
Returns:
dict(destination): merged dict"""
result = destination.copy()
for key, value in source.items():
if isinstance(value, dict):
# get node or create one
node = result.setdefault(key, {})
result[key] = self._merge_dicts(value, node)
else:
result[key] = value
return result
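A standalone version of this recursive merge, with a usage example showing the behavior `merge_project` relies on: values from `source` win, and nested dicts are merged rather than replaced. The sample data is illustrative, not part of the module; note that the merged result must be assigned back for nested keys, since the recursion returns a new copy:

```python
def merge_dicts(source, destination):
    # Non-destructive merge: returns a new dict, source values win.
    result = destination.copy()
    for key, value in source.items():
        if isinstance(value, dict):
            # Get the existing node (or an empty one) and merge into a copy,
            # assigning the merged copy back so it is not discarded.
            node = result.setdefault(key, {})
            result[key] = merge_dicts(value, node)
        else:
            result[key] = value
    return result

old = {'config': {'features.profiles': 'true', 'limits.cpu': '2'}, 'description': 'old'}
new = {'config': {'limits.cpu': '4'}}
merged = merge_dicts(new, old)
print(merged['config'])  # {'features.profiles': 'true', 'limits.cpu': '4'}
```

Neither input dict is mutated, which is what lets `_apply_project_configs` compare the merged result against the old config and skip the API call when nothing changed.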
def _apply_project_configs(self):
""" Selection of the procedure: rebuild or merge
The standard behavior is that all information not contained
in the play is discarded.
If "merge_project" is provides in the play and "True", then existing
configurations from the project and new ones defined are merged.
Args:
None
Kwargs:
None
Raises:
None
Returns:
None"""
old_config = dict()
old_metadata = self.old_project_json['metadata'].copy()
for attr in CONFIG_PARAMS:
old_config[attr] = old_metadata[attr]
if self.module.params['merge_project']:
config = self._merge_dicts(self.config, old_config)
if config == old_config:
# no need to call api if merged config is the same
# as old config
return
else:
config = self.config.copy()
# upload config to lxd
self.client.do('PUT', '/1.0/projects/{0}'.format(self.name), config)
self.actions.append('apply_projects_configs')
def _delete_project(self):
self.client.do('DELETE', '/1.0/projects/{0}'.format(self.name))
self.actions.append('delete')
def run(self):
"""Run the main method."""
try:
if self.trust_password is not None:
self.client.authenticate(self.trust_password)
self.old_project_json = self._get_project_json()
self.old_state = self._project_json_to_module_state(
self.old_project_json)
self._update_project()
state_changed = len(self.actions) > 0
result_json = {
'changed': state_changed,
'old_state': self.old_state,
'actions': self.actions
}
if self.client.debug:
result_json['logs'] = self.client.logs
self.module.exit_json(**result_json)
except LXDClientException as e:
state_changed = len(self.actions) > 0
fail_params = {
'msg': e.msg,
'changed': state_changed,
'actions': self.actions
}
if self.client.debug:
fail_params['logs'] = e.kwargs['logs']
self.module.fail_json(**fail_params)
def main():
"""Ansible Main module."""
module = AnsibleModule(
argument_spec=dict(
name=dict(
type='str',
required=True
),
new_name=dict(
type='str',
),
config=dict(
type='dict',
),
description=dict(
type='str',
),
merge_project=dict(
type='bool',
default=False
),
state=dict(
choices=PROJECTS_STATES,
default='present'
),
url=dict(
type='str',
default=ANSIBLE_LXD_DEFAULT_URL
),
snap_url=dict(
type='str',
default='unix:/var/snap/lxd/common/lxd/unix.socket'
),
client_key=dict(
type='path',
aliases=['key_file']
),
client_cert=dict(
type='path',
aliases=['cert_file']
),
trust_password=dict(type='str', no_log=True)
),
supports_check_mode=False,
)
lxd_manage = LXDProjectManagement(module=module)
lxd_manage.run()
if __name__ == '__main__':
main()


@@ -564,7 +564,7 @@ def main():
force=dict(type='bool', default=False),
purge=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted']),
pubkey=dict(type='str', default=None),
pubkey=dict(type='str'),
unprivileged=dict(type='bool', default=False),
description=dict(type='str'),
hookscript=dict(type='str'),


@@ -1397,7 +1397,7 @@ def main():
module.fail_json(msg='VM with name = %s does not exist in cluster' % name)
vm = proxmox.get_vm(vmid)
if not name:
name = vm['name']
name = vm.get('name', '(unnamed)')
current = proxmox.proxmox_api.nodes(vm['node']).qemu(vmid).status.current.get()['status']
status['status'] = current
if status:


@@ -13,7 +13,7 @@ module: proxmox_snap
short_description: Snapshot management of instances in Proxmox VE cluster
version_added: 2.0.0
description:
- Allows you to create/delete snapshots from instances in Proxmox VE cluster.
- Allows you to create/delete/restore snapshots from instances in Proxmox VE cluster.
- Supports both KVM and LXC, OpenVZ has not been tested, as it is no longer supported on Proxmox VE.
options:
hostname:
@@ -28,7 +28,8 @@ options:
state:
description:
- Indicate desired state of the instance snapshot.
choices: ['present', 'absent']
- The C(rollback) value was added in community.general 4.8.0.
choices: ['present', 'absent', 'rollback']
default: present
type: str
force:
@@ -53,7 +54,7 @@ options:
type: int
snapname:
description:
- Name of the snapshot that has to be created.
- Name of the snapshot that has to be created/deleted/restored.
default: 'ansible_snap'
type: str
@@ -84,6 +85,15 @@ EXAMPLES = r'''
vmid: 100
state: absent
snapname: pre-updates
- name: Rollback container snapshot
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: rollback
snapname: pre-updates
'''
RETURN = r'''#'''
@@ -109,15 +119,15 @@ class ProxmoxSnapAnsible(ProxmoxAnsible):
else:
taskid = self.snapshot(vm, vmid).post(snapname=snapname, description=description, vmstate=int(vmstate))
while timeout:
if (self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['status'] == 'stopped' and
self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
status_data = self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()
if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
return True
timeout -= 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for creating VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
def snapshot_remove(self, vm, vmid, timeout, snapname, force):
@@ -126,15 +136,32 @@ class ProxmoxSnapAnsible(ProxmoxAnsible):
taskid = self.snapshot(vm, vmid).delete(snapname, force=int(force))
while timeout:
if (self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['status'] == 'stopped' and
self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
status_data = self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()
if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
return True
timeout -= 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for removing VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
def snapshot_rollback(self, vm, vmid, timeout, snapname):
if self.module.check_mode:
return True
taskid = self.snapshot(vm, vmid)(snapname).post("rollback")
while timeout:
status_data = self.proxmox_api.nodes(vm['node']).tasks(taskid).status.get()
if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
return True
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for rolling back VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
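After this refactor the three proxmox_snap wait loops share one shape: fetch the task status once per iteration instead of twice, succeed on `stopped`/`OK`, and give up when the timeout budget runs out. A hedged sketch of that pattern; `fetch_status` is a stand-in for the proxmoxer `.tasks(taskid).status.get()` call, and the failure branch in the real module calls `fail_json` with the last task log line rather than returning:

```python
import time

def wait_for_task(fetch_status, timeout, poll_interval=1):
    # Poll until the task reports stopped/OK or the budget is exhausted.
    while timeout:
        status_data = fetch_status()  # one API call per iteration
        if status_data['status'] == 'stopped' and status_data['exitstatus'] == 'OK':
            return True
        timeout -= 1
        if timeout == 0:
            return False  # the module would fail_json with the task log here
        time.sleep(poll_interval)
    return False
```

Caching `status_data` halves the API traffic per loop iteration, which is the point of the `status_data = ...get()` lines in the diff above.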
@@ -144,7 +171,7 @@ def main():
vmid=dict(required=False),
hostname=dict(),
timeout=dict(type='int', default=30),
state=dict(default='present', choices=['present', 'absent']),
state=dict(default='present', choices=['present', 'absent', 'rollback']),
description=dict(type='str'),
snapname=dict(type='str', default='ansible_snap'),
force=dict(type='bool', default='no'),
@@ -211,6 +238,25 @@ def main():
except Exception as e:
module.fail_json(msg="Removing snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
elif state == 'rollback':
try:
snap_exist = False
for i in proxmox.snapshot(vm, vmid).get():
if i['name'] == snapname:
snap_exist = True
continue
if not snap_exist:
module.exit_json(changed=False, msg="Snapshot %s does not exist" % snapname)
if proxmox.snapshot_rollback(vm, vmid, timeout, snapname):
if module.check_mode:
module.exit_json(changed=True, msg="Snapshot %s would be rolled back" % snapname)
else:
module.exit_json(changed=True, msg="Snapshot %s rolled back" % snapname)
except Exception as e:
module.fail_json(msg="Rollback of snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
if __name__ == '__main__':


@@ -124,6 +124,12 @@ options:
type: list
elements: path
version_added: '0.2.0'
provider_upgrade:
description:
- Allows Terraform init to upgrade providers to versions specified in the project's version constraints.
default: false
type: bool
version_added: 4.8.0
init_reconfigure:
description:
- Forces backend reconfiguration during init.
@@ -266,7 +272,7 @@ def _state_args(state_file):
return []
def init_plugins(bin_path, project_path, backend_config, backend_config_files, init_reconfigure, plugin_paths):
def init_plugins(bin_path, project_path, backend_config, backend_config_files, init_reconfigure, provider_upgrade, plugin_paths):
command = [bin_path, 'init', '-input=false']
if backend_config:
for key, val in backend_config.items():
@@ -279,6 +285,8 @@ def init_plugins(bin_path, project_path, backend_config, backend_config_files, i
command.extend(['-backend-config', f])
if init_reconfigure:
command.extend(['-reconfigure'])
if provider_upgrade:
command.extend(['-upgrade'])
if plugin_paths:
for plugin_path in plugin_paths:
command.extend(['-plugin-dir', plugin_path])
@@ -384,6 +392,7 @@ def main():
overwrite_init=dict(type='bool', default=True),
check_destroy=dict(type='bool', default=False),
parallelism=dict(type='int'),
provider_upgrade=dict(type='bool', default=False),
),
required_if=[('state', 'planned', ['plan_file'])],
supports_check_mode=True,
@@ -405,6 +414,7 @@ def main():
init_reconfigure = module.params.get('init_reconfigure')
overwrite_init = module.params.get('overwrite_init')
check_destroy = module.params.get('check_destroy')
provider_upgrade = module.params.get('provider_upgrade')
if bin_path is not None:
command = [bin_path]
@@ -422,7 +432,7 @@ def main():
if force_init:
if overwrite_init or not os.path.isfile(os.path.join(project_path, ".terraform", "terraform.tfstate")):
init_plugins(command[0], project_path, backend_config, backend_config_files, init_reconfigure, plugin_paths)
init_plugins(command[0], project_path, backend_config, backend_config_files, init_reconfigure, provider_upgrade, plugin_paths)
workspace_ctx = get_workspace_context(command[0], project_path)
if workspace_ctx["current"] != workspace:
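The `provider_upgrade` feature reduces to one more conditional append in the `init` command assembly. A trimmed sketch of that builder, covering only the flags visible in this diff (the real `init_plugins` also handles backend config and runs the command):

```python
def build_init_command(bin_path, init_reconfigure=False, provider_upgrade=False, plugin_paths=None):
    # Assemble the `terraform init` argv the way init_plugins does.
    command = [bin_path, 'init', '-input=false']
    if init_reconfigure:
        command.extend(['-reconfigure'])
    if provider_upgrade:
        # New in 4.8.0: let init upgrade providers within version constraints.
        command.extend(['-upgrade'])
    for plugin_path in plugin_paths or []:
        command.extend(['-plugin-dir', plugin_path])
    return command

print(build_init_command('terraform', provider_upgrade=True))
# ['terraform', 'init', '-input=false', '-upgrade']
```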


@@ -630,7 +630,7 @@ def main():
plan=dict(),
project_id=dict(required=True),
state=dict(choices=ALLOWED_STATES, default='present'),
user_data=dict(default=None),
user_data=dict(),
wait_for_public_IPv=dict(type='int', choices=[4, 6]),
wait_timeout=dict(type='int', default=900),
ipxe_script_url=dict(default=''),


@@ -226,11 +226,11 @@ def main():
state=dict(choices=['present', 'absent'], default='present'),
auth_token=dict(default=os.environ.get(PACKET_API_TOKEN_ENV_VAR),
no_log=True),
label=dict(type='str', aliases=['name'], default=None),
id=dict(type='str', default=None),
fingerprint=dict(type='str', default=None),
key=dict(type='str', default=None, no_log=True),
key_file=dict(type='path', default=None),
label=dict(type='str', aliases=['name']),
id=dict(type='str'),
fingerprint=dict(type='str'),
key=dict(type='str', no_log=True),
key_file=dict(type='path'),
),
mutually_exclusive=[
('label', 'id'),


@@ -263,9 +263,9 @@ def act_on_volume(target_state, module, packet_conn):
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str', default=None),
description=dict(type="str", default=None),
name=dict(type='str', default=None),
id=dict(type='str'),
description=dict(type="str"),
name=dict(type='str'),
state=dict(choices=VOLUME_STATES, default="present"),
auth_token=dict(
type='str',
@@ -277,7 +277,7 @@ def main():
facility=dict(type="str"),
size=dict(type="int"),
locked=dict(type="bool", default=False),
snapshot_policy=dict(type='dict', default=None),
snapshot_policy=dict(type='dict'),
billing_cycle=dict(type='str', choices=BILLING, default="hourly"),
),
supports_check_mode=True,


@@ -583,7 +583,7 @@ def main():
default='AMD_OPTERON'),
volume_size=dict(type='int', default=10),
disk_type=dict(choices=['HDD', 'SSD'], default='HDD'),
image_password=dict(default=None, no_log=True),
image_password=dict(no_log=True),
ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
bus=dict(choices=['VIRTIO', 'IDE'], default='VIRTIO'),
lan=dict(type='int', default=1),


@@ -95,7 +95,7 @@ class ImageFacts(object):
def main():
module = AnsibleModule(
argument_spec=dict(
filters=dict(default=None),
filters=dict(),
),
supports_check_mode=True,
)


@@ -684,7 +684,7 @@ def main():
choices=['present', 'running', 'absent', 'deleted', 'stopped', 'created', 'restarted', 'rebooted']
),
name=dict(
default=None, type='str',
type='str',
aliases=['alias']
),
brand=dict(
@@ -709,7 +709,7 @@ def main():
# Add our 'simple' options to options dict.
for type in properties:
for p in properties[type]:
option = dict(default=None, type=type)
option = dict(type=type)
options[p] = option
module = AnsibleModule(


@@ -95,8 +95,7 @@ def main():
argument_spec=dict(
name=dict(required=True,
type='str'),
description=dict(default=None,
type='str'),
description=dict(type='str'),
position=dict(default='',
type='str'),
ou=dict(default='',


@@ -354,12 +354,10 @@ def main():
default='0'),
group=dict(type='str',
default='0'),
path=dict(type='path',
default=None),
path=dict(type='path'),
directorymode=dict(type='str',
default='00755'),
host=dict(type='str',
default=None),
host=dict(type='str'),
root_squash=dict(type='bool',
default=True),
subtree_checking=dict(type='bool',
@@ -369,8 +367,7 @@ def main():
writeable=dict(type='bool',
default=True),
sambaBlockSize=dict(type='str',
aliases=['samba_block_size'],
default=None),
aliases=['samba_block_size']),
sambaBlockingLocks=dict(type='bool',
aliases=['samba_blocking_locks'],
default=True),
@@ -408,17 +405,14 @@ def main():
aliases=['samba_force_directory_security_mode'],
default=False),
sambaForceGroup=dict(type='str',
aliases=['samba_force_group'],
default=None),
aliases=['samba_force_group']),
sambaForceSecurityMode=dict(type='bool',
aliases=['samba_force_security_mode'],
default=False),
sambaForceUser=dict(type='str',
aliases=['samba_force_user'],
default=None),
aliases=['samba_force_user']),
sambaHideFiles=dict(type='str',
aliases=['samba_hide_files'],
default=None),
aliases=['samba_hide_files']),
sambaHideUnreadable=dict(type='bool',
aliases=['samba_hide_unreadable'],
default=False),
@@ -438,8 +432,7 @@ def main():
aliases=['samba_inherit_permissions'],
default=False),
sambaInvalidUsers=dict(type='str',
aliases=['samba_invalid_users'],
default=None),
aliases=['samba_invalid_users']),
sambaLevel2Oplocks=dict(type='bool',
aliases=['samba_level_2_oplocks'],
default=True),
@@ -450,8 +443,7 @@ def main():
aliases=['samba_msdfs_root'],
default=False),
sambaName=dict(type='str',
aliases=['samba_name'],
default=None),
aliases=['samba_name']),
sambaNtAclSupport=dict(type='bool',
aliases=['samba_nt_acl_support'],
default=True),
@@ -459,11 +451,9 @@ def main():
aliases=['samba_oplocks'],
default=True),
sambaPostexec=dict(type='str',
aliases=['samba_postexec'],
default=None),
aliases=['samba_postexec']),
sambaPreexec=dict(type='str',
aliases=['samba_preexec'],
default=None),
aliases=['samba_preexec']),
sambaPublic=dict(type='bool',
aliases=['samba_public'],
default=False),
@@ -474,14 +464,11 @@ def main():
aliases=['samba_strict_locking'],
default='Auto'),
sambaVFSObjects=dict(type='str',
aliases=['samba_vfs_objects'],
default=None),
aliases=['samba_vfs_objects']),
sambaValidUsers=dict(type='str',
aliases=['samba_valid_users'],
default=None),
aliases=['samba_valid_users']),
sambaWriteList=dict(type='str',
aliases=['samba_write_list'],
default=None),
aliases=['samba_write_list']),
sambaWriteable=dict(type='bool',
aliases=['samba_writeable'],
default=True),

View File

@@ -110,14 +110,14 @@ def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
state=dict(choices=['present', 'absent'], default='present'),
type=dict(required=True),
autostart=dict(required=False, type='bool', default=False),
extra_info=dict(required=False, default=""),
port_open=dict(required=False, type='bool', default=False),
autostart=dict(type='bool', default=False),
extra_info=dict(default=""),
port_open=dict(type='bool', default=False),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
machine=dict(required=False, default=None),
machine=dict(),
),
supports_check_mode=True
)

View File

@@ -101,13 +101,13 @@ def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
state=dict(choices=['present', 'absent'], default='present'),
# You can specify an IP address or hostname.
type=dict(required=True, choices=['mysql', 'postgresql']),
password=dict(required=False, default=None, no_log=True),
password=dict(no_log=True),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
machine=dict(required=False, default=None),
machine=dict(),
),
supports_check_mode=True
)

View File

@@ -102,14 +102,14 @@ def run():
use_ssl=dict(type='bool', default=True),
timeout=dict(type='int', default=5),
validate_certs=dict(type='bool', default=True),
client_cert=dict(type='path', default=None),
client_key=dict(type='path', default=None),
namespace=dict(type='str', default=None),
name=dict(type='str', default=None),
client_cert=dict(type='path'),
client_key=dict(type='path'),
namespace=dict(type='str'),
name=dict(type='str'),
content_format=dict(choices=['hcl', 'json'], default='hcl'),
content=dict(type='str', default=None),
content=dict(type='str'),
force_start=dict(type='bool', default=False),
token=dict(type='str', default=None, no_log=True)
token=dict(type='str', no_log=True)
),
supports_check_mode=True,
mutually_exclusive=[

View File

@@ -287,11 +287,11 @@ def run():
use_ssl=dict(type='bool', default=True),
timeout=dict(type='int', default=5),
validate_certs=dict(type='bool', default=True),
client_cert=dict(type='path', default=None),
client_key=dict(type='path', default=None),
namespace=dict(type='str', default=None),
name=dict(type='str', default=None),
token=dict(type='str', default=None, no_log=True)
client_cert=dict(type='path'),
client_key=dict(type='path'),
namespace=dict(type='str'),
name=dict(type='str'),
token=dict(type='str', no_log=True)
),
supports_check_mode=True
)

View File

@@ -156,7 +156,7 @@ options:
aliases:
- defaultDefaultClientScopes
type: list
elements: dict
elements: str
default_groups:
description:
- The realm default groups.
@@ -176,7 +176,7 @@ options:
aliases:
- defaultOptionalClientScopes
type: list
elements: dict
elements: str
default_roles:
description:
- The realm default roles.
@@ -621,10 +621,10 @@ def main():
brute_force_protected=dict(type='bool', aliases=['bruteForceProtected']),
client_authentication_flow=dict(type='str', aliases=['clientAuthenticationFlow']),
client_scope_mappings=dict(type='dict', aliases=['clientScopeMappings']),
default_default_client_scopes=dict(type='list', elements='dict', aliases=['defaultDefaultClientScopes']),
default_default_client_scopes=dict(type='list', elements='str', aliases=['defaultDefaultClientScopes']),
default_groups=dict(type='list', elements='dict', aliases=['defaultGroups']),
default_locale=dict(type='str', aliases=['defaultLocale']),
default_optional_client_scopes=dict(type='list', elements='dict', aliases=['defaultOptionalClientScopes']),
default_optional_client_scopes=dict(type='list', elements='str', aliases=['defaultOptionalClientScopes']),
default_roles=dict(type='list', elements='dict', aliases=['defaultRoles']),
default_signature_algorithm=dict(type='str', aliases=['defaultSignatureAlgorithm']),
direct_grant_flow=dict(type='str', aliases=['directGrantFlow']),

View File
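The keycloak_realm fix above changes `elements` from `dict` to `str` because the client scope lists hold plain scope names, not mappings. A simplified sketch of the element-type check this controls (real Ansible validation also coerces values; this only illustrates the mismatch):

```python
# Sketch of why elements='dict' was wrong for client scope lists:
# users pass plain strings, which the old spec rejected.
def validate_elements(values, elements):
    types = {'str': str, 'dict': dict}
    return all(isinstance(v, types[elements]) for v in values)

scopes = ['profile', 'email']            # what users actually pass
assert validate_elements(scopes, 'str')
assert not validate_elements(scopes, 'dict')  # old spec mismatched this
```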

@@ -0,0 +1 @@
./cloud/lxd/lxd_project.py

View File

@@ -0,0 +1,199 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2022, Christian Wollinger <@cwollinger>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: alerta_customer
short_description: Manage customers in Alerta
version_added: 4.8.0
description:
- Create or delete customers in Alerta with the REST API.
author: Christian Wollinger (@cwollinger)
seealso:
- name: API documentation
description: Documentation for Alerta API
link: https://docs.alerta.io/api/reference.html#customers
options:
customer:
description:
- Name of the customer.
required: true
type: str
match:
description:
- The matching logged in user for the customer.
required: true
type: str
alerta_url:
description:
- The Alerta API endpoint.
required: true
type: str
api_username:
description:
- The username for the API using basic auth.
type: str
api_password:
description:
- The password for the API using basic auth.
type: str
api_key:
description:
- The access token for the API.
type: str
state:
description:
- Whether the customer should exist or not.
- Both I(customer) and I(match) identify a customer that should be added or removed.
type: str
choices: [ absent, present ]
default: present
'''
EXAMPLES = """
- name: Create customer
community.general.alerta_customer:
alerta_url: https://alerta.example.com
api_username: admin@example.com
api_password: password
customer: Developer
match: dev@example.com
- name: Delete customer
community.general.alerta_customer:
alerta_url: https://alerta.example.com
api_username: admin@example.com
api_password: password
customer: Developer
match: dev@example.com
state: absent
"""
RETURN = """
msg:
description:
- Success or failure message.
returned: always
type: str
sample: Customer customer1 created
response:
description:
- The response from the API.
returned: always
type: dict
"""
from ansible.module_utils.urls import fetch_url, basic_auth_header
from ansible.module_utils.basic import AnsibleModule
class AlertaInterface(object):
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.customer = module.params['customer']
self.match = module.params['match']
self.alerta_url = module.params['alerta_url']
self.headers = {"Content-Type": "application/json"}
if module.params.get('api_key', None):
self.headers["Authorization"] = "Key %s" % module.params['api_key']
else:
self.headers["Authorization"] = basic_auth_header(module.params['api_username'], module.params['api_password'])
def send_request(self, url, data=None, method="GET"):
response, info = fetch_url(self.module, url, data=data, headers=self.headers, method=method)
status_code = info["status"]
if status_code == 401:
self.module.fail_json(failed=True, response=info, msg="Unauthorized to request '%s' on '%s'" % (method, url))
elif status_code == 403:
self.module.fail_json(failed=True, response=info, msg="Permission Denied for '%s' on '%s'" % (method, url))
elif status_code == 404:
self.module.fail_json(failed=True, response=info, msg="Not found for request '%s' on '%s'" % (method, url))
elif status_code in (200, 201):
return self.module.from_json(response.read())
self.module.fail_json(failed=True, response=info, msg="Alerta API error with HTTP %d for %s" % (status_code, url))
def get_customers(self):
url = "%s/api/customers" % self.alerta_url
response = self.send_request(url)
pages = response["pages"]
if pages > 1:
for page in range(2, pages + 1):
page_url = url + '?page=' + str(page)
new_results = self.send_request(page_url)
response.update(new_results)
return response
def create_customer(self):
url = "%s/api/customer" % self.alerta_url
payload = {
'customer': self.customer,
'match': self.match,
}
payload = self.module.jsonify(payload)
response = self.send_request(url, payload, 'POST')
return response
def delete_customer(self, id):
url = "%s/api/customer/%s" % (self.alerta_url, id)
response = self.send_request(url, None, 'DELETE')
return response
def find_customer_id(self, customer):
for i in customer['customers']:
if self.customer == i['customer'] and self.match == i['match']:
return i['id']
return None
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(choices=['present', 'absent'], default='present'),
customer=dict(type='str', required=True),
match=dict(type='str', required=True),
alerta_url=dict(type='str', required=True),
api_username=dict(type='str'),
api_password=dict(type='str', no_log=True),
api_key=dict(type='str', no_log=True),
),
required_together=[['api_username', 'api_password']],
mutually_exclusive=[['api_username', 'api_key']],
supports_check_mode=True
)
alerta_iface = AlertaInterface(module)
if alerta_iface.state == 'present':
response = alerta_iface.get_customers()
if alerta_iface.find_customer_id(response):
module.exit_json(changed=False, response=response, msg="Customer %s already exists" % alerta_iface.customer)
else:
if not module.check_mode:
response = alerta_iface.create_customer()
module.exit_json(changed=True, response=response, msg="Customer %s created" % alerta_iface.customer)
else:
response = alerta_iface.get_customers()
id = alerta_iface.find_customer_id(response)
if id:
if not module.check_mode:
alerta_iface.delete_customer(id)
module.exit_json(changed=True, response=response, msg="Customer %s with id %s deleted" % (alerta_iface.customer, id))
else:
module.exit_json(changed=False, response=response, msg="Customer %s does not exist" % alerta_iface.customer)
if __name__ == "__main__":
main()

View File

@@ -15,6 +15,7 @@ short_description: Manages Datadog monitors
description:
- Manages monitors within Datadog.
- Options as described on https://docs.datadoghq.com/api/.
- The type C(event-v2) was added in community.general 4.8.0.
author: Sebastian Kornehl (@skornehl)
requirements: [datadog]
options:
@@ -56,6 +57,7 @@ options:
- metric alert
- service check
- event alert
- event-v2 alert
- process alert
- log alert
- query alert
@@ -222,7 +224,7 @@ def main():
api_host=dict(),
app_key=dict(required=True, no_log=True),
state=dict(required=True, choices=['present', 'absent', 'mute', 'unmute']),
type=dict(choices=['metric alert', 'service check', 'event alert', 'process alert',
type=dict(choices=['metric alert', 'service check', 'event alert', 'event-v2 alert', 'process alert',
'log alert', 'query alert', 'trace-analytics alert',
'rum alert', 'composite']),
name=dict(required=True),

View File

@@ -623,7 +623,7 @@ def main():
# Fetch existing monitor if the A record indicates it should exist and build the new monitor
current_monitor = dict()
new_monitor = dict()
if current_record and current_record['type'] == 'A':
if current_record and current_record['type'] == 'A' and current_record.get('monitor'):
current_monitor = DME.getMonitor(current_record['id'])
# Build the new monitor

View File

@@ -374,8 +374,9 @@ options:
description:
- This is only used with 'bridge-slave' - 'hairpin mode' for the slave, which allows frames to be sent back out through the slave the
frame was received on.
- The default value is C(true), but this default is deprecated and will
change to C(false) in community.general 7.0.0.
type: bool
default: yes
runner:
description:
- This is the type of device or network connection that you wish to create for a team.
@@ -1376,7 +1377,8 @@ class Nmcli(object):
self.hellotime = module.params['hellotime']
self.maxage = module.params['maxage']
self.ageingtime = module.params['ageingtime']
self.hairpin = module.params['hairpin']
# hairpin should be back to normal in 7.0.0
self._hairpin = module.params['hairpin']
self.path_cost = module.params['path_cost']
self.mac = module.params['mac']
self.runner = module.params['runner']
@@ -1423,6 +1425,18 @@ class Nmcli(object):
self.edit_commands = []
@property
def hairpin(self):
if self._hairpin is None:
self.module.deprecate(
"Parameter 'hairpin' default value will change from true to false in community.general 7.0.0. "
"Set the value explicitly to supress this warning.",
version='7.0.0', collection_name='community.general',
)
# Should be False in 7.0.0 but then that should be in argument_specs
self._hairpin = True
return self._hairpin
def execute_command(self, cmd, use_unsafe_shell=False, data=None):
if isinstance(cmd, list):
cmd = [to_text(item) for item in cmd]
@@ -2119,7 +2133,7 @@ def main():
hellotime=dict(type='int', default=2),
maxage=dict(type='int', default=20),
ageingtime=dict(type='int', default=300),
hairpin=dict(type='bool', default=True),
hairpin=dict(type='bool'),
path_cost=dict(type='int', default=100),
# team specific vars
runner=dict(type='str', default='roundrobin',

View File
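The `hairpin` change above turns the attribute into a lazy property so the legacy default (C(true)) is preserved while users who relied on it get a deprecation warning. A stripped-down sketch of that pattern (the `Conn` class here is hypothetical, not the module code):

```python
# Deprecation-by-property sketch: keep the legacy default, but warn only
# when the caller actually relied on it (i.e. passed no explicit value).
class Conn(object):
    def __init__(self, hairpin, warn):
        self._hairpin = hairpin
        self._warn = warn

    @property
    def hairpin(self):
        if self._hairpin is None:
            self._warn("hairpin default changes from true to false in 7.0.0")
            self._hairpin = True  # legacy default, to flip in 7.0.0
        return self._hairpin

warnings = []
assert Conn(None, warnings.append).hairpin is True    # default used: warns
assert len(warnings) == 1
assert Conn(False, warnings.append).hairpin is False  # explicit: no warning
assert len(warnings) == 1
```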

@@ -72,8 +72,8 @@ options:
default: false
description:
- Avoid loading any C(.gemrc) file. Ignored for RubyGems prior to 2.5.2.
- "The current default value will be deprecated in community.general 4.0.0: if the value is not explicitly specified, a deprecation message will be shown."
- From community.general 5.0.0 on, the default will be changed to C(true).
- "The current default value will be deprecated in community.general 5.0.0: if the value is not explicitly specified, a deprecation message will be shown."
- From community.general 6.0.0 on, the default will be changed to C(true).
version_added: 3.3.0
env_shebang:
description:

View File

@@ -610,8 +610,9 @@ class Pacman(object):
# Expand group members
for group_member in self.inventory["available_groups"][pkg]:
pkg_list.append(Package(name=group_member, source=group_member))
elif pkg in self.inventory["available_pkgs"]:
# just a regular pkg
elif pkg in self.inventory["available_pkgs"] or pkg in self.inventory["installed_pkgs"]:
# Just a regular pkg, either available in the repositories,
# or locally installed, which we need to know for absent state
pkg_list.append(Package(name=pkg, source=pkg))
else:
# Last resort, call out to pacman to extract the info,

View File
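The pacman fix above widens the lookup so a package that is installed locally but has dropped out of the repositories can still be targeted with C(state=absent). A set-based sketch of the new membership test (package names are illustrative):

```python
# Widened package lookup: consult both the repository index and the list
# of locally installed packages, so absent-state removals of orphaned
# local packages still resolve.
available_pkgs = {'vim', 'git'}
installed_pkgs = {'vim', 'old-local-pkg'}

def resolvable(pkg):
    return pkg in available_pkgs or pkg in installed_pkgs

assert resolvable('old-local-pkg')  # locally installed, not in any repo
assert resolvable('git')            # available in the repos
assert not resolvable('no-such-pkg')
```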

@@ -230,7 +230,9 @@ def install_packages(module, xbps_path, state, packages):
module.params['upgrade_xbps'] = False
install_packages(module, xbps_path, state, packages)
elif rc != 0 and not (state == 'latest' and rc == 17):
module.fail_json(msg="failed to install %s" % (package))
module.fail_json(msg="failed to install %s packages(s)"
% (len(toInstall)),
packages=toInstall)
module.exit_json(changed=True, msg="installed %s package(s)"
% (len(toInstall)),

View File

@@ -318,6 +318,14 @@ EXAMPLES = '''
category: Systems
command: DisableBootOverride
- name: Set system indicator LED to blink using security token for auth
community.general.redfish_command:
category: Systems
command: IndicatorLedBlink
resource_id: 437XR1138R2
baseuri: "{{ baseuri }}"
auth_token: "{{ result.session.token }}"
- name: Add user
community.general.redfish_command:
category: Accounts
@@ -583,7 +591,8 @@ from ansible.module_utils.common.text.converters import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["PowerOn", "PowerForceOff", "PowerForceRestart", "PowerGracefulRestart",
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride"],
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride",
"IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Chassis": ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",
"UpdateUserRole", "UpdateUserPassword", "UpdateUserName",
@@ -754,6 +763,8 @@ def main():
elif command == "DisableBootOverride":
boot_opts['override_enabled'] = 'Disabled'
result = rf_utils.set_boot_override(boot_opts)
elif command.startswith('IndicatorLed'):
result = rf_utils.manage_system_indicator_led(command)
elif category == "Chassis":
result = rf_utils._find_chassis_resource()
@@ -769,7 +780,7 @@ def main():
else:
for command in command_list:
if command in led_commands:
result = rf_utils.manage_indicator_led(command)
result = rf_utils.manage_chassis_indicator_led(command)
elif category == "Sessions":
# execute only if we find SessionService resources

View File

@@ -110,7 +110,6 @@ class GitlabBranch(object):
return self.project.branches.create({'branch': branch, 'ref': ref_branch})
def delete_branch(self, branch):
branch.unprotect()
return branch.delete()

View File

@@ -279,7 +279,7 @@ class GitLabGroup(object):
def delete_group(self):
group = self.group_object
if len(group.projects.list()) >= 1:
if len(group.projects.list(all=False)) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:

View File

@@ -172,13 +172,13 @@ class GitLabGroup(object):
# get user id if the user exists
def get_user_id(self, gitlab_user):
user_exists = self._gitlab.users.list(username=gitlab_user)
user_exists = self._gitlab.users.list(username=gitlab_user, all=True)
if user_exists:
return user_exists[0].id
# get group id if group exists
def get_group_id(self, gitlab_group):
groups = self._gitlab.groups.list(search=gitlab_group)
groups = self._gitlab.groups.list(search=gitlab_group, all=True)
for group in groups:
if group.full_path == gitlab_group:
return group.id

View File

@@ -268,7 +268,7 @@ class GitLabHook(object):
@param hook_url Url to call on event
'''
def find_hook(self, project, hook_url):
hooks = project.hooks.list()
hooks = project.hooks.list(all=True)
for hook in hooks:
if (hook.url == hook_url):
return hook

View File

@@ -494,9 +494,9 @@ def main():
namespace_id = group.id
else:
if username:
namespace = gitlab_instance.namespaces.list(search=username)[0]
namespace = gitlab_instance.namespaces.list(search=username, all=False)[0]
else:
namespace = gitlab_instance.namespaces.list(search=gitlab_instance.user.username)[0]
namespace = gitlab_instance.namespaces.list(search=gitlab_instance.user.username, all=False)[0]
namespace_id = namespace.id
if not namespace_id:

View File

@@ -178,12 +178,12 @@ class GitLabProjectMembers(object):
project_exists = self._gitlab.projects.get(project_name)
return project_exists.id
except gitlab.exceptions.GitlabGetError as e:
project_exists = self._gitlab.projects.list(search=project_name)
project_exists = self._gitlab.projects.list(search=project_name, all=False)
if project_exists:
return project_exists[0].id
def get_user_id(self, gitlab_user):
user_exists = self._gitlab.users.list(username=gitlab_user)
user_exists = self._gitlab.users.list(username=gitlab_user, all=False)
if user_exists:
return user_exists[0].id

View File

@@ -48,7 +48,6 @@ options:
description:
- The password of the user.
- GitLab server enforces minimum password length to 8, set this value with 8 or more characters.
- Required only if C(state) is set to C(present).
type: str
reset_password:
description:
@@ -349,7 +348,7 @@ class GitLabUser(object):
@param sshkey_name Name of the ssh key
'''
def ssh_key_exists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
keyList = map(lambda k: k.title, user.keys.list(all=True))
return sshkey_name in keyList
@@ -519,7 +518,7 @@ class GitLabUser(object):
@param username Username of the user
'''
def find_user(self, username):
users = self._gitlab.users.list(search=username)
users = self._gitlab.users.list(search=username, all=True)
for user in users:
if (user.username == username):
return user

View File
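The recurring `all=True` additions across the GitLab modules above exist because python-gitlab's `list()` returns only the first page of results by default, so lookups for items beyond page one silently came back empty. A paging sketch of the failure mode (plain lists, not the real python-gitlab API):

```python
# Why all=True matters: a default list() call returns only the first page,
# so a match on a later page is silently missed.
def paged_list(items, page=1, per_page=20, fetch_all=False):
    if fetch_all:
        return list(items)
    start = (page - 1) * per_page
    return items[start:start + per_page]

users = ['user%d' % i for i in range(50)]
assert 'user42' not in paged_list(users)              # first page only: miss
assert 'user42' in paged_list(users, fetch_all=True)  # all pages: found
```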

@@ -41,6 +41,16 @@ options:
- The priority of the alternative.
type: int
default: 50
state:
description:
- C(present) - install the alternative (if not already installed), but do
not set it as the currently selected alternative for the group.
- C(selected) - install the alternative (if not already installed), and
set it as the currently selected alternative for the group.
choices: [ present, selected ]
default: selected
type: str
version_added: 4.8.0
requirements: [ update-alternatives ]
'''
@@ -61,6 +71,13 @@ EXAMPLES = r'''
name: java
path: /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
priority: -10
- name: Install Python 3.5 but do not select it
community.general.alternatives:
name: python
path: /usr/bin/python3.5
link: /usr/bin/python
state: present
'''
import os
@@ -70,6 +87,15 @@ import subprocess
from ansible.module_utils.basic import AnsibleModule
class AlternativeState:
PRESENT = "present"
SELECTED = "selected"
@classmethod
def to_list(cls):
return [cls.PRESENT, cls.SELECTED]
def main():
module = AnsibleModule(
@@ -78,6 +104,11 @@ def main():
path=dict(type='path', required=True),
link=dict(type='path'),
priority=dict(type='int', default=50),
state=dict(
type='str',
choices=AlternativeState.to_list(),
default=AlternativeState.SELECTED,
),
),
supports_check_mode=True,
)
@@ -87,6 +118,7 @@ def main():
path = params['path']
link = params['link']
priority = params['priority']
state = params['state']
UPDATE_ALTERNATIVES = module.get_bin_path('update-alternatives', True)
@@ -126,9 +158,20 @@ def main():
link = line.split()[1]
break
changed = False
if current_path != path:
# Check mode: expect a change if this alternative is not already
# installed, or if it is to be set as the current selection.
if module.check_mode:
module.exit_json(changed=True, current_path=current_path)
module.exit_json(
changed=(
path not in all_alternatives or
state == AlternativeState.SELECTED
),
current_path=current_path,
)
try:
# install the requested path if necessary
if path not in all_alternatives:
@@ -141,18 +184,34 @@ def main():
[UPDATE_ALTERNATIVES, '--install', link, name, path, str(priority)],
check_rc=True
)
changed = True
# select the requested path
module.run_command(
[UPDATE_ALTERNATIVES, '--set', name, path],
check_rc=True
)
# set the current selection to this path (if requested)
if state == AlternativeState.SELECTED:
module.run_command(
[UPDATE_ALTERNATIVES, '--set', name, path],
check_rc=True
)
changed = True
module.exit_json(changed=True)
except subprocess.CalledProcessError as cpe:
module.fail_json(msg=str(dir(cpe)))
else:
module.exit_json(changed=False)
elif current_path == path and state == AlternativeState.PRESENT:
# Case where alternative is currently selected, but state is set
# to 'present'. In this case, we set to auto mode.
if module.check_mode:
module.exit_json(changed=True, current_path=current_path)
changed = True
try:
module.run_command(
[UPDATE_ALTERNATIVES, '--auto', name],
check_rc=True,
)
except subprocess.CalledProcessError as cpe:
module.fail_json(msg=str(dir(cpe)))
module.exit_json(changed=changed)
if __name__ == '__main__':

View File
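The control flow added above amounts to a small decision table: install when the path is unknown, `--set` only for C(state=selected), and `--auto` when a currently selected path is demoted to C(present). A toy model of that logic (not the module code; paths are illustrative):

```python
# Toy model of the alternatives 'state' logic: which update-alternatives
# subcommands run for a given current selection, requested path, and state.
def planned_commands(current_path, path, all_alternatives, state):
    cmds = []
    if current_path != path:
        if path not in all_alternatives:
            cmds.append('--install')
        if state == 'selected':
            cmds.append('--set')
    elif state == 'present':
        cmds.append('--auto')  # drop the manual selection, back to auto mode
    return cmds

# New path, make it the selection:
assert planned_commands('/usr/bin/vim', '/usr/bin/nano', [], 'selected') == ['--install', '--set']
# Install without selecting:
assert planned_commands('/usr/bin/vim', '/usr/bin/nano', [], 'present') == ['--install']
# Already selected, but only 'present' requested:
assert planned_commands('/usr/bin/vim', '/usr/bin/vim', ['/usr/bin/vim'], 'present') == ['--auto']
```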

@@ -59,7 +59,7 @@ options:
resizefs:
description:
- If C(yes), if the block device and filesystem size differ, grow the filesystem into the space.
- Supported for C(ext2), C(ext3), C(ext4), C(ext4dev), C(f2fs), C(lvm), C(xfs), C(ufs) and C(vfat) filesystems.
- Supported for C(btrfs), C(ext2), C(ext3), C(ext4), C(ext4dev), C(f2fs), C(lvm), C(xfs), C(ufs) and C(vfat) filesystems.
Attempts to resize other filesystem types will fail.
- XFS will only grow if mounted. Currently, the module is based on commands
from C(util-linux) package to perform operations, so resizing of XFS is
@@ -331,6 +331,10 @@ class Reiserfs(Filesystem):
class Btrfs(Filesystem):
MKFS = 'mkfs.btrfs'
INFO = 'btrfs'
GROW = 'btrfs'
GROW_MAX_SPACE_FLAGS = ['filesystem', 'resize', 'max']
GROW_MOUNTPOINT_ONLY = True
def __init__(self, module):
super(Btrfs, self).__init__(module)
@@ -349,6 +353,19 @@ class Btrfs(Filesystem):
self.MKFS_FORCE_FLAGS = ['-f']
self.module.warn('Unable to identify mkfs.btrfs version (%r, %r)' % (stdout, stderr))
def get_fs_size(self, dev):
"""Return size in bytes of filesystem on device (integer)."""
mountpoint = dev.get_mountpoint()
if not mountpoint:
self.module.fail_json(msg="%s needs to be mounted for %s operations" % (dev, self.fstype))
dummy, stdout, dummy = self.module.run_command([self.module.get_bin_path(self.INFO),
'filesystem', 'usage', '-b', mountpoint], check_rc=True)
for line in stdout.splitlines():
if "Device size" in line:
return int(line.split()[-1])
raise ValueError(stdout)
class Ocfs2(Filesystem):
MKFS = 'mkfs.ocfs2'

View File
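The new C(get_fs_size) above scrapes the byte count from the C(Device size) line of C(btrfs filesystem usage -b) output. A self-contained sketch of that parsing (the sample output below is illustrative, not captured from a real system):

```python
# Parse the device size (in bytes) out of `btrfs filesystem usage -b`
# output, as the new Btrfs.get_fs_size does.
sample = """Overall:
    Device size:              10737418240
    Device allocated:          2172649472
"""

def device_size(stdout):
    for line in stdout.splitlines():
        if "Device size" in line:
            return int(line.split()[-1])
    raise ValueError(stdout)

assert device_size(sample) == 10737418240
```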

@@ -113,7 +113,7 @@ from ansible.module_utils.common.text.converters import to_native
def get_runtime_status(ignore_selinux_state=False):
return True if ignore_selinux_state is True else selinux.is_selinux_enabled()
return ignore_selinux_state or selinux.is_selinux_enabled()
def semanage_port_get_ports(seport, setype, proto):
@@ -161,10 +161,7 @@ def semanage_port_get_type(seport, port, proto):
key = (int(ports[0]), int(ports[1]), proto)
records = seport.get_all()
if key in records:
return records[key]
else:
return None
return records.get(key)
def semanage_port_add(module, ports, proto, setype, do_reload, serange='s0', sestore=''):
@@ -194,19 +191,23 @@ def semanage_port_add(module, ports, proto, setype, do_reload, serange='s0', ses
:rtype: bool
:return: True if the policy was changed, otherwise False
"""
change = False
try:
seport = seobject.portRecords(sestore)
seport.set_reload(do_reload)
change = False
ports_by_type = semanage_port_get_ports(seport, setype, proto)
for port in ports:
if port not in ports_by_type:
change = True
port_type = semanage_port_get_type(seport, port, proto)
if port_type is None and not module.check_mode:
seport.add(port, proto, serange, setype)
elif port_type is not None and not module.check_mode:
seport.modify(port, proto, serange, setype)
if port in ports_by_type:
continue
change = True
if module.check_mode:
continue
port_type = semanage_port_get_type(seport, port, proto)
if port_type is None:
seport.add(port, proto, serange, setype)
else:
seport.modify(port, proto, serange, setype)
except (ValueError, IOError, KeyError, OSError, RuntimeError) as e:
module.fail_json(msg="%s: %s\n" % (e.__class__.__name__, to_native(e)), exception=traceback.format_exc())
@@ -238,10 +239,10 @@ def semanage_port_del(module, ports, proto, setype, do_reload, sestore=''):
:rtype: bool
:return: True if the policy was changed, otherwise False
"""
change = False
try:
seport = seobject.portRecords(sestore)
seport.set_reload(do_reload)
change = False
ports_by_type = semanage_port_get_ports(seport, setype, proto)
for port in ports:
if port in ports_by_type:

View File

@@ -23,6 +23,7 @@ options:
description:
- The commands allowed by the sudoers rule.
- Multiple can be added by passing a list of commands.
- Use C(ALL) for all commands.
type: list
elements: str
group:
@@ -80,7 +81,7 @@ EXAMPLES = '''
state: present
user: bob
runas: alice
commands: ANY
commands: ALL
- name: >-
Allow the monitoring group to run sudo /usr/local/bin/gather-app-metrics

View File

@@ -10,58 +10,75 @@ __metaclass__ = type
DOCUMENTATION = '''
module: xfconf
author:
- "Joseph Benden (@jbenden)"
- "Alexei Znamensky (@russoz)"
- "Joseph Benden (@jbenden)"
- "Alexei Znamensky (@russoz)"
short_description: Edit XFCE4 Configurations
description:
- This module allows for the manipulation of Xfce 4 Configuration with the help of
xfconf-query. Please see the xfconf-query(1) man pages for more details.
seealso:
- name: C(xfconf-query) man page
description: Manual page of the C(xfconf-query) tool at the XFCE documentation site.
link: 'https://docs.xfce.org/xfce/xfconf/xfconf-query'
- name: xfconf - Configuration Storage System
description: XFCE documentation for the Xfconf configuration system.
link: 'https://docs.xfce.org/xfce/xfconf/start'
options:
channel:
description:
- A Xfconf preference channel is a top-level tree key, inside of the
Xfconf repository that corresponds to the location for which all
application properties/keys are stored. See man xfconf-query(1)
required: yes
- A Xfconf preference channel is a top-level tree key, inside of the
Xfconf repository that corresponds to the location for which all
application properties/keys are stored. See man xfconf-query(1)
required: true
type: str
property:
description:
- A Xfce preference key is an element in the Xfconf repository
that corresponds to an application preference. See man xfconf-query(1)
required: yes
- A Xfce preference key is an element in the Xfconf repository
that corresponds to an application preference. See man xfconf-query(1)
required: true
type: str
value:
description:
- Preference properties typically have simple values such as strings,
integers, or lists of strings and integers. This is ignored if the state
is "get". For array mode, use a list of values. See man xfconf-query(1)
- Preference properties typically have simple values such as strings,
integers, or lists of strings and integers. This is ignored if the state
is "get". For array mode, use a list of values. See man xfconf-query(1)
type: list
elements: raw
value_type:
description:
- The type of value being set. This is ignored if the state is "get".
For array mode, use a list of types.
- The type of value being set. This is ignored if the state is "get".
- When providing more than one I(value_type), the length of the list must
be equal to the length of I(value).
- If only one I(value_type) is provided, but I(value) contains more than
one element, that I(value_type) will be applied to all elements of I(value).
- If the I(property) being set is an array and it can possibly have only one
element in the array, then I(force_array=true) must be used to ensure
that C(xfconf-query) will interpret the value as an array rather than a
scalar.
- Support for C(uchar), C(char), C(uint64), and C(int64) has been added in community.general 4.8.0.
type: list
elements: str
choices: [ int, uint, bool, float, double, string ]
choices: [ string, int, double, bool, uint, uchar, char, uint64, int64, float ]
state:
type: str
description:
- The action to take upon the property/value.
- State C(get) is deprecated and will be removed in community.general 5.0.0. Please use the module M(community.general.xfconf_info) instead.
- The action to take upon the property/value.
- State C(get) is deprecated and will be removed in community.general 5.0.0. Please use the module M(community.general.xfconf_info) instead.
choices: [ get, present, absent ]
default: "present"
force_array:
description:
- Force array even if only one element
- Force array even if only one element
type: bool
default: 'no'
aliases: ['array']
version_added: 1.0.0
disable_facts:
description:
- The value C(false) is no longer allowed since community.general 4.0.0.
- This option will be deprecated in a future version, and eventually be removed.
type: bool
default: true
version_added: 2.1.0
@@ -88,7 +105,7 @@ EXAMPLES = """
property: /general/workspace_names
value_type: string
value: ['Main']
force_array: yes
force_array: true
"""
RETURN = '''
@@ -104,27 +121,27 @@ RETURN = '''
sample: "/Xft/DPI"
value_type:
description:
- The type of the value that was changed (C(none) for C(get) and C(reset)
state). Either a single string value or a list of strings for array
types.
- This is a string or a list of strings.
returned: success
type: any
sample: '"int" or ["str", "str", "str"]'
value:
description:
- The value of the preference key after executing the module. Either a
single string value or a list of strings for array types.
- This is a string or a list of strings.
returned: success
type: any
sample: '"192" or ["orange", "yellow", "violet"]'
previous_value:
description:
- The value of the preference key before executing the module (C(none) for
C(get) state). Either a single string value or a list of strings for array
types.
- This is a string or a list of strings.
returned: success
type: any
sample: '"96" or ["red", "blue", "green"]'
@@ -161,15 +178,13 @@ class XFConfProperty(CmdStateModuleHelper):
facts_params = ('property', 'channel', 'value')
module = dict(
argument_spec=dict(
state=dict(default="present",
choices=("present", "get", "absent"),
type='str'),
channel=dict(required=True, type='str'),
property=dict(required=True, type='str'),
value_type=dict(required=False, type='list',
elements='str', choices=('int', 'uint', 'bool', 'float', 'double', 'string')),
value=dict(required=False, type='list', elements='raw'),
force_array=dict(default=False, type='bool', aliases=['array']),
state=dict(type='str', choices=("present", "get", "absent"), default="present"),
channel=dict(type='str', required=True),
property=dict(type='str', required=True),
value_type=dict(type='list', elements='str',
choices=('string', 'int', 'double', 'bool', 'uint', 'uchar', 'char', 'uint64', 'int64', 'float')),
value=dict(type='list', elements='raw'),
force_array=dict(type='bool', default=False, aliases=['array']),
disable_facts=dict(type='bool', default=True),
),
required_if=[('state', 'present', ['value', 'value_type'])],
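The `value_type` broadcasting rule documented above (one type applied to every element of `value`, or a full list matched element-for-element) can be sketched outside the module; this is an illustrative stand-in, not the module's actual implementation:

```python
# Illustrative sketch of the documented value_type broadcasting rule:
# a single value_type applies to every element of value, while a longer
# list must match value element-for-element.
def pair_values_with_types(values, value_types):
    if len(value_types) == 1:
        value_types = value_types * len(values)
    if len(value_types) != len(values):
        raise ValueError("length of value_type must be 1 or match value")
    return list(zip(values, value_types))

print(pair_values_with_types(["orange", "yellow", "violet"], ["string"]))
# [('orange', 'string'), ('yellow', 'string'), ('violet', 'string')]
```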


@@ -0,0 +1,2 @@
shippable/posix/group1
disabled


@@ -0,0 +1,4 @@
alerta_url: http://localhost:8080/
alerta_user: admin@example.com
alerta_password: password
alerta_key: demo-key


@@ -0,0 +1,151 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Create customer (check mode)
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
check_mode: true
register: result
- name: Check result (check mode)
assert:
that:
- result is changed
- name: Create customer
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
register: result
- name: Check customer creation
assert:
that:
- result is changed
- name: Test customer creation idempotency
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
register: result
- name: Check customer creation idempotency
assert:
that:
- result is not changed
- name: Delete customer (check mode)
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
check_mode: true
register: result
- name: Check customer deletion (check mode)
assert:
that:
- result is changed
- name: Delete customer
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
register: result
- name: Check customer deletion
assert:
that:
- result is changed
- name: Test customer deletion idempotency
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
register: result
- name: Check customer deletion idempotency
assert:
that:
- result is not changed
- name: Delete non-existing customer (check mode)
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_username: "{{ alerta_user }}"
api_password: "{{ alerta_password }}"
customer: customer1
match: admin@admin.admin
state: absent
check_mode: true
register: result
- name: Check non-existing customer deletion (check mode)
assert:
that:
- result is not changed
- name: Create customer with api key
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_key: "{{ alerta_key }}"
customer: customer1
match: admin@admin.admin
register: result
- name: Check customer creation with api key
assert:
that:
- result is changed
- name: Delete customer with api key
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_key: "{{ alerta_key }}"
customer: customer1
match: admin@admin.admin
state: absent
register: result
- name: Check customer deletion with api key
assert:
that:
- result is changed
- name: Use wrong api key
alerta_customer:
alerta_url: "{{ alerta_url }}"
api_key: wrong_key
customer: customer1
match: admin@admin.admin
register: result
ignore_errors: true
- name: Check customer creation with api key
assert:
that:
- result is not changed
- result is failed


@@ -49,6 +49,12 @@
# Test that path is checked: alternatives must fail when path is nonexistent
- import_tasks: path_is_checked.yml
# Test operation of the 'state' parameter
- block:
- include_tasks: remove_links.yml
- include_tasks: tests_state.yml
# Cleanup
always:
- include_tasks: remove_links.yml
@@ -62,6 +68,7 @@
path: '/usr/bin/dummy{{ item }}'
state: absent
with_sequence: start=1 end=4
# *Disable tests on Fedora 24*
# Shippable Fedora 24 image provides chkconfig-1.7-2.fc24.x86_64 but not the
# latest available version (chkconfig-1.8-1.fc24.x86_64). update-alternatives


@@ -3,5 +3,6 @@
path: '{{ item }}'
state: absent
with_items:
- "{{ alternatives_dir }}/dummy"
- /etc/alternatives/dummy
- /usr/bin/dummy


@@ -0,0 +1,71 @@
# Add a few dummy alternatives with state = present and make sure that the
# group is in 'auto' mode and the highest priority alternative is selected.
- name: Add some dummy alternatives with state = present
alternatives:
name: dummy
path: "/usr/bin/dummy{{ item.n }}"
link: /usr/bin/dummy
priority: "{{ item.priority }}"
state: present
loop:
- { n: 1, priority: 50 }
- { n: 2, priority: 70 }
- { n: 3, priority: 25 }
- name: Ensure that the link group is in auto mode
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^auto$"'
# Execute current selected 'dummy' and ensure it's the alternative we expect
- name: Execute the current dummy command
shell: dummy
register: cmd
- name: Ensure that the expected command was executed
assert:
that:
- cmd.stdout == "dummy2"
# Add another alternative with state = 'selected' and make sure that
# this change results in the group being set to manual mode, and the
# new alternative being the selected one.
- name: Add another dummy alternative with state = selected
alternatives:
name: dummy
path: /usr/bin/dummy4
link: /usr/bin/dummy
priority: 10
state: selected
- name: Ensure that the link group is in manual mode
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^manual$"'
- name: Execute the current dummy command
shell: dummy
register: cmd
- name: Ensure that the expected command was executed
assert:
that:
- cmd.stdout == "dummy4"
# Set the currently selected alternative to state = 'present' (was previously
# selected), and ensure that this results in the group being set to 'auto'
# mode, and the highest priority alternative is selected.
- name: Set current selected dummy to state = present
alternatives:
name: dummy
path: /usr/bin/dummy4
link: /usr/bin/dummy
state: present
- name: Ensure that the link group is in auto mode
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^auto$"'
- name: Execute the current dummy command
shell: dummy
register: cmd
- name: Ensure that the expected command was executed
assert:
that:
- cmd.stdout == "dummy2"


@@ -0,0 +1 @@
shippable/posix/group2


@@ -0,0 +1,78 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky <russoz@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
DOCUMENTATION = '''
module: cmd_echo
author: "Alexei Znamensky (@russoz)"
short_description: Simple module for testing
description:
- Simple module test description.
options:
command:
description: aaa
type: list
elements: str
required: true
arg_formats:
description: bbb
type: dict
required: true
arg_order:
description: ccc
type: raw
required: true
arg_values:
description: ddd
type: list
required: true
aa:
description: eee
type: raw
'''
EXAMPLES = ""
RETURN = ""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, fmt
def main():
module = AnsibleModule(
argument_spec=dict(
arg_formats=dict(type="dict", default={}),
arg_order=dict(type="raw", required=True),
arg_values=dict(type="dict", default={}),
aa=dict(type="raw"),
),
)
p = module.params
arg_formats = {}
for arg, fmt_spec in p['arg_formats'].items():
func = getattr(fmt, fmt_spec['func'])
args = fmt_spec.get("args", [])
arg_formats[arg] = func(*args)
runner = CmdRunner(module, ['echo', '--'], arg_formats=arg_formats)
info = None
with runner.context(p['arg_order']) as ctx:
result = ctx.run(**p['arg_values'])
info = ctx.run_info
rc, out, err = result
module.exit_json(rc=rc, out=out, err=err, info=info)
if __name__ == '__main__':
main()


@@ -0,0 +1,7 @@
# (c) 2022, Alexei Znamensky
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: parameterized test cmd_echo
ansible.builtin.include_tasks:
file: test_cmd_echo.yml
loop: "{{ cmd_echo_tests }}"


@@ -0,0 +1,13 @@
---
- name: test cmd_echo [{{ item.name }}]
cmd_echo:
arg_formats: "{{ item.arg_formats|default(omit) }}"
arg_order: "{{ item.arg_order }}"
arg_values: "{{ item.arg_values|default(omit) }}"
aa: "{{ item.aa|default(omit) }}"
register: test_result
ignore_errors: "{{ item.expect_error|default(omit) }}"
- name: check results [{{ item.name }}]
assert:
that: "{{ item.assertions }}"


@@ -0,0 +1,84 @@
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
cmd_echo_tests:
- name: set aa and bb value
arg_formats:
aa:
func: as_opt_eq_val
args: [--answer]
bb:
func: as_bool
args: [--bb-here]
arg_order: 'aa bb'
arg_values:
bb: true
aa: 11
assertions:
- test_result.rc == 0
- test_result.out == "-- --answer=11 --bb-here\n"
- test_result.err == ""
- name: default aa value
arg_formats:
aa:
func: as_opt_eq_val
args: [--answer]
bb:
func: as_bool
args: [--bb-here]
arg_order: ['aa', 'bb']
arg_values:
aa: 43
bb: true
assertions:
- test_result.rc == 0
- test_result.out == "-- --answer=43 --bb-here\n"
- test_result.err == ""
- name: implicit aa format
arg_formats:
bb:
func: as_bool
args: [--bb-here]
arg_order: ['aa', 'bb']
arg_values:
bb: true
aa: 1984
assertions:
- test_result.rc == 0
- test_result.out == "-- --aa 1984 --bb-here\n"
- test_result.err == ""
- name: missing bb format
arg_order: ['aa', 'bb']
arg_values:
bb: true
aa: 1984
expect_error: true
assertions:
- test_result is failed
- test_result.rc == 1
- '"out" not in test_result'
- '"err" not in test_result'
- >-
"MissingArgumentFormat: Cannot find format for parameter bb"
in test_result.module_stderr
- name: missing bb value
arg_formats:
bb:
func: as_bool
args: [--bb-here]
arg_order: 'aa bb'
aa: 1984
expect_error: true
assertions:
- test_result is failed
- test_result.rc == 1
- '"out" not in test_result'
- '"err" not in test_result'
- >-
"MissingArgumentValue: Cannot find value for parameter bb"
in test_result.module_stderr
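The `as_opt_eq_val` and `as_bool` formats exercised in these cases can be approximated with simple stand-ins (the real helpers live in community.general's `cmd_runner` module utils; these are assumptions for illustration only):

```python
# Hypothetical stand-ins for the cmd_runner arg formats used above;
# they only mimic the echoed output the test cases assert on.
def as_opt_eq_val(opt):
    return lambda value: ["%s=%s" % (opt, value)]

def as_bool(opt):
    return lambda value: [opt] if value else []

args = as_opt_eq_val("--answer")(11) + as_bool("--bb-here")(True)
line = "-- " + " ".join(args) + "\n"
print(repr(line))  # '-- --answer=11 --bb-here\n', matching the first test case
```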


@@ -0,0 +1,14 @@
The integration tests can be executed locally:
1. Create or use an existing Discord server
2. Open `Server Settings` and navigate to `Integrations` tab
3. Click `Create Webhook` to create a new webhook
4. Click `Copy Webhook URL` and extract the webhook_id + webhook_token
Example: https://discord.com/api/webhooks/`webhook_id`/`webhook_token`
5. Replace the variables `discord_id` and `discord_token` in the var file
6. Run the integration test
```
ansible-test integration -v --color yes discord --allow-unsupported
```
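Step 4 above, extracting the two values from the copied URL, amounts to splitting off the last two path segments; a small illustrative snippet with a made-up URL:

```python
# Made-up webhook URL; the last two path segments are the id and token.
url = "https://discord.com/api/webhooks/000000000000000000/aBcDeFtoken"
webhook_id, webhook_token = url.rstrip("/").rsplit("/", 2)[-2:]
print(webhook_id, webhook_token)
```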


@@ -0,0 +1 @@
unsupported


@@ -0,0 +1,2 @@
discord_id: 000
discord_token: xxx


@@ -0,0 +1,64 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Send basic message
community.general.discord:
webhook_id: "{{ discord_id }}"
webhook_token: "{{ discord_token }}"
content: "Messages from ansible-test"
register: result
- name: Check result
assert:
that:
- result is changed
- result.http_code == 204
- name: Send embeds
community.general.discord:
webhook_id: "{{ discord_id }}"
webhook_token: "{{ discord_token }}"
embeds:
- title: "Title of embed message 1"
description: "Description embed message 1"
footer:
text: "author ansible-test"
image:
url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
- title: "Title of embed message 2"
description: "Description embed message 2"
footer:
text: "author ansible-test"
icon_url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
fields:
- name: "Field 1"
value: 1
- name: "Field 2"
value: "Text"
timestamp: "{{ ansible_date_time.iso8601 }}"
username: Ansible Test
avatar_url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
register: result
- name: Check result
assert:
that:
- result is changed
- result.http_code == 204
- name: Use a wrong token
community.general.discord:
webhook_id: "{{ discord_id }}"
webhook_token: "wrong_token"
content: "Messages from ansible-test"
register: result
ignore_errors: true
- name: Check result
assert:
that:
- result is not changed
- result.http_code == 401
- result.response.message == "Invalid Webhook Token"


@@ -16,7 +16,7 @@ tested_filesystems:
ext3: {fssize: 10, grow: True}
ext2: {fssize: 10, grow: True}
xfs: {fssize: 20, grow: False} # grow requires a mounted filesystem
btrfs: {fssize: 150, grow: False} # grow not implemented
btrfs: {fssize: 150, grow: False} # grow requires a mounted filesystem
reiserfs: {fssize: 33, grow: False} # grow not implemented
vfat: {fssize: 20, grow: True}
ocfs2: {fssize: '{{ ocfs2_fssize }}', grow: False} # grow not implemented


@@ -0,0 +1 @@
unsupported


@@ -0,0 +1,140 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: Clean up test project
lxd_project:
name: ansible-test-project
state: absent
- name: Clean up test project
lxd_project:
name: ansible-test-project-renamed
state: absent
- name: Create test project
lxd_project:
name: ansible-test-project
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check project has been created correctly
assert:
that:
- results is changed
- results.actions is defined
- "'create' in results.actions"
- name: Create test project again with merge_project set to true
lxd_project:
name: ansible-test-project
merge_project: true
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check state is not changed
assert:
that:
- results is not changed
- "{{ results.actions | length }} == 0"
- name: Create test project again with merge_project set to false
lxd_project:
name: ansible-test-project
merge_project: false
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'apply_projects_configs' in results.actions"
- name: Update project test => update description
lxd_project:
name: ansible-test-project
merge_project: false
description: "ansible test project"
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "3"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'apply_projects_configs' in results.actions"
- name: Update project test => update project config
lxd_project:
name: ansible-test-project
merge_project: false
description: "ansible test project"
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "4"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'apply_projects_configs' in results.actions"
- name: Rename project test
lxd_project:
name: ansible-test-project
new_name: ansible-test-project-renamed
merge_project: true
description: "ansible test project"
config:
features.images: "false"
features.networks: "true"
features.profiles: "true"
limits.cpu: "4"
state: present
register: results
- name: Check state is changed
assert:
that:
- results is changed
- "'rename' in results.actions"
- name: Clean up test project
lxd_project:
name: ansible-test-project-renamed
state: absent
register: results
- name: Check project is deleted
assert:
that:
- results is changed
- "'delete' in results.actions"


@@ -5,3 +5,4 @@ skip/freebsd
skip/osx
skip/macos
skip/rhel
needs/root


@@ -0,0 +1,81 @@
---
- vars:
package_name: ansible-test-foo
username: ansible-regular-user
block:
- name: Install fakeroot
pacman:
state: present
name:
- fakeroot
- name: Create user
user:
name: '{{ username }}'
home: '/home/{{ username }}'
create_home: true
- name: Create directory
file:
path: '/home/{{ username }}/{{ package_name }}'
state: directory
owner: '{{ username }}'
- name: Create PKGBUILD
copy:
dest: '/home/{{ username }}/{{ package_name }}/PKGBUILD'
content: |
pkgname=('{{ package_name }}')
pkgver=1.0.0
pkgrel=1
pkgdesc="Test removing a local package not in the repositories"
arch=('any')
license=('GPL v3+')
owner: '{{ username }}'
- name: Build package
command:
cmd: su {{ username }} -c "makepkg -srf"
chdir: '/home/{{ username }}/{{ package_name }}'
- name: Install package
pacman:
state: present
name:
- '/home/{{ username }}/{{ package_name }}/{{ package_name }}-1.0.0-1-any.pkg.tar.zst'
- name: Remove package (check mode)
pacman:
state: absent
name:
- '{{ package_name }}'
check_mode: true
register: remove_1
- name: Remove package
pacman:
state: absent
name:
- '{{ package_name }}'
register: remove_2
- name: Remove package (idempotent)
pacman:
state: absent
name:
- '{{ package_name }}'
register: remove_3
- name: Check conditions
assert:
that:
- remove_1 is changed
- remove_2 is changed
- remove_3 is not changed
always:
- name: Remove directory
file:
path: '{{ remote_tmp_dir }}/{{ package_name }}'
state: absent
become: true


@@ -11,3 +11,4 @@
- include: 'package_urls.yml'
- include: 'remove_nosave.yml'
- include: 'update_cache.yml'
- include: 'locally_installed_package.yml'


@@ -0,0 +1,4 @@
**/.terraform/*
*.tfstate
*.tfstate.*
.terraform.lock.hcl


@@ -0,0 +1,7 @@
shippable/posix/group1
skip/windows
skip/aix
skip/osx
skip/macos
skip/freebsd
skip/python2


@@ -0,0 +1,3 @@
dependencies:
- setup_pkg_mgr
- setup_remote_tmp_dir


@@ -0,0 +1,70 @@
---
# This block checks and registers the Terraform version of the binary found in the path.
- name: Check for existing Terraform in path
block:
- name: Check if terraform is present in path
command: "command -v terraform"
register: terraform_binary_path
ignore_errors: true
- name: Check Terraform version
command: terraform version
register: terraform_version_output
when: terraform_binary_path.rc == 0
- name: Set terraform version
set_fact:
terraform_version_installed: "{{ terraform_version_output.stdout | regex_search('(?!Terraform.*v)([0-9]+\\.[0-9]+\\.[0-9]+)') }}"
when: terraform_version_output.changed
# This block handles the tasks of installing the Terraform binary. This happens if there is no existing
# terraform in $PATH OR version does not match `terraform_version`.
- name: Execute Terraform install tasks
block:
- name: Install Terraform
debug:
msg: "Installing terraform {{ terraform_version }}, found: {{ terraform_version_installed | default('no terraform binary found') }}."
- name: Ensure unzip is present
ansible.builtin.package:
name: unzip
state: present
- name: Install Terraform binary
unarchive:
src: "{{ terraform_url }}"
dest: "{{ remote_tmp_dir }}"
mode: 0755
remote_src: yes
validate_certs: "{{ validate_certs }}"
when: terraform_version_installed is not defined or terraform_version_installed != terraform_version
# This sets `terraform_binary_path` to the first non-empty string: the stdout of the
# 'Check if terraform is present in path' task, or, as a fallback, the binary unpacked into the temporary directory.
- name: Set path to terraform binary
set_fact:
terraform_binary_path: "{{ terraform_binary_path.stdout or remote_tmp_dir ~ '/terraform' }}"
- name: Create terraform project directory
file:
path: "{{ terraform_project_dir }}/{{ item['name'] }}"
state: directory
mode: 0755
loop: "{{ terraform_provider_versions }}"
loop_control:
index_var: provider_index
- name: Loop over provider upgrade test tasks
include_tasks: test_provider_upgrade.yml
vars:
tf_provider: "{{ terraform_provider_versions[provider_index] }}"
loop: "{{ terraform_provider_versions }}"
loop_control:
index_var: provider_index
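The regex in the 'Set terraform version' task pulls the bare version number out of `terraform version` output; a quick check of that pattern (the sample output line is an assumption about the CLI's format):

```python
import re

output = "Terraform v1.1.7\non linux_amd64"  # assumed 'terraform version' output
match = re.search(r"(?!Terraform.*v)([0-9]+\.[0-9]+\.[0-9]+)", output)
print(match.group(1))  # 1.1.7
```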


@@ -0,0 +1,23 @@
---
- name: Output terraform provider test project
ansible.builtin.template:
src: templates/provider_test/main.tf.j2
dest: "{{ terraform_project_dir }}/{{ tf_provider['name'] }}/main.tf"
force: yes
register: terraform_provider_hcl
# The purpose of this task is to init terraform multiple times with different provider module
# versions, so that we can verify that provider upgrades during init work as intended.
- name: Init Terraform configuration with pinned provider version
community.general.terraform:
project_path: "{{ terraform_provider_hcl.dest | dirname }}"
binary_path: "{{ terraform_binary_path }}"
force_init: yes
provider_upgrade: "{{ terraform_provider_upgrade }}"
state: present
register: terraform_init_result
- assert:
that: terraform_init_result is not failed


@@ -0,0 +1,8 @@
terraform {
required_providers {
{{ tf_provider['name'] }} = {
source = "{{ tf_provider['source'] }}"
version = "{{ tf_provider['version'] }}"
}
}
}


@@ -0,0 +1,37 @@
---
# Terraform version that will be downloaded
terraform_version: 1.1.7
# Architecture of the downloaded Terraform release (needs to match target testing platform)
terraform_arch: "{{ ansible_system | lower }}_{{ terraform_arch_map[ansible_architecture] }}"
# URL of where the Terraform binary will be downloaded from
terraform_url: "https://releases.hashicorp.com/terraform/{{ terraform_version }}/terraform_{{ terraform_version }}_{{ terraform_arch }}.zip"
# Controls whether the unarchive task will validate TLS certs of the Terraform binary host
validate_certs: yes
# Directory where Terraform tests will be created
terraform_project_dir: "{{ remote_tmp_dir }}/tf_provider_test"
# Controls whether terraform init will use the `-upgrade` flag
terraform_provider_upgrade: yes
# list of dicts containing Terraform providers that will be tested
# The null provider is a good candidate, as it's small and has no external dependencies
terraform_provider_versions:
- name: "null"
source: "hashicorp/null"
version: ">=2.0.0, < 3.0.0"
- name: "null"
source: "hashicorp/null"
version: ">=3.0.0"
# mapping between values returned from ansible_architecture and arch names used by golang builds of Terraform
# see https://www.terraform.io/downloads
terraform_arch_map:
x86_64: amd64
arm64: arm64


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Check extra collection docs with antsibull-lint."""
"""Check extra collection docs with antsibull-docs."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type


@@ -5,6 +5,6 @@
],
"output": "path-line-column-message",
"requirements": [
"antsibull"
"antsibull-docs"
]
}


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Check extra collection docs with antsibull-lint."""
"""Check extra collection docs with antsibull-docs."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type


@@ -7,6 +7,7 @@
plugins/module_utils/cloud.py pylint:bad-option-value # a pylint test that is disabled was modified over time
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/lxd/lxd_project.py use-argspec-type-path # expanduser() applied to constants
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice


@@ -6,6 +6,7 @@
.azure-pipelines/scripts/publish-codecov.py metaclass-boilerplate
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/lxd/lxd_project.py use-argspec-type-path # expanduser() applied to constants
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice


@@ -1,6 +1,7 @@
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/lxd/lxd_project.py use-argspec-type-path # expanduser() applied to constants
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice


@@ -1,6 +1,7 @@
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/lxd/lxd_project.py use-argspec-type-path # expanduser() applied to constants
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice


@@ -1,6 +1,7 @@
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/lxd/lxd_project.py use-argspec-type-path # expanduser() applied to constants
plugins/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice


@@ -7,6 +7,7 @@
plugins/module_utils/cloud.py pylint:bad-option-value # a pylint test that is disabled was modified over time
plugins/modules/cloud/lxc/lxc_container.py use-argspec-type-path
plugins/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/cloud/lxd/lxd_project.py use-argspec-type-path # expanduser() applied to constants
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values


@@ -0,0 +1,222 @@
[
{
"DEPLOY_ID": "bcfec9d9-c0d0-4523-b5e7-62993947e94c",
"ETIME": 0,
"GID": 105,
"GNAME": "SW",
"HISTORY_RECORDS": {},
"ID": 451,
"LAST_POLL": 0,
"LCM_STATE": 3,
"MONITORING": {},
"NAME": "terraform_demo_00",
"RESCHED": 0,
"STATE": 3,
"STIME": 1649886492,
"TEMPLATE": {
"NIC": [
{
"AR_ID": "0",
"BRIDGE": "mgmt0",
"BRIDGE_TYPE": "linux",
"CLUSTER_ID": "0",
"IP": "192.168.11.248",
"MAC": "02:00:c0:a8:2b:bb",
"MODEL": "virtio",
"NAME": "NIC0",
"NETWORK": "Infrastructure",
"NETWORK_ID": "0",
"NIC_ID": "0",
"SECURITY_GROUPS": "0,101",
"TARGET": "one-453-0",
"VLAN_ID": "12",
"VN_MAD": "802.1Q"
}
],
"NIC_DEFAULT": {
"MODEL": "virtio"
},
"TEMPLATE_ID": "28",
"TM_MAD_SYSTEM": "shared",
"VCPU": "4",
"VMID": "453"
},
"USER_TEMPLATE": {
"GUEST_OS": "linux",
"INPUTS_ORDER": "",
"LABELS": "foo,bench",
"LOGO": "images/logos/linux.png",
"MEMORY_UNIT_COST": "MB",
"SCHED_REQUIREMENTS": "ARCH=\"x86_64\"",
"TGROUP": "bench_clients"
}
},
{
"DEPLOY_ID": "25895435-5e3a-4d50-a025-e03a7a463abd",
"ETIME": 0,
"GID": 105,
"GNAME": "SW",
"HISTORY_RECORDS": {},
"ID": 451,
"LAST_POLL": 0,
"LCM_STATE": 3,
"MONITORING": {},
"NAME": "terraform_demo_01",
"RESCHED": 0,
"STATE": 3,
"STIME": 1649886492,
"TEMPLATE": {
"NIC": [
{
"AR_ID": "0",
"BRIDGE": "mgmt0",
"BRIDGE_TYPE": "linux",
"CLUSTER_ID": "0",
"IP": "192.168.11.241",
"MAC": "02:00:c0:a8:4b:bb",
"MODEL": "virtio",
"NAME": "NIC0",
"NETWORK": "Infrastructure",
"NETWORK_ID": "0",
"NIC_ID": "0",
"SECURITY_GROUPS": "0,101",
"TARGET": "one-451-0",
"VLAN_ID": "12",
"VN_MAD": "802.1Q"
}
],
"NIC_DEFAULT": {
"MODEL": "virtio"
},
"TEMPLATE_ID": "28",
"TM_MAD_SYSTEM": "shared",
"VCPU": "4",
"VMID": "451"
},
"USER_TEMPLATE": {
"GUEST_OS": "linux",
"INPUTS_ORDER": "",
"LABELS": "foo,bench",
"LOGO": "images/logos/linux.png",
"MEMORY_UNIT_COST": "MB",
"SCHED_REQUIREMENTS": "ARCH=\"x86_64\"",
"TESTATTR": "testvar",
"TGROUP": "bench_clients"
}
},
{
"DEPLOY_ID": "2b00c379-3601-45ee-acf5-e7b3ff2b7bca",
"ETIME": 0,
"GID": 105,
"GNAME": "SW",
"HISTORY_RECORDS": {},
"ID": 451,
"LAST_POLL": 0,
"LCM_STATE": 3,
"MONITORING": {},
"NAME": "terraform_demo_srv_00",
"RESCHED": 0,
"STATE": 3,
"STIME": 1649886492,
"TEMPLATE": {
"NIC": [
{
"AR_ID": "0",
"BRIDGE": "mgmt0",
"BRIDGE_TYPE": "linux",
"CLUSTER_ID": "0",
"IP": "192.168.11.247",
"MAC": "02:00:c0:a8:0b:cc",
"MODEL": "virtio",
"NAME": "NIC0",
"NETWORK": "Infrastructure",
"NETWORK_ID": "0",
"NIC_ID": "0",
"SECURITY_GROUPS": "0,101",
"TARGET": "one-452-0",
"VLAN_ID": "12",
"VN_MAD": "802.1Q"
}
],
"NIC_DEFAULT": {
"MODEL": "virtio"
},
"TEMPLATE_ID": "28",
"TM_MAD_SYSTEM": "shared",
"VCPU": "4",
"VMID": "452"
},
"USER_TEMPLATE": {
"GUEST_OS": "linux",
"INPUTS_ORDER": "",
"LABELS": "serv,bench",
"LOGO": "images/logos/linux.png",
"MEMORY_UNIT_COST": "MB",
"SCHED_REQUIREMENTS": "ARCH=\"x86_64\"",
"TGROUP": "bench_server"
}
},
{
"DEPLOY_ID": "97037f55-dd2c-4549-8d24-561a6569e870",
"ETIME": 0,
"GID": 105,
"GNAME": "SW",
"HISTORY_RECORDS": {},
"ID": 311,
"LAST_POLL": 0,
"LCM_STATE": 3,
"MONITORING": {},
"NAME": "bs-windows",
"RESCHED": 0,
"STATE": 3,
"STIME": 1648076254,
"TEMPLATE": {
"NIC": [
{
"AR_ID": "0",
"BRIDGE": "mgmt0",
"BRIDGE_TYPE": "linux",
"CLUSTER_ID": "0",
"IP": "192.168.11.209",
"MAC": "02:00:c0:a8:0b:dd",
"MODEL": "virtio",
"NAME": "NIC0",
"NETWORK": "Infrastructure",
"NETWORK_ID": "0",
"NETWORK_UNAME": "admin",
"NIC_ID": "0",
"SECURITY_GROUPS": "0,101",
"TARGET": "one-311-0",
"VLAN_ID": "12",
"VN_MAD": "802.1Q"
}
],
"TEMPLATE_ID": "23",
"TM_MAD_SYSTEM": "shared",
"VCPU": "4",
"VMID": "311"
},
"UID": 22,
"UNAME": "bsanders",
"USER_TEMPLATE": {
"GUEST_OS": "windows",
"INPUTS_ORDER": "",
"LABELS": "serv",
"HYPERVISOR": "kvm",
"SCHED_REQUIREMENTS": "ARCH=\"x86_64\"",
"SET_HOSTNAME": "windows"
}
}
]


@@ -9,14 +9,18 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from collections import OrderedDict
import json
import pytest
from ansible.inventory.data import InventoryData
from ansible.parsing.dataloader import DataLoader
from ansible.template import Templar
from ansible_collections.community.general.plugins.inventory.opennebula import InventoryModule
from ansible_collections.community.general.tests.unit.compat.mock import create_autospec
-@pytest.fixture(scope="module")
+@pytest.fixture
def inventory():
r = InventoryModule()
r.inventory = InventoryData()
@@ -33,6 +37,18 @@ def test_verify_file_bad_config(inventory):
assert inventory.verify_file('foobar.opennebula.yml') is False
def get_vm_pool_json():
with open('tests/unit/plugins/inventory/fixtures/opennebula_inventory.json', 'r') as json_file:
jsondata = json.load(json_file)
data = type('pyone.bindings.VM_POOLSub', (object,), {'VM': []})()
for fake_server in jsondata:
data.VM.append(type('pyone.bindings.VMType90Sub', (object,), fake_server)())
return data
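get_vm_pool_json() above fakes the pyone binding objects by passing each JSON dict as the class namespace of a three-argument type() call, so dict keys become attributes. A standalone sketch of that trick (the names here are illustrative, not from the collection):

```python
# Sketch: build attribute-style objects from plain dicts via type(),
# the same trick get_vm_pool_json() uses to fake pyone binding objects.
fake_server = {"NAME": "demo_00", "TEMPLATE": {"VCPU": "4"}}

# The dict becomes the class namespace, so its keys turn into attributes.
vm = type('FakeVM', (object,), fake_server)()

print(vm.NAME)               # attribute access instead of vm["NAME"]
print(vm.TEMPLATE["VCPU"])
```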
def get_vm_pool():
data = type('pyone.bindings.VM_POOLSub', (object,), {'VM': []})()
@@ -195,36 +211,99 @@ def get_vm_pool():
return data
def get_option(option):
if option == 'api_url':
return 'https://opennebula:2633/RPC2'
if option == 'api_username':
return 'username'
elif option == 'api_password':
return 'password'
elif option == 'api_authfile':
return '~/.one/one_auth'
elif option == 'hostname':
return 'v4_first_ip'
elif option == 'group_by_labels':
return True
elif option == 'filter_by_label':
return None
else:
return False
options_base_test = {
'api_url': 'https://opennebula:2633/RPC2',
'api_username': 'username',
'api_password': 'password',
'api_authfile': '~/.one/one_auth',
'hostname': 'v4_first_ip',
'group_by_labels': True,
'filter_by_label': None,
}
options_constructable_test = options_base_test.copy()
options_constructable_test.update({
'compose': {'is_linux': "GUEST_OS == 'linux'"},
'filter_by_label': 'bench',
'groups': {
'benchmark_clients': "TGROUP.endswith('clients')",
'lin': 'is_linux == True'
},
'keyed_groups': [{'key': 'TGROUP', 'prefix': 'tgroup'}],
})
# given a dictionary `opts_dict`, return a function that behaves like ansible's inventory get_options
def mk_get_options(opts_dict):
def inner(opt):
return opts_dict.get(opt, False)
return inner
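The mk_get_options() helper above returns a closure that emulates Ansible's inventory get_option() against a fixed dict, falling back to False for unknown options just as the old get_option() function did. Exercised on its own (option values here are illustrative):

```python
# Closure emulating InventoryPlugin.get_option() over a fixed dict;
# unknown options fall back to False, matching the old get_option().
def mk_get_options(opts_dict):
    def inner(opt):
        return opts_dict.get(opt, False)
    return inner

get_opt = mk_get_options({'api_username': 'username', 'group_by_labels': True})
print(get_opt('api_username'))    # 'username'
print(get_opt('no_such_option'))  # False
```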
def test_get_connection_info(inventory, mocker):
-inventory.get_option = mocker.MagicMock(side_effect=get_option)
+inventory.get_option = mocker.MagicMock(side_effect=mk_get_options(options_base_test))
auth = inventory._get_connection_info()
assert (auth.username and auth.password)
def test_populate_constructable_templating(inventory, mocker):
# bypass API fetch call
inventory._get_vm_pool = mocker.MagicMock(side_effect=get_vm_pool_json)
inventory.get_option = mocker.MagicMock(side_effect=mk_get_options(options_constructable_test))
# the templating engine is needed for the constructable groups/vars
# so give that some fake data and instantiate it.
fake_config_filepath = '/fake/opennebula.yml'
fake_cache = {fake_config_filepath: options_constructable_test.copy()}
fake_cache[fake_config_filepath]['plugin'] = 'community.general.opennebula'
dataloader = create_autospec(DataLoader, instance=True)
dataloader._FILE_CACHE = fake_cache
inventory.templar = Templar(loader=dataloader)
inventory._populate()
# note the vm_pool (and json data file) has four hosts,
# but options_constructable_test asks ansible to filter one of them out
assert len(get_vm_pool_json().VM) == 4
assert set([vm.NAME for vm in get_vm_pool_json().VM]) == set([
'terraform_demo_00',
'terraform_demo_01',
'terraform_demo_srv_00',
'bs-windows',
])
assert set(inventory.inventory.hosts) == set(['terraform_demo_00', 'terraform_demo_01', 'terraform_demo_srv_00'])
host_demo00 = inventory.inventory.get_host('terraform_demo_00')
host_demo01 = inventory.inventory.get_host('terraform_demo_01')
host_demosrv = inventory.inventory.get_host('terraform_demo_srv_00')
assert 'benchmark_clients' in inventory.inventory.groups
assert 'lin' in inventory.inventory.groups
assert inventory.inventory.groups['benchmark_clients'].hosts == [host_demo00, host_demo01]
assert inventory.inventory.groups['lin'].hosts == [host_demo00, host_demo01, host_demosrv]
# test group by label:
assert 'bench' in inventory.inventory.groups
assert 'foo' in inventory.inventory.groups
assert inventory.inventory.groups['bench'].hosts == [host_demo00, host_demo01, host_demosrv]
assert inventory.inventory.groups['serv'].hosts == [host_demosrv]
assert inventory.inventory.groups['foo'].hosts == [host_demo00, host_demo01]
# test `compose` transforms GUEST_OS == 'linux' to is_linux == True
assert host_demo00.get_vars()['GUEST_OS'] == 'linux'
assert host_demo00.get_vars()['is_linux'] is True
# test `keyed_groups`
assert inventory.inventory.groups['tgroup_bench_clients'].hosts == [host_demo00, host_demo01]
assert inventory.inventory.groups['tgroup_bench_server'].hosts == [host_demosrv]
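The tgroup_* groups asserted above come from the constructed keyed_groups feature, which builds group names as prefix + separator + key value. A minimal sketch of that naming rule, assuming the default '_' separator and a simple character sanitizer (an illustration, not Ansible's actual implementation):

```python
import re

def keyed_group_name(prefix, value, separator="_"):
    # Illustrative mirror of the keyed_groups convention exercised by the
    # assertions above: <prefix><separator><value>, with characters that
    # are invalid in group names replaced by '_'.
    name = "%s%s%s" % (prefix, separator, value)
    return re.sub(r"[^A-Za-z0-9_]", "_", name)

print(keyed_group_name("tgroup", "bench_clients"))  # tgroup_bench_clients
print(keyed_group_name("tgroup", "bench server"))   # tgroup_bench_server
```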
def test_populate(inventory, mocker):
# bypass API fetch call
inventory._get_vm_pool = mocker.MagicMock(side_effect=get_vm_pool)
-inventory.get_option = mocker.MagicMock(side_effect=get_option)
+inventory.get_option = mocker.MagicMock(side_effect=mk_get_options(options_base_test))
inventory._populate()
# get different hosts


@@ -0,0 +1,306 @@
# -*- coding: utf-8 -*-
# (c) 2022, Alexei Znamensky <russoz@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from sys import version_info
import pytest
from ansible_collections.community.general.tests.unit.compat.mock import MagicMock, PropertyMock
from ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, fmt
TC_FORMATS = dict(
simple_boolean__true=(fmt.as_bool, ("--superflag",), True, ["--superflag"]),
simple_boolean__false=(fmt.as_bool, ("--superflag",), False, []),
simple_boolean__none=(fmt.as_bool, ("--superflag",), None, []),
simple_boolean_not__true=(fmt.as_bool_not, ("--superflag",), True, []),
simple_boolean_not__false=(fmt.as_bool_not, ("--superflag",), False, ["--superflag"]),
simple_boolean_not__none=(fmt.as_bool_not, ("--superflag",), None, ["--superflag"]),
simple_optval__str=(fmt.as_optval, ("-t",), "potatoes", ["-tpotatoes"]),
simple_optval__int=(fmt.as_optval, ("-t",), 42, ["-t42"]),
simple_opt_val__str=(fmt.as_opt_val, ("-t",), "potatoes", ["-t", "potatoes"]),
simple_opt_val__int=(fmt.as_opt_val, ("-t",), 42, ["-t", "42"]),
simple_opt_eq_val__str=(fmt.as_opt_eq_val, ("--food",), "potatoes", ["--food=potatoes"]),
simple_opt_eq_val__int=(fmt.as_opt_eq_val, ("--answer",), 42, ["--answer=42"]),
simple_list_potato=(fmt.as_list, (), "literal_potato", ["literal_potato"]),
simple_list_42=(fmt.as_list, (), 42, ["42"]),
simple_map=(fmt.as_map, ({'a': 1, 'b': 2, 'c': 3},), 'b', ["2"]),
simple_default_type__list=(fmt.as_default_type, ("list",), [1, 2, 3, 5, 8], ["--1", "--2", "--3", "--5", "--8"]),
simple_default_type__bool_true=(fmt.as_default_type, ("bool", "what"), True, ["--what"]),
simple_default_type__bool_false=(fmt.as_default_type, ("bool", "what"), False, []),
simple_default_type__potato=(fmt.as_default_type, ("any-other-type", "potato"), "42", ["--potato", "42"]),
simple_fixed_true=(fmt.as_fixed, [("--always-here", "--forever")], True, ["--always-here", "--forever"]),
simple_fixed_false=(fmt.as_fixed, [("--always-here", "--forever")], False, ["--always-here", "--forever"]),
simple_fixed_none=(fmt.as_fixed, [("--always-here", "--forever")], None, ["--always-here", "--forever"]),
simple_fixed_str=(fmt.as_fixed, [("--always-here", "--forever")], "something", ["--always-here", "--forever"]),
)
if tuple(version_info) >= (3, 1):
from collections import OrderedDict
# needs OrderedDict to provide a consistent key order
TC_FORMATS["simple_default_type__dict"] = ( # type: ignore
fmt.as_default_type,
("dict",),
OrderedDict((('a', 1), ('b', 2))),
["--a=1", "--b=2"]
)
TC_FORMATS_IDS = sorted(TC_FORMATS.keys())
@pytest.mark.parametrize('func, fmt_opt, value, expected',
(TC_FORMATS[tc] for tc in TC_FORMATS_IDS),
ids=TC_FORMATS_IDS)
def test_arg_format(func, fmt_opt, value, expected):
fmt_func = func(*fmt_opt)
actual = fmt_func(value, ctx_ignore_none=True)
print("formatted string = {0}".format(actual))
assert actual == expected, "actual = {0}".format(actual)
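Each fmt.* helper in the TC_FORMATS table above is a factory that returns a formatter callable. A simplified, hypothetical re-implementation of two of them (not the collection's actual code) shows the shape being exercised:

```python
# Hypothetical, simplified stand-ins for fmt.as_bool / fmt.as_opt_eq_val:
# each factory returns a callable mapping a value to a list of CLI args.
def as_bool(flag):
    # emits [flag] when the value is truthy, [] otherwise
    def fmt_func(value, ctx_ignore_none=True):
        return [flag] if value else []
    return fmt_func

def as_opt_eq_val(option):
    # emits a single "--opt=value" token
    def fmt_func(value, ctx_ignore_none=True):
        return ["{0}={1}".format(option, value)]
    return fmt_func

print(as_bool("--superflag")(True))   # ['--superflag']
print(as_opt_eq_val("--answer")(42))  # ['--answer=42']
```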
TC_RUNNER = dict(
# SAMPLE: This shows all possible elements of a test case. It does not actually run.
#
# testcase_name=(
# # input
# dict(
# args_bundle = dict(
# param1=dict(
# type="int",
# value=11,
# fmt_func=fmt.as_opt_eq_val,
# fmt_arg="--answer",
# ),
# param2=dict(
# fmt_func=fmt.as_bool,
# fmt_arg="--bb-here",
# )
# ),
# runner_init_args = dict(
# command="testing",
# default_args_order=(),
# check_rc=False,
# force_lang="C",
# path_prefix=None,
# environ_update=None,
# ),
# runner_ctx_args = dict(
# args_order=['aa', 'bb'],
# output_process=None,
# ignore_value_none=True,
# ),
# ),
# # command execution
# dict(
# runner_ctx_run_args = dict(bb=True),
# rc = 0,
# out = "",
# err = "",
# ),
# # expected
# dict(
# results=(),
# run_info=dict(
# cmd=['/mock/bin/testing', '--answer=11', '--bb-here'],
# environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'},
# ),
# exc=None,
# ),
# ),
#
aa_bb=(
dict(
args_bundle=dict(
aa=dict(type="int", value=11, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(),
runner_ctx_args=dict(args_order=['aa', 'bb']),
),
dict(runner_ctx_run_args=dict(bb=True), rc=0, out="", err=""),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--answer=11', '--bb-here'],
environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'},
args_order=('aa', 'bb'),
),
),
),
aa_bb_default_order=(
dict(
args_bundle=dict(
aa=dict(type="int", value=11, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(default_args_order=['bb', 'aa']),
runner_ctx_args=dict(),
),
dict(runner_ctx_run_args=dict(bb=True), rc=0, out="", err=""),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--bb-here', '--answer=11'],
environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'},
args_order=('bb', 'aa'),
),
),
),
aa_bb_default_order_args_order=(
dict(
args_bundle=dict(
aa=dict(type="int", value=11, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(default_args_order=['bb', 'aa']),
runner_ctx_args=dict(args_order=['aa', 'bb']),
),
dict(runner_ctx_run_args=dict(bb=True), rc=0, out="", err=""),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--answer=11', '--bb-here'],
environ_update={'LANGUAGE': 'C', 'LC_ALL': 'C'},
args_order=('aa', 'bb'),
),
),
),
aa_bb_dup_in_args_order=(
dict(
args_bundle=dict(
aa=dict(type="int", value=11, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(),
runner_ctx_args=dict(args_order=['aa', 'bb', 'aa']),
),
dict(runner_ctx_run_args=dict(bb=True), rc=0, out="", err=""),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--answer=11', '--bb-here', '--answer=11'],
),
),
),
aa_bb_process_output=(
dict(
args_bundle=dict(
aa=dict(type="int", value=11, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(default_args_order=['bb', 'aa']),
runner_ctx_args=dict(
args_order=['aa', 'bb'],
output_process=lambda rc, out, err: '-/-'.join([str(rc), out, err])
),
),
dict(runner_ctx_run_args=dict(bb=True), rc=0, out="ni", err="nu"),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--answer=11', '--bb-here'],
),
results="0-/-ni-/-nu"
),
),
aa_bb_ignore_none_with_none=(
dict(
args_bundle=dict(
aa=dict(type="int", value=49, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(default_args_order=['bb', 'aa']),
runner_ctx_args=dict(
args_order=['aa', 'bb'],
ignore_value_none=True, # default
),
),
dict(runner_ctx_run_args=dict(bb=None), rc=0, out="ni", err="nu"),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--answer=49'],
),
),
),
aa_bb_ignore_not_none_with_none=(
dict(
args_bundle=dict(
aa=dict(type="int", value=49, fmt_func=fmt.as_opt_eq_val, fmt_arg="--answer"),
bb=dict(fmt_func=fmt.as_bool, fmt_arg="--bb-here"),
),
runner_init_args=dict(default_args_order=['bb', 'aa']),
runner_ctx_args=dict(
args_order=['aa', 'bb'],
ignore_value_none=False,
),
),
dict(runner_ctx_run_args=dict(aa=None, bb=True), rc=0, out="ni", err="nu"),
dict(
run_info=dict(
cmd=['/mock/bin/testing', '--answer=None', '--bb-here'],
),
),
),
)
TC_RUNNER_IDS = sorted(TC_RUNNER.keys())
@pytest.mark.parametrize('runner_input, cmd_execution, expected',
(TC_RUNNER[tc] for tc in TC_RUNNER_IDS),
ids=TC_RUNNER_IDS)
def test_runner(runner_input, cmd_execution, expected):
arg_spec = {}
params = {}
arg_formats = {}
for k, v in runner_input['args_bundle'].items():
try:
arg_spec[k] = {'type': v['type']}
except KeyError:
pass
try:
params[k] = v['value']
except KeyError:
pass
try:
arg_formats[k] = v['fmt_func'](v['fmt_arg'])
except KeyError:
pass
orig_results = tuple(cmd_execution[x] for x in ('rc', 'out', 'err'))
print("arg_spec={0}\nparams={1}\narg_formats={2}\n".format(
arg_spec,
params,
arg_formats,
))
module = MagicMock()
type(module).argument_spec = PropertyMock(return_value=arg_spec)
type(module).params = PropertyMock(return_value=params)
module.get_bin_path.return_value = '/mock/bin/testing'
module.run_command.return_value = orig_results
runner = CmdRunner(
module=module,
command="testing",
arg_formats=arg_formats,
**runner_input['runner_init_args']
)
def _assert_run_info(actual, expected):
reduced = dict((k, actual[k]) for k in expected.keys())
assert reduced == expected, "{0}".format(reduced)
def _assert_run(runner_input, cmd_execution, expected, ctx, results):
_assert_run_info(ctx.run_info, expected['run_info'])
assert results == expected.get('results', orig_results)
exc = expected.get("exc")
if exc:
with pytest.raises(exc):
with runner.context(**runner_input['runner_ctx_args']) as ctx:
results = ctx.run(**cmd_execution['runner_ctx_run_args'])
_assert_run(runner_input, cmd_execution, expected, ctx, results)
else:
with runner.context(**runner_input['runner_ctx_args']) as ctx:
results = ctx.run(**cmd_execution['runner_ctx_run_args'])
_assert_run(runner_input, cmd_execution, expected, ctx, results)
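test_runner drives CmdRunner through a context manager: runner.context(...) yields a ctx whose run() assembles the command from the formatters in args_order. A toy stand-in with the same shape, assuming formatters map parameter values to argument lists (hypothetical, not the collection's implementation):

```python
from contextlib import contextmanager

class ToyRunner(object):
    # Hypothetical stand-in mirroring CmdRunner's context()/run() shape.
    def __init__(self, command, arg_formats):
        self.command = command
        self.arg_formats = arg_formats  # name -> callable(value) -> list of args

    @contextmanager
    def context(self, args_order):
        runner = self

        class Ctx(object):
            def run(self, **params):
                cmd = [runner.command]
                for name in args_order:
                    cmd.extend(runner.arg_formats[name](params[name]))
                self.run_info = {'cmd': cmd, 'args_order': tuple(args_order)}
                return cmd  # a real runner would execute cmd here

        yield Ctx()

runner = ToyRunner('testing', {
    'aa': lambda v: ['--answer=%s' % v],
    'bb': lambda v: ['--bb-here'] if v else [],
})
with runner.context(args_order=['aa', 'bb']) as ctx:
    print(ctx.run(aa=11, bb=True))  # ['testing', '--answer=11', '--bb-here']
```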


@@ -91,3 +91,26 @@ def test_remove_snapshot_check_mode(connect_mock, capfd, mocker):
out, err = capfd.readouterr()
assert not err
assert not json.loads(out)['changed']
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_rollback_snapshot_check_mode(connect_mock, capfd, mocker):
set_module_args({"hostname": "test-lxc",
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"state": "rollback",
"snapname": "test",
"timeout": "1",
"force": True,
"_ansible_check_mode": True})
proxmox_utils.HAS_PROXMOXER = True
connect_mock.side_effect = lambda: fake_api(mocker)
with pytest.raises(SystemExit) as results:
proxmox_snap.main()
out, err = capfd.readouterr()
assert not err
output = json.loads(out)
assert not output['changed']
assert output['msg'] == "Snapshot test does not exist"


@@ -6,7 +6,7 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
-from ansible_collections.community.general.plugins.modules import hana_query
+from ansible_collections.community.general.plugins.modules.database.saphana import hana_query
from ansible_collections.community.general.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,

Some files were not shown because too many files have changed in this diff.