Compare commits

...

103 Commits

Author SHA1 Message Date
Felix Fontein
433d021c42 Remove CI config. 2023-05-08 21:37:16 +02:00
Felix Fontein
c12fd2474b Release 4.8.11. 2023-05-08 21:17:09 +02:00
Felix Fontein
39875affa7 Prepare 4.8.11 EOL release. 2023-05-07 21:31:09 +02:00
patchback[bot]
7b6cc1bf5c [PR #6482/737d37e0 backport][stable-4] CI: Arch Linux now uses Python 3.11 (#6485)
CI: Arch Linux now uses Python 3.11 (#6482)

Arch Linux now uses Python 3.11.

(cherry picked from commit 737d37e019)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-05-04 07:58:26 +02:00
Felix Fontein
85aa288f8f Revert "[PR #6439/6e913a3b backport][stable-4] dnsimple_info: remove extraneous importorskip from test (#6444)"
This reverts commit 0566169758.
2023-04-28 12:10:00 +02:00
patchback[bot]
0566169758 [PR #6439/6e913a3b backport][stable-4] dnsimple_info: remove extraneous importorskip from test (#6444)
dnsimple_info: remove extraneous importorskip from test (#6439)

* dnsimple_info: remove extraneous importorskip from test

* remove yet another extraneous importorskip from test

(cherry picked from commit 6e913a3b28)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-04-28 10:20:53 +02:00
Felix Fontein
01c287ed6c [stable-4] Unit tests: restrict requirements to avoid bad PyOpenSSL + cryptography version collision for ansible-core 2.11 (#6427)
Restrict some unit test requirements.
2023-04-23 22:45:19 +02:00
patchback[bot]
78f6e594fc [PR #6415/f0fcc91a backport][stable-4] zypper_repository: disable failing repository (#6422)
zypper_repository: disable failing repository (#6415)

* Disable failing repository from zypper_repository tests.

* Also disable repo file for >= 15.4.

* Simply disable file test for now.

(cherry picked from commit f0fcc91ac7)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-04-23 18:49:16 +02:00
patchback[bot]
14584b261d [PR #6414/69d7f19c backport][stable-4] Restrict jail tests for sysrc to certain FreeBSD versions (#6419)
Restrict jail tests for sysrc to certain FreeBSD versions (#6414)

Restrict jail tests for sysrc to certain FreeBSD versions.

(cherry picked from commit 69d7f19c74)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-04-23 18:13:26 +02:00
Felix Fontein
4229f6d04a [stable-4] pkgng: skip jail tests also on FreeBSD 12.3 (#6405)
pkgng: skip jail tests also on FreeBSD 12.3 (#6313)

Skip jail tests also on FreeBSD 12.3.

(cherry picked from commit aa77a88f4b)
2023-04-23 15:42:39 +02:00
Felix Fontein
6173cf0d42 Next expected release is 4.8.11. 2023-03-26 11:46:03 +02:00
Felix Fontein
29d66b1c21 Release 4.8.10. 2023-03-26 11:10:29 +02:00
Felix Fontein
c071fb1df3 Change release summary to maintenance release. 2023-03-26 10:23:09 +02:00
patchback[bot]
dd7e8b4463 [PR #6175/1ddcdc63 backport][stable-4] Mark monit integration tests as unstable (#6176)
Mark monit integration tests as unstable (#6175)

Mark monit integration tests as unstable.

(cherry picked from commit 1ddcdc63ff)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-03-12 12:06:27 +00:00
Felix Fontein
da85b37764 [stable-4] Run tests with EOL ansible-core versions in GHA (#6071)
* Run tests with EOL ansible-core versions in GHA (#6044)

Run tests with EOL ansible-core versions in GHA.

(cherry picked from commit b72b7d4936)

* Re-schedule cron.

* Use correct targets.

* Restrict python-gitlab to < 3.13.0.

* Fix Ubuntu 16.04 check.

* Revert "Restrict python-gitlab to < 3.13.0."

This reverts commit 3f3a519d79.

* Add extra conditions from tests/utils/shippable/units.sh.

* Avoid problematic broken quoting.
2023-02-24 14:41:59 +01:00
patchback[bot]
8806d31d4c [PR #6031/e348d285 backport][stable-4] Re-enable Arch Linux tests (#6038)
Re-enable Arch Linux tests (#6031)

Revert "Disable Arch Linux tests for now (#6013)"

This reverts commit 1b2c2af9a8.

(cherry picked from commit e348d28559)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-02-22 22:39:44 +01:00
patchback[bot]
841d3b25b9 [PR #6013/1b2c2af9 backport][stable-4] Disable Arch Linux tests for now (#6014)
Disable Arch Linux tests for now (#6013)

Disable Arch Linux tests for now until https://github.com/ansible-community/images/pull/40 and https://github.com/systemd/systemd/issues/26474 are resolved.

(cherry picked from commit 1b2c2af9a8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-02-18 21:56:46 +01:00
Felix Fontein
bbe74d2b17 [stable-4] Fix pylint errors (#5939)
Fix pylint errors (#5933)

* Fix pylint errors.

* Also adjust to https://github.com/ansible/ansible/pull/79909.

(cherry picked from commit b1d9507cd2)
2023-02-04 17:28:18 +01:00
patchback[bot]
a7783c48ff [PR #5868/098912c2 backport][stable-4] stormssh tests: do not install newer cryptography (#5870)
stormssh tests: do not install newer cryptography (#5868)

Do not install newer cryptography.

ci_complete

(cherry picked from commit 098912c229)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-01-22 17:22:09 +00:00
Felix Fontein
bacd64e4dc Prepare 4.8.10 release. 2023-01-18 08:19:54 +01:00
patchback[bot]
939d30862c [PR #5833/08b0ea70 backport][stable-4] ldap.py: capitalize one letter (#5834)
ldap.py: capitalize one letter (#5833)

(cherry picked from commit 08b0ea700d)

Co-authored-by: bluikko <14869000+bluikko@users.noreply.github.com>
2023-01-14 18:21:24 +01:00
patchback[bot]
a4a102ae68 [PR #5785/0ff003d3 backport][stable-4] Fix CI (#5787)
Fix CI (#5785)

Try to fix CI.

(cherry picked from commit 0ff003d312)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-01-07 15:04:54 +01:00
patchback[bot]
9b13efe654 [PR #5761/84ebda65 backport][stable-4] Fix callback plugin types (#5762)
Fix callback plugin types (#5761)

Fix callback types.

(cherry picked from commit 84ebda65f1)

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-01-04 23:46:26 +01:00
patchback[bot]
4ea084cc29 [PR #5755/b49bf081 backport][stable-4] ModuleHelper - fix bug when adjusting conflicting output (#5756)
ModuleHelper - fix bug when adjusting conflicting output (#5755)

* ModuleHelper - fix bug when adjusting conflicting output

* add changelog fragment

* remove commented test code

(cherry picked from commit b49bf081f8)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2023-01-04 21:40:50 +01:00
patchback[bot]
c5c8decea5 [PR #5674/b5e58a3b backport][stable-4] CI: Bump CentOS Stream 8 Python from 3.8 to 3.9 (#5675)
CI: Bump CentOS Stream 8 Python from 3.8 to 3.9 (#5674)

Bump CentOS Stream 8 Python from 3.8 to 3.9.

(cherry picked from commit b5e58a3bcc)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-12-09 15:52:40 +00:00
patchback[bot]
7abf7cc7c7 Temporarily disable copr tests. (#5594) (#5596)
(cherry picked from commit 11e1423f60)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-11-23 20:33:56 +01:00
patchback[bot]
c6f395e46b Ignore mpdehaan in BOTMETA. (#5524) (#5526) (#5527)
(cherry picked from commit 0e9cd5e6b6)
(cherry picked from commit 4ed5177d60)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-11-09 17:53:21 +00:00
Felix Fontein
a6ce5eaa8e Next expected release is 4.8.10. 2022-11-06 14:11:16 +01:00
Felix Fontein
15ad2448f1 Release 4.8.9. 2022-11-06 12:55:53 +01:00
Felix Fontein
ff2b016c66 Drop stable-3 from weekly CI; migrate stable-4 from nightly to weekly.
(cherry picked from commit 90ac53d150)
2022-11-06 12:53:38 +01:00
Felix Fontein
44e522d311 ldap_attrs: escape ldap search filter (#5435) (#5470)
* escape ldap search filter

* move escape to separate line

* add changelog fragment

* Update changelogs/fragments/5435-escape-ldap-param.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix encoding

* fixup! fix encoding

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1a97ca1a6f)

Co-authored-by: Reto Kupferschmid <kupferschmid@puzzle.ch>
2022-11-06 12:51:22 +01:00
Felix Fontein
b94800036b Prepare 4.8.9 release. 2022-11-06 11:46:22 +01:00
Felix Fontein
d119905bd5 Fix non-matching defaults. (#5452) (#5454)
(cherry picked from commit f84a9bf932)
2022-11-01 20:09:31 +01:00
Felix Fontein
2754d86ac5 Restrict Python 3.6 unit test requirements for elastic-apm. (#5441) 2022-10-29 13:02:29 +02:00
patchback[bot]
03ba48cf78 ldap_attrs: search_s based _is_value_present (#5385) (#5422)
* search_s based _is_value_present

* Fix formatted string and ldap import

* Add changelog fragment

* Remove superfluous import ldap

* Improve fragment

* Code format {x} prefix

* Lower-case fixes

* Fix suggestions to changelog

* Break with the past and let bools be bools

* Let ldap_attrs break on invalid DN's

(cherry picked from commit 091bdc77c3)

Co-authored-by: Martin <github@mrvanes.com>
2022-10-25 08:11:54 +02:00
Felix Fontein
147fbe602c Next expected release is 4.8.9. 2022-10-24 21:52:23 +02:00
Felix Fontein
ec2efb26d0 Release 4.8.8. 2022-10-24 21:03:23 +02:00
Felix Fontein
150495a15f Fix broken changelog fragment.
(cherry picked from commit c88f0f4ca0)
2022-10-24 21:02:44 +02:00
patchback[bot]
b2b3c056ca clarify jc filter usage in the example (#5396) (#5419)
* Update jc.py

##### SUMMARY
<!--- Your description here -->

##### ISSUE TYPE
- Docs Pull Request

+label: docsite_pr

* Update jc.py

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update jc.py

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update jc.py

* Update jc.py

* Update jc.py

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* change all of the tags to be FQMN

FQMN = fully qualified module name

* Update jc.py

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update jc.py

* Update jc.py

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update jc.py

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/filter/jc.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 70c57dcb6a)

Co-authored-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2022-10-24 20:59:45 +02:00
patchback[bot]
557594c392 pkgng: fix error-handling when upgrading all (#5369) (#5410)
* pkgng: fix error-handling when upgrading all

* provide for rc=1 in check_mode + test

* fix name of task in test

* add changelog fragment

(cherry picked from commit baa8bd52ab)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-10-23 20:56:12 +02:00
Felix Fontein
b6a6edd403 Prepare 4.8.8 release. 2022-10-23 16:43:35 +02:00
patchback[bot]
e42770d4bf archive: better expose requirements (#5392) (#5401)
* Better expose requirements.

* Move sentence back to notes.

* Update plugins/modules/files/archive.py

Co-authored-by: Maxwell G <gotmax@e.email>

* Break line.

Co-authored-by: Maxwell G <gotmax@e.email>
(cherry picked from commit a023f2a344)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-10-21 07:21:49 +02:00
patchback[bot]
1b78f18bf4 Do not crash when lzma is not around. (#5393) (#5397)
(cherry picked from commit 5aa1e58749)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-10-20 20:45:27 +02:00
patchback[bot]
ec11d13825 Fix module. (#5383) (#5387)
(cherry picked from commit c3bdc4b394)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-10-19 10:30:49 +02:00
patchback[bot]
eb066335f8 [opentelemetry][callback] support opentelemetry-api 1.13 (#5342) (#5378)
* [opentelemetry][callback] support opentelemetry-api 1.13

* [opentelemetry][callback] changelog fragment

* Update changelogs/fragments/5342-opentelemetry_bug_fix_opentelemetry-api-1.13.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* [opentelemetry-callback] refactor time_ns in a function

* fix linting

* change branch outside of the function

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* [opentelemetry]: remove options from suggestion

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit 5732023aa2)

Co-authored-by: Victor Martinez <victormartinezrubio@gmail.com>
2022-10-18 10:38:22 +02:00
patchback[bot]
cb26897b3e Make pfexec become usable for illumos (#3889) (#5338)
* Experimental change from OpenIndiana

* resolve pfexec problem, by removing superfluous quotes

* reimplement "wrap_exe"

* remove spaces around keyword argument assignment

* adapted pfexec unit test

* Try to fix quoting of test expression

* Fix quoting of test expression by replacing ' with "

* Add changelog fragment

(cherry picked from commit dc2d3c24fa)

Co-authored-by: manschwetusCS <30724946+manschwetusCS@users.noreply.github.com>
2022-10-05 11:03:32 +00:00
patchback[bot]
b7b5c1852e keycloak_user_federation: add explanation and example to vendor option (#4893) (#5335)
* Add explanation and example to vendor option

##### SUMMARY
<!--- Your description here -->

##### ISSUE TYPE
- Docs Pull Request

+label: docsite_pr

* Update plugins/modules/identity/keycloak/keycloak_user_federation.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7b86fa6a7d)

Co-authored-by: clovis-monmousseau <58973012+clovis-monmousseau@users.noreply.github.com>
2022-10-05 08:18:45 +02:00
patchback[bot]
97dce1f621 Fix #5313: redhat_subscription module is not idempotent when pool_ids (#5319) (#5329)
This fix ensures the idempotency of the redhat_subscription module when pool_ids are used. The main problem was that a 'None' quantity was not properly handled, and that the quantity check compared a string with an integer.

Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>

Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
Co-authored-by: Christoph Fiehe <c.fiehe@eurodata.de>
(cherry picked from commit 6fe2a84e87)

Co-authored-by: cfiehe <cfiehe@users.noreply.github.com>
2022-10-03 20:36:32 +02:00
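The idempotency fix above comes down to two details: a quantity of None must be tolerated, and the comparison must not pit a string against an integer. A minimal sketch of that normalization (a hypothetical helper for illustration, not the module's actual code):

```python
def quantity_matches(attached, requested):
    # Hypothetical helper, not the redhat_subscription module's actual code.
    # A requested quantity of None means "any amount is acceptable".
    if requested is None:
        return True
    # Normalize both sides to int so "2" (str) compares equal to 2 (int).
    return int(attached) == int(requested)

print(quantity_matches("2", 2))   # True
print(quantity_matches(1, None))  # True
print(quantity_matches("3", 2))   # False
```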
Felix Fontein
0618af9b1e Next expected release is 4.8.8. 2022-10-03 07:00:06 +02:00
Felix Fontein
bd5f7197d6 Release 4.8.7. 2022-10-03 06:29:30 +02:00
Felix Fontein
8532e0e086 Prepare 4.8.7 release. 2022-10-02 22:06:22 +02:00
patchback[bot]
096f8bed3b locale_gen: fix UbuntuMode (#5282) (#5309)
* Fix UbuntuMode

* Fix indentation

* Create 5281-locale_gen.yaml

* Update and rename 5281-locale_gen.yaml to 5282-locale_gen.yaml

* apply suggested changes

* apply suggested change

(cherry picked from commit fb1cf91ebd)

Co-authored-by: Bartosz-lab <73119351+Bartosz-lab@users.noreply.github.com>
2022-09-25 21:07:25 +02:00
Felix Fontein
725d16d835 Replace devel with stable-2.14 in CI. (#5299) 2022-09-21 08:07:16 +02:00
patchback[bot]
5462773827 gitlab modules: improved imports (#5259) (#5276)
* gitlab modules: improved imports

* add changelog fragment

* refactored the import check to its sole function

(cherry picked from commit 6b463e6fa6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-09-12 21:14:02 +02:00
patchback[bot]
f798d914e1 Fix pkgng tests (#5266) (#5269)
* Now there are problems with 13.0 as well. But maybe 13.1 works again?

* 13.1 still does not work, maybe 13.2 will (not yet available in CI)...

(cherry picked from commit b371bd6a5b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-09-10 11:39:58 +02:00
patchback[bot]
c3d5a7b1b8 Restrict Python packages for nomad tests. (#5262) (#5264)
(cherry picked from commit dde0b55f1a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-09-10 11:16:36 +02:00
patchback[bot]
9bd160d989 ali_instance: fixed markups in doc (#5226) (#5231)
* ali_instance: fixed markups in doc

* Update plugins/modules/cloud/alicloud/ali_instance.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/alicloud/ali_instance.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/alicloud/ali_instance.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/alicloud/ali_instance.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ac8b034061)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-09-04 15:59:43 +02:00
patchback[bot]
1b800273ef ipwcli_dns: fixed markups in doc (#5225) (#5229)
* ipwcli_dns: fixed markups in doc

* added punctuation

(cherry picked from commit a481f8356e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2022-09-04 15:59:34 +02:00
patchback[bot]
61306b579e Update BOTMETA.yml (#5165) (#5216)
* Update BOTMETA.yml

Removing Endlesstrax and Amigus as maintainers.

* Update .github/BOTMETA.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update BOTMETA.yml

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 86f4d798a9)

Co-authored-by: tylerezimmerman <100804646+tylerezimmerman@users.noreply.github.com>
2022-09-03 11:46:29 +02:00
patchback[bot]
107a1729a4 Catch more broader error messages. (#5212) (#5214)
(cherry picked from commit fa49051912)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-09-03 11:35:03 +02:00
patchback[bot]
895ae3b73e [TEMP] Fix RHEL 8 issues by restricting bcrypt to < 4.0.0 (#5183) (#5186)
(cherry picked from commit 8e59e52525)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-25 08:21:26 +02:00
patchback[bot]
aa737429de filesystem: create temp directory outside /tmp to avoid problems with tmpfs. (#5182) (#5184)
(cherry picked from commit 8027bc5335)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-25 06:58:56 +02:00
patchback[bot]
28830d8ca5 adding nested try block for tss.py to import new Delinea library (#5151) (#5163)
* adding nested try block to import delinea library

* whitespace

* Update plugins/lookup/tss.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* adding changelog fragment

* Update changelogs/fragments/5151-add-delinea-support-tss-lookup.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Tom Reeb <Thomas.Reeb_e@morganlewis.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 9f39294f50)

Co-authored-by: Tom Reeb <tomreeb@users.noreply.github.com>
2022-08-23 22:10:49 +02:00
Felix Fontein
3825264260 Next expected release is 4.8.7. 2022-08-22 16:04:10 +02:00
Felix Fontein
9e319610c3 Release 4.8.6. 2022-08-22 14:04:41 +02:00
Felix Fontein
92db683b08 Prepare 4.8.6 release. 2022-08-21 22:10:53 +02:00
patchback[bot]
81966e8900 Increase xfs size to 300 MB. This seems to be new minimal size. (#5133) (#5135)
(cherry picked from commit 98ea27847f)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-20 14:18:51 +02:00
patchback[bot]
465b0c72a6 Fix nsupdate when updating NS record (#5112) (#5131)
* Fix nsupdate when updating NS record

* Changelog fragment

* Update changelogs/fragments/5112-fix-nsupdate-ns-entry.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Switch to fallback to AUTHORITY instead of using with NS type.

* Update plugins/modules/net_tools/nsupdate.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/nsupdate.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: jonathan lung <lungj@heresjono.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ad8965218d)

Co-authored-by: Jonathan Lung <lungj@users.noreply.github.com>
2022-08-20 13:34:16 +02:00
patchback[bot]
37fc85b03a Remove Fedora 35 from devel CI runs. (#5121) (#5122)
(cherry picked from commit ad0c7095d4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-16 22:21:18 +02:00
patchback[bot]
efcaf57da8 Try to install virtualenv via pip on Arch. (#5116) (#5118)
ci_complete

(cherry picked from commit 3dcff121c4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-13 12:50:41 +02:00
Felix Fontein
e4eead189b Fix linting errors; fix some real bugs (#5111) (#5115)
* Fix linting errors.

* Fix bugs.

* Another linter error ignored.

* More fixes.

* Ignore sanity errors with older versions.

ci_complete

* Forgot to commit more changes.

(cherry picked from commit a54af8909c)
2022-08-12 14:37:34 +02:00
Felix Fontein
54bf6ef6de Add MIT-license.txt (#5072)
(partially cherry picked from commit b5eae69e36)
2022-08-05 12:47:18 +02:00
patchback[bot]
a63b8b14bc aix_filesystem: Fix examples (#5067) (#5070)
`community.general.filesystem` is not a valid argument to
aix_filesystem.

(cherry picked from commit 8f37638480)

Co-authored-by: Maxwell G <9920591+gotmax23@users.noreply.github.com>
2022-08-05 12:46:54 +02:00
patchback[bot]
2e335f3876 Set CARGO_NET_GIT_FETCH_WITH_CLI=true for cargo on Alpine. (#5053) (#5054)
(cherry picked from commit b5eae69e36)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-08-01 23:18:10 +02:00
Felix Fontein
9d770169cc Next release will be 4.8.6. 2022-08-01 20:52:50 +02:00
Felix Fontein
d6c9c0c49a Release 4.8.5. 2022-08-01 09:30:30 +02:00
patchback[bot]
9f1e976b9f Slack: Add support for (some) groups (#5019) (#5043)
* Slack: Add support for (some) groups

Some of the older private channels in the workspace I'm working in have channel IDs starting with `G0` and `GF`, and this resulted in false-positive `channel_not_found` errors.
I've added these prefixes to the list to maintain as much backwards compatibility as possible.

Ideally the auto-prefixing of the channel name with `#` would be dropped entirely, given that channel IDs have become more dominant in the Slack API over the past years.

* Add changelog fragment for slack channel prefix fix

* Update changelogs/fragments/5019-slack-support-more-groups.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3fe9592cf1)

Co-authored-by: Richard Tuin <richardtuin@gmail.com>
2022-07-31 23:52:45 +02:00
Felix Fontein
e309707e22 Prepare 4.8.5 release. 2022-07-31 22:56:31 +02:00
patchback[bot]
7503c69b53 Pacman: Improve url integrity test (#4968) (#5010)
* Fix typo

* Host url package

* Delete cached files

* Add cases for cached url package

* Rename file_pkg for clarification

* Change port to 8080, as 80 is already used in pipeline

* Added fragment

* Change port to 8000, as 8080 is already used in pipeline

* Fixed changelog fragment

* Change port to 53280, as 8000 is already used in pipeline

* Change port to 27617 (copied from get_url), as 53280 is already used in pipeline

* Also download the signature of url package

Co-authored-by: Jean Raby <jean@raby.sh>

* Fix duplication errors

Co-authored-by: Jean Raby <jean@raby.sh>

* Copied waiting from get_url; applied output redirection from jraby

* Fix signature filename

* Use correct cache dir

* Add missing assertions for uninstall_1c

* Fix typo

* Delete changelog fragment

* Make python server true async with 90 sec timeout

Copied from ansible.builtin.get_url

Co-authored-by: Jean Raby <jean@raby.sh>
(cherry picked from commit 76b235c6b3)

Co-authored-by: Minei3oat <Minei3oat@users.noreply.github.com>
2022-07-27 07:41:37 +02:00
patchback[bot]
34d7369293 fixing minor documentation flaws (#5000) (#5003)
Co-authored-by: Thomas Blaesing <thomas.blaesing@erwinhymergroup.com>
(cherry picked from commit 037c75db4f)

Co-authored-by: Thomas <3999809+tehtbl@users.noreply.github.com>
2022-07-26 12:48:45 +02:00
patchback[bot]
91c37a79f4 Update to new Github account for notifications (#4986) (#4988)
* Update to new Github account for notifications

* Update to new Github account for notifications

(cherry picked from commit 3204905e5c)

Co-authored-by: Florian <100365291+florianpaulhoberg@users.noreply.github.com>
2022-07-23 14:23:31 +02:00
patchback[bot]
24c706ca1b python-daemon 2.3.1 requires Python 3+. (#4977) (#4980)
(cherry picked from commit e1cfa13a1b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-23 13:32:59 +02:00
patchback[bot]
851dec44c5 Temporarily disable the yum_versionlock tests. (#4978) (#4984)
(cherry picked from commit 8f5a8cf4ba)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-23 12:22:49 +02:00
patchback[bot]
22773418d2 Pacman: Fix name of URL packages (#4959) (#4970)
* Strip downloading... of unseen URLs

* Added changelog fragment

* Added integration tests for reason and reason_for

Inspired by the integration tests for url packages

* Revert "Added integration tests for reason and reason_for"

This reverts commit f60d92f0d7.

Accidentally committed to the wrong branch.

(cherry picked from commit 788cfb624a)

Co-authored-by: Minei3oat <Minei3oat@users.noreply.github.com>
2022-07-21 20:16:24 +02:00
patchback[bot]
8854f4d948 proxmox module_utils: fix get_vm int parse handling (#4945) (#4966)
* add int parse handling

* Revert "add int parse handling"

This reverts commit db2aac4254.

* fix: vmid check if state is absent

* add changelogs fragments

* Update changelogs/fragments/4945-fix-get_vm-int-parse-handling.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c57204f9a9)

Co-authored-by: miyuk <enough7531@gmail.com>
2022-07-21 08:14:36 +02:00
patchback[bot]
35092aa7f9 Adjust to b1dd2af4ca. (#4949) (#4951)
(cherry picked from commit ade54bceb8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-12 17:17:08 +02:00
patchback[bot]
5bbcfa5644 proxmox inventory: fix for agent enabled (#4910) (#4946)
* Update proxmox.py

* Forgot a debug print.

* pep

* Check if int, old school way.

* pep, once again.

* Create 4910-fix-for-agent-enabled.yml

* Must check the first listentry for enabled=1

* Update changelogs/fragments/4910-fix-for-agent-enabled.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit aa03c71267)

Co-authored-by: ube <ube@alienautopsy.net>
2022-07-12 11:17:44 +02:00
Felix Fontein
8f6a4e0028 Next release will be 4.8.5. 2022-07-12 08:56:49 +02:00
Felix Fontein
1a185608bd Release 4.8.4. 2022-07-12 06:59:54 +02:00
Felix Fontein
69ba89db0d Prepare 4.8.4 release. 2022-07-11 22:07:57 +02:00
patchback[bot]
07798c3169 Fix syntax in rax_clb_nodes that breaks in Python3 (#4933) (#4936)
* Use syntax that works in both Python 2 and 3 when iterating through a dict that's going to be mutated during iteration
* Fixes `dictionary changed size during iteration` error
* Fixes #4932

(cherry picked from commit 9a928d5ffb)

Co-authored-by: Teddy Caddy <tcaddy@users.noreply.github.com>
2022-07-07 22:37:04 +02:00
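The `dictionary changed size during iteration` error fixed above is the standard Python 3 pitfall of mutating a dict while iterating it directly; snapshotting the items first works under both interpreters. A minimal illustration (not the module's code):

```python
d = {"a": 1, "b": 2, "c": 3}

# Iterating d.items() directly while deleting keys raises
# "RuntimeError: dictionary changed size during iteration" on Python 3.
# Taking a snapshot with list() first is safe in both Python 2 and 3.
for key, value in list(d.items()):
    if value > 1:
        del d[key]

print(d)  # {'a': 1}
```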
patchback[bot]
44009a72d3 fix lxd connection plugin inventory_hostname (#4912) (#4934)
* fixes lxd connection plugin issue #4886

The remote_addr value was set to the literal string 'inventory_hostname' instead of the value of the inventory_hostname variable. Solution found in PR ansible/ansible#77894.

* changelog fragment - bugfix - lxd connection plugin

* correct changelog fragment

* Update changelogs/fragments/4886-fix-lxd-inventory-hostname.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* replace _host instance variable with calls to get 'remote_addr' option

suggested by felixfontein

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 905f9ec399)

Co-authored-by: antonc42 <antonc42@users.noreply.github.com>
2022-07-07 22:36:55 +02:00
patchback[bot]
ab176acacf Fix license filenames. (#4923) (#4924)
(cherry picked from commit 1c06e237c8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-07-04 20:12:07 +00:00
patchback[bot]
0372fdf150 Do not ignore tld option in DSV lookup plugin (#4911) (#4920)
* Do not ignore tld option in DSV lookup plugin

* add changelog fragment

* Update changelogs/fragments/4911-dsv-honor-tld-option.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7ffa2b525c)

Co-authored-by: andrii-zakurenyi <85106843+andrii-zakurenyi@users.noreply.github.com>
2022-07-04 20:59:28 +02:00
patchback[bot]
96c80fe478 Fix GetChassisPower when multiple chassis are present (#4902) (#4914)
* Fix GetChassisPower when multiple chassis are present

When multiple chassis are present, and one or more of those chassis do _not_
report power information, the GetChassisPower command will fail. To address
that, only report a failure if _all_ of the Chassis objects lack power
reporting functionality.

Fixes #4901

* Update changelogs/fragments/4901-fix-redfish-chassispower.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit f60d12cf2d)

Co-authored-by: Jacob Yundt <jyundt@gmail.com>
2022-06-30 21:01:40 +02:00
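The logic described above — collect power data from every chassis and fail only when none of them reports any — can be sketched as follows (a generic illustration under assumed data shapes, not the redfish module's actual API):

```python
def get_chassis_power(chassis_list):
    # Hypothetical sketch of the described behavior, not the module's code.
    # Each chassis is assumed to be a dict that may carry a "Power" entry.
    results = []
    for chassis in chassis_list:
        power = chassis.get("Power")
        if power is not None:
            results.append(power)
    # Fail only when *all* chassis lack power reporting, instead of
    # failing as soon as a single chassis has no power information.
    if not results:
        raise RuntimeError("No chassis reports power information")
    return results

print(get_chassis_power([{"Id": "1"}, {"Id": "2", "Power": {"Watts": 120}}]))
```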
patchback[bot]
bf42b48d5d Improve hwclock support test. (#4904) (#4908)
(cherry picked from commit 674b1da8bf)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-06-30 07:41:59 +02:00
patchback[bot]
29145b15de Fix command variable usage in CmdRunner (#4903) (#4905)
* Fix command variable usage

* Add changelog fragment for cmd-runner bugfix (#4903)

(cherry picked from commit 265c052c27)

Co-authored-by: Álvaro García Jaén <garciajaenalvaro@gmail.com>
2022-06-30 07:18:38 +02:00
Felix Fontein
35c4de1e80 Fix various module docs. (#4887) (#4889)
(cherry picked from commit 2dcdd2faca)
2022-06-22 23:01:49 +02:00
patchback[bot]
37d25436e8 Fix docs. (#4881) (#4883)
(cherry picked from commit aa4c994dfd)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-06-22 14:44:30 +02:00
patchback[bot]
2a36e20465 Added additional maintainers for TSS and DSV lookup plugins (#4870) (#4874)
(cherry picked from commit cb58867b57)

Co-authored-by: Ricky White <ricky@migusgroup.com>
2022-06-21 22:55:53 +02:00
patchback[bot]
b5fb390274 Disable opentelemetry installation for unit tests. (#4871) (#4872)
(cherry picked from commit 1eee35dffb)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-06-21 21:39:30 +02:00
patchback[bot]
bc0bb0cfc5 Fix CI due to pycdlib dropping Python 2 support. (#4865) (#4868)
(cherry picked from commit 297de3011c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-06-21 15:03:49 +02:00
Felix Fontein
b85107e289 Next expected release is 4.8.4. 2022-06-20 21:42:25 +02:00
243 changed files with 1371 additions and 1586 deletions


@@ -1,3 +0,0 @@
## Azure Pipelines Configuration
Please see the [Documentation](https://github.com/ansible/community/wiki/Testing:-Azure-Pipelines) for more information.


@@ -1,543 +0,0 @@
trigger:
  batch: true
  branches:
    include:
      - main
      - stable-*
pr:
  autoCancel: true
  branches:
    include:
      - main
      - stable-*
schedules:
  - cron: 0 8 * * *
    displayName: Nightly (main)
    always: true
    branches:
      include:
        - main
  - cron: 0 10 * * *
    displayName: Nightly (active stable branches)
    always: true
    branches:
      include:
        - stable-4
  - cron: 0 11 * * 0
    displayName: Weekly (old stable branches)
    always: true
    branches:
      include:
        - stable-3
variables:
  - name: checkoutPath
    value: ansible_collections/community/general
  - name: coverageBranches
    value: main
  - name: pipelinesCoverage
    value: coverage
  - name: entryPoint
    value: tests/utils/shippable/shippable.sh
  - name: fetchDepth
    value: 0
resources:
  containers:
    - container: default
      image: quay.io/ansible/azure-pipelines-test-container:3.0.0
pool: Standard
stages:
  ### Sanity
  - stage: Sanity_devel
    displayName: Sanity devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: devel/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
            - test: extra
  - stage: Sanity_2_13
    displayName: Sanity 2.13
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.13/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
  - stage: Sanity_2_12
    displayName: Sanity 2.12
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.12/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
  - stage: Sanity_2_11
    displayName: Sanity 2.11
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.11/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
  - stage: Sanity_2_10
    displayName: Sanity 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.10/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
  - stage: Sanity_2_9
    displayName: Sanity 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.9/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
  ### Units
  - stage: Units_devel
    displayName: Units devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: devel/units/{0}/1
          targets:
            - test: 2.7
            - test: 3.5
            - test: 3.6
            - test: 3.7
            - test: 3.8
            - test: 3.9
            - test: '3.10'
  - stage: Units_2_13
    displayName: Units 2.13
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.13/units/{0}/1
          targets:
            - test: 2.7
            - test: 3.6
            - test: 3.8
            - test: 3.9
  - stage: Units_2_12
    displayName: Units 2.12
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.12/units/{0}/1
          targets:
            - test: 2.6
            - test: 3.5
            - test: 3.8
  - stage: Units_2_11
    displayName: Units 2.11
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.11/units/{0}/1
          targets:
            - test: 2.6
            - test: 2.7
            - test: 3.5
            - test: 3.9
  - stage: Units_2_10
    displayName: Units 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.10/units/{0}/1
          targets:
            - test: 2.7
            - test: 3.6
  - stage: Units_2_9
    displayName: Units 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.9/units/{0}/1
          targets:
            - test: 2.6
            - test: 3.5
  ### Remote
  - stage: Remote_devel
    displayName: Remote devel
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: devel/{0}
targets:
- name: macOS 12.0
test: macos/12.0
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 9.0
test: rhel/9.0
- name: FreeBSD 12.3
test: freebsd/12.3
- name: FreeBSD 13.1
test: freebsd/13.1
groups:
- 1
- 2
- 3
- stage: Remote_2_13
displayName: Remote 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.13/{0}
targets:
- name: macOS 12.0
test: macos/12.0
- name: RHEL 8.5
test: rhel/8.5
groups:
- 1
- 2
- 3
- stage: Remote_2_12
displayName: Remote 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.12/{0}
targets:
- name: macOS 11.1
test: macos/11.1
- name: RHEL 8.4
test: rhel/8.4
- name: FreeBSD 13.0
test: freebsd/13.0
groups:
- 1
- 2
- stage: Remote_2_11
displayName: Remote 2.11
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.11/{0}
targets:
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.3
test: rhel/8.3
#- name: FreeBSD 12.2
# test: freebsd/12.2
groups:
- 1
- 2
- stage: Remote_2_10
displayName: Remote 2.10
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.10/{0}
targets:
- name: OS X 10.11
test: osx/10.11
- name: macOS 10.15
test: macos/10.15
groups:
- 1
- 2
- stage: Remote_2_9
displayName: Remote 2.9
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.9/{0}
targets:
- name: RHEL 8.2
test: rhel/8.2
- name: RHEL 7.8
test: rhel/7.8
#- name: FreeBSD 12.0
# test: freebsd/12.0
groups:
- 1
- 2
### Docker
- stage: Docker_devel
displayName: Docker devel
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: devel/linux/{0}
targets:
- name: CentOS 7
test: centos7
- name: Fedora 35
test: fedora35
- name: Fedora 36
test: fedora36
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
- name: Alpine 3
test: alpine3
groups:
- 1
- 2
- 3
- stage: Docker_2_13
displayName: Docker 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.13/linux/{0}
targets:
- name: Fedora 35
test: fedora35
- name: openSUSE 15 py2
test: opensuse15py2
- name: Alpine 3
test: alpine3
groups:
- 1
- 2
- 3
- stage: Docker_2_12
displayName: Docker 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.12/linux/{0}
targets:
- name: CentOS 6
test: centos6
- name: Fedora 34
test: fedora34
- name: Ubuntu 18.04
test: ubuntu1804
groups:
- 1
- 2
- 3
- stage: Docker_2_11
displayName: Docker 2.11
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.11/linux/{0}
targets:
- name: Fedora 33
test: fedora33
- name: Alpine 3
test: alpine3
groups:
- 2
- 3
- stage: Docker_2_10
displayName: Docker 2.10
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.10/linux/{0}
targets:
- name: Fedora 32
test: fedora32
- name: Ubuntu 16.04
test: ubuntu1604
groups:
- 2
- 3
- stage: Docker_2_9
displayName: Docker 2.9
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.9/linux/{0}
targets:
- name: Fedora 31
test: fedora31
groups:
- 2
- 3
### Community Docker
- stage: Docker_community_devel
displayName: Docker (community images) devel
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: devel/linux-community/{0}
targets:
- name: Debian Bullseye
test: debian-bullseye/3.9
- name: ArchLinux
test: archlinux/3.10
- name: CentOS Stream 8
test: centos-stream8/3.8
groups:
- 1
- 2
- 3
### Cloud
- stage: Cloud_devel
displayName: Cloud devel
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: devel/cloud/{0}/1
targets:
- test: 2.7
- test: '3.10'
- stage: Cloud_2_13
displayName: Cloud 2.13
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.13/cloud/{0}/1
targets:
- test: 3.9
- stage: Cloud_2_12
displayName: Cloud 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.12/cloud/{0}/1
targets:
- test: 3.8
- stage: Cloud_2_11
displayName: Cloud 2.11
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.11/cloud/{0}/1
targets:
- test: 3.6
- stage: Cloud_2_10
displayName: Cloud 2.10
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.10/cloud/{0}/1
targets:
- test: 3.5
- stage: Cloud_2_9
displayName: Cloud 2.9
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.9/cloud/{0}/1
targets:
- test: 2.7
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity_devel
- Sanity_2_9
- Sanity_2_10
- Sanity_2_11
- Sanity_2_12
- Sanity_2_13
- Units_devel
- Units_2_9
- Units_2_10
- Units_2_11
- Units_2_12
- Units_2_13
- Remote_devel
- Remote_2_9
- Remote_2_10
- Remote_2_11
- Remote_2_12
- Remote_2_13
- Docker_devel
- Docker_2_9
- Docker_2_10
- Docker_2_11
- Docker_2_12
- Docker_2_13
- Docker_community_devel
- Cloud_devel
- Cloud_2_9
- Cloud_2_10
- Cloud_2_11
- Cloud_2_12
- Cloud_2_13
jobs:
- template: templates/coverage.yml


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# Aggregate code coverage results for later processing.
set -o pipefail -eu
agent_temp_directory="$1"
PATH="${PWD}/bin:${PATH}"
mkdir "${agent_temp_directory}/coverage/"
if [[ "$(ansible --version)" =~ \ 2\.9\. ]]; then
exit
fi
options=(--venv --venv-system-site-packages --color -v)
ansible-test coverage combine --group-by command --export "${agent_temp_directory}/coverage/" "${options[@]}"
if ansible-test coverage analyze targets generate --help >/dev/null 2>&1; then
# Only analyze coverage if the installed version of ansible-test supports it.
# Doing so allows this script to work unmodified for multiple Ansible versions.
ansible-test coverage analyze targets generate "${agent_temp_directory}/coverage/coverage-analyze-targets.json" "${options[@]}"
fi
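
The `[[ … =~ \ 2\.9\. ]]` guard above skips coverage handling on ansible-core 2.9, which lacks the needed ansible-test features. A minimal sketch of how that bash regex behaves, using made-up sample version strings:

```shell
#!/usr/bin/env bash
# Illustrative only: the version-sniffing regex from aggregate-coverage.sh,
# applied to sample strings (not real command output).
matches() {
  # A literal space, then "2.9." — so 2.13.x and similar do not match.
  if [[ "$1" =~ \ 2\.9\. ]]; then echo yes; else echo no; fi
}

matches 'ansible 2.9.27'         # yes
matches 'ansible [core 2.13.1]'  # no
```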


@@ -1,60 +0,0 @@
#!/usr/bin/env python
"""
Combine coverage data from multiple jobs, keeping the data only from the most recent attempt from each job.
Coverage artifacts must be named using the format: "Coverage $(System.JobAttempt) {StableUniqueNameForEachJob}"
The recommended coverage artifact name format is: Coverage $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)
Keep in mind that Azure Pipelines does not enforce unique job display names (only names).
It is up to pipeline authors to avoid name collisions when deviating from the recommended format.
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import shutil
import sys
def main():
"""Main program entry point."""
source_directory = sys.argv[1]
if '/ansible_collections/' in os.getcwd():
output_path = "tests/output"
else:
output_path = "test/results"
destination_directory = os.path.join(output_path, 'coverage')
if not os.path.exists(destination_directory):
os.makedirs(destination_directory)
jobs = {}
count = 0
for name in os.listdir(source_directory):
match = re.search('^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$', name)
label = match.group('label')
attempt = int(match.group('attempt'))
jobs[label] = max(attempt, jobs.get(label, 0))
for label, attempt in jobs.items():
name = 'Coverage {attempt} {label}'.format(label=label, attempt=attempt)
source = os.path.join(source_directory, name)
source_files = os.listdir(source)
for source_file in source_files:
source_path = os.path.join(source, source_file)
destination_path = os.path.join(destination_directory, source_file + '.' + label)
print('"%s" -> "%s"' % (source_path, destination_path))
shutil.copyfile(source_path, destination_path)
count += 1
print('Coverage file count: %d' % count)
print('##vso[task.setVariable variable=coverageFileCount]%d' % count)
print('##vso[task.setVariable variable=outputPath]%s' % output_path)
if __name__ == '__main__':
main()
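
The attempt-deduplication rule used above can be sketched in isolation; the artifact names below are hypothetical:

```python
import re

# Hypothetical artifact names; a retried job produces a higher attempt number.
artifact_names = [
    'Coverage 1 Sanity devel Test 1',
    'Coverage 2 Sanity devel Test 1',  # attempt 2 supersedes attempt 1
    'Coverage 1 Units devel Python 3.9',
]

# For each job label, keep only the highest attempt number seen.
jobs = {}
for name in artifact_names:
    match = re.search(r'^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$', name)
    if not match:
        continue  # ignore names that do not follow the expected scheme
    label = match.group('label')
    jobs[label] = max(int(match.group('attempt')), jobs.get(label, 0))

print(jobs)  # {'Sanity devel Test 1': 2, 'Units devel Python 3.9': 1}
```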


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# Check the test results and set variables for use in later steps.
set -o pipefail -eu
if [[ "$PWD" =~ /ansible_collections/ ]]; then
output_path="tests/output"
else
output_path="test/results"
fi
echo "##vso[task.setVariable variable=outputPath]${output_path}"
if compgen -G "${output_path}"'/junit/*.xml' > /dev/null; then
echo "##vso[task.setVariable variable=haveTestResults]true"
fi
if compgen -G "${output_path}"'/bot/ansible-test-*' > /dev/null; then
echo "##vso[task.setVariable variable=haveBotResults]true"
fi
if compgen -G "${output_path}"'/coverage/*' > /dev/null; then
echo "##vso[task.setVariable variable=haveCoverageData]true"
fi
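
The script relies on bash's `compgen -G` to test whether a glob matches any existing file. A self-contained sketch (the `demo-output` paths are made up for the demo):

```shell
#!/usr/bin/env bash
# Minimal demo of the compgen -G glob test used in process-results.sh.
mkdir -p demo-output/junit
touch demo-output/junit/results.xml

# compgen -G succeeds only if the glob expands to at least one path.
have_test_results=false
if compgen -G 'demo-output/junit/*.xml' > /dev/null; then
  have_test_results=true
fi
echo "haveTestResults=${have_test_results}"

rm -r demo-output
```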


@@ -1,101 +0,0 @@
#!/usr/bin/env python
"""
Upload code coverage reports to codecov.io.
Multiple coverage files from multiple languages are accepted and aggregated after upload.
Python coverage, as well as PowerShell and Python stubs, can all be uploaded.
"""
import argparse
import dataclasses
import pathlib
import shutil
import subprocess
import tempfile
import typing as t
import urllib.request
@dataclasses.dataclass(frozen=True)
class CoverageFile:
name: str
path: pathlib.Path
flags: t.List[str]
@dataclasses.dataclass(frozen=True)
class Args:
dry_run: bool
path: pathlib.Path
def parse_args() -> Args:
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--dry-run', action='store_true')
parser.add_argument('path', type=pathlib.Path)
args = parser.parse_args()
# Store arguments in a typed dataclass
fields = dataclasses.fields(Args)
kwargs = {field.name: getattr(args, field.name) for field in fields}
return Args(**kwargs)
def process_files(directory: pathlib.Path) -> t.Tuple[CoverageFile, ...]:
processed = []
for file in directory.joinpath('reports').glob('coverage*.xml'):
name = file.stem.replace('coverage=', '')
# Get flags from name
flags = name.replace('-powershell', '').split('=') # Drop '-powershell' suffix
flags = [flag if not flag.startswith('stub') else flag.split('-')[0] for flag in flags] # Remove "-01" from stub files
processed.append(CoverageFile(name, file, flags))
return tuple(processed)
def upload_files(codecov_bin: pathlib.Path, files: t.Tuple[CoverageFile, ...], dry_run: bool = False) -> None:
for file in files:
cmd = [
str(codecov_bin),
'--name', file.name,
'--file', str(file.path),
]
for flag in file.flags:
cmd.extend(['--flags', flag])
if dry_run:
print(f'DRY-RUN: Would run command: {cmd}')
continue
subprocess.run(cmd, check=True)
def download_file(url: str, dest: pathlib.Path, flags: int, dry_run: bool = False) -> None:
if dry_run:
print(f'DRY-RUN: Would download {url} to {dest} and set mode to {flags:o}')
return
with urllib.request.urlopen(url) as resp:
with dest.open('w+b') as f:
# Read data in chunks rather than all at once
shutil.copyfileobj(resp, f, 64 * 1024)
dest.chmod(flags)
def main():
args = parse_args()
url = 'https://ansible-ci-files.s3.amazonaws.com/codecov/linux/codecov'
with tempfile.TemporaryDirectory(prefix='codecov-') as tmpdir:
codecov_bin = pathlib.Path(tmpdir) / 'codecov'
download_file(url, codecov_bin, 0o755, args.dry_run)
files = process_files(args.path)
upload_files(codecov_bin, files, args.dry_run)
if __name__ == '__main__':
main()
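
The flag derivation in `process_files` can be traced with a sample file stem (the stem below is hypothetical):

```python
# Illustrative: how process_files() turns a coverage file stem into codecov flags.
name = 'units=stub-01-powershell'  # hypothetical stem, 'coverage=' prefix already stripped

flags = name.replace('-powershell', '').split('=')        # drop the '-powershell' suffix
flags = [flag if not flag.startswith('stub') else flag.split('-')[0]
         for flag in flags]                               # 'stub-01' collapses to 'stub'

print(flags)  # ['units', 'stub']
```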


@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# Generate code coverage reports for uploading to Azure Pipelines and codecov.io.
set -o pipefail -eu
PATH="${PWD}/bin:${PATH}"
if [[ "$(ansible --version)" =~ \ 2\.9\. ]]; then
exit
fi
if ! ansible-test --help >/dev/null 2>&1; then
# Install the devel version of ansible-test for generating code coverage reports.
# This is only used by Ansible Collections, which are typically tested against multiple Ansible versions (in separate jobs).
# Since a version of ansible-test is required that can work with the output from multiple older releases, the devel version is used.
pip install https://github.com/ansible/ansible/archive/devel.tar.gz --disable-pip-version-check
fi
ansible-test coverage xml --group-by command --stub --venv --venv-system-site-packages --color -v


@@ -1,34 +0,0 @@
#!/usr/bin/env bash
# Configure the test environment and run the tests.
set -o pipefail -eu
entry_point="$1"
test="$2"
read -r -a coverage_branches <<< "$3" # space separated list of branches to run code coverage on for scheduled builds
export COMMIT_MESSAGE
export COMPLETE
export COVERAGE
export IS_PULL_REQUEST
if [ "${SYSTEM_PULLREQUEST_TARGETBRANCH:-}" ]; then
IS_PULL_REQUEST=true
COMMIT_MESSAGE=$(git log --format=%B -n 1 HEAD^2)
else
IS_PULL_REQUEST=
COMMIT_MESSAGE=$(git log --format=%B -n 1 HEAD)
fi
COMPLETE=
COVERAGE=
if [ "${BUILD_REASON}" = "Schedule" ]; then
COMPLETE=yes
if printf '%s\n' "${coverage_branches[@]}" | grep -q "^${BUILD_SOURCEBRANCHNAME}$"; then
COVERAGE=yes
fi
fi
"${entry_point}" "${test}" 2>&1 | "$(dirname "$0")/time-command.py"


@@ -1,25 +0,0 @@
#!/usr/bin/env python
"""Prepends a relative timestamp to each input line from stdin and writes it to stdout."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import time
def main():
"""Main program entry point."""
start = time.time()
sys.stdin.reconfigure(errors='surrogateescape')
sys.stdout.reconfigure(errors='surrogateescape')
for line in sys.stdin:
seconds = time.time() - start
sys.stdout.write('%02d:%02d %s' % (seconds // 60, seconds % 60, line))
sys.stdout.flush()
if __name__ == '__main__':
main()
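
The `MM:SS` prefix computation can be checked in isolation:

```python
# Illustrative: the minutes:seconds prefix computed by time-command.py
# for a sample elapsed time.
seconds = 125.7  # sample value; the script uses time.time() - start
prefix = '%02d:%02d' % (seconds // 60, seconds % 60)
print(prefix)  # 02:05
```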


@@ -1,39 +0,0 @@
# This template adds a job for processing code coverage data.
# It will upload results to Azure Pipelines and codecov.io.
# Use it from a job stage that completes after all other jobs have completed.
# This can be done by placing it in a separate summary stage that runs after the test stage(s) have completed.
jobs:
- job: Coverage
displayName: Code Coverage
container: default
workspace:
clean: all
steps:
- checkout: self
fetchDepth: $(fetchDepth)
path: $(checkoutPath)
- task: DownloadPipelineArtifact@2
displayName: Download Coverage Data
inputs:
path: coverage/
patterns: "Coverage */*=coverage.combined"
- bash: .azure-pipelines/scripts/combine-coverage.py coverage/
displayName: Combine Coverage Data
- bash: .azure-pipelines/scripts/report-coverage.sh
displayName: Generate Coverage Report
condition: gt(variables.coverageFileCount, 0)
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
# Azure Pipelines only accepts a single coverage data file.
# That means only Python or PowerShell coverage can be uploaded, but not both.
# Set the "pipelinesCoverage" variable to determine which type is uploaded.
# Use "coverage" for Python and "coverage-powershell" for PowerShell.
summaryFileLocation: "$(outputPath)/reports/$(pipelinesCoverage).xml"
displayName: Publish to Azure Pipelines
condition: gt(variables.coverageFileCount, 0)
- bash: .azure-pipelines/scripts/publish-codecov.py "$(outputPath)"
displayName: Publish to codecov.io
condition: gt(variables.coverageFileCount, 0)
continueOnError: true


@@ -1,55 +0,0 @@
# This template uses the provided targets and optional groups to generate a matrix which is then passed to the test template.
# If this matrix template does not provide the required functionality, consider using the test template directly instead.
parameters:
# A required list of dictionaries, one per test target.
# Each item in the list must contain a "test" or "name" key.
# Both may be provided. If one is omitted, the other will be used.
- name: targets
type: object
# An optional list of values which will be used to multiply the targets list into a matrix.
# Values can be strings or numbers.
- name: groups
type: object
default: []
# An optional format string used to generate the job name.
# - {0} is the name of an item in the targets list.
- name: nameFormat
type: string
default: "{0}"
# An optional format string used to generate the test name.
# - {0} is the name of an item in the targets list.
- name: testFormat
type: string
default: "{0}"
# An optional format string used to add the group to the job name.
# {0} is the formatted name of an item in the targets list.
# {{1}} is the group -- be sure to include the double "{{" and "}}".
- name: nameGroupFormat
type: string
default: "{0} - {{1}}"
# An optional format string used to add the group to the test name.
# {0} is the formatted test of an item in the targets list.
# {{1}} is the group -- be sure to include the double "{{" and "}}".
- name: testGroupFormat
type: string
default: "{0}/{{1}}"
jobs:
- template: test.yml
parameters:
jobs:
- ${{ if eq(length(parameters.groups), 0) }}:
- ${{ each target in parameters.targets }}:
- name: ${{ format(parameters.nameFormat, coalesce(target.name, target.test)) }}
test: ${{ format(parameters.testFormat, coalesce(target.test, target.name)) }}
- ${{ if not(eq(length(parameters.groups), 0)) }}:
- ${{ each group in parameters.groups }}:
- ${{ each target in parameters.targets }}:
- name: ${{ format(format(parameters.nameGroupFormat, parameters.nameFormat), coalesce(target.name, target.test), group) }}
test: ${{ format(format(parameters.testGroupFormat, parameters.testFormat), coalesce(target.test, target.name), group) }}
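
The nested format expansion in this template can be illustrated with Python's `str.format`, which handles `{0}` and the escaped `{{1}}` the same way as the pipeline `format()` expression does for these inputs (a sketch, not the Azure Pipelines implementation):

```python
# Two-stage expansion: first splice nameFormat into nameGroupFormat,
# then fill in the target and group values.
name_format = 'Python {0}'          # parameters.nameFormat
name_group_format = '{0} - {{1}}'   # parameters.nameGroupFormat

combined = name_group_format.format(name_format)  # 'Python {0} - {1}'
job_name = combined.format('3.9', 2)

print(job_name)  # Python 3.9 - 2
```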


@@ -1,45 +0,0 @@
# This template uses the provided list of jobs to create one or more test jobs.
# It can be used directly if needed, or through the matrix template.
parameters:
# A required list of dictionaries, one per test job.
# Each item in the list must contain a "test" and "name" key.
- name: jobs
type: object
jobs:
- ${{ each job in parameters.jobs }}:
- job: test_${{ replace(replace(replace(job.test, '/', '_'), '.', '_'), '-', '_') }}
displayName: ${{ job.name }}
container: default
workspace:
clean: all
steps:
- checkout: self
fetchDepth: $(fetchDepth)
path: $(checkoutPath)
- bash: .azure-pipelines/scripts/run-tests.sh "$(entryPoint)" "${{ job.test }}" "$(coverageBranches)"
displayName: Run Tests
- bash: .azure-pipelines/scripts/process-results.sh
condition: succeededOrFailed()
displayName: Process Results
- bash: .azure-pipelines/scripts/aggregate-coverage.sh "$(Agent.TempDirectory)"
condition: eq(variables.haveCoverageData, 'true')
displayName: Aggregate Coverage Data
- task: PublishTestResults@2
condition: eq(variables.haveTestResults, 'true')
inputs:
testResultsFiles: "$(outputPath)/junit/*.xml"
displayName: Publish Test Results
- task: PublishPipelineArtifact@1
condition: eq(variables.haveBotResults, 'true')
displayName: Publish Bot Results
inputs:
targetPath: "$(outputPath)/bot/"
artifactName: "Bot $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)"
- task: PublishPipelineArtifact@1
condition: eq(variables.haveCoverageData, 'true')
displayName: Publish Coverage Data
inputs:
targetPath: "$(Agent.TempDirectory)/coverage/"
artifactName: "Coverage $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)"

.github/BOTMETA.yml (vendored, 14 changes)

@@ -220,7 +220,8 @@ files:
   $lookups/dnstxt.py:
     maintainers: jpmens
   $lookups/dsv.py:
-    maintainers: amigus endlesstrax
+    maintainers: delineaKrehl tylerezimmerman
+    ignore: amigus
   $lookups/etcd3.py:
     maintainers: eric-belhomme
   $lookups/etcd.py:
@@ -257,7 +258,8 @@ files:
     maintainers: RevBits
   $lookups/shelvefile.py: {}
   $lookups/tss.py:
-    maintainers: amigus endlesstrax
+    maintainers: delineaKrehl tylerezimmerman
+    ignore: amigus
   $module_utils/:
     labels: module_utils
   $module_utils/gitlab.py:
@@ -745,7 +747,8 @@ files:
     labels: rocketchat
     ignore: ramondelafuente
   $modules/notification/say.py:
-    maintainers: $team_ansible_core mpdehaan
+    maintainers: $team_ansible_core
+    ignore: mpdehaan
   $modules/notification/sendgrid.py:
     maintainers: makaimc
   $modules/notification/slack.py:
@@ -921,7 +924,7 @@ files:
   $modules/packaging/os/xbps.py:
     maintainers: dinoocch the-maldridge
   $modules/packaging/os/yum_versionlock.py:
-    maintainers: florianpaulhoberg aminvakil
+    maintainers: gyptazy aminvakil
   $modules/packaging/os/zypper.py:
     maintainers: $team_suse
     labels: zypper
@@ -1097,7 +1100,8 @@ files:
   $modules/system/nosh.py:
     maintainers: tacatac
   $modules/system/ohai.py:
-    maintainers: $team_ansible_core mpdehaan
+    maintainers: $team_ansible_core
+    ignore: mpdehaan
     labels: ohai
   $modules/system/open_iscsi.py:
     maintainers: srvg


@@ -1,49 +0,0 @@
name: "Code scanning - action"
on:
schedule:
- cron: '26 19 * * 1'
jobs:
CodeQL-Build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
with:
# We must fetch at least the immediate parents so that if this is
# a pull request then we can checkout the head.
fetch-depth: 2
# If this run was triggered by a pull request event, then checkout
# the head of the pull request instead of the merge commit.
- run: git checkout HEAD^2
if: ${{ github.event_name == 'pull_request' }}
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
# Override language selection by uncommenting this and choosing your languages
# with:
# languages: go, javascript, csharp, python, cpp, java
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1


@@ -6,6 +6,162 @@ Community General Release Notes
This changelog describes changes after version 3.0.0.
v4.8.11
=======
Release Summary
---------------
Final maintenance release of community.general major version 4.
Major Changes
-------------
- The community.general 4.x.y release stream is now effectively **End of Life**. No more releases will be made, and regular CI runs will stop.
v4.8.10
=======
Release Summary
---------------
Maintenance release.
Bugfixes
--------
- ModuleHelper - fix bug when adjusting the name of reserved output variables (https://github.com/ansible-collections/community.general/pull/5755).
- loganalytics callback plugin - adjust type of callback to ``notification``, it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- logdna callback plugin - adjust type of callback to ``notification``, it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- logstash callback plugin - adjust type of callback to ``notification``, it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- splunk callback plugin - adjust type of callback to ``notification``, it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- sumologic callback plugin - adjust type of callback to ``notification``, it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- syslog_json callback plugin - adjust type of callback to ``notification``, it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- terraform and timezone - slight refactoring to avoid linter reporting potentially undefined variables (https://github.com/ansible-collections/community.general/pull/5933).
v4.8.9
======
Release Summary
---------------
Bugfix release.
Note that from now on, community.general 4.x.y only receives security fixes and major bugfixes, but no longer regular bugfixes.
Bugfixes
--------
- ldap_attrs - fix bug which caused a ``Bad search filter`` error. The error was occurring when the ldap attribute value contained special characters such as ``(`` or ``*`` (https://github.com/ansible-collections/community.general/issues/5434, https://github.com/ansible-collections/community.general/pull/5435).
- ldap_attrs - fix ordering issue by ignoring the ``{x}`` prefix on attribute values (https://github.com/ansible-collections/community.general/issues/977, https://github.com/ansible-collections/community.general/pull/5385).
v4.8.8
======
Release Summary
---------------
Regular bugfix release.
Bugfixes
--------
- archive - avoid crash when ``lzma`` is not present and ``format`` is not ``xz`` (https://github.com/ansible-collections/community.general/pull/5393).
- opentelemetry callback plugin - support opentelemetry-api 1.13.0 that removed support for ``_time_ns`` (https://github.com/ansible-collections/community.general/pull/5342).
- pfexec become plugin - remove superfluous quotes preventing exe wrap from working as expected (https://github.com/ansible-collections/community.general/issues/3671, https://github.com/ansible-collections/community.general/pull/3889).
- pkgng - fix case when ``pkg`` fails when trying to upgrade all packages (https://github.com/ansible-collections/community.general/issues/5363).
- redhat_subscription - make module idempotent when ``pool_ids`` are used (https://github.com/ansible-collections/community.general/issues/5313).
- xenserver_facts - fix broken ``AnsibleModule`` call that prevented the module from working at all (https://github.com/ansible-collections/community.general/pull/5383).
v4.8.7
======
Release Summary
---------------
Regular bugfix release.
Minor Changes
-------------
- gitlab module util - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_branch - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_deploy_key - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_group - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_group_members - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_group_variable - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_hook - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_project - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_project_members - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_project_variable - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_protected_branch - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_runner - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_user - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
Bugfixes
--------
- locale_gen - fix support for Ubuntu (https://github.com/ansible-collections/community.general/issues/5281).
- tss lookup plugin - adding support for updated Delinea library (https://github.com/DelineaXPM/python-tss-sdk/issues/9, https://github.com/ansible-collections/community.general/pull/5151).
v4.8.6
======
Release Summary
---------------
Bugfix and maintenance release.
Minor Changes
-------------
- Added MIT license as ``MIT-license.txt`` for ``tests/unit/plugins/modules/packaging/language/test_gem.py`` (https://github.com/ansible-collections/community.general/pull/5065, https://github.com/ansible-collections/community.general/pull/5072).
Bugfixes
--------
- apache2_mod_proxy - avoid crash when reporting inability to parse balancer_member_page HTML caused by using an undefined variable in the error message (https://github.com/ansible-collections/community.general/pull/5111).
- dnsimple_info - correctly report missing library as ``requests`` and not ``another_library`` (https://github.com/ansible-collections/community.general/pull/5111).
- funcd connection plugin - fix signature of ``exec_command`` (https://github.com/ansible-collections/community.general/pull/5111).
- manageiq_alert_profiles - avoid crash when reporting unknown profile caused by trying to return an undefined variable (https://github.com/ansible-collections/community.general/pull/5111).
- nsupdate - compatibility with NS records (https://github.com/ansible-collections/community.general/pull/5112).
- packet_ip_subnet - fix error reporting in case of invalid CIDR prefix lengths (https://github.com/ansible-collections/community.general/pull/5111).
- pip_package_info - remove usage of global variable (https://github.com/ansible-collections/community.general/pull/5111).
v4.8.5
======
Release Summary
---------------
Regular bugfix release.
Bugfixes
--------
- pacman - fixed name resolution of URL packages (https://github.com/ansible-collections/community.general/pull/4959).
- proxmox - fix error handling when getting VM by name when ``state=absent`` (https://github.com/ansible-collections/community.general/pull/4945).
- proxmox inventory plugin - fix crash when ``enabled=1`` is used in agent config string (https://github.com/ansible-collections/community.general/pull/4910).
- proxmox_kvm - fix error handling when getting VM by name when ``state=absent`` (https://github.com/ansible-collections/community.general/pull/4945).
- slack - fix incorrect channel prefix ``#`` caused by incomplete pattern detection by adding ``G0`` and ``GF`` as channel ID patterns (https://github.com/ansible-collections/community.general/pull/5019).
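The slack fix above widens the set of recognized channel-ID prefixes so that IDs starting with ``G0`` or ``GF`` are no longer mistaken for channel names and prefixed with ``#``. A rough stdlib-only sketch of that distinction — the regex, prefix set, and function name here are illustrative assumptions, not the module's actual code:

```python
import re

# Hypothetical recreation of the channel handling: Slack channel *IDs*
# (uppercase prefix plus alphanumerics, e.g. C0..., G0..., GF...) pass
# through unchanged, while plain channel *names* get a '#' prefix.
CHANNEL_ID_RE = re.compile(r'^[CGD][A-Z0-9]{8,}$')

def normalize_channel(channel):
    """Prefix plain names with '#', leave IDs and already-prefixed names untouched."""
    if channel.startswith(('#', '@')) or CHANNEL_ID_RE.match(channel):
        return channel
    return '#' + channel
```

Before the fix, an ID such as ``G0ABCDEF123`` fell through to the name branch and was sent as ``#G0ABCDEF123``.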
v4.8.4
======
Release Summary
---------------
Regular bugfix release.
Bugfixes
--------
- cmd_runner module utils - fix bug caused by using the ``command`` variable instead of ``self.command`` when looking for binary path (https://github.com/ansible-collections/community.general/pull/4903).
- dsv lookup plugin - do not ignore the ``tld`` parameter (https://github.com/ansible-collections/community.general/pull/4911).
- lxd connection plugin - fix incorrect ``inventory_hostname`` in ``remote_addr``. This is needed for compatibility with ansible-core 2.13 (https://github.com/ansible-collections/community.general/issues/4886).
- rax_clb_nodes - fix code to be compatible with Python 3 (https://github.com/ansible-collections/community.general/pull/4933).
- redfish_info - fix to ``GetChassisPower`` to correctly report power information when multiple chassis exist, but not all chassis report power information (https://github.com/ansible-collections/community.general/issues/4901).
v4.8.3
======

MIT-license.txt (new file)

@@ -0,0 +1,9 @@
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md

@@ -1,6 +1,7 @@
# Community General Collection
[![Build Status](https://dev.azure.com/ansible/community.general/_apis/build/status/CI?branchName=stable-4)](https://dev.azure.com/ansible/community.general/_build?definitionId=31)
[![EOL CI](https://github.com/ansible-collections/community.general/workflows/EOL%20CI/badge.svg?event=push)](https://github.com/ansible-collections/community.general/actions)
[![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.general)](https://codecov.io/gh/ansible-collections/community.general)
This repository contains the `community.general` Ansible Collection. The collection is a part of the Ansible package and includes many modules and plugins supported by Ansible community which are not part of more specialized community collections.
@@ -17,7 +18,7 @@ If you encounter abusive behavior violating the [Ansible Code of Conduct](https:
## Tested with Ansible
Tested with the current Ansible 2.9, ansible-base 2.10, ansible-core 2.11, ansible-core 2.12, ansible-core 2.13 releases and the current development version of ansible-core. Ansible versions before 2.9.10 are not supported.
Tested with the current Ansible 2.9, ansible-base 2.10, ansible-core 2.11, ansible-core 2.12, ansible-core 2.13, and ansible-core 2.14 releases of ansible-core. Ansible versions before 2.9.10 are not supported.
## External requirements

changelogs/changelog.yaml

@@ -1808,6 +1808,42 @@ releases:
- 4647-gconftool2-command-arg.yaml
- psf-license.yml
release_date: '2022-05-16'
4.8.10:
changes:
bugfixes:
- ModuleHelper - fix bug when adjusting the name of reserved output variables
(https://github.com/ansible-collections/community.general/pull/5755).
- loganalytics callback plugin - adjust type of callback to ``notification``,
it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- logdna callback plugin - adjust type of callback to ``notification``, it was
incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- logstash callback plugin - adjust type of callback to ``notification``, it
was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- splunk callback plugin - adjust type of callback to ``notification``, it was
incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- sumologic callback plugin - adjust type of callback to ``notification``, it
was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- syslog_json callback plugin - adjust type of callback to ``notification``,
it was incorrectly classified as ``aggregate`` before (https://github.com/ansible-collections/community.general/pull/5761).
- terraform and timezone - slight refactoring to avoid linter reporting potentially
undefined variables (https://github.com/ansible-collections/community.general/pull/5933).
release_summary: Maintenance release.
fragments:
- 4.8.10.yml
- 5755-mh-fix-output-conflict.yml
- 5761-callback-types.yml
- 5933-linting.yml
release_date: '2023-03-26'
4.8.11:
changes:
major_changes:
- The community.general 4.x.y release stream is now effectively **End of Life**.
No more releases will be made, and regular CI runs will stop.
release_summary: Final maintenance release of community.general major version
4.
fragments:
- eol.yml
release_date: '2023-05-08'
4.8.2:
changes:
bugfixes:
@@ -1856,3 +1892,146 @@ releases:
- 4814-sudoers-file-permissions.yml
- 4852-sudoers-state-absent.yml
release_date: '2022-06-20'
4.8.4:
changes:
bugfixes:
- cmd_runner module utils - fix bug caused by using the ``command`` variable
instead of ``self.command`` when looking for binary path (https://github.com/ansible-collections/community.general/pull/4903).
- dsv lookup plugin - do not ignore the ``tld`` parameter (https://github.com/ansible-collections/community.general/pull/4911).
- lxd connection plugin - fix incorrect ``inventory_hostname`` in ``remote_addr``.
This is needed for compatibility with ansible-core 2.13 (https://github.com/ansible-collections/community.general/issues/4886).
- rax_clb_nodes - fix code to be compatible with Python 3 (https://github.com/ansible-collections/community.general/pull/4933).
- redfish_info - fix to ``GetChassisPower`` to correctly report power information
when multiple chassis exist, but not all chassis report power information
(https://github.com/ansible-collections/community.general/issues/4901).
release_summary: Regular bugfix release.
fragments:
- 4.8.4.yml
- 4886-fix-lxd-inventory-hostname.yml
- 4901-fix-redfish-chassispower.yml
- 4903-cmdrunner-bugfix.yaml
- 4911-dsv-honor-tld-option.yml
- 4933-fix-rax-clb-nodes.yaml
release_date: '2022-07-12'
4.8.5:
changes:
bugfixes:
- pacman - fixed name resolution of URL packages (https://github.com/ansible-collections/community.general/pull/4959).
- proxmox - fix error handling when getting VM by name when ``state=absent``
(https://github.com/ansible-collections/community.general/pull/4945).
- proxmox inventory plugin - fix crash when ``enabled=1`` is used in agent config
string (https://github.com/ansible-collections/community.general/pull/4910).
- proxmox_kvm - fix error handling when getting VM by name when ``state=absent``
(https://github.com/ansible-collections/community.general/pull/4945).
- slack - fix incorrect channel prefix ``#`` caused by incomplete pattern detection
by adding ``G0`` and ``GF`` as channel ID patterns (https://github.com/ansible-collections/community.general/pull/5019).
release_summary: Regular bugfix release.
fragments:
- 4.8.5.yml
- 4910-fix-for-agent-enabled.yml
- 4945-fix-get_vm-int-parse-handling.yaml
- 4959-pacman-fix-url-packages-name.yaml
- 5019-slack-support-more-groups.yml
release_date: '2022-08-01'
4.8.6:
changes:
bugfixes:
- apache2_mod_proxy - avoid crash when reporting inability to parse balancer_member_page
HTML caused by using an undefined variable in the error message (https://github.com/ansible-collections/community.general/pull/5111).
- dnsimple_info - correctly report missing library as ``requests`` and not ``another_library``
(https://github.com/ansible-collections/community.general/pull/5111).
- funcd connection plugin - fix signature of ``exec_command`` (https://github.com/ansible-collections/community.general/pull/5111).
- manageiq_alert_profiles - avoid crash when reporting unknown profile caused
by trying to return an undefined variable (https://github.com/ansible-collections/community.general/pull/5111).
- nsupdate - compatibility with NS records (https://github.com/ansible-collections/community.general/pull/5112).
- packet_ip_subnet - fix error reporting in case of invalid CIDR prefix lengths
(https://github.com/ansible-collections/community.general/pull/5111).
- pip_package_info - remove usage of global variable (https://github.com/ansible-collections/community.general/pull/5111).
minor_changes:
- Added MIT license as ``MIT-license.txt`` for ``tests/unit/plugins/modules/packaging/language/test_gem.py``
(https://github.com/ansible-collections/community.general/pull/5065, https://github.com/ansible-collections/community.general/pull/5072).
release_summary: Bugfix and maintenance release.
fragments:
- 4.8.6.yml
- 5111-fixes.yml
- 5112-fix-nsupdate-ns-entry.yaml
- licenses.yml
release_date: '2022-08-22'
4.8.7:
changes:
bugfixes:
- locale_gen - fix support for Ubuntu (https://github.com/ansible-collections/community.general/issues/5281).
- tss lookup plugin - adding support for updated Delinea library (https://github.com/DelineaXPM/python-tss-sdk/issues/9,
https://github.com/ansible-collections/community.general/pull/5151).
minor_changes:
- gitlab module util - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_branch - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_deploy_key - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_group - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_group_members - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_group_variable - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_hook - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_project - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_project_members - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_project_variable - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_protected_branch - minor refactor when checking for installed dependency
(https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_runner - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
- gitlab_user - minor refactor when checking for installed dependency (https://github.com/ansible-collections/community.general/pull/5259).
release_summary: Regular bugfix release.
fragments:
- 4.8.7.yml
- 5151-add-delinea-support-tss-lookup.yml
- 5259-gitlab-imports.yaml
- 5282-locale_gen.yaml
release_date: '2022-10-03'
4.8.8:
changes:
bugfixes:
- archive - avoid crash when ``lzma`` is not present and ``format`` is not ``xz``
(https://github.com/ansible-collections/community.general/pull/5393).
- opentelemetry callback plugin - support opentelemetry-api 1.13.0 that removed
support for ``_time_ns`` (https://github.com/ansible-collections/community.general/pull/5342).
- pfexec become plugin - remove superfluous quotes preventing exe wrap from working
as expected (https://github.com/ansible-collections/community.general/issues/3671,
https://github.com/ansible-collections/community.general/pull/3889).
- pkgng - fix case when ``pkg`` fails when trying to upgrade all packages (https://github.com/ansible-collections/community.general/issues/5363).
- redhat_subscription - make module idempotent when ``pool_ids`` are used (https://github.com/ansible-collections/community.general/issues/5313).
- xenserver_facts - fix broken ``AnsibleModule`` call that prevented the module
from working at all (https://github.com/ansible-collections/community.general/pull/5383).
release_summary: Regular bugfix release.
fragments:
- 3671-illumos-pfexec.yml
- 4.8.8.yml
- 5313-fix-redhat_subscription-idempotency-pool_ids.yml
- 5342-opentelemetry_bug_fix_opentelemetry-api-1.13.yml
- 5369-pkgng-fix-update-all.yaml
- 5383-xenserver_facts.yml
- 5393-archive.yml
release_date: '2022-10-24'
4.8.9:
changes:
bugfixes:
- ldap_attrs - fix bug which caused a ``Bad search filter`` error. The error
was occurring when the ldap attribute value contained special characters such
as ``(`` or ``*`` (https://github.com/ansible-collections/community.general/issues/5434,
https://github.com/ansible-collections/community.general/pull/5435).
- ldap_attrs - fix ordering issue by ignoring the ``{x}`` prefix on attribute
values (https://github.com/ansible-collections/community.general/issues/977,
https://github.com/ansible-collections/community.general/pull/5385).
release_summary: 'Bugfix release.
Note that from now on, community.general 4.x.y only receives security fixes
and major bugfixes, but no longer regular bugfixes.'
fragments:
- 4.8.9.yml
- 5385-search_s-based-_is_value_present.yaml
- 5435-escape-ldap-param.yml
release_date: '2022-11-06'
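The ``Bad search filter`` error fixed in 4.8.9 comes from embedding an attribute value containing LDAP filter metacharacters (``(``, ``*``, ``)``) directly in a search filter. python-ldap provides ``ldap.filter.escape_filter_chars`` for this; the function below is a stdlib-only illustration of the RFC 4515 escaping rule, not the module's actual code:

```python
def escape_filter_value(value):
    """Escape RFC 4515 special characters as \\XX hex sequences.

    Illustrative sketch only: python-ldap's ldap.filter.escape_filter_chars
    performs this job in the real ldap_attrs module.
    """
    escaped = []
    for ch in value:
        # Backslash, parentheses, asterisk, and NUL must not appear raw
        # inside a filter expression such as (cn=<value>).
        if ch in ('\\', '*', '(', ')', '\x00'):
            escaped.append('\\%02x' % ord(ch))
        else:
            escaped.append(ch)
    return ''.join(escaped)
```

With escaping applied, a value like ``(admin)*`` can be embedded safely in ``(cn=...)`` without being parsed as filter syntax.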

galaxy.yml

@@ -1,6 +1,6 @@
namespace: community
name: general
version: 4.8.3
version: 4.8.11
readme: README.md
authors:
- Ansible (https://github.com/ansible)

plugins/become/pfexec.py

@@ -101,4 +101,4 @@ class BecomeModule(BecomeBase):
flags = self.get_option('become_flags')
noexe = not self.get_option('wrap_exe')
return '%s %s "%s"' % (exe, flags, self._build_success_command(cmd, shell, noexe=noexe))
return '%s %s %s' % (exe, flags, self._build_success_command(cmd, shell, noexe=noexe))

plugins/become/sudosu.py

@@ -8,9 +8,9 @@ DOCUMENTATION = """
name: sudosu
short_description: Run tasks using sudo su -
description:
- This become plugins allows your remote/login user to execute commands as another user via the C(sudo) and C(su) utilities combined.
- This become plugin allows your remote/login user to execute commands as another user via the C(sudo) and C(su) utilities combined.
author:
- Dag Wieers (@dagwieers)
version_added: 2.4.0
options:
become_user:


@@ -232,13 +232,13 @@ class CallbackModule(CallbackModule_default):
# Remove non-essential attributes
for attr in self.removed_attributes:
if attr in result:
del(result[attr])
del result[attr]
# Remove empty attributes (list, dict, str)
for attr in result.copy():
if isinstance(result[attr], (MutableSequence, MutableMapping, binary_type, text_type)):
if not result[attr]:
del(result[attr])
del result[attr]
def _handle_exceptions(self, result):
if 'exception' in result:


@@ -12,7 +12,7 @@ DOCUMENTATION = '''
type: notification
short_description: write playbook output to log file
description:
- This callback writes playbook output to a file per host in the `/var/log/ansible/hosts` directory
- This callback writes playbook output to a file per host in the C(/var/log/ansible/hosts) directory
requirements:
- Whitelist in configuration
- A writeable /var/log/ansible/hosts directory by the user executing Ansible on the controller


@@ -6,7 +6,7 @@ __metaclass__ = type
DOCUMENTATION = '''
name: loganalytics
type: aggregate
type: notification
short_description: Posts task results to Azure Log Analytics
author: "Cyrus Li (@zhcli) <cyrus1006@gmail.com>"
description:
@@ -153,7 +153,7 @@ class AzureLogAnalyticsSource(object):
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'loganalytics'
CALLBACK_NEEDS_WHITELIST = True


@@ -8,7 +8,7 @@ __metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
name: logdna
type: aggregate
type: notification
short_description: Sends playbook logs to LogDNA
description:
- This callback will report logs from playbook actions, tasks, and events to LogDNA (https://app.logdna.com)
@@ -110,7 +110,7 @@ def isJSONable(obj):
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 0.1
CALLBACK_TYPE = 'aggregate'
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logdna'
CALLBACK_NEEDS_WHITELIST = True


@@ -112,7 +112,7 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logstash'
CALLBACK_NEEDS_WHITELIST = True


@@ -94,13 +94,32 @@ try:
from opentelemetry.sdk.trace.export import (
BatchSpanProcessor
)
from opentelemetry.util._time import _time_ns
# Support for opentelemetry-api <= 1.12
try:
from opentelemetry.util._time import _time_ns
except ImportError as imp_exc:
OTEL_LIBRARY_TIME_NS_ERROR = imp_exc
else:
OTEL_LIBRARY_TIME_NS_ERROR = None
except ImportError as imp_exc:
OTEL_LIBRARY_IMPORT_ERROR = imp_exc
OTEL_LIBRARY_TIME_NS_ERROR = imp_exc
else:
OTEL_LIBRARY_IMPORT_ERROR = None
if sys.version_info >= (3, 7):
time_ns = time.time_ns
elif not OTEL_LIBRARY_TIME_NS_ERROR:
time_ns = _time_ns
else:
def time_ns():
# Support versions older than 3.7 with opentelemetry-api > 1.12
return int(time.time() * 1e9)
class TaskData:
"""
Data about an individual task.
@@ -112,10 +131,7 @@ class TaskData:
self.path = path
self.play = play
self.host_data = OrderedDict()
if sys.version_info >= (3, 7):
self.start = time.time_ns()
else:
self.start = _time_ns()
self.start = time_ns()
self.action = action
self.args = args
@@ -140,10 +156,7 @@ class HostData:
self.name = name
self.status = status
self.result = result
if sys.version_info >= (3, 7):
self.finish = time.time_ns()
else:
self.finish = _time_ns()
self.finish = time_ns()
class OpenTelemetrySource(object):
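The ``time_ns`` selection introduced above can be exercised on its own. This stdlib-only sketch mirrors the fallback chain for the case where opentelemetry's private ``_time_ns`` is unavailable (as with opentelemetry-api 1.13+); it is an illustration, not the plugin's exact code:

```python
import sys
import time

# Prefer the native nanosecond clock on Python >= 3.7; otherwise fall back
# to deriving nanoseconds from time.time(). (The real plugin tries
# opentelemetry.util._time._time_ns in between, for opentelemetry-api <= 1.12.)
if sys.version_info >= (3, 7):
    time_ns = time.time_ns
else:
    def time_ns():
        # Same nanoseconds-since-epoch semantics for older interpreters
        return int(time.time() * 1e9)
```

Either branch yields an integer timestamp with nanosecond resolution, so ``TaskData.start`` and ``HostData.finish`` stay comparable across environments.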


@@ -14,9 +14,9 @@ DOCUMENTATION = '''
- set as main display callback
short_description: only print certain tasks
description:
- This callback only prints tasks that have been tagged with `print_action` or that have failed.
- This callback only prints tasks that have been tagged with C(print_action) or that have failed.
This allows operators to focus on the tasks that provide value only.
- Tasks that are not printed are placed with a '.'.
- Tasks that are not printed are placed with a C(.).
- If you increase verbosity all tasks are printed.
options:
nocolor:


@@ -19,7 +19,7 @@ __metaclass__ = type
DOCUMENTATION = '''
name: splunk
type: aggregate
type: notification
short_description: Sends task result events to Splunk HTTP Event Collector
author: "Stuart Hirst (!UNKNOWN) <support@convergingdata.com>"
description:
@@ -176,7 +176,7 @@ class SplunkHTTPCollectorSource(object):
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.splunk'
CALLBACK_NEEDS_WHITELIST = True


@@ -19,7 +19,7 @@ __metaclass__ = type
DOCUMENTATION = '''
name: sumologic
type: aggregate
type: notification
short_description: Sends task result events to Sumologic
author: "Ryan Currah (@ryancurrah)"
description:
@@ -122,7 +122,7 @@ class SumologicHTTPCollectorSource(object):
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.sumologic'
CALLBACK_NEEDS_WHITELIST = True


@@ -70,7 +70,7 @@ class CallbackModule(CallbackBase):
""" """
CALLBACK_VERSION = 2.0 CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate' CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.syslog_json' CALLBACK_NAME = 'community.general.syslog_json'
CALLBACK_NEEDS_WHITELIST = True CALLBACK_NEEDS_WHITELIST = True


@@ -63,7 +63,7 @@ class Connection(ConnectionBase):
self.client = fc.Client(self.host)
return self
def exec_command(self, cmd, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
def exec_command(self, cmd, in_data=None, sudoable=True):
""" run a command on the remote minion """
if in_data:


@@ -18,6 +18,7 @@ DOCUMENTATION = '''
- Container identifier.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_lxd_host
executable:
@@ -61,7 +62,6 @@ class Connection(ConnectionBase):
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
self._host = self._play_context.remote_addr
try:
self._lxc_cmd = get_bin_path("lxc")
except ValueError:
@@ -75,14 +75,14 @@ class Connection(ConnectionBase):
super(Connection, self)._connect()
if not self._connected:
self._display.vvv(u"ESTABLISH LXD CONNECTION FOR USER: root", host=self._host)
self._display.vvv(u"ESTABLISH LXD CONNECTION FOR USER: root", host=self.get_option('remote_addr'))
self._connected = True
def exec_command(self, cmd, in_data=None, sudoable=True):
""" execute a command on the lxd host """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
self._display.vvv(u"EXEC {0}".format(cmd), host=self._host)
self._display.vvv(u"EXEC {0}".format(cmd), host=self.get_option('remote_addr'))
local_cmd = [self._lxc_cmd]
if self.get_option("project"):
@@ -104,10 +104,10 @@ class Connection(ConnectionBase):
stderr = to_text(stderr)
if stderr == "error: Container is not running.\n":
raise AnsibleConnectionFailure("container not running: %s" % self._host)
raise AnsibleConnectionFailure("container not running: %s" % self.get_option('remote_addr'))
if stderr == "error: not found\n":
raise AnsibleConnectionFailure("container not found: %s" % self._host)
raise AnsibleConnectionFailure("container not found: %s" % self.get_option('remote_addr'))
return process.returncode, stdout, stderr
@@ -115,7 +115,7 @@ class Connection(ConnectionBase):
""" put a file from local to lxd """ """ put a file from local to lxd """
super(Connection, self).put_file(in_path, out_path) super(Connection, self).put_file(in_path, out_path)
self._display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self._host) self._display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.get_option('remote_addr'))
if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')): if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("input path is not a file: %s" % in_path) raise AnsibleFileNotFound("input path is not a file: %s" % in_path)
@@ -138,7 +138,7 @@ class Connection(ConnectionBase):
""" fetch a file from lxd to local """ """ fetch a file from lxd to local """
super(Connection, self).fetch_file(in_path, out_path) super(Connection, self).fetch_file(in_path, out_path)
self._display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._host) self._display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.get_option('remote_addr'))
local_cmd = [self._lxc_cmd] local_cmd = [self._lxc_cmd]
if self.get_option("project"): if self.get_option("project"):


@@ -42,6 +42,7 @@ options:
- The path on which InfluxDB server is accessible
- Only available when using python-influxdb >= 5.1.0
type: str
default: ''
version_added: '0.2.0'
validate_certs:
description:
@@ -79,4 +80,5 @@ options:
description:
- HTTP(S) proxy to use for Requests to connect to InfluxDB server.
type: dict
default: {}
'''


@@ -22,6 +22,7 @@ options:
description:
- The password to use with I(bind_dn).
type: str
default: ''
dn:
required: true
description:
@@ -58,7 +59,7 @@ options:
sasl_class:
description:
- The class to use for SASL authentication.
- possible choices are C(external), C(gssapi).
- Possible choices are C(external), C(gssapi).
type: str
choices: ['external', 'gssapi']
default: external


@@ -16,6 +16,7 @@ options:
- Is needed for some modules - Is needed for some modules
type: dict type: dict
required: false required: false
default: {}
utm_host: utm_host:
description: description:
- The REST Endpoint of the Sophos UTM. - The REST Endpoint of the Sophos UTM.

View File

@@ -51,10 +51,16 @@ DOCUMENTATION = '''
type: boolean
default: false
requirements:
- jc (https://github.com/kellyjonbrazil/jc)
- jc installed as a Python library (U(https://pypi.org/project/jc/))
'''
EXAMPLES = '''
- name: Install the prereqs of the jc filter (jc Python package) on the Ansible controller
delegate_to: localhost
ansible.builtin.pip:
name: jc
state: present
- name: Run command
ansible.builtin.command: uname -a
register: result
@@ -107,15 +113,19 @@ def jc(data, parser, quiet=True, raw=False):
dictionary or list of dictionaries
Example:
- name: run date command
hosts: ubuntu
tasks:
- shell: date
- name: install the prereqs of the jc filter (jc Python package) on the Ansible controller
delegate_to: localhost
ansible.builtin.pip:
name: jc
state: present
- ansible.builtin.shell: date
register: result
- set_fact:
- ansible.builtin.set_fact:
myvar: "{{ result.stdout | community.general.jc('date') }}"
- debug:
- ansible.builtin.debug:
msg: "{{ myvar }}"
produces:
@@ -137,7 +147,7 @@ def jc(data, parser, quiet=True, raw=False):
""" """
if not HAS_LIB: if not HAS_LIB:
raise AnsibleError('You need to install "jc" prior to running jc filter') raise AnsibleError('You need to install "jc" as a Python library on the Ansible controller prior to running jc filter')
try: try:
jc_parser = importlib.import_module('jc.parsers.' + parser) jc_parser = importlib.import_module('jc.parsers.' + parser)
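The clarified error message matters because the filter resolves its parsers dynamically at call time; a minimal sketch of that lookup pattern (`load_parser` and the `package` parameter are illustrative helpers, not part of the plugin):

```python
import importlib


def load_parser(parser, package='jc.parsers'):
    # Resolve the parser module at call time, as the jc filter does;
    # a missing library surfaces as one clear error instead of a
    # confusing failure later on.
    try:
        return importlib.import_module('{0}.{1}'.format(package, parser))
    except ImportError as exc:
        raise RuntimeError(
            'You need to install "jc" as a Python library on the Ansible controller: {0}'.format(exc)
        )
```

With a stand-in package, `load_parser('decoder', package='json')` returns the `json.decoder` module, while a missing package raises the friendlier `RuntimeError`.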


@@ -400,12 +400,20 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
                     properties[parsed_key] = [tag.strip() for tag in stripped_value.split(",")]

                 # The first field in the agent string tells you whether the agent is enabled
-                # the rest of the comma separated string is extra config for the agent
-                if config == 'agent' and int(value.split(',')[0]):
-                    agent_iface_value = self._get_agent_network_interfaces(node, vmid, vmtype)
-                    if agent_iface_value:
-                        agent_iface_key = self.to_safe('%s%s' % (key, "_interfaces"))
-                        properties[agent_iface_key] = agent_iface_value
+                # the rest of the comma separated string is extra config for the agent.
+                # In some (newer versions of proxmox) instances it can be 'enabled=1'.
+                if config == 'agent':
+                    agent_enabled = 0
+                    try:
+                        agent_enabled = int(value.split(',')[0])
+                    except ValueError:
+                        if value.split(',')[0] == "enabled=1":
+                            agent_enabled = 1
+                    if agent_enabled:
+                        agent_iface_value = self._get_agent_network_interfaces(node, vmid, vmtype)
+                        if agent_iface_value:
+                            agent_iface_key = self.to_safe('%s%s' % (key, "_interfaces"))
+                            properties[agent_iface_key] = agent_iface_value

                 if config == 'lxc':
                     out_val = {}
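The new branch accepts both shapes the Proxmox API may return for the agent flag ("1"/"0" on older releases, "enabled=1" on newer ones). Pulled out as a standalone helper (the name `agent_enabled` is illustrative), the decision logic is:

```python
def agent_enabled(value):
    # First comma-separated field of the QEMU "agent" setting:
    # older Proxmox returns "1"/"0", newer releases may return "enabled=1".
    first = value.split(',')[0]
    try:
        return bool(int(first))
    except ValueError:
        return first == "enabled=1"
```

Anything that is neither an integer nor exactly `enabled=1` is treated as disabled, which mirrors the conservative default in the diff.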


@@ -16,7 +16,7 @@ DOCUMENTATION = '''
        The lookup order mirrors the one from Chef, all folders in the base path are walked back looking for the following configuration
        file in order : .chef/knife.rb, ~/.chef/knife.rb, /etc/chef/client.rb"
     requirements:
-        - "pychef (python library https://pychef.readthedocs.io `pip install pychef`)"
+        - "pychef (L(Python library, https://pychef.readthedocs.io), C(pip install pychef))"
     options:
       name:
         description:


@@ -122,6 +122,7 @@ class LookupModule(LookupBase):
                     "tenant": self.get_option("tenant"),
                     "client_id": self.get_option("client_id"),
                     "client_secret": self.get_option("client_secret"),
+                    "tld": self.get_option("tld"),
                     "url_template": self.get_option("url_template"),
                 }
             )


@@ -170,19 +170,29 @@ try:
     HAS_TSS_SDK = True
 except ImportError:
-    SecretServer = None
-    SecretServerError = None
-    HAS_TSS_SDK = False
+    try:
+        from delinea.secrets.server import SecretServer, SecretServerError
+
+        HAS_TSS_SDK = True
+    except ImportError:
+        SecretServer = None
+        SecretServerError = None
+        HAS_TSS_SDK = False

 try:
     from thycotic.secrets.server import PasswordGrantAuthorizer, DomainPasswordGrantAuthorizer, AccessTokenAuthorizer

     HAS_TSS_AUTHORIZER = True
 except ImportError:
-    PasswordGrantAuthorizer = None
-    DomainPasswordGrantAuthorizer = None
-    AccessTokenAuthorizer = None
-    HAS_TSS_AUTHORIZER = False
+    try:
+        from delinea.secrets.server import PasswordGrantAuthorizer, DomainPasswordGrantAuthorizer, AccessTokenAuthorizer
+
+        HAS_TSS_AUTHORIZER = True
+    except ImportError:
+        PasswordGrantAuthorizer = None
+        DomainPasswordGrantAuthorizer = None
+        AccessTokenAuthorizer = None
+        HAS_TSS_AUTHORIZER = False

 display = Display()
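The nested try/except gives the lookup a graceful fallback from one SDK namespace to the other (the Thycotic SDK was republished under the `delinea` name). The general pattern, sketched in isolation with the legacy name tried first:

```python
try:
    from thycotic.secrets.server import SecretServer  # legacy package name
    HAS_TSS_SDK = True
except ImportError:
    try:
        from delinea.secrets.server import SecretServer  # renamed successor
        HAS_TSS_SDK = True
    except ImportError:
        # Neither SDK is available: keep the name defined so later
        # references do not raise NameError.
        SecretServer = None
        HAS_TSS_SDK = False
```

Whichever branch runs, the invariant `SecretServer is None` exactly when `HAS_TSS_SDK` is false holds, so downstream code only needs one flag check.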


@@ -191,7 +191,7 @@ class CmdRunner(object):
             environ_update = {}
         self.environ_update = environ_update
-        self.command[0] = module.get_bin_path(command[0], opt_dirs=path_prefix, required=True)
+        self.command[0] = module.get_bin_path(self.command[0], opt_dirs=path_prefix, required=True)
         for mod_param_name, spec in iteritems(module.argument_spec):
             if mod_param_name not in self.arg_formats:
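The one-character fix suggests the constructor normalises its `command` argument into `self.command` before the binary gets resolved, so reading the raw `command[0]` could bypass that normalisation. A minimal sketch of the idea (class and names are illustrative, not the actual CmdRunner implementation):

```python
import shlex


class Runner:
    def __init__(self, command):
        # Accept either "git status --short" or ["git", "status", "--short"].
        # Because a string is split here, later code must read
        # self.command[0], not the raw `command` argument.
        if isinstance(command, str):
            self.command = shlex.split(command)
        else:
            self.command = list(command)
```

For a string command, `command[0]` would be the first *character* (`"g"`), while `self.command[0]` is the program name (`"git"`), which is what a `get_bin_path`-style lookup needs.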


@@ -13,10 +13,9 @@ from ansible.module_utils.common.text.converters import to_native
 from ansible_collections.community.general.plugins.module_utils.version import LooseVersion

 try:
-    from urllib import quote_plus  # Python 2.X
     from urlparse import urljoin
 except ImportError:
-    from urllib.parse import quote_plus, urljoin  # Python 3+
+    from urllib.parse import urljoin  # Python 3+

 import traceback
@@ -26,6 +25,7 @@ try:
     import requests
     HAS_GITLAB_PACKAGE = True
 except Exception:
+    gitlab = None
     GITLAB_IMP_ERR = traceback.format_exc()
     HAS_GITLAB_PACKAGE = False
@@ -63,6 +63,14 @@ def find_group(gitlab_instance, identifier):
     return project

+def ensure_gitlab_package(module):
+    if not HAS_GITLAB_PACKAGE:
+        module.fail_json(
+            msg=missing_required_lib("python-gitlab", url='https://python-gitlab.readthedocs.io/en/stable/'),
+            exception=GITLAB_IMP_ERR
+        )
+

 def gitlab_authentication(module):
     gitlab_url = module.params['api_url']
     validate_certs = module.params['validate_certs']
@@ -72,8 +80,7 @@ def gitlab_authentication(module):
     gitlab_oauth_token = module.params['api_oauth_token']
     gitlab_job_token = module.params['api_job_token']

-    if not HAS_GITLAB_PACKAGE:
-        module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
+    ensure_gitlab_package(module)

     try:
         # python-gitlab library remove support for username/password authentication since 1.13.0


@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-

 # Copyright (c) 2021-2022 Hewlett Packard Enterprise, Inc. All rights reserved.
-# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function
 __metaclass__ = type
@@ -91,7 +91,7 @@ class iLORedfishUtils(RedfishUtils):
             data = response['data']

         ntp_list = data[setkey]
-        if(len(ntp_list) == 2):
+        if len(ntp_list) == 2:
             ntp_list.pop(0)

         ntp_list.append(mgr_attributes['mgr_attr_value'])


@@ -78,7 +78,7 @@ def memset_api_call(api_key, api_method, payload=None):
         if msg is None:
             msg = response.json()

-    return(has_failed, msg, response)
+    return has_failed, msg, response


 def check_zone_domain(data, domain):
@@ -92,7 +92,7 @@ def check_zone_domain(data, domain):
             if zone_domain['domain'] == domain:
                 exists = True

-    return(exists)
+    return exists


 def check_zone(data, name):
@@ -109,7 +109,7 @@ def check_zone(data, name):
     if counter == 1:
         exists = True

-    return(exists, counter)
+    return exists, counter


 def get_zone_id(zone_name, current_zones):
@@ -135,4 +135,4 @@ def get_zone_id(zone_name, current_zones):
         zone_id = None
         msg = 'Zone ID could not be returned as duplicate zone names were detected'

-    return(zone_exists, msg, counter, zone_id)
+    return zone_exists, msg, counter, zone_id
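These changes (repeated across the memset modules below) are purely cosmetic: `return(x, y)` already returns the tuple `(x, y)`, so behaviour is unchanged and only the call-like spelling goes away, as a quick check confirms:

```python
def with_parens():
    return(True, 'msg')   # parentheses read like a function call, but are not one

def without_parens():
    return True, 'msg'    # the conventional PEP 8 spelling

# Both produce exactly the same tuple.
assert with_parens() == without_parens() == (True, 'msg')
```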


@@ -70,7 +70,7 @@ class ModuleHelper(DeprecateAttrsMixin, VarsMixin, DependencyMixin, ModuleHelper
         vars_diff = self.vars.diff() or {}
         result['diff'] = dict_merge(dict(diff), vars_diff)

-        for varname in result:
+        for varname in list(result):
             if varname in self._output_conflict_list:
                 result["_" + varname] = result[varname]
                 del result[varname]
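Wrapping the dict in `list()` matters because Python 3 raises `RuntimeError: dictionary changed size during iteration` when keys are added or deleted mid-loop; iterating over a snapshot of the keys makes the rename-and-delete safe. The fix in miniature:

```python
result = {'cmd': '/bin/true', 'changed': True}
conflicts = {'cmd'}

for varname in list(result):  # snapshot of the keys; plain `result` would blow up
    if varname in conflicts:
        result['_' + varname] = result[varname]
        del result[varname]

assert result == {'changed': True, '_cmd': '/bin/true'}
```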


@@ -122,8 +122,7 @@ def rax_find_image(module, rax_module, image, exit=True):
     except ValueError:
         try:
             image = cs.images.find(human_id=image)
-        except(cs.exceptions.NotFound,
-               cs.exceptions.NoUniqueMatch):
+        except (cs.exceptions.NotFound, cs.exceptions.NoUniqueMatch):
             try:
                 image = cs.images.find(name=image)
             except (cs.exceptions.NotFound,
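The reflowed `except (A, B):` form is the standard way to handle several exception types in one clause, as `rax_find_image` does for its two lookup failures; a self-contained check of the behaviour (the `classify` helper is illustrative):

```python
def classify(exc):
    try:
        raise exc
    except (KeyError, IndexError):  # one handler covers both lookup errors
        return 'lookup error'
    except Exception:
        return 'other'

assert classify(KeyError('k')) == 'lookup error'
assert classify(IndexError(0)) == 'lookup error'
assert classify(ValueError('v')) == 'other'
```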


@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-

 # Copyright (c) 2017-2018 Dell EMC Inc.
-# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

 from __future__ import absolute_import, division, print_function
 __metaclass__ = type
@@ -1883,14 +1883,13 @@ class RedfishUtils(object):
                 for property in properties:
                     if property in data:
                         chassis_power_result[property] = data[property]
-                else:
-                    return {'ret': False, 'msg': 'Key PowerControl not found.'}
                 chassis_power_results.append(chassis_power_result)
-            else:
-                return {'ret': False, 'msg': 'Key Power not found.'}

-        result['entries'] = chassis_power_results
-        return result
+        if len(chassis_power_results) > 0:
+            result['entries'] = chassis_power_results
+            return result
+        else:
+            return {'ret': False, 'msg': 'Power information not found.'}

     def get_chassis_thermals(self):
         result = {}
@@ -2056,7 +2055,7 @@ class RedfishUtils(object):
                 if property in data:
                     nic[property] = data[property]
         result['entries'] = nic
-        return(result)
+        return result

     def get_nic_inventory(self, resource_uri):
         result = {}


@@ -15,6 +15,7 @@ try:
     from redis import Redis
     from redis import __version__ as redis_version
     HAS_REDIS_PACKAGE = True
+    REDIS_IMP_ERR = None
 except ImportError:
     REDIS_IMP_ERR = traceback.format_exc()
     HAS_REDIS_PACKAGE = False
@@ -22,6 +23,7 @@ except ImportError:
 try:
     import certifi
     HAS_CERTIFI_PACKAGE = True
+    CERTIFI_IMPORT_ERROR = None
 except ImportError:
     CERTIFI_IMPORT_ERROR = traceback.format_exc()
     HAS_CERTIFI_PACKAGE = False
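Initialising the error variable to `None` in the success branch keeps the flag pair consistent, so later code can inspect the captured traceback unconditionally without risking a `NameError`. The pattern in isolation (`json` stands in for the optional dependency):

```python
import traceback

try:
    import json  # stand-in for the real optional dependency
    HAS_LIB = True
    IMP_ERR = None  # defined in both branches on purpose
except ImportError:
    IMP_ERR = traceback.format_exc()
    HAS_LIB = False

# The invariant later code relies on: exactly one of the two is "set".
assert (IMP_ERR is None) == HAS_LIB
```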


@@ -45,12 +45,12 @@ options:
     type: str
   image_id:
     description:
-      - Image ID used to launch instances. Required when C(state=present) and creating new ECS instances.
+      - Image ID used to launch instances. Required when I(state=present) and creating new ECS instances.
     aliases: ['image']
     type: str
   instance_type:
     description:
-      - Instance type used to launch instances. Required when C(state=present) and creating new ECS instances.
+      - Instance type used to launch instances. Required when I(state=present) and creating new ECS instances.
     aliases: ['type']
     type: str
   security_groups:
@@ -89,7 +89,7 @@ options:
   max_bandwidth_out:
     description:
       - Maximum outgoing bandwidth to the public network, measured in Mbps (Megabits per second).
-        Required when C(allocate_public_ip=True). Ignored when C(allocate_public_ip=False).
+        Required when I(allocate_public_ip=true). Ignored when I(allocate_public_ip=false).
     default: 0
     type: int
   host_name:
@@ -153,7 +153,7 @@ options:
     type: str
   period:
     description:
-      - The charge duration of the instance, in month. Required when C(instance_charge_type=PrePaid).
+      - The charge duration of the instance, in months. Required when I(instance_charge_type=PrePaid).
       - The valid value are [1-9, 12, 24, 36].
     default: 1
     type: int
@@ -164,7 +164,7 @@ options:
     default: False
   auto_renew_period:
     description:
-      - The duration of the automatic renew the charge of the instance. Required when C(auto_renew=True).
+      - The duration of the automatic renew the charge of the instance. Required when I(auto_renew=true).
     choices: [1, 2, 3, 6, 12]
     type: int
   instance_ids:
@@ -216,31 +216,31 @@ options:
     version_added: '0.2.0'
   spot_strategy:
     description:
       - The bidding mode of the pay-as-you-go instance. This parameter is valid when InstanceChargeType is set to PostPaid.
     choices: ['NoSpot', 'SpotWithPriceLimit', 'SpotAsPriceGo']
     default: 'NoSpot'
     type: str
     version_added: '0.2.0'
   period_unit:
     description:
-      - The duration unit that you will buy the resource. It is valid when C(instance_charge_type=PrePaid)
+      - The duration unit that you will buy the resource. It is valid when I(instance_charge_type=PrePaid).
     choices: ['Month', 'Week']
     default: 'Month'
     type: str
     version_added: '0.2.0'
   dry_run:
     description:
       - Specifies whether to send a dry-run request.
-      - If I(dry_run=True), Only a dry-run request is sent and no instance is created. The system checks whether the
+      - If I(dry_run=true), Only a dry-run request is sent and no instance is created. The system checks whether the
         required parameters are set, and validates the request format, service permissions, and available ECS instances.
         If the validation fails, the corresponding error code is returned. If the validation succeeds, the DryRunOperation error code is returned.
-      - If I(dry_run=False), A request is sent. If the validation succeeds, the instance is created.
+      - If I(dry_run=false), A request is sent. If the validation succeeds, the instance is created.
     default: False
     type: bool
     version_added: '0.2.0'
   include_data_disks:
     description:
       - Whether to change instance disks charge type when changing instance charge type.
     default: True
     type: bool
     version_added: '0.2.0'


@@ -60,6 +60,7 @@ options:
       - The values specified here will be used at installation time as --set arguments for atomic install.
     type: list
     elements: str
+    default: []
 '''

 EXAMPLES = r'''


@@ -44,6 +44,7 @@ options:
     description:
       - A description of the VLAN.
     type: str
+    default: ''
   network_domain:
     description:
       - The Id or name of the target network domain.
@@ -53,11 +54,13 @@ options:
     description:
       - The base address for the VLAN's IPv4 network (e.g. 192.168.1.0).
     type: str
+    default: ''
   private_ipv4_prefix_size:
     description:
       - The size of the IPv4 address space, e.g 24.
       - Required, if C(private_ipv4_base_address) is specified.
     type: int
+    default: 0
   state:
     description:
       - The desired state for the target VLAN.


@@ -33,6 +33,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -33,6 +33,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -33,6 +33,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -33,6 +33,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -34,6 +34,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -33,6 +33,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -33,6 +33,7 @@ options:
     description:
       - The timeouts for each operations.
     type: dict
+    default: {}
     suboptions:
       create:
         description:


@@ -36,6 +36,7 @@ options:
     description:
       - Add the instance to a Display Group in Linode Manager.
     type: str
+    default: ''
   linode_id:
     description:
       - Unique ID of a linode server. This value is read-only in the sense that


@@ -191,10 +191,10 @@ notes:
     2.1, the later requires python to be installed in the instance which can
     be done with the command module.
   - You can copy a file from the host to the instance
-    with the Ansible M(ansible.builtin.copy) and M(ansible.builtin.template) module and the `lxd` connection plugin.
+    with the Ansible M(ansible.builtin.copy) and M(ansible.builtin.template) module and the C(community.general.lxd) connection plugin.
     See the example below.
   - You can copy a file in the created instance to the localhost
-    with `command=lxc file pull instance_name/dir/filename filename`.
+    with C(command=lxc file pull instance_name/dir/filename filename).
     See the first example below.
 '''


@@ -111,7 +111,7 @@ def poll_reload_status(api_key=None, job_id=None, payload=None):
         memset_api = response.json()
         msg = None

-    return(memset_api, msg, stderr)
+    return memset_api, msg, stderr


 def reload_dns(args=None):
@@ -133,7 +133,7 @@ def reload_dns(args=None):
         retvals['failed'] = has_failed
         retvals['memset_api'] = response.json()
         retvals['msg'] = msg
-        return(retvals)
+        return retvals

     # set changed to true if the reload request was accepted.
     has_changed = True
@@ -153,7 +153,7 @@ def reload_dns(args=None):
         if val is not None:
             retvals[val] = eval(val)

-    return(retvals)
+    return retvals


 def main():


@@ -127,7 +127,7 @@ def get_facts(args=None):
         retvals['failed'] = has_failed
         retvals['msg'] = msg
         retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
-        return(retvals)
+        return retvals

     # we don't want to return the same thing twice
     msg = None
@@ -139,7 +139,7 @@ def get_facts(args=None):
         if val is not None:
             retvals[val] = eval(val)

-    return(retvals)
+    return retvals


 def main():


@@ -252,7 +252,7 @@ def get_facts(args=None):
         retvals['failed'] = has_failed
         retvals['msg'] = msg
         retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
-        return(retvals)
+        return retvals

     # we don't want to return the same thing twice
     msg = None
@@ -264,7 +264,7 @@ def get_facts(args=None):
         if val is not None:
             retvals[val] = eval(val)

-    return(retvals)
+    return retvals


 def main():


@@ -43,6 +43,7 @@ options:
       - The default TTL for all records created in the zone. This must be a
         valid int from U(https://www.memset.com/apidocs/methods_dns.html#dns.zone_create).
     type: int
+    default: 0
     choices: [ 0, 300, 600, 900, 1800, 3600, 7200, 10800, 21600, 43200, 86400 ]
   force:
     required: false
@@ -139,7 +140,7 @@ def check(args=None):
     retvals['changed'] = has_changed
     retvals['failed'] = has_failed

-    return(retvals)
+    return retvals


 def create_zone(args=None, zone_exists=None, payload=None):
@@ -185,7 +186,7 @@ def create_zone(args=None, zone_exists=None, payload=None):
         _has_failed, _msg, response = memset_api_call(api_key=args['api_key'], api_method=api_method, payload=payload)
         memset_api = response.json()

-    return(has_failed, has_changed, memset_api, msg)
+    return has_failed, has_changed, memset_api, msg


 def delete_zone(args=None, zone_exists=None, payload=None):
@@ -233,7 +234,7 @@ def delete_zone(args=None, zone_exists=None, payload=None):
     else:
         has_failed, has_changed = False, False

-    return(has_failed, has_changed, memset_api, msg)
+    return has_failed, has_changed, memset_api, msg


 def create_or_delete(args=None):
@@ -255,7 +256,7 @@ def create_or_delete(args=None):
         retvals['failed'] = _has_failed
         retvals['msg'] = _msg

-        return(retvals)
+        return retvals

     zone_exists, _msg, counter, _zone_id = get_zone_id(zone_name=args['name'], current_zones=response.json())
@@ -271,7 +272,7 @@ def create_or_delete(args=None):
         if val is not None:
             retvals[val] = eval(val)

-    return(retvals)
+    return retvals


 def main():


@@ -110,7 +110,7 @@ def check(args=None):
     retvals['changed'] = has_changed
     retvals['failed'] = has_failed

-    return(retvals)
+    return retvals


 def create_zone_domain(args=None, zone_exists=None, zone_id=None, payload=None):
@@ -138,7 +138,7 @@ def create_zone_domain(args=None, zone_exists=None, zone_id=None, payload=None):
     if not has_failed:
         has_changed = True

-    return(has_failed, has_changed, msg)
+    return has_failed, has_changed, msg


 def delete_zone_domain(args=None, payload=None):
@@ -165,7 +165,7 @@ def delete_zone_domain(args=None, payload=None):
         # unset msg as we don't want to return unnecessary info to the user.
         msg = None

-    return(has_failed, has_changed, memset_api, msg)
+    return has_failed, has_changed, memset_api, msg


 def create_or_delete_domain(args=None):
@@ -188,7 +188,7 @@ def create_or_delete_domain(args=None):
         retvals['failed'] = has_failed
         retvals['msg'] = msg
         retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
-        return(retvals)
+        return retvals

     zone_exists, msg, counter, zone_id = get_zone_id(zone_name=args['zone'], current_zones=response.json())
@@ -203,7 +203,7 @@ def create_or_delete_domain(args=None):
         retvals['failed'] = has_failed
         retvals['msg'] = stderr

-        return(retvals)
+        return retvals

     if args['state'] == 'present':
         has_failed, has_changed, msg = create_zone_domain(args=args, zone_exists=zone_exists, zone_id=zone_id, payload=payload)
@@ -217,7 +217,7 @@ def create_or_delete_domain(args=None):
         if val is not None:
             retvals[val] = eval(val)

-    return(retvals)
+    return retvals


 def main():


@@ -43,11 +43,13 @@ options:
     description:
       - C(SRV) and C(TXT) record priority, in the range 0 > 999 (inclusive).
     type: int
+    default: 0
   record:
     required: false
     description:
       - The subdomain to create.
     type: str
+    default: ''
   type:
     required: true
     description:
@@ -64,6 +66,7 @@ options:
     description:
       - The record's TTL in seconds (will inherit zone's TTL if not explicitly set). This must be a
         valid int from U(https://www.memset.com/apidocs/methods_dns.html#dns.zone_record_create).
+    default: 0
     choices: [ 0, 300, 600, 900, 1800, 3600, 7200, 10800, 21600, 43200, 86400 ]
     type: int
   zone:
@@ -221,7 +224,7 @@ def create_zone_record(args=None, zone_id=None, records=None, payload=None):
             # nothing to do; record is already correct so we populate
             # the return var with the existing record's details.
             memset_api = zone_record

-            return(has_changed, has_failed, memset_api, msg)
+            return has_changed, has_failed, memset_api, msg
         else:
             # merge dicts ensuring we change any updated values
             payload = zone_record.copy()
@@ -231,7 +234,7 @@ def create_zone_record(args=None, zone_id=None, records=None, payload=None):
                 has_changed = True
                 # return the new record to the user in the returned var.
                 memset_api = new_record

-            return(has_changed, has_failed, memset_api, msg)
+            return has_changed, has_failed, memset_api, msg

     has_failed, msg, response = memset_api_call(api_key=args['api_key'], api_method=api_method, payload=payload)
     if not has_failed:
         has_changed = True
@@ -246,7 +249,7 @@ def create_zone_record(args=None, zone_id=None, records=None, payload=None):
             has_changed = True
             # populate the return var with the new record's details.
             memset_api = new_record

-            return(has_changed, has_failed, memset_api, msg)
+            return has_changed, has_failed, memset_api, msg

     has_failed, msg, response = memset_api_call(api_key=args['api_key'], api_method=api_method, payload=payload)
     if not has_failed:
         has_changed = True
@@ -254,7 +257,7 @@ def create_zone_record(args=None, zone_id=None, records=None, payload=None):
         # empty msg as we don't want to return a boatload of json to the user.
         msg = None

-    return(has_changed, has_failed, memset_api, msg)
+    return has_changed, has_failed, memset_api, msg


 def delete_zone_record(args=None, records=None, payload=None):
@@ -270,7 +273,7 @@ def delete_zone_record(args=None, records=None, payload=None):
     for zone_record in records:
         if args['check_mode']:
             has_changed = True
-            return(has_changed, has_failed, memset_api, msg)
+            return has_changed, has_failed, memset_api, msg
         payload['id'] = zone_record['id']
         api_method = 'dns.zone_record_delete'
         has_failed, msg, response = memset_api_call(api_key=args['api_key'], api_method=api_method, payload=payload)
@@ -280,7 +283,7 @@ def delete_zone_record(args=None, records=None, payload=None):
         # empty msg as we don't want to return a boatload of json to the user.
         msg = None

-    return(has_changed, has_failed, memset_api, msg)
+    return has_changed, has_failed, memset_api, msg


 def create_or_delete(args=None):
@@ -304,7 +307,7 @@ def create_or_delete(args=None):
         retvals['failed'] = _has_failed
         retvals['msg'] = msg
         retvals['stderr'] = "API returned an error: {0}" . format(response.status_code)
-        return(retvals)
+        return retvals

     zone_exists, _msg, counter, zone_id = get_zone_id(zone_name=args['zone'], current_zones=response.json())
@@ -317,7 +320,7 @@ def create_or_delete(args=None):
         retvals['failed'] = has_failed
         retvals['msg'] = stderr
         retvals['stderr'] = stderr
-        return(retvals)
+        return retvals

     # get a list of all records ( as we can't limit records by zone)
     api_method = 'dns.zone_record_list'
@@ -339,7 +342,7 @@ def create_or_delete(args=None):
         if val is not None:
             retvals[val] = eval(val)
return(retvals) return retvals
def main(): def main():
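The change in these hunks is purely stylistic: in Python, `return` is a statement, not a function, so `return(x, y)` only works because `(x, y)` is a tuple expression, and the parentheses are noise. A minimal sketch with a hypothetical function (not the module's actual code):

```python
def create_record(existing_records, new_record):
    # `return` is a statement; the bare, unparenthesized form below is
    # the idiomatic way to return a tuple of results.
    has_changed = new_record not in existing_records
    has_failed = False
    return has_changed, has_failed, new_record

print(create_record([], {'type': 'A'}))  # -> (True, False, {'type': 'A'})
```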

View File

@@ -743,6 +743,8 @@ def main():
 module.fail_json(msg="restarting of VM %s failed with exception: %s" % (vmid, e))
 elif state == 'absent':
+if not vmid:
+module.exit_json(changed=False, msg='VM with hostname = %s is already absent' % hostname)
 try:
 vm = proxmox.get_vm(vmid, ignore_missing=True)
 if not vm:
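The two added lines give the `absent` state an early exit when no vmid could be resolved, so deletion stays idempotent instead of failing on a missing VM. A hedged sketch of the control flow (a simplified stand-in, not the proxmox module itself):

```python
def ensure_absent(vmid, hostname, existing_vms):
    # Early exit: nothing matched, so deleting is a no-op (changed=False).
    if not vmid:
        return {'changed': False,
                'msg': 'VM with hostname = %s is already absent' % hostname}
    if vmid in existing_vms:
        existing_vms.remove(vmid)
        return {'changed': True, 'msg': 'VM %s removed' % vmid}
    return {'changed': False, 'msg': 'VM %s is already absent' % vmid}
```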

View File

@@ -1370,6 +1370,8 @@ def main():
 elif state == 'absent':
 status = {}
+if not vmid:
+module.exit_json(changed=False, msg='VM with name = %s is already absent' % name)
 try:
 vm = proxmox.get_vm(vmid, ignore_missing=True)
 if not vm:

View File

@@ -53,6 +53,7 @@ options:
 description:
 - The RHEV/oVirt cluster in which you want you VM to start.
 type: str
+default: ''
 datacenter:
 description:
 - The RHEV/oVirt datacenter in which you want you VM to start.
@@ -1252,7 +1253,6 @@ def setChanged():
 def setMsg(message):
-global failed
 msg.append(message)

View File

@@ -34,6 +34,7 @@ options:
 - The name of the serverless framework project stage to deploy to.
 - This uses the serverless framework default "dev".
 type: str
+default: ''
 functions:
 description:
 - A list of specific functions to deploy.
@@ -41,12 +42,12 @@ options:
 - Deprecated parameter, it will be removed in community.general 5.0.0.
 type: list
 elements: str
+default: []
 region:
 description:
 - AWS region to deploy the service to.
 - This parameter defaults to C(us-east-1).
 type: str
+default: ''
 deploy:
 description:
 - Whether or not to deploy artifacts after building them.
@@ -115,7 +116,7 @@ state:
 returned: always
 command:
 type: str
-description: Full `serverless` command run by this module, in case you want to re-run the command outside the module.
+description: Full C(serverless) command run by this module, in case you want to re-run the command outside the module.
 returned: always
 sample: serverless deploy --stage production
 '''

View File

@@ -67,7 +67,7 @@ options:
 state_file:
 description:
 - The path to an existing Terraform state file to use when building plan.
-If this is not specified, the default `terraform.tfstate` will be used.
+If this is not specified, the default C(terraform.tfstate) will be used.
 - This option is ignored when plan is specified.
 type: path
 variables_files:
@@ -89,6 +89,7 @@ options:
 resources selected here will also auto-include any dependencies.
 type: list
 elements: str
+default: []
 lock:
 description:
 - Enable statefile locking, if you use a service that accepts locks (such
@@ -103,7 +104,7 @@ options:
 force_init:
 description:
 - To avoid duplicating infra, if a state file can't be found this will
-force a `terraform init`. Generally, this should be turned off unless
+force a C(terraform init). Generally, this should be turned off unless
 you intend to provision an entirely new Terraform deployment.
 default: false
 type: bool
@@ -149,7 +150,7 @@ options:
 type: int
 version_added: '3.8.0'
 notes:
-- To just run a `terraform plan`, use check mode.
+- To just run a C(terraform plan), use check mode.
 requirements: [ "terraform" ]
 author: "Ryan Scott Brown (@ryansb)"
 '''
@@ -205,7 +206,7 @@ EXAMPLES = """
 RETURN = """
 outputs:
 type: complex
-description: A dictionary of all the TF outputs by their assigned name. Use `.outputs.MyOutputName.value` to access the value.
+description: A dictionary of all the TF outputs by their assigned name. Use C(.outputs.MyOutputName.value) to access the value.
 returned: on success
 sample: '{"bukkit_arn": {"sensitive": false, "type": "string", "value": "arn:aws:s3:::tf-test-bukkit"}'
 contains:
@@ -223,12 +224,12 @@ outputs:
 description: The value of the output as interpolated by Terraform
 stdout:
 type: str
-description: Full `terraform` command stdout, in case you want to display it or examine the event log
+description: Full C(terraform) command stdout, in case you want to display it or examine the event log
 returned: always
 sample: ''
 command:
 type: str
-description: Full `terraform` command built by this module, in case you want to re-run the command outside the module or debug a problem.
+description: Full C(terraform) command built by this module, in case you want to re-run the command outside the module or debug a problem.
 returned: always
 sample: terraform apply ...
 """
@@ -507,9 +508,9 @@ def main():
 outputs_command = [command[0], 'output', '-no-color', '-json'] + _state_args(state_file)
 rc, outputs_text, outputs_err = module.run_command(outputs_command, cwd=project_path)
+outputs = {}
 if rc == 1:
 module.warn("Could not get Terraform outputs. This usually means none have been defined.\nstdout: {0}\nstderr: {1}".format(outputs_text, outputs_err))
-outputs = {}
 elif rc != 0:
 module.fail_json(
 msg="Failure when getting Terraform outputs. "
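Moving `outputs = {}` above the branch binds the name on every path (warn, fail, or success) rather than only in the `rc == 1` case. A condensed sketch of the same shape, with illustrative names rather than the module's real code:

```python
import json

def collect_outputs(rc, stdout):
    outputs = {}  # bound before branching, so every path below is safe
    if rc == 1:
        pass  # no outputs defined: keep the empty dict and warn
    elif rc != 0:
        raise RuntimeError("Failure when getting Terraform outputs")
    else:
        outputs = json.loads(stdout)
    return outputs
```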

View File

@@ -161,9 +161,7 @@ def get_srs(session):
 def main():
-module = AnsibleModule(
-supports_check_mode=True,
-)
+module = AnsibleModule({}, supports_check_mode=True)
 if not HAVE_XENAPI:
 module.fail_json(changed=False, msg="python xen api required for this module")

View File

@@ -59,6 +59,7 @@ options:
 (port_from, port_to, and source)
 type: list
 elements: dict
+default: []
 add_server_ips:
 description:
 - A list of server identifiers (id or name) to be assigned to a firewall policy.
@@ -66,12 +67,14 @@ options:
 type: list
 elements: str
 required: false
+default: []
 remove_server_ips:
 description:
 - A list of server IP ids to be unassigned from a firewall policy. Used in combination with update state.
 type: list
 elements: str
 required: false
+default: []
 add_rules:
 description:
 - A list of rules that will be added to an existing firewall policy.
@@ -79,12 +82,14 @@ options:
 type: list
 elements: dict
 required: false
+default: []
 remove_rules:
 description:
 - A list of rule ids that will be removed from an existing firewall policy. Used in combination with update state.
 type: list
 elements: str
 required: false
+default: []
 description:
 description:
 - Firewall policy description. maxLength=256

View File

@@ -97,6 +97,7 @@ options:
 port_balancer, and port_server parameters, in addition to source parameter, which is optional.
 type: list
 elements: dict
+default: []
 description:
 description:
 - Description of the load balancer. maxLength=256
@@ -109,12 +110,14 @@ options:
 type: list
 elements: str
 required: false
+default: []
 remove_server_ips:
 description:
 - A list of server IP ids to be unassigned from a load balancer. Used in combination with update state.
 type: list
 elements: str
 required: false
+default: []
 add_rules:
 description:
 - A list of rules that will be added to an existing load balancer.
@@ -122,12 +125,14 @@ options:
 type: list
 elements: dict
 required: false
+default: []
 remove_rules:
 description:
 - A list of rule ids that will be removed from an existing load balancer. Used in combination with update state.
 type: list
 elements: str
 required: false
+default: []
 wait:
 description:
 - wait for the instance to be in state 'running' before returning

View File

@@ -73,6 +73,7 @@ options:
 and value is used to advise when the value is exceeded.
 type: list
 elements: dict
+default: []
 suboptions:
 cpu:
 description:
@@ -99,6 +100,7 @@ options:
 - Array of ports that will be monitoring.
 type: list
 elements: dict
+default: []
 suboptions:
 protocol:
 description:
@@ -123,6 +125,7 @@ options:
 - Array of processes that will be monitoring.
 type: list
 elements: dict
+default: []
 suboptions:
 process:
 description:
@@ -139,48 +142,56 @@ options:
 type: list
 elements: dict
 required: false
+default: []
 add_processes:
 description:
 - Processes to add to the monitoring policy.
 type: list
 elements: dict
 required: false
+default: []
 add_servers:
 description:
 - Servers to add to the monitoring policy.
 type: list
 elements: str
 required: false
+default: []
 remove_ports:
 description:
 - Ports to remove from the monitoring policy.
 type: list
 elements: str
 required: false
+default: []
 remove_processes:
 description:
 - Processes to remove from the monitoring policy.
 type: list
 elements: str
 required: false
+default: []
 remove_servers:
 description:
 - Servers to remove from the monitoring policy.
 type: list
 elements: str
 required: false
+default: []
 update_ports:
 description:
 - Ports to be updated on the monitoring policy.
 type: list
 elements: dict
 required: false
+default: []
 update_processes:
 description:
 - Processes to be updated on the monitoring policy.
 type: list
 elements: dict
 required: false
+default: []
 wait:
 description:
 - wait for the instance to be in state 'running' before returning

View File

@@ -73,11 +73,13 @@ options:
 - List of server identifiers (name or id) to be added to the private network.
 type: list
 elements: str
+default: []
 remove_members:
 description:
 - List of server identifiers (name or id) to be removed from the private network.
 type: list
 elements: str
+default: []
 wait:
 description:
 - wait for the instance to be in state 'running' before returning

View File

@@ -346,7 +346,7 @@ def get_connection_info(module):
 if not password:
 password = os.environ.get('ONE_PASSWORD')
-if not(url and username and password):
+if not (url and username and password):
 module.fail_json(msg="One or more connection parameters (api_url, api_username, api_password) were not specified")
 from collections import namedtuple

View File

@@ -240,7 +240,7 @@ def get_connection_info(module):
 if not password:
 password = os.environ.get('ONE_PASSWORD')
-if not(url and username and password):
+if not (url and username and password):
 module.fail_json(msg="One or more connection parameters (api_url, api_username, api_password) were not specified")
 from collections import namedtuple

View File

@@ -660,7 +660,7 @@ def get_connection_info(module):
 if not password:
 password = os.environ.get('ONEFLOW_PASSWORD')
-if not(url and username and password):
+if not (url and username and password):
 module.fail_json(msg="One or more connection parameters (api_url, api_username, api_password) were not specified")
 from collections import namedtuple

View File

@@ -136,6 +136,7 @@ options:
 - URL of custom iPXE script for provisioning.
 - More about custom iPXE for Packet devices at U(https://help.packet.net/technical/infrastructure/custom-ipxe).
 type: str
+default: ''
 always_pxe:
 description:

View File

@@ -216,7 +216,7 @@ def parse_subnet_cidr(cidr):
 try:
 prefixlen = int(prefixlen)
 except ValueError:
-raise("Wrong prefix length in CIDR expression {0}".format(cidr))
+raise Exception("Wrong prefix length in CIDR expression {0}".format(cidr))
 return addr, prefixlen
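This one is a real bug fix, not style: in Python 3 `raise` only accepts `BaseException` instances, so `raise("message")` tries to raise a plain string and itself fails with `TypeError: exceptions must derive from BaseException`. A self-contained sketch of the corrected helper (simplified from the hunk above):

```python
def parse_subnet_cidr(cidr):
    addr, _, prefixlen = cidr.partition('/')
    try:
        prefixlen = int(prefixlen)
    except ValueError:
        # Raising an Exception instance is valid; raising a bare str is not.
        raise Exception("Wrong prefix length in CIDR expression {0}".format(cidr))
    return addr, prefixlen

print(parse_subnet_cidr("10.0.0.0/24"))  # -> ('10.0.0.0', 24)
```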

View File

@@ -37,6 +37,7 @@ options:
 - Public SSH keys allowing access to the virtual machine.
 type: list
 elements: str
+default: []
 datacenter:
 description:
 - The datacenter to provision this virtual machine.
@@ -73,6 +74,7 @@ options:
 - list of instance ids, currently only used when state='absent' to remove instances.
 type: list
 elements: str
+default: []
 count:
 description:
 - The number of virtual machines to create.

View File

@@ -183,7 +183,7 @@ def remove_datacenter(module, profitbricks):
 name = module.params.get('name')
 changed = False
-if(uuid_match.match(name)):
+if uuid_match.match(name):
 _remove_datacenter(module, profitbricks, name)
 changed = True
 else:

View File

@@ -49,7 +49,7 @@ options:
 - Public SSH keys allowing access to the virtual machine.
 type: list
 elements: str
-required: false
+default: []
 disk_type:
 description:
 - The disk type of the volume.
@@ -80,7 +80,7 @@ options:
 - list of instance ids, currently only used when state='absent' to remove instances.
 type: list
 elements: str
-required: false
+default: []
 subscription_user:
 description:
 - The ProfitBricks username. Overrides the PB_SUBSCRIPTION_ID environment variable.
@@ -324,7 +324,7 @@ def delete_volume(module, profitbricks):
 break
 for n in instance_ids:
-if(uuid_match.match(n)):
+if uuid_match.match(n):
 _delete_volume(module, profitbricks, datacenter, n)
 changed = True
 else:

View File

@@ -36,6 +36,7 @@ options:
 same play)."
 required: false
 type: str
+default: ''
 password:
 description:
 - Password which match to account to which specified C(email) belong.
@@ -43,6 +44,7 @@ options:
 same play)."
 required: false
 type: str
+default: ''
 cache:
 description: >
 In case if single play use blocks management module few times it is
@@ -57,7 +59,7 @@ options:
 manage blocks."
 - "User's account will be used if value not set or empty."
 type: str
-required: false
+default: ''
 application:
 description:
 - "Name of target PubNub application for which blocks configuration on

View File

@@ -81,17 +81,20 @@ options:
 default: 'no'
 extra_client_args:
 type: dict
+default: {}
 description:
 - A hash of key/value pairs to be used when creating the cloudservers
 client. This is considered an advanced option, use it wisely and
 with caution.
 extra_create_args:
 type: dict
+default: {}
 description:
 - A hash of key/value pairs to be used when creating a new server.
 This is considered an advanced option, use it wisely and with caution.
 files:
 type: dict
+default: {}
 description:
 - Files to insert into the instance. remotefilename:localcontent
 flavor:
@@ -123,6 +126,7 @@ options:
 - keypair
 meta:
 type: dict
+default: {}
 description:
 - A hash of metadata to associate with the instance
 name:

View File

@@ -25,6 +25,7 @@ options:
 C(name). This option requires C(pyrax>=1.9.3)
 meta:
 type: dict
+default: {}
 description:
 - A hash of metadata to associate with the volume
 name:
name: name:

View File

@@ -27,6 +27,7 @@ options:
 default: LEAST_CONNECTIONS
 meta:
 type: dict
+default: {}
 description:
 - A hash of metadata to associate with the instance
 name:
name: name:

View File

@@ -252,7 +252,8 @@ def main():
 'weight': weight,
 }
-for name, value in mutable.items():
+for name in list(mutable):
+value = mutable[name]
 if value is None or value == getattr(node, name):
 mutable.pop(name)
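The rewrite avoids mutating a dict while iterating over its live `.items()` view, which raises `RuntimeError: dictionary changed size during iteration` on Python 3; `list(mutable)` snapshots the keys first so `.pop()` is safe. The same pattern illustrated with hypothetical data:

```python
mutable = {'address': None, 'condition': 'ENABLED', 'weight': 10}
node = {'address': '10.0.0.1', 'condition': 'ENABLED', 'weight': 5}

# Iterate over a snapshot of the keys so .pop() cannot invalidate the loop.
for name in list(mutable):
    value = mutable[name]
    if value is None or value == node.get(name):
        mutable.pop(name)

# Only the attributes that actually need updating remain.
print(mutable)  # -> {'weight': 10}
```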

View File

@@ -27,6 +27,7 @@ options:
 - The container to use for container or metadata operations.
 meta:
 type: dict
+default: {}
 description:
 - A hash of items to set as metadata values on a container
 private:

View File

@@ -39,6 +39,7 @@ options:
 Requires an integer, specifying expiration in seconds
 meta:
 type: dict
+default: {}
 description:
 - A hash of items to set as metadata values on an uploaded file or folder
 method:

View File

@@ -29,6 +29,7 @@ options:
 - Server name to modify metadata for
 meta:
 type: dict
+default: {}
 description:
 - A hash of metadata to associate with the instance
 author: "Matt Martz (@sivel)"

View File

@@ -75,17 +75,18 @@ options:
 target_hostname:
 type: str
 description:
-- One of `target_hostname` and `target_alias` is required for remote.* checks,
+- One of I(target_hostname) and I(target_alias) is required for remote.* checks,
 but prohibited for agent.* checks. The hostname this check should target.
 Must be a valid IPv4, IPv6, or FQDN.
 target_alias:
 type: str
 description:
-- One of `target_alias` and `target_hostname` is required for remote.* checks,
+- One of I(target_alias) and I(target_hostname) is required for remote.* checks,
 but prohibited for agent.* checks. Use the corresponding key in the entity's
-`ip_addresses` hash to resolve an IP address to target.
+I(ip_addresses) hash to resolve an IP address to target.
 details:
 type: dict
+default: {}
 description:
 - Additional details specific to the check type. Must be a hash of strings
 between 1 and 255 characters long, or an array or object containing 0 to
@@ -97,6 +98,7 @@ options:
 default: false
 metadata:
 type: dict
+default: {}
 description:
 - Hash of arbitrary key-value pairs to accompany this check if it fires.
 Keys and values must be strings between 1 and 255 characters long.

View File

@@ -37,6 +37,7 @@ options:
 bound. Necessary to collect C(agent.) rax_mon_checks against this entity.
 named_ip_addresses:
 type: dict
+default: {}
 description:
 - Hash of IP addresses that may be referenced by name by rax_mon_checks
 added to this entity. Must be a dictionary of with keys that are names
@@ -44,6 +45,7 @@ options:
 addresses.
 metadata:
 type: dict
+default: {}
 description:
 - Hash of arbitrary C(name), C(value) pairs that are passed to associated
 rax_mon_alarms. Names and values must all be between 1 and 255 characters

View File

@@ -36,6 +36,7 @@ options:
 - manual
 files:
 type: dict
+default: {}
 description:
 - 'Files to insert into the instance. Hash of C(remotepath: localpath)'
 flavor:
@@ -65,6 +66,7 @@ options:
 required: true
 meta:
 type: dict
+default: {}
 description:
 - A hash of metadata to associate with the instance
 min_entities:

View File

@@ -65,6 +65,7 @@ options:
 tags:
 type: list
 elements: str
+default: []
 description:
 - List of tags to apply to the load-balancer

View File

@@ -143,6 +143,7 @@ except ImportError:
 IPADDRESS_IMP_ERR = traceback.format_exc()
 HAS_IPADDRESS = False
 else:
+IPADDRESS_IMP_ERR = None
 HAS_IPADDRESS = True
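The added line makes the try/except import guard symmetric: the error holder is now bound on the success path too, so later code can check it without risking a `NameError`. The pattern in isolation, using a stdlib module as a stand-in for the optional dependency:

```python
import traceback

try:
    import json  # stand-in for the optional third-party dependency
except ImportError:
    JSON_IMP_ERR = traceback.format_exc()
    HAS_JSON = False
else:
    JSON_IMP_ERR = None  # bound on both paths, so callers can always test it
    HAS_JSON = True

print(HAS_JSON, JSON_IMP_ERR)  # -> True None
```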

View File

@@ -35,7 +35,7 @@ options:
 user_data:
 type: dict
 description:
-- User defined data. Typically used with `cloud-init`.
+- User defined data. Typically used with C(cloud-init).
 - Pass your cloud-init script here as a string
 required: false

View File

@@ -142,6 +142,7 @@ options:
 - List of ssh keys by their Id to be assigned to a virtual instance.
 type: list
 elements: str
+default: []
 post_uri:
 description:
 - URL of a post provisioning script to be loaded and executed on virtual instance.

Some files were not shown because too many files have changed in this diff.