Compare commits


55 Commits
2.5.5 ... 2.5.7

Author SHA1 Message Date
Felix Fontein
ebd4c4146e Release 2.5.7. 2021-11-09 06:37:15 +01:00
patchback[bot]
43269c9255 Better handling of base64-encoded values in xattr module (#3675) (#3676)
* Fix exception in xattr module when an existing extended attribute's value contains non-printable characters and the base64-encoded string contains a '=' sign

* Added changelog fragment for #3675

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 2f0ae0408d)

Co-authored-by: sc-anssi <sc-anssi@users.noreply.github.com>
2021-11-09 06:28:35 +01:00
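The parsing pitfall behind this xattr fix can be sketched in a few lines (illustrative only, not the module's actual code): getfattr renders binary values as `name=0s<base64>`, and base64 padding ends in `=`, so splitting the whole line on `=` truncates the value.

```python
import base64

# getfattr-style output line; base64 padding ends with '=' characters
line = 'user.test=0s' + base64.b64encode(b'\x00\x01').decode()  # 'user.test=0sAAE='

naive = line.split('=')        # padding '=' produces a spurious empty field
assert naive == ['user.test', '0sAAE', '']

name, value = line.split('=', 1)   # split only on the first '=' instead
assert value == '0sAAE='
assert base64.b64decode(value[2:]) == b'\x00\x01'
```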
David Moreau Simard
847f8b9e21 Fix urpmi typo in changelog (#3659)
The module is urpmi, not urmpi.

(cherry picked from commit 58a5463ddb)
2021-11-02 19:09:43 +01:00
patchback[bot]
2241db8a74 Fixed - TypeError: unexpected keyword argument (#3649) (#3653)
* Fixed - TypeError: unexpected keyword argument

- The file proxmox_group_info.py raised the error "TypeError:
  get_group() got an unexpected keyword argument 'group'" whenever a
  group parameter was used.
  The issue was an argument naming conflict. After renaming the argument
  to 'groupid', as used in the method ProxmoxGroupInfoAnsible::get_group,
  looking up a Proxmox group by name now works.

* Changelog fragment added for #3649

changelog fragment for TypeError: unexpected keyword argument #3649

* Update changelogs/fragments/3649-proxmox_group_info_TypeError.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0df41241dd)

Co-authored-by: hklausing <hklausing@users.noreply.github.com>
2021-10-31 20:35:34 +01:00
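The naming clash described in this commit reduces to a minimal reproduction (simplified, not the module's real code): the call site passed `group=` to a method whose parameter is named `groupid`.

```python
# Stand-in for the real class; only the parameter name matters here.
class ProxmoxGroupInfoAnsible:
    def get_group(self, groupid):
        return {'groupid': groupid}

proxmox = ProxmoxGroupInfoAnsible()

try:
    proxmox.get_group(group='admins')   # the old call site
    raised = False
except TypeError:                       # unexpected keyword argument 'group'
    raised = True
assert raised

# Renaming the keyword to match the signature fixes the crash.
assert proxmox.get_group(groupid='admins') == {'groupid': 'admins'}
```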
patchback[bot]
bed6520d27 provide more fitting description for runner timeout (#3624) (#3644)
* provide more fitting description for runner timeout

* Update plugins/modules/source_control/gitlab/gitlab_runner.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Tim Herren <tim.herren@gmx.ch>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 96de25fc94)

Co-authored-by: nerrehmit <accounts+github@herren.id>
2021-10-30 17:04:13 +02:00
patchback[bot]
08c3bbe201 Fix CI (#3637) (#3640)
* Replace yaml.load with yaml.safe_load in unit tests.

* Remove no longer needed loader arg in two instances.

(cherry picked from commit 753df78877)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-10-30 11:22:13 +02:00
Felix Fontein
de58d446f2 Announce that this is the last regular bugfix release. 2021-10-30 08:43:30 +02:00
patchback[bot]
9e19286a9f gitlab_project_members: improve project name matching (#3602) (#3635)
* Update gitlab_project_members.py

The current search method does not accept a path with a namespace for project_name. If there are many projects with the same name, the gitlab_project_members module cannot work.

* Update gitlab_project_members.py

* Update gitlab_project_members.py

* Update gitlab_project_members.py

* Create 3602-fix-gitlab_project_members-improve-search-method

* Rename 3602-fix-gitlab_project_members-improve-search-method to 3602-fix-gitlab_project_members-improve-search-method.yml

(cherry picked from commit cdfc4dcf49)

Co-authored-by: paytroff <93038288+paytroff@users.noreply.github.com>
2021-10-30 08:31:53 +02:00
Felix Fontein
f4d16549de Prepare 2.5.7 release. 2021-10-22 08:30:57 +02:00
patchback[bot]
a3c597425d Redfish: perform manager network interface configuration even if property is missing (#3582) (#3590)
* Redfish: perform manager network interface configuration even if property is missing

Signed-off-by: Mike Raineri <michael.raineri@dell.com>

* Update changelogs/fragments/3404-redfish_utils-skip-manager-network-check.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6580e7559b)

Co-authored-by: Mike Raineri <michael.raineri@dell.com>
2021-10-20 17:49:01 +02:00
patchback[bot]
56bbbca2ce Fixed typo in homebrew documentation (#3577) (#3584)
Fixed typo in community.general.homebrew documentation

(cherry picked from commit 02c534bb8e)

Co-authored-by: Premkumar Subramanian <prem_x87@outlook.com>
2021-10-18 23:07:10 +02:00
patchback[bot]
dce582c64c Remove non-working example. (#3571) (#3586)
(cherry picked from commit e8c37ca605)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-10-18 23:04:08 +02:00
patchback[bot]
ecca4cedea Misc doc issues (#3572) (#3579)
* Add names to tasks in oneview module examples

* Fix task name in github_webhook module example

* Fix trailing whitespace

* Add changelog fragment

* Remove changelog fragment

(cherry picked from commit 3731064368)

Co-authored-by: Vitaly Khabarov <vitkhab@users.noreply.github.com>
2021-10-18 14:09:29 +02:00
patchback[bot]
dc69b40681 Use correct FQCN. (#3573) (#3574)
(cherry picked from commit c3813d4533)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-10-17 18:25:48 +02:00
patchback[bot]
1553b77a1d Fix bug with returning results in IPA role (#3561) (#3567)
* Fix bug with returning results in IPA role

Fix #3560

* Add changelog

* Fix typo in changelog

* Update changelogs/fragments/3561-fix-ipa-host-var-detection.yml

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Update changelogs/fragments/3561-fix-ipa-host-var-detection.yml

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Update changelogs/fragments/3561-fix-ipa-host-var-detection.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 191d2e08bb)

Co-authored-by: Sergey <sshnaidm@users.noreply.github.com>
2021-10-16 21:08:06 +02:00
patchback[bot]
c7c39e598d Bugfix issue2692 logstash callbackmodule with no attribute options (#3530) (#3549)
* Update logstash.py

replacing _options with context.cliargs

* Create 2692-logstash-callback-plugin-replacing_options

logstash callback plugin replace _option with context.CLIARGS

* Rename 2692-logstash-callback-plugin-replacing_options to 2692-logstash-callback-plugin-replacing_options.yml

missed out the extension

* Update logstash.py

context imported

* Update 2692-logstash-callback-plugin-replacing_options.yml

dict to string

* Update changelogs/fragments/2692-logstash-callback-plugin-replacing_options.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7038812465)

Co-authored-by: Anand Victor <anandvict@gmail.com>
2021-10-12 06:50:28 +02:00
patchback[bot]
c364e66114 Fix shellcheck error. (#3531) (#3533)
(cherry picked from commit d1f820ed06)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-10-08 16:22:24 +02:00
patchback[bot]
31d62c5947 Fix: gitlab_deploy_key idempotency (#3473) (#3511)
* Fix: gitlab_deploy_key idempotency

The module was not retrieving all the deploy keys, leading to
non-idempotency on projects with multiple deploy keys.
SEE: https://python-gitlab.readthedocs.io/en/stable/api-usage.html#pagination

* Update changelogs/fragments/3473-gitlab_deploy_key-fix_idempotency.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Jonathan Piron <jonathanpiron@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0bc4518f3b)

Co-authored-by: Jonathan Piron <jonathan@piron.at>
2021-10-04 21:41:59 +02:00
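The pagination pitfall behind this fix can be sketched with a toy API (hypothetical helper names; with python-gitlab the equivalent is requesting all pages rather than only the first): an endpoint that returns at most `per_page` items per call only reveals every deploy key when the caller walks all pages.

```python
# Toy paginated API: returns one page of results per call.
def fetch_page(items, page, per_page=2):
    start = (page - 1) * per_page
    return items[start:start + per_page]

keys = ['key1', 'key2', 'key3', 'key4', 'key5']

first_page_only = fetch_page(keys, page=1)   # what the buggy module saw
assert first_page_only == ['key1', 'key2']   # key3..key5 are missed

# Walking every page recovers the full key list, restoring idempotency.
all_keys = []
page = 1
while True:
    chunk = fetch_page(keys, page)
    if not chunk:
        break
    all_keys.extend(chunk)
    page += 1
assert all_keys == keys
```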
patchback[bot]
e4212472e2 Fix OSX 10.11 CI runs (#3501) (#3503)
* Restrict to OSX 10.11 tests.

* See whether updating brew helps.

* Skip archive task for OSX.

* Refactor homebrew task to make changing the package name easier.

* Revert "See whether updating brew helps."

This reverts commit 8eceb9ef1f.

* Replace xz by gnu-tar.

* Uninstall first.

* Skip iso_extract task for OSX.

* Revert "Restrict to OSX 10.11 tests."

This reverts commit 81823d2f97.

* ci_complete

(cherry picked from commit 106856ed86)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-10-02 21:48:34 +02:00
patchback[bot]
0ff27c3b44 yaml callback: prevent plugin from modifying PyYAML (#3478) (#3493)
* Prevent yaml callback from modifying PyYAML.

* Fix changelog fragment.

* Update changelogs/fragments/3478-yaml-callback.yml

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
(cherry picked from commit 5895e50185)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-10-02 20:22:40 +02:00
patchback[bot]
baf23f3ae2 Update redfish_info.py (#3485) (#3489)
* Update redfish_info.py

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/remote_management/redfish/redfish_info.py

* Update plugins/modules/remote_management/redfish/redfish_info.py

* Update plugins/modules/remote_management/redfish/redfish_info.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit 316e1d6bf2)

Co-authored-by: Rasdva3 <34684695+Rasdva3@users.noreply.github.com>
2021-10-01 15:17:10 +02:00
patchback[bot]
c9c535178f fix structure xcc_redfish_command (#3479) (#3481)
* adhere to proper task structure

* add changelog fragment

* return code formatting to original

* remove unnecessary fragment

(cherry picked from commit a14392fab0)

Co-authored-by: Zach Biles <bile0026@users.noreply.github.com>
2021-09-30 17:24:41 +02:00
patchback[bot]
86ff3eb02e Stick to community.crypto 1.x.y for ubuntu1604. (#3470) (#3476)
(cherry picked from commit 3fee872d58)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-09-29 20:53:23 +02:00
patchback[bot]
2cec7a1779 Copy the permissions along with file for jboss module (#3426) (#3468)
* Copy the permissions along with file for jboss module

Issue: The deployment file was copied with its content only. The file
permission was set to 440 and the file belonged to the root user. When the
JBoss (WildFly) server runs under a non-root account, it cannot read the
deployment file.

With this fix, the correct permission can be set from a previous task so the
JBoss server can pick up the deployment file.

* Update changelogs/fragments/3426-copy-permissions-along-with-file-for-jboss-module.yml

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
(cherry picked from commit 7cfdc2ce8c)

Co-authored-by: Pan Luo <xcompass@gmail.com>
2021-09-29 05:44:53 +00:00
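A hedged illustration of the distinction this jboss fix relies on: rewriting only the file content leaves the destination with default permissions, while `shutil.copy2` carries the mode along with the data, so a mode set on the source file survives the copy.

```python
import os
import shutil
import stat
import tempfile

# Create a source file with a specific mode, as a deployment file might have.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b'example deployment contents')
src.close()
os.chmod(src.name, 0o640)

dst = src.name + '.deployed'
shutil.copy2(src.name, dst)          # copies content + metadata (mode, timestamps)
copied_mode = stat.S_IMODE(os.stat(dst).st_mode)

assert copied_mode == 0o640          # the permission travelled with the file

os.unlink(src.name)
os.unlink(dst)
```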
patchback[bot]
bcebeddb83 Enable ansibullbot notifications in issues and PRs (#3462) (#3464)
The more recent version of Ansibullbot defaults notifications to false.
We need to set it to true so it can notify contributors and maintainers.

(cherry picked from commit 845c406419)

Co-authored-by: David Moreau Simard <moi@dmsimard.com>
2021-09-29 06:41:29 +02:00
patchback[bot]
745da8e325 Fix: GitLab API searches always return first found match (#3400) (#3449)
* fix: return correct group id
match only full_path or name

* chore: add changelog fragment

* fix: indentation multiple of four

* refactor: use two loops

* fix: typo of group id

* fix: changelog fragment

(cherry picked from commit b6b7601615)

Co-authored-by: Chris Frage <chris.frage@cancom.de>
2021-09-26 19:49:02 +02:00
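The matching strategy this commit describes (two loops, full_path before name) can be sketched with hypothetical data: a GitLab search returns every fuzzy hit, so taking the first result can pick the wrong group; matching the candidates against the exact full_path, then the exact name, returns the right ID.

```python
# Hypothetical search results for the query "devs".
groups = [
    {'id': 1, 'name': 'devs', 'full_path': 'acme/devs'},
    {'id': 2, 'name': 'devs', 'full_path': 'widgets/devs'},
]

def get_group_id(candidates, wanted):
    for g in candidates:               # exact full_path match wins first
        if g['full_path'] == wanted:
            return g['id']
    for g in candidates:               # then fall back to an exact name match
        if g['name'] == wanted:
            return g['id']
    return None

assert get_group_id(groups, 'widgets/devs') == 2   # not just the first hit
assert get_group_id(groups, 'devs') == 1
assert get_group_id(groups, 'nope') is None
```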
Felix Fontein
b610b654cc Restrict CI to ansible 2.9 up to 2.12. (#3434) 2021-09-25 21:24:18 +02:00
Felix Fontein
6fdeca5709 [WIP] [stable-2] Fix filesystem tests on OpenSuSE (#3443)
* Disable failing test on OpenSuSE.

* Fix condition.

* Correct fix.
2021-09-25 21:12:19 +02:00
patchback[bot]
e9900f9a8e Disable netcat conflict in zypper tests as one package seems to be no longer available. (#3438) (#3441)
(cherry picked from commit 3715b6ef46)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-09-25 17:42:55 +02:00
patchback[bot]
b475239b4f Fix CI (#3430) (#3432)
* Restrict to unit tests with devel (to be reverted later).

* Restrict lxml for Python 2.6.

* Revert "Restrict to unit tests with devel (to be reverted later)."

This reverts commit d0d87a8a0f.

(cherry picked from commit 935348ae78)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-09-25 07:05:26 +00:00
Felix Fontein
a2f635449b Next expected release is 2.5.7. 2021-09-21 17:03:55 +02:00
Felix Fontein
44de6b09a7 Release 2.5.6. 2021-09-21 12:58:35 +02:00
patchback[bot]
c7eed0f32c Update README.md (#3402) (#3414)
* Update README.md

* Update README.md

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix the link

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e1cdad3537)

Co-authored-by: Andrew Klychkov <aklychko@redhat.com>
2021-09-21 12:16:23 +02:00
Felix Fontein
000b3cee58 Prepare 2.5.6 release. 2021-09-21 09:49:25 +02:00
Felix Fontein
ba30fc63b7 Make ready for split-controller testing in ansible-core (#3345) (#3413)
* Accept context/ in aliases.

* Mark ansible_galaxy_install test as context/controller.

* Fix interfaces_file test.

ci_complete

* Install pyone dependency.

ci_complete

(cherry picked from commit 98d071f61e)
2021-09-21 09:47:48 +02:00
patchback[bot]
afddbaa857 openbsd_pkg: Fix KeyError (#3336) (#3406)
If package installation has an error after the package is installed (e.g.
when running tags), then there will be output on stderr and the fallback
regex will match; however, since pkg_add exited non-zero, changed is
never added as a key to the dictionary. As a result, the code at the end
of main that checks whether anything has changed raises a KeyError.

(cherry picked from commit 02d0e3d286)

Co-authored-by: Matthew Martin <phy1729@gmail.com>
2021-09-20 19:49:48 +02:00
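The failure mode can be shown in miniature (simplified stand-in, not the module's real code): `changed` is only set on the success path, so a plain indexed lookup later raises KeyError whenever pkg_add exits non-zero.

```python
# Stand-in for the module's result handling.
def run_pkg_add(rc):
    result = {}
    if rc == 0:
        result['changed'] = True   # only ever set on the success path
    return result

result = run_pkg_add(rc=1)
try:
    result['changed']              # the old check: KeyError on the error path
    raised = False
except KeyError:
    raised = True
assert raised

# A defensive lookup with a default avoids the crash.
assert result.get('changed', False) is False
assert run_pkg_add(rc=0) == {'changed': True}
```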
patchback[bot]
ad9b4ac781 Install nios test requirements. (#3375) (#3376)
(cherry picked from commit b20fc7a7c3)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-09-15 08:09:34 +02:00
Ajpantuso
0b98534df1 Manually merging BOTMETA (#3356) 2021-09-09 15:43:14 +02:00
Felix Fontein
da0738badf Improve CI (#3348) (#3352)
* Remove superfluous test.

* Use remote_temp_dir instead of output_dir on remote.

* Read certificate from correct place.

* Adjust more places.

* Fix boolean.

* Improve cryptography setup.

* Fix java_keystore changes.

* Need to copy binary from remote.

* Use correct Python for serve script.

* Sleep before downloading.

* Use correct Python interpreter.

* Avoid failing shebang test.

* Fix permission error with macOS 11.1.

* Avoid shebang trouble.

(cherry picked from commit 7c43cc3faa)
2021-09-09 08:10:26 +02:00
patchback[bot]
3735ee6df7 Fix copr integration tests (#3237) (#3324)
Fixes: #2084
(cherry picked from commit 7c493eb4e5)

Co-authored-by: Silvie Chlupova <33493796+schlupov@users.noreply.github.com>
2021-09-02 06:30:01 +02:00
patchback[bot]
de602a9b93 django_manage: Remove scottanderson42 and tastychutney as maintainers. (#3314) (#3316)
Note: tastychutney is another github account of mine that was also added as a maintainer.
(cherry picked from commit 135faf4421)

Co-authored-by: Scott Anderson <scott@waymark.com>
2021-08-31 18:33:08 +02:00
patchback[bot]
c64ab70c75 pamd - fixed issue+minor refactorings (#3285) (#3308)
* pamd - fixed issue+minor refactorings

* added changelog fragment

* added unit test suggested in issue

* Update tests/integration/targets/pamd/tasks/main.yml

* fixed per PR + additional adjustment

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit edd7b84285)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-08-31 12:59:29 +02:00
patchback[bot]
bea5a6266c proxmox inventory plugin - Update examples documentation (#3299) (#3306)
* Initial commit

* Update plugins/inventory/proxmox.py

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 2d6816e11e)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-08-31 11:18:54 +02:00
patchback[bot]
a354dd463f Initial commit (#3300) (#3302)
(cherry picked from commit 58c6f6c95a)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-08-31 10:22:37 +02:00
patchback[bot]
2414011428 udm_dns_record: Fix handling of PTR records (#3244) (#3256) (#3297)
* udm_dns_record: Fix handling of PTR records (#3244)

Before, it was not possible to manage PTR records in Univention DNS,
due to broken zone lookups and improper used parameters of the object.
This patch fixes the PTR handling, allowing both v4 and v6 entries.

* udm_dns_record: [doc] add changelog fragment

* udm_dns_record: [fix] validation errors

* udm_dns_record: import ipaddress module conditionally (#3244)

* udm_dns_record: fix sanity check error, improve doc (#3244)

* udm_dns_record: Improve changes to meet community standards (#3244)

(cherry picked from commit d9dcdcbbe4)

Co-authored-by: Sebastian Damm <SipSeb@users.noreply.github.com>
2021-08-30 19:05:38 +02:00
patchback[bot]
5b4bce3f88 [PR #3289/cf433567 backport][stable-2] Fix unit tests (#3292)
* Fix unit tests (#3289)

* Force new enough requests version.

* Revert "Force new enough requests version."

This reverts commit 339d40bef7.

* Make sure we don't install a too new python-gitlab for Ansible 2.10.

* Change requirement instead of appending new one.

* Fix quoting.

* Try to skip if import fails.

* Revert "Try to skip if import fails."

This reverts commit 254bbd8548.

* Make other Python versions happy...

* Update tests/utils/shippable/units.sh

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
(cherry picked from commit cf43356753)

* Add newline.

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-08-29 13:19:55 +02:00
patchback[bot]
f21b41d3aa Fixed incorrect VMID: cloning to an existing VM (#3266) (#3281)
* Fixed incorrect VMID: cloning to an existing VM

During a cloning operation, if the destination VM already exists, the VMID returned is not correct.
The VMID returned should be that of the destination VM, not that of the source VM (consistent with line 1230).
A playbook that relies on the returned VMID, for example, to perform other operations on the destination VM, will not work properly if it is unexpectedly interrupted.

* Add files via upload

* moved 3266-vmid-existing-target-clone.yml to changelogs/fragments/
replaced line separator CRLF -> LF

* storing vmid list in variable to avoid multiple API calls

(cherry picked from commit 4e2d4e3c68)

Co-authored-by: Atlas974 <43972908+Atlas974@users.noreply.github.com>
2021-08-27 19:04:00 +02:00
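The VMID fix above can be sketched with hypothetical data: when cloning to a name that already exists, the module must report the destination VM's ID rather than the source's, and building the name-to-ID map once avoids repeated API calls.

```python
# Hypothetical VM inventory as returned by the API.
existing = [
    {'name': 'template', 'vmid': 100},
    {'name': 'web01', 'vmid': 101},
]

def clone_vmid(vms, source_name, target_name):
    by_name = {vm['name']: vm['vmid'] for vm in vms}   # one lookup, not one per call
    if target_name in by_name:
        return by_name[target_name]    # destination already exists: return ITS id
    return max(by_name.values()) + 1   # otherwise a newly allocated id (simplified)

assert clone_vmid(existing, 'template', 'web01') == 101   # not 100 (the source)
assert clone_vmid(existing, 'template', 'web02') == 102
```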
patchback[bot]
01e995abf2 Stop notifications for apache2_module for me (#3261) (#3270)
(cherry picked from commit e40aa69e77)

Co-authored-by: Robin Roth <robin@rroth.de>
2021-08-26 08:23:40 +02:00
patchback[bot]
760b1e2cad nmcli: allow IPv4/IPv6 configuration on ipip and sit devices (#3239) (#3254)
* Allow IPv4/IPv6 configuration on mode "sit" tunnel devices

* Update Unit Test for Allow IPv4/IPv6 configuration on mode "sit" tunnel devices

* Add changelog for Allow IPv4/IPv6 configuration on mode "sit" tunnel devices

* Update changelogs/fragments/3239-nmcli-sit-ip-config-bugfix.yaml

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>

* Added ip4/ip6 configuration arguments for ipip tunnels

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
(cherry picked from commit 1ca9c35010)

Co-authored-by: zerotens <zerotens@users.noreply.github.com>
2021-08-23 18:38:18 +02:00
patchback[bot]
276fb08503 Temporarily disable datadog_downtime unit tests. (#3222) (#3223)
(cherry picked from commit f19e191467)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-08-17 06:34:05 +00:00
patchback[bot]
2d2bbb8fe3 Linode inventory plugin typo fixes (#3218) (#3220)
- Fix a typo in the Linode inventory plugin unit tests
- Fix some style issues in descriptions where punctuation was missing

Signed-off-by: Kellin <kellin@retromud.org>
(cherry picked from commit fccae19177)

Co-authored-by: Kellin <kellin@retromud.org>
2021-08-17 07:20:27 +02:00
patchback[bot]
f2b71f7874 vdo - refactor (#3191) (#3212)
* refactor to vdo

* adjusted if condition

* added changelog fragment

* Update plugins/modules/system/vdo.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* adjustements per the PR

* more occurrences of bool compared with yes or no

* Update changelogs/fragments/3191-vdo-refactor.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 16945d3847)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-08-16 12:33:01 +00:00
patchback[bot]
9c36a23a2d Add ipv4 example to linode inventory docs (#3200) (#3208)
* Add ipv4 example to linode inventory

* Update plugins/inventory/linode.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 432c891487)

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2021-08-15 13:25:22 +02:00
Felix Fontein
bc02dc4c44 [PR #3194/1fec1d0c backport][stable-3] Fix sanity failures (#3195) (#3196)
* Fix new devel sanity errors. (#3194)

(cherry picked from commit 1fec1d0c81)

* Add two more.

* Fix PR #.

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1a3c221995)

Co-authored-by: patchback[bot] <45432694+patchback[bot]@users.noreply.github.com>
2021-08-12 14:17:01 +02:00
Felix Fontein
d85908da6e Next expected release is 2.5.6. 2021-08-10 19:46:46 +02:00
141 changed files with 1203 additions and 868 deletions


@@ -54,14 +54,14 @@ pool: Standard
stages:
### Sanity
- stage: Sanity_devel
displayName: Sanity devel
- stage: Sanity_2_12
displayName: Sanity 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: devel/sanity/{0}
testFormat: 2.12/sanity/{0}
targets:
- test: 1
- test: 2
@@ -108,14 +108,14 @@ stages:
- test: 3
- test: 4
### Units
- stage: Units_devel
displayName: Units devel
- stage: Units_2_12
displayName: Units 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: devel/units/{0}/1
testFormat: 2.12/units/{0}/1
targets:
- test: 2.6
- test: 2.7
@@ -174,13 +174,13 @@ stages:
- test: 3.8
## Remote
- stage: Remote_devel
displayName: Remote devel
- stage: Remote_2_12
displayName: Remote 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: devel/{0}
testFormat: 2.12/{0}
targets:
- name: macOS 11.1
test: macos/11.1
@@ -255,13 +255,13 @@ stages:
- 2
### Docker
- stage: Docker_devel
displayName: Docker devel
- stage: Docker_2_12
displayName: Docker 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: devel/linux/{0}
testFormat: 2.12/linux/{0}
targets:
- name: CentOS 6
test: centos6
@@ -342,14 +342,14 @@ stages:
- 3
### Cloud
- stage: Cloud_devel
displayName: Cloud devel
- stage: Cloud_2_12
displayName: Cloud 2.12
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: devel/cloud/{0}/1
testFormat: 2.12/cloud/{0}/1
targets:
- test: 3.8
- stage: Cloud_2_11
@@ -386,23 +386,23 @@ stages:
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity_devel
- Sanity_2_12
- Sanity_2_9
- Sanity_2_10
- Sanity_2_11
- Units_devel
- Units_2_12
- Units_2_9
- Units_2_10
- Units_2_11
- Remote_devel
- Remote_2_12
- Remote_2_9
- Remote_2_10
- Remote_2_11
- Docker_devel
- Docker_2_12
- Docker_2_9
- Docker_2_10
- Docker_2_11
- Cloud_devel
- Cloud_2_12
- Cloud_2_9
- Cloud_2_10
- Cloud_2_11

.github/BOTMETA.yml

@@ -1,7 +1,8 @@
notifications: true
automerge: true
files:
plugins/:
supershipit: quidame Ajpantuso
supershipit: quidame
changelogs/fragments/:
support: community
$actions:
@@ -957,11 +958,13 @@ files:
$modules/web_infrastructure/apache2_mod_proxy.py:
maintainers: oboukili
$modules/web_infrastructure/apache2_module.py:
maintainers: berendt n0trax robinro
maintainers: berendt n0trax
ignore: robinro
$modules/web_infrastructure/deploy_helper.py:
maintainers: ramondelafuente
$modules/web_infrastructure/django_manage.py:
maintainers: scottanderson42 russoz tastychutney
maintainers: russoz
ignore: scottanderson42 tastychutney
labels: django_manage
$modules/web_infrastructure/ejabberd_user.py:
maintainers: privateip


@@ -6,6 +6,68 @@ Community General Release Notes
This changelog describes changes after version 1.0.0.
v2.5.7
======
Release Summary
---------------
Regular bugfix release. Please note that this is the last regular bugfix release, from now on only security fixes and major bugfixes will be accepted for the ``stable-2`` branch.
Bugfixes
--------
- gitlab_deploy_key - fix idempotency on projects with multiple deploy keys (https://github.com/ansible-collections/community.general/pull/3473).
- gitlab_group_members - ``get_group_id`` return the group ID by matching ``full_path``, ``path`` or ``name`` (https://github.com/ansible-collections/community.general/pull/3400).
- gitlab_project_members - ``get_project_id`` return the project id by matching ``full_path`` or ``name`` (https://github.com/ansible-collections/community.general/pull/3602).
- ipa_* modules - fix environment fallback for ``ipa_host`` option (https://github.com/ansible-collections/community.general/issues/3560).
- jboss - fix the deployment file permission issue when Jboss server is running under non-root user. The deployment file is copied with file content only. The file permission is set to ``440`` and belongs to root user. When the JBoss ``WildFly`` server is running under non-root user, it is unable to read the deployment file (https://github.com/ansible-collections/community.general/pull/3426).
- logstash callback plugin - replace ``_option`` with ``context.CLIARGS`` to fix the plugin on ansible-base and ansible-core (https://github.com/ansible-collections/community.general/issues/2692).
- proxmox_group_info - fix module crash if a ``group`` parameter is used (https://github.com/ansible-collections/community.general/pull/3649).
- redfish_utils module utils - if a manager network property is not specified in the service, attempt to change the requested settings (https://github.com/ansible-collections/community.general/issues/3404/).
- xattr - fix exception caused by ``_run_xattr()`` raising a ``ValueError`` due to a mishandling of base64-encoded values (https://github.com/ansible-collections/community.general/issues/3673).
- yaml callback plugin - avoid modifying PyYAML so that other plugins using it on the controller, like the ``to_yaml`` filter, do not produce different output (https://github.com/ansible-collections/community.general/issues/3471, https://github.com/ansible-collections/community.general/pull/3478).
v2.5.6
======
Release Summary
---------------
Regular bugfix release.
Minor Changes
-------------
- pamd - minor refactorings (https://github.com/ansible-collections/community.general/pull/3285).
- vdo - minor refactoring of the code (https://github.com/ansible-collections/community.general/pull/3191).
Bugfixes
--------
- copr - fix chroot naming issues, ``centos-stream`` changed naming to ``centos-stream-<number>`` (for example ``centos-stream-8``) (https://github.com/ansible-collections/community.general/issues/2084, https://github.com/ansible-collections/community.general/pull/3237).
- launchd - use private attribute to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- logdns callback plugin - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- maven_artifact - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- memcached cache plugin - change function argument names to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- netapp module utils - remove always-true conditional to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- nmcli - added ip4/ip6 configuration arguments for ``sit`` and ``ipip`` tunnels (https://github.com/ansible-collections/community.general/issues/3238, https://github.com/ansible-collections/community.general/pull/3239).
- one_template - change function argument name to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- online inventory plugin - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- online module utils - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- openbsd_pkg - fix crash from ``KeyError`` exception when package installs, but ``pkg_add`` returns with a non-zero exit code (https://github.com/ansible-collections/community.general/pull/3336).
- packet_device - use generator to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- packet_sshkey - use generator to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- pamd - code for ``state=updated`` when dealing with the pam module arguments, made no distinction between ``None`` and an empty list (https://github.com/ansible-collections/community.general/issues/3260).
- proxmox_kvm - clone operation should return the VMID of the target VM and not that of the source VM. This was failing when the target VM with the chosen name already existed (https://github.com/ansible-collections/community.general/pull/3266).
- saltstack connection plugin - fix function signature (https://github.com/ansible-collections/community.general/pull/3194).
- scaleway inventory script - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3195).
- scaleway module utils - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- udm_dns_record - fixed managing of PTR records, which can never have worked before (https://github.com/ansible-collections/community.general/pull/3256).
- ufw - use generator to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- vbox inventory script - change function argument name to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3195).
- vdo - boolean arguments now compared with proper ``true`` and ``false`` values instead of string representations like ``"yes"`` or ``"no"`` (https://github.com/ansible-collections/community.general/pull/3191).
v2.5.5
======
@@ -469,7 +531,7 @@ Deprecated Features
- puppet - deprecated undocumented parameter ``show_diff``, will be removed in 7.0.0. (https://github.com/ansible-collections/community.general/pull/1927).
- runit - unused parameter ``dist`` marked for deprecation (https://github.com/ansible-collections/community.general/pull/1830).
- slackpkg - deprecated invalid parameter alias ``update-cache``, will be removed in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
- urmpi - deprecated invalid parameter aliases ``update-cache`` and ``no-recommends``, will be removed in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
- urpmi - deprecated invalid parameter aliases ``update-cache`` and ``no-recommends``, will be removed in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
- xbps - deprecated invalid parameter alias ``update-cache``, will be removed in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
Bugfixes

View File

@@ -17,7 +17,7 @@ If you encounter abusive behavior violating the [Ansible Code of Conduct](https:
## Tested with Ansible
Tested with the current Ansible 2.9, ansible-base 2.10 and ansible-core 2.11 releases and the current development version of ansible-core. Ansible versions before 2.9.10 are not supported.
Tested with the current Ansible 2.9, ansible-base 2.10, ansible-core 2.11 and ansible-core 2.12 releases. Ansible versions before 2.9.10 are not supported.
## External requirements
@@ -76,7 +76,21 @@ Also for some notes specific to this collection see [our CONTRIBUTING documentat
See [here](https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html#testing-collections).
### Communication
## Collection maintenance
To learn how to maintain / become a maintainer of this collection, refer to:
* [Committer guidelines](https://github.com/ansible-collections/community.general/blob/main/commit-rights.md).
* [Maintainer guidelines](https://github.com/ansible/community-docs/blob/main/maintaining.rst).
It is necessary for maintainers of this collection to be subscribed to:
* The collection itself (the `Watch` button → `All Activity` in the upper right corner of the repository's homepage).
* The "Changes Impacting Collection Contributors and Maintainers" [issue](https://github.com/ansible-collections/overview/issues/45).
They should also be subscribed to Ansible's [The Bullhorn newsletter](https://docs.ansible.com/ansible/devel/community/communication.html#the-bullhorn).
## Communication
We announce important development changes and releases through Ansible's [The Bullhorn newsletter](https://eepurl.com/gZmiEP). If you are a collection developer, be sure you are subscribed.
@@ -86,16 +100,11 @@ We take part in the global quarterly [Ansible Contributor Summit](https://github
For more information about communities, meetings and agendas see [Community Wiki](https://github.com/ansible/community/wiki/Community).
For more information about communication, refer to the [Ansible communication guide](https://docs.ansible.com/ansible/devel/community/communication.html).
For more information about communication, refer to Ansible's [Communication guide](https://docs.ansible.com/ansible/devel/community/communication.html).
### Publishing New Version
## Publishing New Version
Basic instructions without release branches:
1. Create `changelogs/fragments/<version>.yml` with `release_summary:` section (which must be a string, not a list).
2. Run `antsibull-changelog release --collection-flatmap yes`
3. Make sure `CHANGELOG.rst` and `changelogs/changelog.yaml` are added to git, and the deleted fragments have been removed.
4. Tag the commit with `<version>`. Push changes and tag to the main repository.
See the [Releasing guidelines](https://github.com/ansible/community-docs/blob/main/releasing_collections.rst) to learn how to release this collection.
## Release notes
@@ -103,10 +112,10 @@ See the [changelog](https://github.com/ansible-collections/community.general/blo
## Roadmap
See [this issue](https://github.com/ansible-collections/community.general/issues/582) for information on releasing, versioning and deprecation.
In general, we plan to release a major version every six months, and minor versions every two months. Major versions can contain breaking changes, while minor versions only contain new features and bugfixes.
See [this issue](https://github.com/ansible-collections/community.general/issues/582) for information on releasing, versioning, and deprecation.
## More information
- [Ansible Collection overview](https://github.com/ansible-collections/overview)

View File

@@ -1446,7 +1446,7 @@ releases:
- runit - unused parameter ``dist`` marked for deprecation (https://github.com/ansible-collections/community.general/pull/1830).
- slackpkg - deprecated invalid parameter alias ``update-cache``, will be removed
in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
- urmpi - deprecated invalid parameter aliases ``update-cache`` and ``no-recommends``,
- urpmi - deprecated invalid parameter aliases ``update-cache`` and ``no-recommends``,
will be removed in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
- xbps - deprecated invalid parameter alias ``update-cache``, will be removed
in 5.0.0 (https://github.com/ansible-collections/community.general/pull/1927).
@@ -2105,3 +2105,97 @@ releases:
- 3139-tss-lookup-plugin-update-to-make-compatible-with-sdk-v1.yml
- 3161-openbsd-pkg-fix-regexp-matching-crash.yml
release_date: '2021-08-10'
2.5.6:
changes:
bugfixes:
- copr - fix chroot naming issues, ``centos-stream`` changed naming to ``centos-stream-<number>``
(for example ``centos-stream-8``) (https://github.com/ansible-collections/community.general/issues/2084,
https://github.com/ansible-collections/community.general/pull/3237).
- launchd - use private attribute to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- logdns callback plugin - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- maven_artifact - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- memcached cache plugin - change function argument names to fix sanity errors
(https://github.com/ansible-collections/community.general/pull/3194).
- netapp module utils - remove always-true conditional to fix sanity errors
(https://github.com/ansible-collections/community.general/pull/3194).
- nmcli - added ip4/ip6 configuration arguments for ``sit`` and ``ipip`` tunnels
(https://github.com/ansible-collections/community.general/issues/3238, https://github.com/ansible-collections/community.general/pull/3239).
- one_template - change function argument name to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- online inventory plugin - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- online module utils - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- openbsd_pkg - fix crash from ``KeyError`` exception when package installs,
but ``pkg_add`` returns with a non-zero exit code (https://github.com/ansible-collections/community.general/pull/3336).
- packet_device - use generator to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- packet_sshkey - use generator to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- pamd - code for ``state=updated`` when dealing with the pam module arguments,
made no distinction between ``None`` and an empty list (https://github.com/ansible-collections/community.general/issues/3260).
- proxmox_kvm - clone operation should return the VMID of the target VM and
not that of the source VM. This was failing when the target VM with the chosen
name already existed (https://github.com/ansible-collections/community.general/pull/3266).
- saltstack connection plugin - fix function signature (https://github.com/ansible-collections/community.general/pull/3194).
- scaleway inventory script - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3195).
- scaleway module utils - improve split call to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- udm_dns_record - fixed managing of PTR records, which could never have worked
before (https://github.com/ansible-collections/community.general/pull/3256).
- ufw - use generator to fix sanity errors (https://github.com/ansible-collections/community.general/pull/3194).
- vbox inventory script - change function argument name to fix sanity errors
(https://github.com/ansible-collections/community.general/pull/3195).
- vdo - boolean arguments now compared with proper ``true`` and ``false`` values
instead of string representations like ``"yes"`` or ``"no"`` (https://github.com/ansible-collections/community.general/pull/3191).
minor_changes:
- pamd - minor refactorings (https://github.com/ansible-collections/community.general/pull/3285).
- vdo - minor refactoring of the code (https://github.com/ansible-collections/community.general/pull/3191).
release_summary: Regular bugfix release.
fragments:
- 2.5.6.yml
- 3191-vdo-refactor.yml
- 3194-sanity.yml
- 3237-copr-fix_chroot_naming.yml
- 3239-nmcli-sit-ipip-config-bugfix.yaml
- 3256-fix-ptr-handling-in-udm_dns_record.yml
- 3266-vmid-existing-target-clone.yml
- 3285-pamd-updated-with-empty-args.yaml
- 3336-openbsd_pkg-fix-KeyError.yml
release_date: '2021-09-21'
2.5.7:
changes:
bugfixes:
- gitlab_deploy_key - fix idempotency on projects with multiple deploy keys
(https://github.com/ansible-collections/community.general/pull/3473).
- gitlab_group_members - ``get_group_id`` returns the group ID by matching ``full_path``,
``path`` or ``name`` (https://github.com/ansible-collections/community.general/pull/3400).
- gitlab_project_members - ``get_project_id`` returns the project ID by matching
``full_path`` or ``name`` (https://github.com/ansible-collections/community.general/pull/3602).
- ipa_* modules - fix environment fallback for ``ipa_host`` option (https://github.com/ansible-collections/community.general/issues/3560).
- jboss - fix the deployment file permission issue when the JBoss server is running
under a non-root user. The deployment file is copied with file content only;
the file permission is set to ``440`` and belongs to the root user. When the JBoss
``WildFly`` server runs under a non-root user, it is unable to read the
deployment file (https://github.com/ansible-collections/community.general/pull/3426).
- logstash callback plugin - replace ``_option`` with ``context.CLIARGS`` to
fix the plugin on ansible-base and ansible-core (https://github.com/ansible-collections/community.general/issues/2692).
- proxmox_group_info - fix module crash if a ``group`` parameter is used (https://github.com/ansible-collections/community.general/pull/3649).
- redfish_utils module utils - if a manager network property is not specified
in the service, attempt to change the requested settings (https://github.com/ansible-collections/community.general/issues/3404/).
- xattr - fix exception caused by ``_run_xattr()`` raising a ``ValueError``
due to a mishandling of base64-encoded value (https://github.com/ansible-collections/community.general/issues/3673).
- yaml callback plugin - avoid modifying PyYAML so that other plugins using
it on the controller, like the ``to_yaml`` filter, do not produce different
output (https://github.com/ansible-collections/community.general/issues/3471,
https://github.com/ansible-collections/community.general/pull/3478).
release_summary: Regular bugfix release. Please note that this is the last regular
bugfix release, from now on only security fixes and major bugfixes will be
accepted for the ``stable-2`` branch.
fragments:
- 2.5.7.yml
- 2692-logstash-callback-plugin-replacing_options.yml
- 3400-fix-gitLab-api-searches-always-return-first-found-match-3386.yml
- 3404-redfish_utils-skip-manager-network-check.yml
- 3426-copy-permissions-along-with-file-for-jboss-module.yml
- 3473-gitlab_deploy_key-fix_idempotency.yml
- 3478-yaml-callback.yml
- 3561-fix-ipa-host-var-detection.yml
- 3602-fix-gitlab_project_members-improve-search-method.yml
- 3649-proxmox_group_info_TypeError.yml
- 3675-xattr-handle-base64-values.yml
release_date: '2021-11-09'

View File

@@ -69,5 +69,6 @@ Individuals who have been asked to become a part of this group have generally be
| ------------------- | -------------------- | ------------------ | -------------------- |
| Alexei Znamensky | russoz | russoz | |
| Andrew Klychkov | andersson007 | andersson007_ | |
| Andrew Pantuso | Ajpantuso | ajpantuso | |
| Felix Fontein | felixfontein | felixfontein | |
| John R Barker | gundalow | gundalow | |

View File

@@ -1,6 +1,6 @@
namespace: community
name: general
version: 2.5.5
version: 2.5.7
readme: README.md
authors:
- Ansible (https://github.com/ansible)

View File

@@ -154,12 +154,12 @@ class CacheModuleKeys(MutableSet):
def __len__(self):
return len(self._keyset)
def add(self, key):
self._keyset[key] = time.time()
def add(self, value):
self._keyset[value] = time.time()
self._cache.set(self.PREFIX, self._keyset)
def discard(self, key):
del self._keyset[key]
def discard(self, value):
del self._keyset[value]
self._cache.set(self.PREFIX, self._keyset)
def remove_by_timerange(self, s_min, s_max):
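The memcached cache fix above renames the overridden method parameters so they match the abstract signatures in `collections.abc.MutableSet` (`add(self, value)`, `discard(self, value)`), which keeps keyword calls working. A minimal sketch of the same pattern, using a hypothetical `TimestampSet` class:

```python
import time
from collections.abc import MutableSet


class TimestampSet(MutableSet):
    """Hypothetical set that records when each element was added."""

    def __init__(self):
        self._items = {}

    def __contains__(self, value):
        return value in self._items

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

    def add(self, value):
        # Parameter is named 'value' to match MutableSet.add
        self._items[value] = time.time()

    def discard(self, value):
        # Parameter is named 'value' to match MutableSet.discard
        self._items.pop(value, None)


s = TimestampSet()
s.add(value="a")      # keyword call works because the name matches the ABC
s.discard(value="b")  # discarding a missing element is a no-op, per the ABC contract
assert "a" in s and len(s) == 1
```

Had the parameters been named `key`, a caller (or mixin method) invoking `add(value=...)` would raise `TypeError`, which is what the sanity checks flag.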

View File

@@ -78,7 +78,7 @@ def get_mac():
# Getting hostname of system:
def get_hostname():
return str(socket.gethostname()).split('.local')[0]
return str(socket.gethostname()).split('.local', 1)[0]
# Getting IP of system:

View File

@@ -94,6 +94,7 @@ ansible.cfg: |
import os
import json
from ansible import context
import socket
import uuid
import logging
@@ -152,11 +153,11 @@ class CallbackModule(CallbackBase):
self.base_data['ansible_pre_command_output'] = os.popen(
self.ls_pre_command).read()
if self._options is not None:
self.base_data['ansible_checkmode'] = self._options.check
self.base_data['ansible_tags'] = self._options.tags
self.base_data['ansible_skip_tags'] = self._options.skip_tags
self.base_data['inventory'] = self._options.inventory
if context.CLIARGS is not None:
self.base_data['ansible_checkmode'] = context.CLIARGS.get('check')
self.base_data['ansible_tags'] = context.CLIARGS.get('tags')
self.base_data['ansible_skip_tags'] = context.CLIARGS.get('skip_tags')
self.base_data['inventory'] = context.CLIARGS.get('inventory')
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)

View File

@@ -42,28 +42,29 @@ def should_use_block(value):
return False
def my_represent_scalar(self, tag, value, style=None):
"""Uses block style for multi-line strings"""
if style is None:
if should_use_block(value):
style = '|'
# we care more about readable than accuracy, so...
# ...no trailing space
value = value.rstrip()
# ...and non-printable characters
value = ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
# ...tabs prevent blocks from expanding
value = value.expandtabs()
# ...and odd bits of whitespace
value = re.sub(r'[\x0b\x0c\r]', '', value)
# ...as does trailing space
value = re.sub(r' +\n', '\n', value)
else:
style = self.default_style
node = yaml.representer.ScalarNode(tag, value, style=style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
class MyDumper(AnsibleDumper):
def represent_scalar(self, tag, value, style=None):
"""Uses block style for multi-line strings"""
if style is None:
if should_use_block(value):
style = '|'
# we care more about readable than accuracy, so...
# ...no trailing space
value = value.rstrip()
# ...and non-printable characters
value = ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
# ...tabs prevent blocks from expanding
value = value.expandtabs()
# ...and odd bits of whitespace
value = re.sub(r'[\x0b\x0c\r]', '', value)
# ...as does trailing space
value = re.sub(r' +\n', '\n', value)
else:
style = self.default_style
node = yaml.representer.ScalarNode(tag, value, style=style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
class CallbackModule(Default):
@@ -79,7 +80,6 @@ class CallbackModule(Default):
def __init__(self):
super(CallbackModule, self).__init__()
yaml.representer.BaseRepresenter.represent_scalar = my_represent_scalar
def _dump_results(self, result, indent=None, sort_keys=True, keep_invocation=False):
if result.get('_ansible_no_log', False):
@@ -121,7 +121,7 @@ class CallbackModule(Default):
if abridged_result:
dumped += '\n'
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=MyDumper, default_flow_style=False))
# indent by a couple of spaces
dumped = '\n '.join(dumped.split('\n')).rstrip()
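The change above moves the block-style representer from a monkey-patch of `yaml.representer.BaseRepresenter` into a `MyDumper` subclass, so other PyYAML users in the same process (such as the `to_yaml` filter) are unaffected. A simplified sketch of that design choice, using plain `yaml.Dumper` instead of the Ansible-internal `AnsibleDumper` (the class name `BlockStyleDumper` is hypothetical):

```python
import yaml


class BlockStyleDumper(yaml.Dumper):
    """Use literal block style ('|') for multi-line strings, without
    patching yaml.representer.BaseRepresenter globally."""

    def represent_scalar(self, tag, value, style=None):
        if style is None and isinstance(value, str) and "\n" in value:
            style = "|"
        return super(BlockStyleDumper, self).represent_scalar(tag, value, style=style)


data = {"msg": "line one\nline two\n"}
# Only dumps that explicitly pass Dumper=BlockStyleDumper get block style;
# a plain yaml.dump(data) elsewhere in the process is unchanged.
print(yaml.dump(data, Dumper=BlockStyleDumper, default_flow_style=False))
```

Subclassing keeps the customization scoped to the one `yaml.dump(...)` call site, which is exactly what the callback plugin needs.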

View File

@@ -58,7 +58,7 @@ class Connection(ConnectionBase):
self._connected = True
return self
def exec_command(self, cmd, sudoable=False, in_data=None):
def exec_command(self, cmd, in_data=None, sudoable=False):
''' run a command on the remote minion '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

View File

@@ -22,7 +22,7 @@ DOCUMENTATION = r'''
- constructed
options:
plugin:
description: marks this as an instance of the 'linode' plugin
description: Marks this as an instance of the 'linode' plugin.
required: true
choices: ['linode', 'community.general.linode']
access_token:
@@ -77,6 +77,10 @@ groups:
webservers: "'web' in (tags|list)"
mailservers: "'mail' in (tags|list)"
compose:
# By default, Ansible tries to connect to the label of the instance.
# Since that might not be a valid name to connect to, you can
# replace it with the first IPv4 address of the linode as follows:
ansible_ssh_host: ipv4[0]
ansible_port: 2222
'''

View File

@@ -234,7 +234,7 @@ class InventoryModule(BaseInventoryPlugin):
self.headers = {
'Authorization': "Bearer %s" % token,
'User-Agent': "ansible %s Python %s" % (ansible_version, python_version.split(' ')[0]),
'User-Agent': "ansible %s Python %s" % (ansible_version, python_version.split(' ', 1)[0]),
'Content-type': 'application/json'
}

View File

@@ -81,13 +81,24 @@ DOCUMENTATION = '''
'''
EXAMPLES = '''
# Minimal example which will not gather additional facts for QEMU/LXC guests
# By not specifying a URL the plugin will attempt to connect to the controller host on port 8006
# my.proxmox.yml
plugin: community.general.proxmox
url: http://localhost:8006
user: ansible@pve
password: secure
validate_certs: no
# More complete example demonstrating the use of 'want_facts' and the constructed options
# Note that using facts returned by 'want_facts' in constructed options requires 'want_facts=true'
# my.proxmox.yml
plugin: community.general.proxmox
url: http://pve.domain.com:8006
user: ansible@pve
password: secure
validate_certs: false
want_facts: true
keyed_groups:
# proxmox_tags_parsed is an example of a fact only returned when 'want_facts=true'
- key: proxmox_tags_parsed
separator: ""
prefix: group

View File

@@ -383,8 +383,8 @@ class NetAppESeriesModule(object):
path = path[1:]
request_url = self.url + self.DEFAULT_REST_API_PATH + path
if self.log_requests or True:
self.module.log(pformat(dict(url=request_url, data=data, method=method)))
# if self.log_requests:
self.module.log(pformat(dict(url=request_url, data=data, method=method)))
return request(url=request_url, data=data, method=method, headers=headers, use_proxy=True, force=False, last_mod_time=None,
timeout=self.DEFAULT_TIMEOUT, http_agent=self.HTTP_AGENT, force_basic_auth=True, ignore_errors=ignore_errors, **self.creds)

View File

@@ -31,6 +31,7 @@ def _env_then_dns_fallback(*args, **kwargs):
result = env_fallback(*args, **kwargs)
if result == '':
raise AnsibleFallbackNotFound
return result
except AnsibleFallbackNotFound:
# If no host was given, we try to guess it from IPA.
# The ipa-ca entry is a standard entry that IPA will have set for

View File

@@ -100,7 +100,7 @@ class Online(object):
@staticmethod
def get_user_agent_string(module):
return "ansible %s Python %s" % (module.ansible_version, sys.version.split(' ')[0])
return "ansible %s Python %s" % (module.ansible_version, sys.version.split(' ', 1)[0])
def get(self, path, data=None, headers=None):
return self.send('GET', path, data, headers)

View File

@@ -2740,7 +2740,9 @@ class RedfishUtils(object):
if isinstance(set_value, dict):
for subprop in payload[property].keys():
if subprop not in target_ethernet_current_setting[property]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
# Not configured already; need to apply the request
need_change = True
break
sub_set_value = payload[property][subprop]
sub_cur_value = target_ethernet_current_setting[property][subprop]
if sub_set_value != sub_cur_value:
@@ -2754,7 +2756,9 @@ class RedfishUtils(object):
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in target_ethernet_current_setting[property][i]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
# Not configured already; need to apply the request
need_change = True
break
sub_set_value = payload[property][i][subprop]
sub_cur_value = target_ethernet_current_setting[property][i][subprop]
if sub_set_value != sub_cur_value:

View File

@@ -141,7 +141,7 @@ class Scaleway(object):
@staticmethod
def get_user_agent_string(module):
return "ansible %s Python %s" % (module.ansible_version, sys.version.split(' ')[0])
return "ansible %s Python %s" % (module.ansible_version, sys.version.split(' ', 1)[0])
def get(self, path, data=None, headers=None, params=None):
return self.send(method='GET', path=path, data=data, headers=headers, params=params)

View File

@@ -31,7 +31,14 @@ options:
type: str
disk:
description:
- hard disk size in GB for instance
- This option was previously described as "hard disk size in GB for instance"; however, several formats describing
an LXC mount are permitted.
- Older versions of Proxmox accept a numeric value for size, using the I(storage) parameter to automatically
choose which storage to allocate from; newer versions enforce the C(<STORAGE>:<SIZE>) syntax.
- "Additional options are available by using some combination of the following key-value pairs as a
comma-delimited list C([volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>]
[,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>])."
- See U(https://pve.proxmox.com/wiki/Linux_Container) for a full description.
- If I(proxmox_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(3). Note that the default value of I(proxmox_default_behavior)
changes in community.general 4.0.0.

View File

@@ -131,7 +131,7 @@ def main():
group = module.params['group']
if group:
groups = [proxmox.get_group(group=group)]
groups = [proxmox.get_group(groupid=group)]
else:
groups = proxmox.get_groups()
result['proxmox_groups'] = [group.group for group in groups]
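The fix renames the keyword to `groupid` so it matches the parameter name of `ProxmoxGroupInfoAnsible.get_group`. A minimal sketch of the failure mode (the `get_group` stub below is hypothetical, standing in for the real method):

```python
def get_group(groupid):
    """Stub mimicking a method whose only parameter is named 'groupid'."""
    return {"groupid": groupid}


# The old code called get_group(group=...), which raises at call time:
try:
    get_group(group="admins")
except TypeError as exc:
    print(exc)  # unexpected keyword argument 'group'

# The fixed call uses the matching keyword:
assert get_group(groupid="admins")["groupid"] == "admins"
```

Python validates keyword names against the callee's signature, so a mismatched keyword fails immediately with `TypeError` rather than being silently ignored.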

View File

@@ -1201,8 +1201,9 @@ def main():
module.fail_json(vmid=vmid, msg='VM with vmid = %s does not exist in cluster' % vmid)
# Ensure the chosen VM name doesn't already exist when cloning
if get_vmid(proxmox, name):
module.exit_json(changed=False, vmid=vmid, msg="VM with name <%s> already exists" % name)
existing_vmid = get_vmid(proxmox, name)
if existing_vmid:
module.exit_json(changed=False, vmid=existing_vmid[0], msg="VM with name <%s> already exists" % name)
# Ensure the chosen VM id doesn't already exist when cloning
if get_vm(proxmox, newid):

View File

@@ -212,8 +212,8 @@ class TemplateModule(OpenNebulaModule):
def get_template_by_id(self, template_id):
return self.get_template(lambda template: (template.ID == template_id))
def get_template_by_name(self, template_name):
return self.get_template(lambda template: (template.NAME == template_name))
def get_template_by_name(self, name):
return self.get_template(lambda template: (template.NAME == name))
def get_template_instance(self, requested_id, requested_name):
if requested_id:

View File

@@ -508,11 +508,10 @@ def wait_for_devices_active(module, packet_conn, watched_devices):
def wait_for_public_IPv(module, packet_conn, created_devices):
def has_public_ip(addr_list, ip_v):
return any([a['public'] and a['address_family'] == ip_v and
a['address'] for a in addr_list])
return any(a['public'] and a['address_family'] == ip_v and a['address'] for a in addr_list)
def all_have_public_ip(ds, ip_v):
return all([has_public_ip(d.ip_addresses, ip_v) for d in ds])
return all(has_public_ip(d.ip_addresses, ip_v) for d in ds)
address_family = module.params.get('wait_for_public_IPv')

View File

@@ -167,7 +167,7 @@ def get_sshkey_selector(module):
return k.key == select_dict['key']
else:
# if key string not specified, all the fields must match
return all([select_dict[f] == getattr(k, f) for f in select_dict])
return all(select_dict[f] == getattr(k, f) for f in select_dict)
return selector
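The packet_device and packet_sshkey changes above pass generator expressions to `any()`/`all()` instead of building intermediate lists, which short-circuits on the first decisive element. A small sketch with hypothetical address data:

```python
# Hypothetical address list in the shape the packet modules iterate over.
addrs = [
    {"public": True, "address_family": 4, "address": "192.0.2.10"},
    {"public": False, "address_family": 4, "address": "10.0.0.5"},
]


def has_public_ip(addr_list, ip_v):
    # Generator expression: no temporary list, stops at the first match.
    return any(a["public"] and a["address_family"] == ip_v and a["address"]
               for a in addr_list)


assert has_public_ip(addrs, 4)
assert not has_public_ip(addrs, 6)
```

Behavior is identical to the list-comprehension form; the generator just avoids allocating the list, which is why the sanity checks prefer it.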

View File

@@ -21,6 +21,7 @@ description:
requirements:
- Python >= 2.6
- Univention
- ipaddress (for I(type=ptr_record))
options:
state:
required: false
@@ -33,10 +34,12 @@ options:
description:
- "Name of the record, this is also the DNS record. E.g. www for
www.example.com."
- For PTR records this has to be the IP address.
zone:
required: true
description:
- Corresponding DNS zone for this record, e.g. example.com.
- For PTR records this has to be the full reverse zone (for example C(1.1.192.in-addr.arpa)).
type:
required: true
description:
@@ -63,12 +66,29 @@ EXAMPLES = '''
a:
- 192.0.2.1
- 2001:0db8::42
- name: Create a DNS v4 PTR record on a UCS
community.general.udm_dns_record:
name: 192.0.2.1
zone: 2.0.192.in-addr.arpa
type: ptr_record
data:
ptr_record: "www.example.com."
- name: Create a DNS v6 PTR record on a UCS
community.general.udm_dns_record:
name: 2001:db8:0:0:0:ff00:42:8329
zone: 2.4.0.0.0.0.f.f.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
type: ptr_record
data:
ptr_record: "www.example.com."
'''
RETURN = '''#'''
HAVE_UNIVENTION = False
HAVE_IPADDRESS = False
try:
from univention.admin.handlers.dns import (
forward_zone,
@@ -79,6 +99,7 @@ except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import missing_required_lib
from ansible_collections.community.general.plugins.module_utils.univention_umc import (
umc_module_for_add,
umc_module_for_edit,
@@ -87,6 +108,11 @@ from ansible_collections.community.general.plugins.module_utils.univention_umc i
config,
uldap,
)
try:
import ipaddress
HAVE_IPADDRESS = True
except ImportError:
pass
def main():
@@ -121,14 +147,30 @@ def main():
changed = False
diff = None
workname = name
if type == 'ptr_record':
if not HAVE_IPADDRESS:
module.fail_json(msg=missing_required_lib('ipaddress'))
try:
if 'arpa' not in zone:
raise Exception("Zone must be reversed zone for ptr_record. (e.g. 1.1.192.in-addr.arpa)")
ipaddr_rev = ipaddress.ip_address(name).reverse_pointer
subnet_offset = ipaddr_rev.find(zone)
if subnet_offset == -1:
raise Exception("reversed IP address {0} is not part of zone.".format(ipaddr_rev))
workname = ipaddr_rev[0:subnet_offset - 1]
except Exception as e:
module.fail_json(
msg='handling PTR record for {0} in zone {1} failed: {2}'.format(name, zone, e)
)
obj = list(ldap_search(
'(&(objectClass=dNSZone)(zoneName={0})(relativeDomainName={1}))'.format(zone, name),
'(&(objectClass=dNSZone)(zoneName={0})(relativeDomainName={1}))'.format(zone, workname),
attr=['dNSZone']
))
exists = bool(len(obj))
container = 'zoneName={0},cn=dns,{1}'.format(zone, base_dn())
dn = 'relativeDomainName={0},{1}'.format(name, container)
dn = 'relativeDomainName={0},{1}'.format(workname, container)
if state == 'present':
try:
@@ -141,13 +183,21 @@ def main():
) or reverse_zone.lookup(
config(),
uldap(),
'(zone={0})'.format(zone),
'(zoneName={0})'.format(zone),
scope='domain',
)
if len(so) == 0:
raise Exception("Did not find zone '{0}' in Univention".format(zone))
obj = umc_module_for_add('dns/{0}'.format(type), container, superordinate=so[0])
else:
obj = umc_module_for_edit('dns/{0}'.format(type), dn)
obj['name'] = name
if type == 'ptr_record':
obj['ip'] = name
obj['address'] = workname
else:
obj['name'] = name
for k, v in data.items():
obj[k] = v
diff = obj.diff()
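The PTR handling above derives the record's relative domain name by computing the address's reverse pointer and stripping the zone suffix. A standalone sketch of that computation, using the example values from the module documentation:

```python
import ipaddress

zone = "2.0.192.in-addr.arpa"

# reverse_pointer gives the full reverse-DNS name for the address.
ipaddr_rev = ipaddress.ip_address("192.0.2.1").reverse_pointer
print(ipaddr_rev)  # 1.2.0.192.in-addr.arpa

# The relative name is everything before the zone (minus the joining dot).
subnet_offset = ipaddr_rev.find(zone)
workname = ipaddr_rev[0:subnet_offset - 1]
assert workname == "1"
```

The same code path works for IPv6, where `reverse_pointer` produces the nibble-reversed `ip6.arpa` name shown in the v6 example.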

View File

@@ -157,7 +157,7 @@ def _run_xattr(module, cmd, check_rc=True):
if line.startswith('#') or line == '':
pass
elif '=' in line:
(key, val) = line.split('=')
(key, val) = line.split('=', 1)
result[key] = val.strip('"')
else:
result[line] = ''
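Passing `maxsplit=1` matters here because base64-encoded attribute values can themselves contain `=` padding; splitting on every `=` then breaks the two-element unpack. A small sketch with a hypothetical attribute line:

```python
# Hypothetical getfattr-style output line; SGVsbG8= is base64 for "Hello",
# so the value itself ends in '='.
line = 'user.note=0s"SGVsbG8="'

# Old behavior: line.split('=') yields 3 fields, so the 2-tuple unpack
# raises ValueError ("too many values to unpack").
assert len(line.split('=')) == 3

# Fixed behavior: split only on the first '='.
key, val = line.split('=', 1)
assert key == 'user.note'
assert val == '0s"SGVsbG8="'
```

The same `maxsplit` pattern appears in several of the other sanity fixes in this release (`split(' ', 1)`, `split(None, 1)`), all for the same reason: only the first separator is meaningful.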

View File

@@ -84,13 +84,6 @@ EXAMPLES = '''
account_api_token: dummyapitoken
delegate_to: localhost
- name: Fetch my.com domain records
community.general.dnsimple:
domain: my.com
state: present
delegate_to: localhost
register: records
- name: Delete a domain
community.general.dnsimple:
domain: my.com

View File

@@ -811,6 +811,8 @@ class Nmcli(object):
'ethernet',
'generic',
'infiniband',
'ipip',
'sit',
'team',
'vlan',
)

View File

@@ -556,7 +556,7 @@ class MavenDownloader:
return "Cannot find md5 from " + remote_url
try:
# Check if remote md5 only contains md5 or md5 + filename
_remote_md5 = remote_md5.split(None)[0]
_remote_md5 = remote_md5.split(None, 1)[0]
remote_md5 = _remote_md5
# remote_md5 is empty so we continue and keep original md5 string
# This should not happen since we check for remote_md5 before

View File

@@ -120,8 +120,7 @@ class CoprModule(object):
@property
def short_chroot(self):
"""str: Chroot (distribution-version-architecture) shorten to distribution-version."""
chroot_parts = self.chroot.split("-")
return "{0}-{1}".format(chroot_parts[0], chroot_parts[1])
return self.chroot.rsplit('-', 1)[0]
@property
def arch(self):
@@ -193,18 +192,20 @@ class CoprModule(object):
Returns:
Information about the repository.
"""
distribution, version = self.short_chroot.split("-")
distribution, version = self.short_chroot.split('-', 1)
chroot = self.short_chroot
while True:
repo_info, status_code = self._get(chroot)
if repo_info:
return repo_info
if distribution == "rhel":
chroot = "centos-stream"
chroot = "centos-stream-8"
distribution = "centos"
elif distribution == "centos":
if version == "stream":
if version == "stream-8":
version = "8"
elif version == "stream-9":
version = "9"
chroot = "epel-{0}".format(version)
distribution = "epel"
else:
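The `rsplit` change above matters once chroot names gained multi-part versions like `centos-stream-8`: splitting on every `-` and keeping the first two parts drops the version, while `rsplit('-', 1)` removes only the trailing architecture. A quick sketch:

```python
# Chroot names are distribution-version-architecture.
chroot = "centos-stream-8-x86_64"

# Fixed: strip only the architecture (the last '-'-separated part).
assert chroot.rsplit("-", 1)[0] == "centos-stream-8"

# Old approach: keep the first two parts, which loses the version.
parts = chroot.split("-")
assert "{0}-{1}".format(parts[0], parts[1]) == "centos-stream"
```

For simple names like `epel-8-x86_64` both approaches agree; the difference only shows once the distribution name itself contains a hyphen.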

View File

@@ -132,10 +132,10 @@ EXAMPLES = '''
name: homebrew/cask/foo
state: present
- name: Use ignored-pinned option while upgrading all
- name: Use ignore-pinned option while upgrading all
community.general.homebrew:
upgrade_all: yes
upgrade_options: ignored-pinned
upgrade_options: ignore-pinned
'''
RETURN = '''

View File

@@ -246,6 +246,7 @@ def package_present(names, pkg_spec, module):
if match:
# It turns out we were able to install the package.
module.debug("package_present(): we were able to install package for name '%s'" % name)
pkg_spec[name]['changed'] = True
else:
# We really did fail, fake the return code.
module.debug("package_present(): we really did fail for name '%s'" % name)

View File

@@ -168,7 +168,9 @@ EXAMPLES = '''
password: "{{ password }}"
resource_uri: "/redfish/v1/Managers/1/NetworkProtocol/Oem/Lenovo/DNS"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.data }}"
- name: Get Lenovo FoD key collection resource via GetCollectionResource command
@@ -180,7 +182,9 @@ EXAMPLES = '''
password: "{{ password }}"
resource_uri: "/redfish/v1/Managers/1/Oem/Lenovo/FoD/Keys"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.data_list }}"
- name: Update ComputeSystem property AssetTag via PatchResource command

View File

@@ -46,7 +46,9 @@ EXAMPLES = '''
api_version: 500
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Data Centers
ansible.builtin.debug:
msg: "{{ result.datacenters }}"
- name: Gather paginated, filtered and sorted information about Data Centers
@@ -61,7 +63,9 @@ EXAMPLES = '''
sort: 'name:descending'
filter: 'state=Unmanaged'
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of Data Centers
ansible.builtin.debug:
msg: "{{ result.datacenters }}"
- name: Gather information about a Data Center by name
@@ -73,7 +77,9 @@ EXAMPLES = '''
name: "My Data Center"
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Data Center found by name
ansible.builtin.debug:
msg: "{{ result.datacenters }}"
- name: Gather information about the Data Center Visual Content
@@ -87,9 +93,13 @@ EXAMPLES = '''
- visualContent
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Data Center found by name
ansible.builtin.debug:
msg: "{{ result.datacenters }}"
- ansible.builtin.debug:
- name: Print fetched information about Data Center Visual Content
ansible.builtin.debug:
msg: "{{ result.datacenter_visual_content }}"
'''

View File

@@ -49,7 +49,9 @@ EXAMPLES = '''
no_log: true
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Enclosures
ansible.builtin.debug:
msg: "{{ result.enclosures }}"
- name: Gather paginated, filtered and sorted information about Enclosures
@@ -66,7 +68,9 @@ EXAMPLES = '''
no_log: true
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of Enclosures
ansible.builtin.debug:
msg: "{{ result.enclosures }}"
- name: Gather information about an Enclosure by name
@@ -79,7 +83,9 @@ EXAMPLES = '''
no_log: true
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Enclosure found by name
ansible.builtin.debug:
msg: "{{ result.enclosures }}"
- name: Gather information about an Enclosure by name with options
@@ -96,13 +102,21 @@ EXAMPLES = '''
no_log: true
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Enclosure found by name
ansible.builtin.debug:
msg: "{{ result.enclosures }}"
- ansible.builtin.debug:
- name: Print fetched information about Enclosure Script
ansible.builtin.debug:
msg: "{{ result.enclosure_script }}"
- ansible.builtin.debug:
- name: Print fetched information about Enclosure Environmental Configuration
ansible.builtin.debug:
msg: "{{ result.enclosure_environmental_configuration }}"
- ansible.builtin.debug:
- name: Print fetched information about Enclosure Utilization
ansible.builtin.debug:
msg: "{{ result.enclosure_utilization }}"
- name: "Gather information about an Enclosure with temperature data at a resolution of one sample per day, between two
@@ -124,9 +138,13 @@ EXAMPLES = '''
no_log: true
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Enclosure found by name
ansible.builtin.debug:
msg: "{{ result.enclosures }}"
- ansible.builtin.debug:
- name: Print fetched information about Enclosure Utilization
ansible.builtin.debug:
msg: "{{ result.enclosure_utilization }}"
'''

View File

@@ -43,7 +43,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Ethernet Networks
ansible.builtin.debug:
msg: "{{ result.ethernet_networks }}"
- name: Gather paginated and filtered information about Ethernet Networks
@@ -57,7 +58,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated and filtered list of Ethernet Networks
ansible.builtin.debug:
msg: "{{ result.ethernet_networks }}"
- name: Gather information about an Ethernet Network by name
@@ -67,7 +69,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Ethernet Network found by name
ansible.builtin.debug:
msg: "{{ result.ethernet_networks }}"
- name: Gather information about an Ethernet Network by name with options
@@ -80,9 +83,12 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Ethernet Network Associated Profiles
ansible.builtin.debug:
msg: "{{ result.enet_associated_profiles }}"
- ansible.builtin.debug:
- name: Print fetched information about Ethernet Network Associated Uplink Groups
ansible.builtin.debug:
msg: "{{ result.enet_associated_uplink_groups }}"
'''

View File

@@ -38,7 +38,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Fibre Channel Networks
ansible.builtin.debug:
msg: "{{ result.fc_networks }}"
- name: Gather paginated, filtered and sorted information about Fibre Channel Networks
@@ -51,7 +52,9 @@ EXAMPLES = '''
filter: 'fabricType=FabricAttach'
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of Fibre Channel Networks
ansible.builtin.debug:
msg: "{{ result.fc_networks }}"
- name: Gather information about a Fibre Channel Network by name
@@ -61,7 +64,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Fibre Channel Network found by name
ansible.builtin.debug:
msg: "{{ result.fc_networks }}"
'''

View File

@@ -37,7 +37,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about FCoE Networks
ansible.builtin.debug:
msg: "{{ result.fcoe_networks }}"
- name: Gather paginated, filtered and sorted information about FCoE Networks
@@ -51,7 +52,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of FCoE Networks
ansible.builtin.debug:
msg: "{{ result.fcoe_networks }}"
- name: Gather information about a FCoE Network by name
@@ -61,7 +63,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about FCoE Network found by name
ansible.builtin.debug:
msg: "{{ result.fcoe_networks }}"
'''

View File

@@ -42,7 +42,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Logical Interconnect Groups
ansible.builtin.debug:
msg: "{{ result.logical_interconnect_groups }}"
- name: Gather paginated, filtered and sorted information about Logical Interconnect Groups
@@ -60,7 +61,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of Logical Interconnect Groups
ansible.builtin.debug:
msg: "{{ result.logical_interconnect_groups }}"
- name: Gather information about a Logical Interconnect Group by name
@@ -74,7 +76,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Logical Interconnect Group found by name
ansible.builtin.debug:
msg: "{{ result.logical_interconnect_groups }}"
'''

View File

@@ -50,10 +50,11 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Network Sets
ansible.builtin.debug:
msg: "{{ result.network_sets }}"
- name: Gather paginated, filtered, and sorted information about Network Sets
- name: Gather paginated, filtered and sorted information about Network Sets
community.general.oneview_network_set_info:
hostname: 172.16.101.48
username: administrator
@@ -68,7 +69,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of Network Sets
ansible.builtin.debug:
msg: "{{ result.network_sets }}"
- name: Gather information about all Network Sets, excluding Ethernet networks
@@ -83,7 +85,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Network Sets, excluding Ethernet networks
ansible.builtin.debug:
msg: "{{ result.network_sets }}"
- name: Gather information about a Network Set by name
@@ -97,7 +100,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Network Set found by name
ansible.builtin.debug:
msg: "{{ result.network_sets }}"
- name: Gather information about a Network Set by name, excluding Ethernet networks
@@ -113,7 +117,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about Network Set found by name, excluding Ethernet networks
ansible.builtin.debug:
msg: "{{ result.network_sets }}"
'''

View File

@@ -45,7 +45,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about SAN Managers
ansible.builtin.debug:
msg: "{{ result.san_managers }}"
- name: Gather paginated, filtered and sorted information about SAN Managers
@@ -59,7 +60,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about paginated, filtered and sorted list of SAN Managers
ansible.builtin.debug:
msg: "{{ result.san_managers }}"
- name: Gather information about a SAN Manager by provider display name
@@ -69,7 +71,8 @@ EXAMPLES = '''
delegate_to: localhost
register: result
- ansible.builtin.debug:
- name: Print fetched information about SAN Manager found by provider display name
ansible.builtin.debug:
msg: "{{ result.san_managers }}"
'''

View File

@@ -67,7 +67,9 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.cpu.entries | to_nice_json }}"
- name: Get CPU model
@@ -78,7 +80,9 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.cpu.entries.0.Model }}"
- name: Get memory inventory
@@ -108,7 +112,9 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.virtual_media.entries | to_nice_json }}"
- name: Get Volume Inventory
@@ -119,7 +125,8 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.volume.entries | to_nice_json }}"
- name: Get Session information
@@ -130,7 +137,9 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts.session.entries | to_nice_json }}"
- name: Get default inventory information
@@ -139,7 +148,8 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
register: result
- ansible.builtin.debug:
- name: Print fetched information
ansible.builtin.debug:
msg: "{{ result.redfish_facts | to_nice_json }}"
- name: Get several inventories

View File

@@ -96,7 +96,7 @@ author:
'''
EXAMPLES = '''
- name: create a new webhook that triggers on push (password auth)
- name: Create a new webhook that triggers on push (password auth)
community.general.github_webhook:
repository: ansible/ansible
url: https://www.example.com/hooks/

View File

@@ -204,7 +204,7 @@ class GitLabDeployKey(object):
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list()
deployKeys = project.keys.list(all=True)
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey

View File
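The one-word change above matters because python-gitlab's `.list()` returns only the first page of results (20 items by default), so a deploy key beyond the first page was never found and got re-created. A stand-alone sketch of the lookup with stub objects (`StubKeyManager` and `Key` are illustrative stand-ins, not the real python-gitlab classes):

```python
class StubKeyManager:
    """Mimics python-gitlab's paginated .list(): first page only unless all=True."""
    PER_PAGE = 20

    def __init__(self, keys):
        self._keys = keys

    def list(self, all=False):
        return list(self._keys) if all else list(self._keys[:self.PER_PAGE])


class Key:
    def __init__(self, title):
        self.title = title


def find_deploy_key(keys_manager, key_title):
    # The fix: fetch every page, so keys past the first page are matched.
    for key in keys_manager.list(all=True):
        if key.title == key_title:
            return key
    return None


manager = StubKeyManager([Key("key-%d" % i) for i in range(30)])
print(find_deploy_key(manager, "key-25").title)
```

With the old single-page `.list()`, "key-25" (item 26) would never be returned, and the module would have tried to create a duplicate key.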

@@ -102,9 +102,13 @@ class GitLabGroup(object):
# get group id if group exists
def get_group_id(self, gitlab_group):
group_exists = self._gitlab.groups.list(search=gitlab_group)
if group_exists:
return group_exists[0].id
groups = self._gitlab.groups.list(search=gitlab_group)
for group in groups:
if group.full_path == gitlab_group:
return group.id
for group in groups:
if group.path == gitlab_group or group.name == gitlab_group:
return group.id
# get all members in a group
def get_members_in_a_group(self, gitlab_group_id):

View File
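The hunk above changes the group lookup from "first search hit" to "exact full-path match first, then path/name match", which matters when several groups share a name. A condensed stand-alone sketch of that selection logic (the `Group` namedtuple is an illustrative stand-in for python-gitlab group objects):

```python
from collections import namedtuple

# Only the attributes the lookup inspects; not the real library's class.
Group = namedtuple("Group", "id full_path path name")


def get_group_id(groups, gitlab_group):
    # Prefer an exact full-path match ("parent/child") so a search that
    # returns several similarly named groups no longer picks a wrong first hit.
    for group in groups:
        if group.full_path == gitlab_group:
            return group.id
    for group in groups:
        if group.path == gitlab_group or group.name == gitlab_group:
            return group.id
    return None


groups = [
    Group(1, "team/tools", "tools", "Tools"),
    Group(2, "tools", "tools", "Tools"),
]
print(get_group_id(groups, "tools"))   # exact full path wins over search order
```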

@@ -48,7 +48,7 @@ options:
type: str
project:
description:
- The name of the GitLab project the member is added to/removed from.
- The name (or full path) of the GitLab project the member is added to/removed from.
required: true
type: str
gitlab_user:
@@ -118,9 +118,13 @@ class GitLabProjectMembers(object):
self._gitlab = gl
def get_project(self, project_name):
project_exists = self._gitlab.projects.list(search=project_name)
if project_exists:
return project_exists[0].id
try:
project_exists = self._gitlab.projects.get(project_name)
return project_exists.id
except gitlab.exceptions.GitlabGetError as e:
project_exists = self._gitlab.projects.list(search=project_name)
if project_exists:
return project_exists[0].id
def get_user_id(self, gitlab_user):
user_exists = self._gitlab.users.list(username=gitlab_user)

View File
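The corresponding project lookup now tries an exact `projects.get()` by full path first and only falls back to the fuzzy `search` when that raises. A stand-alone sketch with stub classes (`StubProjects`, `NotFoundError`, and `Project` are illustrative stand-ins for the python-gitlab API and `gitlab.exceptions.GitlabGetError`):

```python
class NotFoundError(Exception):
    """Stand-in for gitlab.exceptions.GitlabGetError."""


class Project:
    def __init__(self, pid, name):
        self.id, self.name = pid, name


class StubProjects:
    def __init__(self, by_path):
        self._by_path = by_path

    def get(self, path):
        try:
            return self._by_path[path]
        except KeyError:
            raise NotFoundError(path)

    def list(self, search):
        return [p for p in self._by_path.values() if search in p.name]


def get_project_id(projects, project_name):
    # Exact lookup by full path first (unambiguous), then the fuzzy
    # search the module used before as a fallback.
    try:
        return projects.get(project_name).id
    except NotFoundError:
        found = projects.list(search=project_name)
        if found:
            return found[0].id
    return None


projects = StubProjects({"group/app": Project(7, "app")})
print(get_project_id(projects, "group/app"))
```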

@@ -86,7 +86,7 @@ options:
type: str
maximum_timeout:
description:
- The maximum timeout that a runner has to pick up a specific job.
- The maximum time that a runner has to complete a specific job.
required: False
default: 3600
type: int

View File

@@ -141,14 +141,14 @@ class Plist:
self.__changed = False
self.__service = service
state, pid, dummy, dummy = LaunchCtlList(module, service).run()
state, pid, dummy, dummy = LaunchCtlList(module, self.__service).run()
# Check if readPlist is available or not
self.old_plistlib = hasattr(plistlib, 'readPlist')
self.__file = self.__find_service_plist(service)
self.__file = self.__find_service_plist(self.__service)
if self.__file is None:
msg = 'Unable to infer the path of %s service plist file' % service
msg = 'Unable to infer the path of %s service plist file' % self.__service
if pid is None and state == ServiceState.UNLOADED:
msg += ' and it was not found among active services'
module.fail_json(msg=msg)

View File

@@ -274,8 +274,7 @@ RULE_REGEX = re.compile(r"""(?P<rule_type>-?(?:auth|account|session|password))\s
(?P<control>\[.*\]|\S*)\s+
(?P<path>\S*)\s*
(?P<args>.*)\s*""", re.X)
RULE_ARG_REGEX = re.compile(r"""(\[.*\]|\S*)""")
RULE_ARG_REGEX = re.compile(r"(\[.*\]|\S*)")
VALID_TYPES = ['account', '-account', 'auth', '-auth', 'password', '-password', 'session', '-session']
@@ -358,11 +357,9 @@ class PamdRule(PamdLine):
# Method to check if a rule matches the type, control and path.
def matches(self, rule_type, rule_control, rule_path, rule_args=None):
if (rule_type == self.rule_type and
return (rule_type == self.rule_type and
rule_control == self.rule_control and
rule_path == self.rule_path):
return True
return False
rule_path == self.rule_path)
@classmethod
def rule_from_string(cls, line):
@@ -507,25 +504,25 @@ class PamdService(object):
# Get a list of rules we want to change
rules_to_find = self.get(rule_type, rule_control, rule_path)
new_args = parse_module_arguments(new_args)
new_args = parse_module_arguments(new_args, return_none=True)
changes = 0
for current_rule in rules_to_find:
rule_changed = False
if new_type:
if(current_rule.rule_type != new_type):
if current_rule.rule_type != new_type:
rule_changed = True
current_rule.rule_type = new_type
if new_control:
if(current_rule.rule_control != new_control):
if current_rule.rule_control != new_control:
rule_changed = True
current_rule.rule_control = new_control
if new_path:
if(current_rule.rule_path != new_path):
if current_rule.rule_path != new_path:
rule_changed = True
current_rule.rule_path = new_path
if new_args:
if(current_rule.rule_args != new_args):
if new_args is not None:
if current_rule.rule_args != new_args:
rule_changed = True
current_rule.rule_args = new_args
@@ -724,8 +721,9 @@ class PamdService(object):
current_line = self._head
while current_line is not None:
if not current_line.validate()[0]:
return current_line.validate()
curr_validate = current_line.validate()
if not curr_validate[0]:
return curr_validate
current_line = current_line.next
return True, "Module is valid"
@@ -750,22 +748,25 @@ class PamdService(object):
return '\n'.join(lines) + '\n'
def parse_module_arguments(module_arguments):
# Return empty list if we have no args to parse
if not module_arguments:
return []
elif isinstance(module_arguments, list) and len(module_arguments) == 1 and not module_arguments[0]:
def parse_module_arguments(module_arguments, return_none=False):
# If args is None, return empty list by default.
# But if return_none is True, then return None
if module_arguments is None:
return None if return_none else []
if isinstance(module_arguments, list) and len(module_arguments) == 1 and not module_arguments[0]:
return []
if not isinstance(module_arguments, list):
module_arguments = [module_arguments]
parsed_args = list()
# From this point on, module_arguments is guaranteed to be a list, empty or not
parsed_args = []
re_clear_spaces = re.compile(r"\s*=\s*")
for arg in module_arguments:
for item in filter(None, RULE_ARG_REGEX.findall(arg)):
if not item.startswith("["):
re.sub("\\s*=\\s*", "=", item)
re_clear_spaces.sub("=", item)
parsed_args.append(item)
return parsed_args
@@ -861,8 +862,7 @@ def main():
fd.write(str(service))
except IOError:
module.fail_json(msg='Unable to create temporary \
file %s' % temp_file)
module.fail_json(msg='Unable to create temporary file %s' % temp_file)
module.atomic_move(temp_file.name, os.path.realpath(fname))

View File
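The key behavioral change in the pamd hunks is `parse_module_arguments(new_args, return_none=True)`: returning `None` (instead of `[]`) when no args were given lets `update_rule()` distinguish "caller did not specify args" from "caller wants the args cleared". A condensed, runnable re-creation of the updated function, reconstructed from the hunks above (the substitution result is assigned here for clarity; `RULE_ARG_REGEX` is copied from the diff):

```python
import re

RULE_ARG_REGEX = re.compile(r"(\[.*\]|\S*)")


def parse_module_arguments(module_arguments, return_none=False):
    # If args is None, return an empty list by default,
    # but return None when return_none is True.
    if module_arguments is None:
        return None if return_none else []
    if isinstance(module_arguments, list) and len(module_arguments) == 1 and not module_arguments[0]:
        return []
    if not isinstance(module_arguments, list):
        module_arguments = [module_arguments]

    # From this point on, module_arguments is a list, empty or not.
    parsed_args = []
    re_clear_spaces = re.compile(r"\s*=\s*")
    for arg in module_arguments:
        for item in filter(None, RULE_ARG_REGEX.findall(arg)):
            if not item.startswith("["):
                item = re_clear_spaces.sub("=", item)  # normalize spacing around '='
            parsed_args.append(item)
    return parsed_args


print(parse_module_arguments("[success=1 default=ignore] try_first_pass"))
print(parse_module_arguments(None, return_none=True))
```

Bracketed PAM arguments like `[success=1 default=ignore]` survive as a single token because the `\[.*\]` alternative matches before the whitespace split.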

@@ -526,8 +526,8 @@ def main():
lines = [(numbered_line_re.match(line), '(v6)' in line) for line in numbered_state.splitlines()]
lines = [(int(matcher.group(1)), ipv6) for (matcher, ipv6) in lines if matcher]
last_number = max([no for (no, ipv6) in lines]) if lines else 0
has_ipv4 = any([not ipv6 for (no, ipv6) in lines])
has_ipv6 = any([ipv6 for (no, ipv6) in lines])
has_ipv4 = any(not ipv6 for (no, ipv6) in lines)
has_ipv6 = any(ipv6 for (no, ipv6) in lines)
if relative_to_cmd == 'first-ipv4':
relative_to = 1
elif relative_to_cmd == 'last-ipv4':

View File
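The ufw hunk replaces `any([...])` with a bare generator expression, which short-circuits on the first hit and avoids building an intermediate list. A tiny self-contained illustration of the resulting expressions (the sample `lines` data is made up):

```python
# (line_number, is_ipv6) pairs as parsed from `ufw status numbered` output.
lines = [(1, False), (2, True), (3, False), (4, True)]

# Generator expressions: any() stops at the first match instead of
# materializing the whole list first.
has_ipv4 = any(not ipv6 for (no, ipv6) in lines)
has_ipv6 = any(ipv6 for (no, ipv6) in lines)
last_number = max([no for (no, ipv6) in lines]) if lines else 0

print(has_ipv4, has_ipv6, last_number)
```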

@@ -314,7 +314,7 @@ except ImportError:
#
# @return vdolist A list of currently created VDO volumes.
def inventory_vdos(module, vdocmd):
rc, vdostatusout, err = module.run_command("%s status" % (vdocmd))
rc, vdostatusout, err = module.run_command([vdocmd, "status"])
# if rc != 0:
# module.fail_json(msg="Inventorying VDOs failed: %s"
@@ -322,15 +322,13 @@ def inventory_vdos(module, vdocmd):
vdolist = []
if (rc == 2 and
re.findall(r"vdoconf.yml does not exist", err, re.MULTILINE)):
if rc == 2 and re.findall(r"vdoconf\.yml does not exist", err, re.MULTILINE):
# If there is no /etc/vdoconf.yml file, assume there are no
# VDO volumes. Return an empty list of VDO volumes.
return vdolist
if rc != 0:
module.fail_json(msg="Inventorying VDOs failed: %s"
% vdostatusout, rc=rc, err=err)
module.fail_json(msg="Inventorying VDOs failed: %s" % vdostatusout, rc=rc, err=err)
vdostatusyaml = yaml.load(vdostatusout)
if vdostatusyaml is None:
@@ -345,7 +343,7 @@ def inventory_vdos(module, vdocmd):
def list_running_vdos(module, vdocmd):
rc, vdolistout, err = module.run_command("%s list" % (vdocmd))
rc, vdolistout, err = module.run_command([vdocmd, "list"])
runningvdolist = filter(None, vdolistout.split('\n'))
return runningvdolist
@@ -359,36 +357,30 @@ def list_running_vdos(module, vdocmd):
#
# @return vdocmdoptions A string to be used in a 'vdo <action>' command.
def start_vdo(module, vdoname, vdocmd):
rc, out, err = module.run_command("%s start --name=%s" % (vdocmd, vdoname))
rc, out, err = module.run_command([vdocmd, "start", "--name=%s" % vdoname])
if rc == 0:
module.log("started VDO volume %s" % vdoname)
return rc
def stop_vdo(module, vdoname, vdocmd):
rc, out, err = module.run_command("%s stop --name=%s" % (vdocmd, vdoname))
rc, out, err = module.run_command([vdocmd, "stop", "--name=%s" % vdoname])
if rc == 0:
module.log("stopped VDO volume %s" % vdoname)
return rc
def activate_vdo(module, vdoname, vdocmd):
rc, out, err = module.run_command("%s activate --name=%s"
% (vdocmd, vdoname))
rc, out, err = module.run_command([vdocmd, "activate", "--name=%s" % vdoname])
if rc == 0:
module.log("activated VDO volume %s" % vdoname)
return rc
def deactivate_vdo(module, vdoname, vdocmd):
rc, out, err = module.run_command("%s deactivate --name=%s"
% (vdocmd, vdoname))
rc, out, err = module.run_command([vdocmd, "deactivate", "--name=%s" % vdoname])
if rc == 0:
module.log("deactivated VDO volume %s" % vdoname)
return rc
@@ -396,32 +388,31 @@ def add_vdooptions(params):
vdocmdoptions = ""
options = []
if ('logicalsize' in params) and (params['logicalsize'] is not None):
if params.get('logicalsize') is not None:
options.append("--vdoLogicalSize=" + params['logicalsize'])
if (('blockmapcachesize' in params) and
(params['blockmapcachesize'] is not None)):
if params.get('blockmapcachesize') is not None:
options.append("--blockMapCacheSize=" + params['blockmapcachesize'])
if ('readcache' in params) and (params['readcache'] == 'enabled'):
if params.get('readcache') == 'enabled':
options.append("--readCache=enabled")
if ('readcachesize' in params) and (params['readcachesize'] is not None):
if params.get('readcachesize') is not None:
options.append("--readCacheSize=" + params['readcachesize'])
if ('slabsize' in params) and (params['slabsize'] is not None):
if params.get('slabsize') is not None:
options.append("--vdoSlabSize=" + params['slabsize'])
if ('emulate512' in params) and (params['emulate512']):
if params.get('emulate512'):
options.append("--emulate512=enabled")
if ('indexmem' in params) and (params['indexmem'] is not None):
if params.get('indexmem') is not None:
options.append("--indexMem=" + params['indexmem'])
if ('indexmode' in params) and (params['indexmode'] == 'sparse'):
if params.get('indexmode') == 'sparse':
options.append("--sparseIndex=enabled")
if ('force' in params) and (params['force']):
if params.get('force'):
options.append("--force")
# Entering an invalid thread config results in a cryptic
@@ -430,23 +421,21 @@ def add_vdooptions(params):
# output a more helpful message, but one would have to log
# onto that system to read the error. For now, heed the thread
# limit warnings in the DOCUMENTATION section above.
if ('ackthreads' in params) and (params['ackthreads'] is not None):
if params.get('ackthreads') is not None:
options.append("--vdoAckThreads=" + params['ackthreads'])
if ('biothreads' in params) and (params['biothreads'] is not None):
if params.get('biothreads') is not None:
options.append("--vdoBioThreads=" + params['biothreads'])
if ('cputhreads' in params) and (params['cputhreads'] is not None):
if params.get('cputhreads') is not None:
options.append("--vdoCpuThreads=" + params['cputhreads'])
if ('logicalthreads' in params) and (params['logicalthreads'] is not None):
if params.get('logicalthreads') is not None:
options.append("--vdoLogicalThreads=" + params['logicalthreads'])
if (('physicalthreads' in params) and
(params['physicalthreads'] is not None)):
if params.get('physicalthreads') is not None:
options.append("--vdoPhysicalThreads=" + params['physicalthreads'])
vdocmdoptions = ' '.join(options)
return vdocmdoptions
@@ -530,31 +519,24 @@ def run_module():
# Since this is a creation of a new VDO volume, it will contain all
# all of the parameters given by the playbook; the rest will
# assume default values.
options = module.params
vdocmdoptions = add_vdooptions(options)
rc, out, err = module.run_command("%s create --name=%s --device=%s %s"
% (vdocmd, desiredvdo, device,
vdocmdoptions))
vdocmdoptions = add_vdooptions(module.params)
rc, out, err = module.run_command(
[vdocmd, "create", "--name=%s" % desiredvdo, "--device=%s" % device] + vdocmdoptions)
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Creating VDO %s failed."
% desiredvdo, rc=rc, err=err)
module.fail_json(msg="Creating VDO %s failed." % desiredvdo, rc=rc, err=err)
if (module.params['compression'] == 'disabled'):
rc, out, err = module.run_command("%s disableCompression --name=%s"
% (vdocmd, desiredvdo))
if module.params['compression'] == 'disabled':
rc, out, err = module.run_command([vdocmd, "disableCompression", "--name=%s" % desiredvdo])
if ((module.params['deduplication'] is not None) and
module.params['deduplication'] == 'disabled'):
rc, out, err = module.run_command("%s disableDeduplication "
"--name=%s"
% (vdocmd, desiredvdo))
if module.params['deduplication'] == 'disabled':
rc, out, err = module.run_command([vdocmd, "disableDeduplication", "--name=%s" % desiredvdo])
if module.params['activated'] == 'no':
if module.params['activated'] is False:
deactivate_vdo(module, desiredvdo, vdocmd)
if module.params['running'] == 'no':
if module.params['running'] is False:
stop_vdo(module, desiredvdo, vdocmd)
# Print a post-run list of VDO volumes in the result object.
@@ -563,8 +545,8 @@ def run_module():
module.exit_json(**result)
# Modify the current parameters of a VDO that exists.
if (desiredvdo in vdolist) and (state == 'present'):
rc, vdostatusoutput, err = module.run_command("%s status" % (vdocmd))
if desiredvdo in vdolist and state == 'present':
rc, vdostatusoutput, err = module.run_command([vdocmd, "status"])
vdostatusyaml = yaml.load(vdostatusoutput)
# An empty dictionary to contain dictionaries of VDO statistics
@@ -629,7 +611,7 @@ def run_module():
diffparams = {}
# Check for differences between the playbook parameters and the
# current parameters. This will need a comparison function;
# current parameters. This will need a comparison function;
# since AnsibleModule params are all strings, compare them as
# strings (but if it's None; skip).
for key in currentparams.keys():
@@ -640,10 +622,7 @@ def run_module():
if diffparams:
vdocmdoptions = add_vdooptions(diffparams)
if vdocmdoptions:
rc, out, err = module.run_command("%s modify --name=%s %s"
% (vdocmd,
desiredvdo,
vdocmdoptions))
rc, out, err = module.run_command([vdocmd, "modify", "--name=%s" % desiredvdo] + vdocmdoptions)
if rc == 0:
result['changed'] = True
else:
@@ -652,107 +631,36 @@ def run_module():
if 'deduplication' in diffparams.keys():
dedupemod = diffparams['deduplication']
if dedupemod == 'disabled':
rc, out, err = module.run_command("%s "
"disableDeduplication "
"--name=%s"
% (vdocmd, desiredvdo))
dedupeparam = "disableDeduplication" if dedupemod == 'disabled' else "enableDeduplication"
rc, out, err = module.run_command([vdocmd, dedupeparam, "--name=%s" % desiredvdo])
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing deduplication on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
if dedupemod == 'enabled':
rc, out, err = module.run_command("%s "
"enableDeduplication "
"--name=%s"
% (vdocmd, desiredvdo))
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing deduplication on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing deduplication on VDO volume %s failed." % desiredvdo, rc=rc, err=err)
if 'compression' in diffparams.keys():
compressmod = diffparams['compression']
if compressmod == 'disabled':
rc, out, err = module.run_command("%s disableCompression "
"--name=%s"
% (vdocmd, desiredvdo))
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing compression on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
if compressmod == 'enabled':
rc, out, err = module.run_command("%s enableCompression "
"--name=%s"
% (vdocmd, desiredvdo))
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing compression on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
compressparam = "disableCompression" if compressmod == 'disabled' else "enableCompression"
rc, out, err = module.run_command([vdocmd, compressparam, "--name=%s" % desiredvdo])
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing compression on VDO volume %s failed." % desiredvdo, rc=rc, err=err)
if 'writepolicy' in diffparams.keys():
writepolmod = diffparams['writepolicy']
if writepolmod == 'auto':
rc, out, err = module.run_command("%s "
"changeWritePolicy "
"--name=%s "
"--writePolicy=%s"
% (vdocmd,
desiredvdo,
writepolmod))
rc, out, err = module.run_command([
vdocmd,
"changeWritePolicy",
"--name=%s" % desiredvdo,
"--writePolicy=%s" % writepolmod,
])
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing write policy on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
if writepolmod == 'sync':
rc, out, err = module.run_command("%s "
"changeWritePolicy "
"--name=%s "
"--writePolicy=%s"
% (vdocmd,
desiredvdo,
writepolmod))
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing write policy on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
if writepolmod == 'async':
rc, out, err = module.run_command("%s "
"changeWritePolicy "
"--name=%s "
"--writePolicy=%s"
% (vdocmd,
desiredvdo,
writepolmod))
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing write policy on "
"VDO volume %s failed."
% desiredvdo, rc=rc, err=err)
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Changing write policy on VDO volume %s failed." % desiredvdo, rc=rc, err=err)
# Process the size parameters, to determine of a growPhysical or
# growLogical operation needs to occur.
@@ -770,19 +678,15 @@ def run_module():
diffsizeparams = {}
for key in sizeparams.keys():
if module.params[key] is not None:
if str(sizeparams[key]) != module.params[key]:
diffsizeparams[key] = module.params[key]
if module.params[key] is not None and str(sizeparams[key]) != module.params[key]:
diffsizeparams[key] = module.params[key]
if module.params['growphysical']:
physdevice = module.params['device']
rc, devsectors, err = module.run_command("blockdev --getsz %s"
% (physdevice))
rc, devsectors, err = module.run_command([module.get_bin_path("blockdev"), "--getsz", physdevice])
devblocks = (int(devsectors) / 8)
dmvdoname = ('/dev/mapper/' + desiredvdo)
currentvdostats = (processedvdos[desiredvdo]
['VDO statistics']
[dmvdoname])
currentvdostats = processedvdos[desiredvdo]['VDO statistics'][dmvdoname]
currentphysblocks = currentvdostats['physical blocks']
# Set a growPhysical threshold to grow only when there is
@@ -794,34 +698,25 @@ def run_module():
if currentphysblocks > growthresh:
result['changed'] = True
rc, out, err = module.run_command("%s growPhysical --name=%s"
% (vdocmd, desiredvdo))
rc, out, err = module.run_command([vdocmd, "growPhysical", "--name=%s" % desiredvdo])
if 'logicalsize' in diffsizeparams.keys():
result['changed'] = True
vdocmdoptions = ("--vdoLogicalSize=" +
diffsizeparams['logicalsize'])
rc, out, err = module.run_command("%s growLogical --name=%s %s"
% (vdocmd,
desiredvdo,
vdocmdoptions))
rc, out, err = module.run_command([vdocmd, "growLogical", "--name=%s" % desiredvdo, "--vdoLogicalSize=%s" % diffsizeparams['logicalsize']])
vdoactivatestatus = processedvdos[desiredvdo]['Activate']
if ((module.params['activated'] == 'no') and
(vdoactivatestatus == 'enabled')):
if module.params['activated'] is False and vdoactivatestatus == 'enabled':
deactivate_vdo(module, desiredvdo, vdocmd)
if not result['changed']:
result['changed'] = True
if ((module.params['activated'] == 'yes') and
(vdoactivatestatus == 'disabled')):
if module.params['activated'] and vdoactivatestatus == 'disabled':
activate_vdo(module, desiredvdo, vdocmd)
if not result['changed']:
result['changed'] = True
if ((module.params['running'] == 'no') and
(desiredvdo in runningvdolist)):
if module.params['running'] is False and desiredvdo in runningvdolist:
stop_vdo(module, desiredvdo, vdocmd)
if not result['changed']:
result['changed'] = True
@@ -833,10 +728,7 @@ def run_module():
# the activate_vdo() operation succeeded, as 'vdoactivatestatus'
# will have the activated status prior to the activate_vdo()
# call.
if (((vdoactivatestatus == 'enabled') or
(module.params['activated'] == 'yes')) and
(module.params['running'] == 'yes') and
(desiredvdo not in runningvdolist)):
if (vdoactivatestatus == 'enabled' or module.params['activated']) and module.params['running'] and desiredvdo not in runningvdolist:
start_vdo(module, desiredvdo, vdocmd)
if not result['changed']:
result['changed'] = True
@@ -849,14 +741,12 @@ def run_module():
module.exit_json(**result)
# Remove a desired VDO that currently exists.
if (desiredvdo in vdolist) and (state == 'absent'):
rc, out, err = module.run_command("%s remove --name=%s"
% (vdocmd, desiredvdo))
if desiredvdo in vdolist and state == 'absent':
rc, out, err = module.run_command([vdocmd, "remove", "--name=%s" % desiredvdo])
if rc == 0:
result['changed'] = True
else:
module.fail_json(msg="Removing VDO %s failed."
% desiredvdo, rc=rc, err=err)
module.fail_json(msg="Removing VDO %s failed." % desiredvdo, rc=rc, err=err)
# Print a post-run list of VDO volumes in the result object.
vdolist = inventory_vdos(module, vdocmd)
@@ -868,8 +758,7 @@ def run_module():
# not exist. Print a post-run list of VDO volumes in the result
# object.
vdolist = inventory_vdos(module, vdocmd)
module.log("received request to remove non-existent VDO volume %s"
% desiredvdo)
module.log("received request to remove non-existent VDO volume %s" % desiredvdo)
module.exit_json(**result)

View File
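The vdo hunks above replace string-built commands (`"%s growPhysical --name=%s" % (vdocmd, desiredvdo)`) with list arguments to `run_command`. A minimal sketch of why the list form is safer, using a hypothetical volume name; the `run_command` stand-in here only mirrors the shape of Ansible's helper, it is not the real API:

```python
import shlex
import subprocess

def run_command(args):
    """Minimal stand-in for AnsibleModule.run_command: takes a list of
    arguments, runs without a shell, and returns (rc, stdout, stderr)."""
    proc = subprocess.run(args, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

# A string command is re-split on whitespace; a value containing a space
# falls apart into extra tokens. A list keeps each argument intact.
name = "my vdo"  # hypothetical volume name containing a space
unsafe = "vdo growLogical --name=%s" % name          # splits into 4 tokens
safe = ["vdo", "growLogical", "--name=%s" % name]    # stays 3 arguments

assert shlex.split(unsafe) == ["vdo", "growLogical", "--name=my", "vdo"]
assert safe[2] == "--name=my vdo"
```

The same reasoning applies to the `growLogical` and `remove` calls in the diff: each option becomes one list element, so no quoting or escaping is ever needed.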

@@ -142,7 +142,7 @@ def main():
# Clean up old failed deployment
os.remove(os.path.join(deploy_path, "%s.failed" % deployment))
shutil.copyfile(src, os.path.join(deploy_path, deployment))
module.preserved_copy(src, os.path.join(deploy_path, deployment))
while not deployed:
deployed = is_deployed(deploy_path, deployment)
if is_failed(deploy_path, deployment):
@@ -153,7 +153,7 @@ def main():
if state == 'present' and deployed:
if module.sha1(src) != module.sha1(os.path.join(deploy_path, deployment)):
os.remove(os.path.join(deploy_path, "%s.deployed" % deployment))
shutil.copyfile(src, os.path.join(deploy_path, deployment))
module.preserved_copy(src, os.path.join(deploy_path, deployment))
deployed = False
while not deployed:
deployed = is_deployed(deploy_path, deployment)

View File
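The deployment hunks swap `shutil.copyfile` for `module.preserved_copy`, which also carries file attributes over to the destination. A rough stdlib approximation of the difference, assuming nothing about Ansible's internals beyond what the method name suggests: `copyfile` copies only contents, while `copy2` (contents plus `copystat`) keeps the mode and timestamps as well.

```python
import os
import shutil
import stat
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "app.war")
with open(src, "w") as f:
    f.write("payload")
os.chmod(src, 0o640)

plain = os.path.join(tmp, "plain.war")
shutil.copyfile(src, plain)       # contents only; mode comes from the umask

preserved = os.path.join(tmp, "preserved.war")
shutil.copy2(src, preserved)      # contents + metadata, copied from src

# The preserved copy keeps the 0o640 mode of the source.
assert stat.S_IMODE(os.stat(preserved).st_mode) == 0o640
```

`module.preserved_copy` additionally handles ownership and SELinux context through Ansible's own machinery, which plain `shutil` cannot do.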

@@ -51,7 +51,7 @@ class ScalewayAPI:
def __init__(self, auth_token, region):
self.session = requests.session()
self.session.headers.update({
'User-Agent': 'Ansible Python/%s' % (sys.version.split(' ')[0])
'User-Agent': 'Ansible Python/%s' % (sys.version.split(' ', 1)[0])
})
self.session.headers.update({
'X-Auth-Token': auth_token.encode('latin1')

View File
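The Scaleway hunk changes `sys.version.split(' ')[0]` to `sys.version.split(' ', 1)[0]`. Both yield the same leading token; the `maxsplit=1` form simply stops splitting after the first space instead of tokenizing the whole interpreter banner. A small illustration with a sample banner string:

```python
# sys.version looks roughly like "3.9.7 (default, Oct 13 2021, 06:44:56)";
# only the leading version number is wanted for the User-Agent header.
version = "3.9.7 (default, Oct 13 2021, 06:44:56)"  # sample value

# Identical first token either way...
assert version.split(' ')[0] == "3.9.7"
assert version.split(' ', 1)[0] == "3.9.7"

# ...but maxsplit=1 leaves the rest of the banner as a single chunk.
assert version.split(' ', 1) == ["3.9.7", "(default, Oct 13 2021, 06:44:56)"]
```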

@@ -12,10 +12,10 @@ import json
class SetEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, set):
return list(obj)
return json.JSONEncoder.default(self, obj)
def default(self, o):
if isinstance(o, set):
return list(o)
return json.JSONEncoder.default(self, o)
VBOX = "VBoxManage"

View File
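The `SetEncoder` hunk renames the parameter from `obj` to `o` to match the signature of `json.JSONEncoder.default`, which silences "parameters differ from overridden method" lint warnings; runtime behavior is unchanged. The encoder in context:

```python
import json

class SetEncoder(json.JSONEncoder):
    # Matching the base class's parameter name (o) keeps the override
    # signature-compatible; sets are serialized as JSON arrays.
    def default(self, o):
        if isinstance(o, set):
            return list(o)
        return json.JSONEncoder.default(self, o)

# Without the custom encoder, json.dumps raises TypeError on a set.
encoded = json.dumps({"tags": {"a"}}, cls=SetEncoder)
assert encoded == '{"tags": ["a"]}'
```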

@@ -0,0 +1,2 @@
dependencies:
- setup_remote_tmp_dir

View File

@@ -2,3 +2,4 @@ needs/root
shippable/posix/group2
destructive
skip/aix
skip/osx # FIXME

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_pkg_mgr
- setup_remote_tmp_dir
- prepare_tests

View File

@@ -2,21 +2,21 @@
- name: Create broken link
file:
src: /nowhere
dest: "{{ output_dir }}/nowhere.txt"
dest: "{{ remote_tmp_dir }}/nowhere.txt"
state: link
force: yes
- name: Archive broken link (tar.gz)
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_broken_link.tar.gz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_broken_link.tar.gz"
- name: Archive broken link (tar.bz2)
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_broken_link.tar.bz2"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_broken_link.tar.bz2"
- name: Archive broken link (zip)
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_broken_link.zip"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_broken_link.zip"

View File

@@ -74,7 +74,7 @@
register: backports_lzma_pip
- name: prep our files
copy: src={{ item }} dest={{output_dir}}/{{ item }}
copy: src={{ item }} dest={{remote_tmp_dir}}/{{ item }}
with_items:
- foo.txt
- bar.txt
@@ -84,15 +84,15 @@
- name: archive using gz
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_01.gz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_01.gz"
format: gz
register: archive_gz_result_01
- debug: msg="{{ archive_gz_result_01 }}"
- name: verify that the files archived
file: path={{output_dir}}/archive_01.gz state=file
file: path={{remote_tmp_dir}}/archive_01.gz state=file
- name: check if gz file exists and includes all text files
assert:
@@ -103,15 +103,15 @@
- name: archive using zip
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_01.zip"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_01.zip"
format: zip
register: archive_zip_result_01
- debug: msg="{{ archive_zip_result_01 }}"
- name: verify that the files archived
file: path={{output_dir}}/archive_01.zip state=file
file: path={{remote_tmp_dir}}/archive_01.zip state=file
- name: check if zip file exists
assert:
@@ -122,15 +122,15 @@
- name: archive using bz2
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_01.bz2"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_01.bz2"
format: bz2
register: archive_bz2_result_01
- debug: msg="{{ archive_bz2_result_01 }}"
- name: verify that the files archived
file: path={{output_dir}}/archive_01.bz2 state=file
file: path={{remote_tmp_dir}}/archive_01.bz2 state=file
- name: check if bzip file exists
assert:
@@ -141,15 +141,15 @@
- name: archive using xz
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_01.xz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_01.xz"
format: xz
register: archive_xz_result_01
- debug: msg="{{ archive_xz_result_01 }}"
- name: verify that the files archived
file: path={{output_dir}}/archive_01.xz state=file
file: path={{remote_tmp_dir}}/archive_01.xz state=file
- name: check if xz file exists
assert:
@@ -160,15 +160,15 @@
- name: archive and set mode to 0600
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_02.gz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_02.gz"
format: gz
mode: "u+rwX,g-rwx,o-rwx"
register: archive_bz2_result_02
- name: Test that the file modes were changed
stat:
path: "{{ output_dir }}/archive_02.gz"
path: "{{ remote_tmp_dir }}/archive_02.gz"
register: archive_02_gz_stat
- debug: msg="{{ archive_02_gz_stat}}"
@@ -182,20 +182,20 @@
- "{{ archive_bz2_result_02['archived']| length}} == 3"
- name: remove our gz
file: path="{{ output_dir }}/archive_02.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_02.gz" state=absent
- name: archive and set mode to 0600
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_02.zip"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_02.zip"
format: zip
mode: "u+rwX,g-rwx,o-rwx"
register: archive_zip_result_02
- name: Test that the file modes were changed
stat:
path: "{{ output_dir }}/archive_02.zip"
path: "{{ remote_tmp_dir }}/archive_02.zip"
register: archive_02_zip_stat
- name: Test that the file modes were changed
@@ -207,20 +207,20 @@
- "{{ archive_zip_result_02['archived']| length}} == 3"
- name: remove our zip
file: path="{{ output_dir }}/archive_02.zip" state=absent
file: path="{{ remote_tmp_dir }}/archive_02.zip" state=absent
- name: archive and set mode to 0600
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_02.bz2"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_02.bz2"
format: bz2
mode: "u+rwX,g-rwx,o-rwx"
register: archive_bz2_result_02
- name: Test that the file modes were changed
stat:
path: "{{ output_dir }}/archive_02.bz2"
path: "{{ remote_tmp_dir }}/archive_02.bz2"
register: archive_02_bz2_stat
- name: Test that the file modes were changed
@@ -232,19 +232,19 @@
- "{{ archive_bz2_result_02['archived']| length}} == 3"
- name: remove our bz2
file: path="{{ output_dir }}/archive_02.bz2" state=absent
file: path="{{ remote_tmp_dir }}/archive_02.bz2" state=absent
- name: archive and set mode to 0600
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_02.xz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_02.xz"
format: xz
mode: "u+rwX,g-rwx,o-rwx"
register: archive_xz_result_02
- name: Test that the file modes were changed
stat:
path: "{{ output_dir }}/archive_02.xz"
path: "{{ remote_tmp_dir }}/archive_02.xz"
register: archive_02_xz_stat
- name: Test that the file modes were changed
@@ -256,20 +256,20 @@
- "{{ archive_xz_result_02['archived']| length}} == 3"
- name: remove our xz
file: path="{{ output_dir }}/archive_02.xz" state=absent
file: path="{{ remote_tmp_dir }}/archive_02.xz" state=absent
- name: archive multiple files as list
archive:
path:
- "{{ output_dir }}/empty.txt"
- "{{ output_dir }}/foo.txt"
- "{{ output_dir }}/bar.txt"
dest: "{{ output_dir }}/archive_list.gz"
- "{{ remote_tmp_dir }}/empty.txt"
- "{{ remote_tmp_dir }}/foo.txt"
- "{{ remote_tmp_dir }}/bar.txt"
dest: "{{ remote_tmp_dir }}/archive_list.gz"
format: gz
register: archive_gz_list_result
- name: verify that the files archived
file: path={{output_dir}}/archive_list.gz state=file
file: path={{remote_tmp_dir}}/archive_list.gz state=file
- name: check if gz file exists and includes all text files
assert:
@@ -279,18 +279,18 @@
- "{{ archive_gz_list_result['archived'] | length }} == 3"
- name: remove our gz
file: path="{{ output_dir }}/archive_list.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_list.gz" state=absent
- name: test that gz archive that contains non-ascii filenames
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/test-archive-nonascii-くらとみ.tar.gz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.tar.gz"
format: gz
register: nonascii_result_0
- name: Check that file is really there
stat:
path: "{{ output_dir }}/test-archive-nonascii-くらとみ.tar.gz"
path: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.tar.gz"
register: nonascii_stat0
- name: Assert that nonascii tests succeeded
@@ -300,18 +300,18 @@
- "nonascii_stat0.stat.exists == true"
- name: remove nonascii test
file: path="{{ output_dir }}/test-archive-nonascii-くらとみ.tar.gz" state=absent
file: path="{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.tar.gz" state=absent
- name: test that bz2 archive that contains non-ascii filenames
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/test-archive-nonascii-くらとみ.bz2"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.bz2"
format: bz2
register: nonascii_result_1
- name: Check that file is really there
stat:
path: "{{ output_dir }}/test-archive-nonascii-くらとみ.bz2"
path: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.bz2"
register: nonascii_stat_1
- name: Assert that nonascii tests succeeded
@@ -321,18 +321,18 @@
- "nonascii_stat_1.stat.exists == true"
- name: remove nonascii test
file: path="{{ output_dir }}/test-archive-nonascii-くらとみ.bz2" state=absent
file: path="{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.bz2" state=absent
- name: test that xz archive that contains non-ascii filenames
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/test-archive-nonascii-くらとみ.xz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.xz"
format: xz
register: nonascii_result_1
- name: Check that file is really there
stat:
path: "{{ output_dir }}/test-archive-nonascii-くらとみ.xz"
path: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.xz"
register: nonascii_stat_1
- name: Assert that nonascii tests succeeded
@@ -342,18 +342,18 @@
- "nonascii_stat_1.stat.exists == true"
- name: remove nonascii test
file: path="{{ output_dir }}/test-archive-nonascii-くらとみ.xz" state=absent
file: path="{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.xz" state=absent
- name: test that zip archive that contains non-ascii filenames
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/test-archive-nonascii-くらとみ.zip"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.zip"
format: zip
register: nonascii_result_2
- name: Check that file is really there
stat:
path: "{{ output_dir }}/test-archive-nonascii-くらとみ.zip"
path: "{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.zip"
register: nonascii_stat_2
- name: Assert that nonascii tests succeeded
@@ -363,32 +363,32 @@
- "nonascii_stat_2.stat.exists == true"
- name: remove nonascii test
file: path="{{ output_dir }}/test-archive-nonascii-くらとみ.zip" state=absent
file: path="{{ remote_tmp_dir }}/test-archive-nonascii-くらとみ.zip" state=absent
- name: Test that excluded paths do not influence archive root
archive:
path:
- "{{ output_dir }}/sub/subfile.txt"
- "{{ output_dir }}"
- "{{ remote_tmp_dir }}/sub/subfile.txt"
- "{{ remote_tmp_dir }}"
exclude_path:
- "{{ output_dir }}"
dest: "{{ output_dir }}/test-archive-root.tgz"
- "{{ remote_tmp_dir }}"
dest: "{{ remote_tmp_dir }}/test-archive-root.tgz"
register: archive_root_result
- name: Assert that excluded paths do not influence archive root
assert:
that:
- archive_root_result.arcroot != output_dir
- archive_root_result.arcroot != remote_tmp_dir
- name: Remove archive root test
file:
path: "{{ output_dir }}/test-archive-root.tgz"
path: "{{ remote_tmp_dir }}/test-archive-root.tgz"
state: absent
- name: Test Single Target with format={{ item }}
archive:
path: "{{ output_dir }}/foo.txt"
dest: "{{ output_dir }}/test-single-target.{{ item }}"
path: "{{ remote_tmp_dir }}/foo.txt"
dest: "{{ remote_tmp_dir }}/test-single-target.{{ item }}"
format: "{{ item }}"
register: "single_target_test"
loop:
@@ -410,7 +410,7 @@
- name: Retrieve contents of single target archives
ansible.builtin.unarchive:
src: "{{ output_dir }}/test-single-target.zip"
src: "{{ remote_tmp_dir }}/test-single-target.zip"
dest: .
list_files: true
check_mode: true
@@ -427,7 +427,7 @@
- name: Remove single target test with format={{ item }}
file:
path: "{{ output_dir }}/test-single-target.{{ item }}"
path: "{{ remote_tmp_dir }}/test-single-target.{{ item }}"
state: absent
loop:
- zip
@@ -439,22 +439,22 @@
- name: Test that missing files result in incomplete state
archive:
path:
- "{{ output_dir }}/*.txt"
- "{{ output_dir }}/dne.txt"
exclude_path: "{{ output_dir }}/foo.txt"
dest: "{{ output_dir }}/test-incomplete-archive.tgz"
- "{{ remote_tmp_dir }}/*.txt"
- "{{ remote_tmp_dir }}/dne.txt"
exclude_path: "{{ remote_tmp_dir }}/foo.txt"
dest: "{{ remote_tmp_dir }}/test-incomplete-archive.tgz"
register: incomplete_archive_result
- name: Assert that incomplete archive has incomplete state
assert:
that:
- incomplete_archive_result is changed
- "'{{ output_dir }}/dne.txt' in incomplete_archive_result.missing"
- "'{{ output_dir }}/foo.txt' not in incomplete_archive_result.missing"
- "'{{ remote_tmp_dir }}/dne.txt' in incomplete_archive_result.missing"
- "'{{ remote_tmp_dir }}/foo.txt' not in incomplete_archive_result.missing"
- name: Remove incomplete archive
file:
path: "{{ output_dir }}/test-incomplete-archive.tgz"
path: "{{ remote_tmp_dir }}/test-incomplete-archive.tgz"
state: absent
- name: Remove backports.lzma if previously installed (pip)

View File
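The archive test hunks above uniformly replace `output_dir` with `remote_tmp_dir`, supplied by the `setup_remote_tmp_dir` role added to `meta/main.yml`. A plain-Python sketch of the idea behind the change: each run gets an isolated, automatically cleaned-up scratch directory, so parallel test runs cannot collide. (The role itself is Ansible YAML; this only mirrors the pattern.)

```python
import os
import tempfile

# An isolated per-run temp dir, removed automatically on exit, is what
# setup_remote_tmp_dir provides to the tests on the remote host.
with tempfile.TemporaryDirectory(prefix="ansible-test.") as remote_tmp_dir:
    archive = os.path.join(remote_tmp_dir, "archive_01.gz")
    with open(archive, "wb") as f:
        f.write(b"\x1f\x8b")  # placeholder bytes, not a real gzip archive
    assert os.path.isfile(archive)
    saved = remote_tmp_dir

# Everything under the directory is gone once the context exits.
assert not os.path.exists(saved)
```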

@@ -1,8 +1,8 @@
---
- name: archive using gz and remove src files
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_remove_01.gz"
path: "{{ remote_tmp_dir }}/*.txt"
dest: "{{ remote_tmp_dir }}/archive_remove_01.gz"
format: gz
remove: yes
register: archive_remove_result_01
@@ -10,7 +10,7 @@
- debug: msg="{{ archive_remove_result_01 }}"
- name: verify that the files archived
file: path={{ output_dir }}/archive_remove_01.gz state=file
file: path={{ remote_tmp_dir }}/archive_remove_01.gz state=file
- name: check if gz file exists and includes all text files and src files has been removed
assert:
@@ -20,19 +20,19 @@
- "{{ archive_remove_result_01['archived'] | length }} == 3"
- name: remove our gz
file: path="{{ output_dir }}/archive_remove_01.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_remove_01.gz" state=absent
- name: check if src files has been removed
assert:
that:
- "'{{ output_dir }}/{{ item }}' is not exists"
- "'{{ remote_tmp_dir }}/{{ item }}' is not exists"
with_items:
- foo.txt
- bar.txt
- empty.txt
- name: prep our files again
copy: src={{ item }} dest={{ output_dir }}/{{ item }}
copy: src={{ item }} dest={{ remote_tmp_dir }}/{{ item }}
with_items:
- foo.txt
- bar.txt
@@ -40,11 +40,11 @@
- name: create a temporary directory to be check if it will be removed
file:
path: "{{ output_dir }}/tmpdir"
path: "{{ remote_tmp_dir }}/tmpdir"
state: directory
- name: prep our files in tmpdir
copy: src={{ item }} dest={{ output_dir }}/tmpdir/{{ item }}
copy: src={{ item }} dest={{ remote_tmp_dir }}/tmpdir/{{ item }}
with_items:
- foo.txt
- bar.txt
@@ -52,8 +52,8 @@
- name: archive using gz and remove src directory
archive:
path: "{{ output_dir }}/tmpdir"
dest: "{{ output_dir }}/archive_remove_02.gz"
path: "{{ remote_tmp_dir }}/tmpdir"
dest: "{{ remote_tmp_dir }}/archive_remove_02.gz"
format: gz
remove: yes
register: archive_remove_result_02
@@ -61,7 +61,7 @@
- debug: msg="{{ archive_remove_result_02 }}"
- name: verify that the files archived
file: path={{ output_dir }}/archive_remove_02.gz state=file
file: path={{ remote_tmp_dir }}/archive_remove_02.gz state=file
- name: check if gz file exists and includes all text files
assert:
@@ -71,20 +71,20 @@
- "{{ archive_remove_result_02['archived'] | length }} == 3"
- name: remove our gz
file: path="{{ output_dir }}/archive_remove_02.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_remove_02.gz" state=absent
- name: check if src folder has been removed
assert:
that:
- "'{{ output_dir }}/tmpdir' is not exists"
- "'{{ remote_tmp_dir }}/tmpdir' is not exists"
- name: create temporary directory again
file:
path: "{{ output_dir }}/tmpdir"
path: "{{ remote_tmp_dir }}/tmpdir"
state: directory
- name: prep our files in tmpdir again
copy: src={{ item }} dest={{ output_dir }}/tmpdir/{{ item }}
copy: src={{ item }} dest={{ remote_tmp_dir }}/tmpdir/{{ item }}
with_items:
- foo.txt
- bar.txt
@@ -92,17 +92,17 @@
- name: archive using gz and remove src directory excluding one file
archive:
path: "{{ output_dir }}/tmpdir/*"
dest: "{{ output_dir }}/archive_remove_03.gz"
path: "{{ remote_tmp_dir }}/tmpdir/*"
dest: "{{ remote_tmp_dir }}/archive_remove_03.gz"
format: gz
remove: yes
exclude_path: "{{ output_dir }}/tmpdir/empty.txt"
exclude_path: "{{ remote_tmp_dir }}/tmpdir/empty.txt"
register: archive_remove_result_03
- debug: msg="{{ archive_remove_result_03 }}"
- name: verify that the files archived
file: path={{ output_dir }}/archive_remove_03.gz state=file
file: path={{ remote_tmp_dir }}/archive_remove_03.gz state=file
- name: check if gz file exists and includes all text files
assert:
@@ -112,13 +112,13 @@
- "{{ archive_remove_result_03['archived'] | length }} == 2"
- name: remove our gz
file: path="{{ output_dir }}/archive_remove_03.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_remove_03.gz" state=absent
- name: verify that excluded file is still present
file: path={{ output_dir }}/tmpdir/empty.txt state=file
file: path={{ remote_tmp_dir }}/tmpdir/empty.txt state=file
- name: prep our files in tmpdir again
copy: src={{ item }} dest={{ output_dir }}/tmpdir/{{ item }}
copy: src={{ item }} dest={{ remote_tmp_dir }}/tmpdir/{{ item }}
with_items:
- foo.txt
- bar.txt
@@ -129,27 +129,27 @@
- name: archive using gz and remove src directory
archive:
path:
- "{{ output_dir }}/tmpdir/*.txt"
- "{{ output_dir }}/tmpdir/sub/*"
dest: "{{ output_dir }}/archive_remove_04.gz"
- "{{ remote_tmp_dir }}/tmpdir/*.txt"
- "{{ remote_tmp_dir }}/tmpdir/sub/*"
dest: "{{ remote_tmp_dir }}/archive_remove_04.gz"
format: gz
remove: yes
exclude_path: "{{ output_dir }}/tmpdir/sub/subfile.txt"
exclude_path: "{{ remote_tmp_dir }}/tmpdir/sub/subfile.txt"
register: archive_remove_result_04
- debug: msg="{{ archive_remove_result_04 }}"
- name: verify that the files archived
file: path={{ output_dir }}/archive_remove_04.gz state=file
file: path={{ remote_tmp_dir }}/archive_remove_04.gz state=file
- name: remove our gz
file: path="{{ output_dir }}/archive_remove_04.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_remove_04.gz" state=absent
- name: verify that excluded sub file is still present
file: path={{ output_dir }}/tmpdir/sub/subfile.txt state=file
file: path={{ remote_tmp_dir }}/tmpdir/sub/subfile.txt state=file
- name: prep our files in tmpdir again
copy: src={{ item }} dest={{ output_dir }}/tmpdir/{{ item }}
copy: src={{ item }} dest={{ remote_tmp_dir }}/tmpdir/{{ item }}
with_items:
- foo.txt
- bar.txt
@@ -160,19 +160,19 @@
- name: archive using gz and remove src directory
archive:
path:
- "{{ output_dir }}/tmpdir/"
dest: "{{ output_dir }}/archive_remove_05.gz"
- "{{ remote_tmp_dir }}/tmpdir/"
dest: "{{ remote_tmp_dir }}/archive_remove_05.gz"
format: gz
remove: yes
exclude_path: "{{ output_dir }}/tmpdir/sub/subfile.txt"
exclude_path: "{{ remote_tmp_dir }}/tmpdir/sub/subfile.txt"
register: archive_remove_result_05
- name: verify that the files archived
file: path={{ output_dir }}/archive_remove_05.gz state=file
file: path={{ remote_tmp_dir }}/archive_remove_05.gz state=file
- name: Verify source files were removed
file:
path: "{{ output_dir }}/tmpdir"
path: "{{ remote_tmp_dir }}/tmpdir"
state: absent
register: archive_source_file_removal_05
@@ -183,4 +183,4 @@
- archive_source_file_removal_05 is not changed
- name: remove our gz
file: path="{{ output_dir }}/archive_remove_05.gz" state=absent
file: path="{{ remote_tmp_dir }}/archive_remove_05.gz" state=absent

View File

@@ -58,3 +58,40 @@
"PLAY RECAP *********************************************************************",
"testhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 "
]
- name: Test to_yaml
environment:
ANSIBLE_NOCOLOR: 'true'
ANSIBLE_FORCE_COLOR: 'false'
ANSIBLE_STDOUT_CALLBACK: community.general.yaml
playbook: |
- hosts: testhost
gather_facts: false
vars:
data: |
line 1
line 2
line 3
tasks:
- name: Test to_yaml
debug:
msg: "{{ '{{' }}'{{ '{{' }}'{{ '}}' }} data | to_yaml {{ '{{' }}'{{ '}}' }}'{{ '}}' }}"
# The above should be: msg: "{{ data | to_yaml }}"
# Unfortunately, the way Ansible handles templating, we need to do some funny 'escaping' tricks...
expected_output: [
"",
"PLAY [testhost] ****************************************************************",
"",
"TASK [Test to_yaml] ************************************************************",
"ok: [testhost] => ",
" msg: |-",
" 'line 1",
" ",
" line 2",
" ",
" line 3",
" ",
" '",
"",
"PLAY RECAP *********************************************************************",
"testhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 "
]

View File

@@ -2,3 +2,4 @@
dependencies:
- setup_pkg_mgr
- setup_openssl
- setup_remote_tmp_dir

View File

@@ -7,7 +7,7 @@
vars:
consul_version: 1.5.0
consul_uri: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/consul/consul_{{ consul_version }}_{{ ansible_system | lower }}_{{ consul_arch }}.zip
consul_cmd: '{{ output_dir }}/consul'
consul_cmd: '{{ remote_tmp_dir }}/consul'
block:
- name: register pyOpenSSL version
command: '{{ ansible_python_interpreter }} -c ''import OpenSSL; print(OpenSSL.__version__)'''
@@ -27,19 +27,19 @@
block:
- name: Generate privatekey
community.crypto.openssl_privatekey:
path: '{{ output_dir }}/privatekey.pem'
path: '{{ remote_tmp_dir }}/privatekey.pem'
- name: Generate CSR
community.crypto.openssl_csr:
path: '{{ output_dir }}/csr.csr'
privatekey_path: '{{ output_dir }}/privatekey.pem'
path: '{{ remote_tmp_dir }}/csr.csr'
privatekey_path: '{{ remote_tmp_dir }}/privatekey.pem'
subject:
commonName: localhost
- name: Generate selfsigned certificate
register: selfsigned_certificate
community.crypto.openssl_certificate:
path: '{{ output_dir }}/cert.pem'
csr_path: '{{ output_dir }}/csr.csr'
privatekey_path: '{{ output_dir }}/privatekey.pem'
community.crypto.x509_certificate:
path: '{{ remote_tmp_dir }}/cert.pem'
csr_path: '{{ remote_tmp_dir }}/csr.csr'
privatekey_path: '{{ remote_tmp_dir }}/privatekey.pem'
provider: selfsigned
selfsigned_digest: sha256
- name: Install unzip
@@ -59,21 +59,21 @@
- name: Download consul binary
unarchive:
src: '{{ consul_uri }}'
dest: '{{ output_dir }}'
dest: '{{ remote_tmp_dir }}'
remote_src: true
register: result
until: result is success
- vars:
remote_dir: '{{ echo_output_dir.stdout }}'
remote_dir: '{{ echo_remote_tmp_dir.stdout }}'
block:
- command: echo {{ output_dir }}
register: echo_output_dir
- command: echo {{ remote_tmp_dir }}
register: echo_remote_tmp_dir
- name: Create configuration file
template:
src: consul_config.hcl.j2
dest: '{{ output_dir }}/consul_config.hcl'
dest: '{{ remote_tmp_dir }}/consul_config.hcl'
- name: Start Consul (dev mode enabled)
shell: nohup {{ consul_cmd }} agent -dev -config-file {{ output_dir }}/consul_config.hcl </dev/null >/dev/null 2>&1 &
shell: nohup {{ consul_cmd }} agent -dev -config-file {{ remote_tmp_dir }}/consul_config.hcl </dev/null >/dev/null 2>&1 &
- name: Create some data
command: '{{ consul_cmd }} kv put data/value{{ item }} foo{{ item }}'
loop:
@@ -83,5 +83,5 @@
- import_tasks: consul_session.yml
always:
- name: Kill consul process
shell: kill $(cat {{ output_dir }}/consul.pid)
shell: kill $(cat {{ remote_tmp_dir }}/consul.pid)
ignore_errors: true

View File

@@ -3,4 +3,3 @@ needs/root
skip/macos
skip/osx
skip/freebsd
disabled # FIXME

View File

@@ -6,7 +6,7 @@
host: copr.fedorainfracloud.org
state: enabled
name: '@copr/integration_tests'
chroot: centos-stream-x86_64
chroot: fedora-rawhide-x86_64
register: result
- name: assert that the copr project was enabled
@@ -21,7 +21,7 @@
copr:
state: enabled
name: '@copr/integration_tests'
chroot: centos-stream-x86_64
chroot: fedora-rawhide-x86_64
register: result
- name: assert that the copr project was enabled
@@ -46,7 +46,7 @@
copr:
state: disabled
name: '@copr/integration_tests'
chroot: centos-stream-x86_64
chroot: fedora-rawhide-x86_64
register: result
- name: assert that the copr project was disabled
@@ -61,4 +61,4 @@
host: copr.fedorainfracloud.org
state: absent
name: '@copr/integration_tests'
chroot: centos-stream-x86_64
chroot: fedora-rawhide-x86_64

View File

@@ -1,2 +1,3 @@
dependencies:
- prepare_tests
- setup_remote_tmp_dir

View File

@@ -5,7 +5,7 @@
####################################################################
- name: record the output directory
set_fact: deploy_helper_test_root={{output_dir}}/deploy_helper_test_root
set_fact: deploy_helper_test_root={{remote_tmp_dir}}/deploy_helper_test_root
- name: State=query with default parameters
deploy_helper: path={{ deploy_helper_test_root }} state=query

View File

@@ -70,9 +70,9 @@
- 'uuid3.stdout == uuid4.stdout' # unchanged
- when:
- (grow | bool and (fstype != "vfat" or resize_vfat)) or
- ((grow | bool and (fstype != "vfat" or resize_vfat)) or
(fstype == "xfs" and ansible_system == "Linux" and
ansible_distribution not in ["CentOS", "Ubuntu"])
ansible_distribution not in ["CentOS", "Ubuntu", "openSUSE Leap"]))
block:
- name: Check that resizefs does nothing if device size is not changed
filesystem:

View File
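The filesystem hunk wraps the whole `when:` expression in one outer pair of parentheses (and adds "openSUSE Leap" to the exclusion list), making the intended grouping of the `or` arms explicit. As a general illustration of why explicit grouping matters in such conditions: `and` binds tighter than `or` in Python and Jinja2, so parentheses can change which branch fires.

```python
# Sample flags standing in for the test's grow/fstype conditions.
grow, resizable, xfs_case = False, True, True

# Default precedence: `and` binds tighter, so this is (grow and resizable) or xfs_case.
assert (grow and resizable or xfs_case) is True
assert ((grow and resizable) or xfs_case) is True   # same grouping, spelled out

# Grouping the `or` first gives a different result for the same inputs.
assert (grow and (resizable or xfs_case)) is False
```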

@@ -0,0 +1,2 @@
dependencies:
- setup_remote_tmp_dir

View File

@@ -8,9 +8,6 @@
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: Test random_mac filter bad argument type
debug:
var: "0 | community.general.random_mac"

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_pkg_mgr
- setup_remote_tmp_dir
- prepare_tests

View File

@@ -122,7 +122,7 @@
gem:
name: gist
state: present
install_dir: "{{ output_dir }}/gems"
install_dir: "{{ remote_tmp_dir }}/gems"
ignore_errors: yes
register: install_gem_fail_result
@@ -141,12 +141,12 @@
name: gist
state: present
user_install: no
install_dir: "{{ output_dir }}/gems"
install_dir: "{{ remote_tmp_dir }}/gems"
register: install_gem_result
- name: Find gems in custom directory
find:
paths: "{{ output_dir }}/gems/gems"
paths: "{{ remote_tmp_dir }}/gems/gems"
file_type: directory
contains: gist
register: gem_search
@@ -163,12 +163,12 @@
name: gist
state: absent
user_install: no
install_dir: "{{ output_dir }}/gems"
install_dir: "{{ remote_tmp_dir }}/gems"
register: install_gem_result
- name: Find gems in custom directory
find:
paths: "{{ output_dir }}/gems/gems"
paths: "{{ remote_tmp_dir }}/gems/gems"
file_type: directory
contains: gist
register: gem_search

View File

@@ -0,0 +1,2 @@
dependencies:
- setup_remote_tmp_dir

View File

@@ -6,7 +6,7 @@
name: "{{ option_name }}"
value: "{{ option_value }}"
scope: "file"
file: "{{ output_dir }}/gitconfig_file"
file: "{{ remote_tmp_dir }}/gitconfig_file"
state: present
register: result
@@ -14,7 +14,7 @@
git_config:
name: "{{ option_name }}"
scope: "file"
file: "{{ output_dir }}/gitconfig_file"
file: "{{ remote_tmp_dir }}/gitconfig_file"
state: present
register: get_result
@@ -26,4 +26,3 @@
- set_result.diff.after == option_value + "\n"
- get_result is not changed
- get_result.config_value == option_value
...

View File

@@ -8,6 +8,5 @@
- name: set up without value (file)
file:
path: "{{ output_dir }}/gitconfig_file"
path: "{{ remote_tmp_dir }}/gitconfig_file"
state: absent
...

View File

@@ -9,5 +9,4 @@
- name: set up with value (file)
copy:
src: gitconfig
dest: "{{ output_dir }}/gitconfig_file"
...
dest: "{{ remote_tmp_dir }}/gitconfig_file"

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_pkg_mgr
- setup_remote_tmp_dir
- prepare_tests

View File

@@ -6,14 +6,14 @@
- name: set where to extract the repo
set_fact:
checkout_dir: "{{ output_dir }}/hg_project_test"
checkout_dir: "{{ remote_tmp_dir }}/hg_project_test"
- name: set what repo to use
set_fact:
repo: "http://hg.pf.osdn.net/view/a/ak/akasurde/hg_project_test"
- name: clean out the output_dir
shell: rm -rf {{ output_dir }}/*
- name: clean out the remote_tmp_dir
shell: rm -rf {{ remote_tmp_dir }}/*
- name: verify that mercurial is installed so this test can continue
shell: which hg

View File

@@ -32,54 +32,66 @@
# that:
# - upgrade_option_result.changed
- name: Install xz package using homebrew
homebrew:
name: xz
state: present
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: xz_result
- vars:
package_name: gnu-tar
- assert:
that:
- xz_result.changed
block:
- name: Make sure {{ package_name }} package is not installed
homebrew:
name: "{{ package_name }}"
state: absent
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
- name: Again install xz package using homebrew
homebrew:
name: xz
state: present
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: xz_result
- name: Install {{ package_name }} package using homebrew
homebrew:
name: "{{ package_name }}"
state: present
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: package_result
- assert:
that:
- not xz_result.changed
- assert:
that:
- package_result.changed
- name: Uninstall xz package using homebrew
homebrew:
name: xz
state: absent
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: xz_result
- name: Again install {{ package_name }} package using homebrew
homebrew:
name: "{{ package_name }}"
state: present
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: package_result
- assert:
that:
- xz_result.changed
- assert:
that:
- not package_result.changed
- name: Again uninstall xz package using homebrew
homebrew:
name: xz
state: absent
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: xz_result
- name: Uninstall {{ package_name }} package using homebrew
homebrew:
name: "{{ package_name }}"
state: absent
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: package_result
- assert:
that:
- not xz_result.changed
- assert:
that:
- package_result.changed
- name: Again uninstall {{ package_name }} package using homebrew
homebrew:
name: "{{ package_name }}"
state: absent
update_homebrew: no
become: yes
become_user: "{{ brew_stat.stat.pw_name }}"
register: package_result
- assert:
that:
- not package_result.changed

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_pkg_mgr
- setup_remote_tmp_dir
- prepare_tests

View File

@@ -14,15 +14,23 @@
- debug: var=install_pycdlib
- set_fact:
output_dir_test: '{{ output_dir }}/test_iso_create'
output_test_dir: '{{ remote_tmp_dir }}/test_iso_create'
# - include_tasks: prepare_dest_dir.yml
- name: Copy files and directories
copy:
src: '{{ item }}'
dest: '{{ remote_tmp_dir }}/{{ item }}'
loop:
- test1.cfg
- test_dir
- name: Test check mode
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
dest_iso: "{{ output_dir_test }}/test.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
dest_iso: "{{ output_test_dir }}/test.iso"
interchange_level: 3
register: iso_result
check_mode: yes
@@ -30,7 +38,7 @@
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test.iso"
path: "{{ output_test_dir }}/test.iso"
register: iso_file
- debug: var=iso_file
- assert:
@@ -41,15 +49,15 @@
- name: Create iso file with a specified file
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
dest_iso: "{{ output_dir_test }}/test.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
dest_iso: "{{ output_test_dir }}/test.iso"
interchange_level: 3
register: iso_result
- debug: var=iso_result
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test.iso"
path: "{{ output_test_dir }}/test.iso"
register: iso_file
- assert:
@@ -60,16 +68,16 @@
- name: Create iso file with a specified file and folder
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
- "{{ role_path }}/files/test_dir"
dest_iso: "{{ output_dir_test }}/test1.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
- "{{ remote_tmp_dir }}/test_dir"
dest_iso: "{{ output_test_dir }}/test1.iso"
interchange_level: 3
register: iso_result
- debug: var=iso_result
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test1.iso"
path: "{{ output_test_dir }}/test1.iso"
register: iso_file
- assert:
@@ -80,15 +88,15 @@
- name: Create iso file with volume identification string
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
dest_iso: "{{ output_dir_test }}/test2.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
dest_iso: "{{ output_test_dir }}/test2.iso"
vol_ident: "OEMDRV"
register: iso_result
- debug: var=iso_result
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test2.iso"
path: "{{ output_test_dir }}/test2.iso"
register: iso_file
- assert:
@@ -99,15 +107,15 @@
- name: Create iso file with Rock Ridge extention
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
dest_iso: "{{ output_dir_test }}/test3.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
dest_iso: "{{ output_test_dir }}/test3.iso"
rock_ridge: "1.09"
register: iso_result
- debug: var=iso_result
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test3.iso"
path: "{{ output_test_dir }}/test3.iso"
register: iso_file
- assert:
@@ -118,15 +126,15 @@
- name: Create iso file with Joliet extension
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
dest_iso: "{{ output_dir_test }}/test4.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
dest_iso: "{{ output_test_dir }}/test4.iso"
joliet: 3
register: iso_result
- debug: var=iso_result
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test4.iso"
path: "{{ output_test_dir }}/test4.iso"
register: iso_file
- assert:
@@ -137,15 +145,15 @@
- name: Create iso file with UDF enabled
iso_create:
src_files:
- "{{ role_path }}/files/test1.cfg"
dest_iso: "{{ output_dir_test }}/test5.iso"
- "{{ remote_tmp_dir }}/test1.cfg"
dest_iso: "{{ output_test_dir }}/test5.iso"
udf: True
register: iso_result
- debug: var=iso_result
- name: Check if iso file created
stat:
path: "{{ output_dir_test }}/test5.iso"
path: "{{ output_test_dir }}/test5.iso"
register: iso_file
- assert:

View File

@@ -3,10 +3,10 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: Make sure our testing sub-directory does not exist
file:
path: '{{ output_dir_test }}'
path: '{{ output_test_dir }}'
state: absent
- name: Create our testing sub-directory
file:
path: '{{ output_dir_test }}'
path: '{{ output_test_dir }}'
state: directory

View File

@@ -1,3 +1,4 @@
shippable/posix/group1
destructive
skip/aix
skip/osx # FIXME

View File

@@ -2,3 +2,4 @@ dependencies:
- setup_pkg_mgr
- prepare_tests
- setup_epel
- setup_remote_tmp_dir

View File

@@ -23,7 +23,7 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact:
output_dir_test: '{{ output_dir }}/test_iso_extract'
output_test_dir: '{{ remote_tmp_dir }}/test_iso_extract'
- name: Install 7zip
import_tasks: 7zip.yml

View File

@@ -19,15 +19,15 @@
- name: Make sure our testing sub-directory does not exist
file:
path: '{{ output_dir_test }}'
path: '{{ output_test_dir }}'
state: absent
- name: Create our testing sub-directory
file:
path: '{{ output_dir_test }}'
path: '{{ output_test_dir }}'
state: directory
- name: copy the iso to the test dir
copy:
src: test.iso
dest: '{{ output_dir_test }}'
dest: '{{ output_test_dir }}'

View File

@@ -19,8 +19,8 @@
- name: Extract the iso
iso_extract:
image: '{{ output_dir_test }}/test.iso'
dest: '{{ output_dir_test }}'
image: '{{ output_test_dir }}/test.iso'
dest: '{{ output_test_dir }}'
files:
- 1.txt
- 2.txt
@@ -32,8 +32,8 @@
- name: Extract the iso again
iso_extract:
image: '{{ output_dir_test }}/test.iso'
dest: '{{ output_dir_test }}'
image: '{{ output_test_dir }}/test.iso'
dest: '{{ output_test_dir }}'
files:
- 1.txt
- 2.txt

View File

@@ -1,15 +1,15 @@
---
test_pkcs12_path: testpkcs.p12
test_keystore_path: keystore.jks
test_keystore2_path: "{{ output_dir }}/keystore2.jks"
test_keystore2_path: "{{ remote_tmp_dir }}/keystore2.jks"
test_keystore2_password: changeit
test_cert_path: "{{ output_dir }}/cert.pem"
test_key_path: "{{ output_dir }}/key.pem"
test_csr_path: "{{ output_dir }}/req.csr"
test_cert2_path: "{{ output_dir }}/cert2.pem"
test_key2_path: "{{ output_dir }}/key2.pem"
test_csr2_path: "{{ output_dir }}/req2.csr"
test_pkcs_path: "{{ output_dir }}/cert.p12"
test_pkcs2_path: "{{ output_dir }}/cert2.p12"
test_cert_path: "{{ remote_tmp_dir }}/cert.pem"
test_key_path: "{{ remote_tmp_dir }}/key.pem"
test_csr_path: "{{ remote_tmp_dir }}/req.csr"
test_cert2_path: "{{ remote_tmp_dir }}/cert2.pem"
test_key2_path: "{{ remote_tmp_dir }}/key2.pem"
test_csr2_path: "{{ remote_tmp_dir }}/req2.csr"
test_pkcs_path: "{{ remote_tmp_dir }}/cert.p12"
test_pkcs2_path: "{{ remote_tmp_dir }}/cert2.p12"
test_ssl: setupSSLServer.py
test_ssl_port: 21500

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_java_keytool
- setup_openssl
- setup_remote_tmp_dir

View File

@@ -9,15 +9,15 @@
- name: prep pkcs12 file
ansible.builtin.copy:
src: "{{ test_pkcs12_path }}"
dest: "{{ output_dir }}/{{ test_pkcs12_path }}"
dest: "{{ remote_tmp_dir }}/{{ test_pkcs12_path }}"
- name: import pkcs12
community.general.java_cert:
pkcs12_path: "{{ output_dir }}/{{ test_pkcs12_path }}"
pkcs12_path: "{{ remote_tmp_dir }}/{{ test_pkcs12_path }}"
pkcs12_password: changeit
pkcs12_alias: default
cert_alias: default
keystore_path: "{{ output_dir }}/{{ test_keystore_path }}"
keystore_path: "{{ remote_tmp_dir }}/{{ test_keystore_path }}"
keystore_pass: changeme_keystore
keystore_create: yes
state: present
@@ -30,11 +30,11 @@
- name: import pkcs12 with wrong password
community.general.java_cert:
pkcs12_path: "{{ output_dir }}/{{ test_pkcs12_path }}"
pkcs12_path: "{{ remote_tmp_dir }}/{{ test_pkcs12_path }}"
pkcs12_password: wrong_pass
pkcs12_alias: default
cert_alias: default_new
keystore_path: "{{ output_dir }}/{{ test_keystore_path }}"
keystore_path: "{{ remote_tmp_dir }}/{{ test_keystore_path }}"
keystore_pass: changeme_keystore
keystore_create: yes
state: present
@@ -49,9 +49,9 @@
- name: test fail on mutually exclusive params
community.general.java_cert:
cert_path: ca.crt
pkcs12_path: "{{ output_dir }}/{{ test_pkcs12_path }}"
pkcs12_path: "{{ remote_tmp_dir }}/{{ test_pkcs12_path }}"
cert_alias: default
keystore_path: "{{ output_dir }}/{{ test_keystore_path }}"
keystore_path: "{{ remote_tmp_dir }}/{{ test_keystore_path }}"
keystore_pass: changeme_keystore
keystore_create: yes
state: present
@@ -65,7 +65,7 @@
- name: test fail on missing required params
community.general.java_cert:
keystore_path: "{{ output_dir }}/{{ test_keystore_path }}"
keystore_path: "{{ remote_tmp_dir }}/{{ test_keystore_path }}"
keystore_pass: changeme_keystore
state: absent
ignore_errors: true
@@ -78,7 +78,7 @@
- name: delete object based on cert_alias parameter
community.general.java_cert:
keystore_path: "{{ output_dir }}/{{ test_keystore_path }}"
keystore_path: "{{ remote_tmp_dir }}/{{ test_keystore_path }}"
keystore_pass: changeme_keystore
cert_alias: default
state: absent
@@ -98,8 +98,8 @@
path: "{{ item }}"
state: absent
loop:
- "{{ output_dir }}/{{ test_pkcs12_path }}"
- "{{ output_dir }}/{{ test_keystore_path }}"
- "{{ remote_tmp_dir }}/{{ test_pkcs12_path }}"
- "{{ remote_tmp_dir }}/{{ test_keystore_path }}"
- "{{ test_keystore2_path }}"
- "{{ test_cert_path }}"
- "{{ test_key_path }}"

View File

@@ -239,13 +239,17 @@
- name: Copy the ssl server script
copy:
src: "setupSSLServer.py"
dest: "{{ output_dir }}"
dest: "{{ remote_tmp_dir }}"
- name: Create an SSL server that we will use for testing URL imports
command: python {{ output_dir }}/setupSSLServer.py {{ output_dir }} {{ test_ssl_port }}
command: "{{ ansible_python.executable }} {{ remote_tmp_dir }}/setupSSLServer.py {{ remote_tmp_dir }} {{ test_ssl_port }}"
async: 10
poll: 0
- name: "Wait for one second to make sure that the server script has actually been started"
pause:
seconds: 1
- name: |
Download the original cert.pem from our temporary server. The current cert should contain
cert2.pem. Importing this cert should return a status of changed

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_java_keytool
- setup_openssl
- setup_remote_tmp_dir

View File

@@ -7,7 +7,7 @@
block:
- name: Create private keys
community.crypto.openssl_privatekey:
path: "{{ output_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key' }}"
path: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key' }}"
size: 2048 # this should work everywhere
# The following is more efficient, but might not work everywhere:
# type: ECC
@@ -21,8 +21,8 @@
- name: Create CSRs
community.crypto.openssl_csr:
path: "{{ output_dir ~ '/' ~ item.name ~ '.csr' }}"
privatekey_path: "{{ output_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key' }}"
path: "{{ remote_tmp_dir ~ '/' ~ item.name ~ '.csr' }}"
privatekey_path: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key' }}"
privatekey_passphrase: "{{ item.passphrase | default(omit) }}"
commonName: "{{ item.commonName }}"
loop:
@@ -41,9 +41,9 @@
- name: Create certificates
community.crypto.x509_certificate:
path: "{{ output_dir ~ '/' ~ item.name ~ '.pem' }}"
csr_path: "{{ output_dir ~ '/' ~ item.name ~ '.csr' }}"
privatekey_path: "{{ output_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key' }}"
path: "{{ remote_tmp_dir ~ '/' ~ item.name ~ '.pem' }}"
csr_path: "{{ remote_tmp_dir ~ '/' ~ item.name ~ '.csr' }}"
privatekey_path: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key' }}"
privatekey_passphrase: "{{ item.passphrase | default(omit) }}"
provider: selfsigned
loop:
@@ -60,67 +60,113 @@
passphrase: hunter2
commonName: example.org
- name: Create a Java key store for the given certificates (check mode)
community.general.java_keystore: &create_key_store_data
name: example
certificate: "{{ lookup('file', output_dir ~ '/' ~ item.name ~ '.pem') }}"
private_key: "{{ lookup('file', output_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.key') }}"
private_key_passphrase: "{{ item.passphrase | default(omit) }}"
password: changeit
dest: "{{ output_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.jks' }}"
- name: Read certificates
slurp:
src: "{{ remote_tmp_dir ~ '/' ~ item.name ~ '.pem' }}"
loop: &create_key_store_loop
- name: cert
- name: cert-pw
passphrase: hunter2
register: certificates
- name: Read certificate keys
slurp:
src: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | d(item.name)) ~ '.key' }}"
loop: *create_key_store_loop
register: certificate_keys
- name: Create a Java key store for the given certificates (check mode)
community.general.java_keystore: &create_key_store_data
name: example
certificate: "{{ certificates.results[loop_index].content | b64decode }}"
private_key: "{{ certificate_keys.results[loop_index].content | b64decode }}"
private_key_passphrase: "{{ item.passphrase | default(omit) }}"
password: changeit
dest: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.jks' }}"
loop: *create_key_store_loop
loop_control:
index_var: loop_index
check_mode: yes
register: result_check
- name: Create a Java key store for the given certificates
community.general.java_keystore: *create_key_store_data
loop: *create_key_store_loop
loop_control:
index_var: loop_index
register: result
- name: Create a Java key store for the given certificates (idempotency, check mode)
community.general.java_keystore: *create_key_store_data
loop: *create_key_store_loop
loop_control:
index_var: loop_index
check_mode: yes
register: result_idem_check
- name: Create a Java key store for the given certificates (idempotency)
community.general.java_keystore: *create_key_store_data
loop: *create_key_store_loop
loop_control:
index_var: loop_index
register: result_idem
- name: Create a Java key store for the given certificates (certificate changed, check mode)
community.general.java_keystore: *create_key_store_data
- name: Read certificates (new)
slurp:
src: "{{ remote_tmp_dir ~ '/' ~ item.name ~ '.pem' }}"
loop: &create_key_store_loop_new_certs
- name: cert2
keyname: cert
- name: cert2-pw
keyname: cert-pw
passphrase: hunter2
register: certificates_new
- name: Read certificate keys (new)
slurp:
src: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | d(item.name)) ~ '.key' }}"
loop: *create_key_store_loop_new_certs
register: certificate_keys_new
- name: Create a Java key store for the given certificates (certificate changed, check mode)
community.general.java_keystore: &create_key_store_data_new_certs
name: example
certificate: "{{ certificates_new.results[loop_index].content | b64decode }}"
private_key: "{{ certificate_keys_new.results[loop_index].content | b64decode }}"
private_key_passphrase: "{{ item.passphrase | default(omit) }}"
password: changeit
dest: "{{ remote_tmp_dir ~ '/' ~ (item.keyname | default(item.name)) ~ '.jks' }}"
loop: *create_key_store_loop_new_certs
loop_control:
index_var: loop_index
check_mode: yes
register: result_change_check
- name: Create a Java key store for the given certificates (certificate changed)
community.general.java_keystore: *create_key_store_data
community.general.java_keystore: *create_key_store_data_new_certs
loop: *create_key_store_loop_new_certs
loop_control:
index_var: loop_index
register: result_change
- name: Create a Java key store for the given certificates (password changed, check mode)
community.general.java_keystore:
<<: *create_key_store_data
<<: *create_key_store_data_new_certs
password: hunter2
loop: *create_key_store_loop_new_certs
loop_control:
index_var: loop_index
check_mode: yes
register: result_pw_change_check
when: false # FIXME: module currently crashes
- name: Create a Java key store for the given certificates (password changed)
community.general.java_keystore:
<<: *create_key_store_data
<<: *create_key_store_data_new_certs
password: hunter2
loop: *create_key_store_loop_new_certs
loop_control:
index_var: loop_index
register: result_pw_change
when: false # FIXME: module currently crashes

View File

@@ -0,0 +1,2 @@
dependencies:
- setup_remote_tmp_dir

View File

@@ -16,7 +16,7 @@
- name: Install test smtpserver
copy:
src: '{{ item }}'
dest: '{{ output_dir }}/{{ item }}'
dest: '{{ remote_tmp_dir }}/{{ item }}'
loop:
- smtpserver.py
- smtpserver.crt
@@ -25,7 +25,7 @@
# FIXME: Verifying the mail after it was sent would be nice
# This would require either dumping the content, or registering async task output
- name: Start test smtpserver
shell: '{{ ansible_python.executable }} {{ output_dir }}/smtpserver.py 10025:10465'
shell: '{{ ansible_python.executable }} {{ remote_tmp_dir }}/smtpserver.py 10025:10465'
async: 30
poll: 0
register: smtpserver

View File

@@ -2,3 +2,4 @@
dependencies:
- setup_pkg_mgr
- setup_openssl
- setup_remote_tmp_dir

View File

@@ -6,7 +6,7 @@
vars:
nomad_version: 0.12.4
nomad_uri: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_{{ ansible_system | lower }}_{{ nomad_arch }}.zip
nomad_cmd: '{{ output_dir }}/nomad'
nomad_cmd: '{{ remote_tmp_dir }}/nomad'
block:
- name: register pyOpenSSL version
@@ -36,21 +36,21 @@
block:
- name: Generate privatekey
community.crypto.openssl_privatekey:
path: '{{ output_dir }}/privatekey.pem'
path: '{{ remote_tmp_dir }}/privatekey.pem'
- name: Generate CSR
community.crypto.openssl_csr:
path: '{{ output_dir }}/csr.csr'
privatekey_path: '{{ output_dir }}/privatekey.pem'
path: '{{ remote_tmp_dir }}/csr.csr'
privatekey_path: '{{ remote_tmp_dir }}/privatekey.pem'
subject:
commonName: localhost
- name: Generate selfsigned certificate
register: selfsigned_certificate
community.crypto.openssl_certificate:
path: '{{ output_dir }}/cert.pem'
csr_path: '{{ output_dir }}/csr.csr'
privatekey_path: '{{ output_dir }}/privatekey.pem'
community.crypto.x509_certificate:
path: '{{ remote_tmp_dir }}/cert.pem'
csr_path: '{{ remote_tmp_dir }}/csr.csr'
privatekey_path: '{{ remote_tmp_dir }}/privatekey.pem'
provider: selfsigned
selfsigned_digest: sha256
@@ -75,17 +75,17 @@
- name: Download nomad binary
unarchive:
src: '{{ nomad_uri }}'
dest: '{{ output_dir }}'
dest: '{{ remote_tmp_dir }}'
remote_src: true
register: result
until: result is success
- vars:
remote_dir: '{{ echo_output_dir.stdout }}'
remote_dir: '{{ echo_remote_tmp_dir.stdout }}'
block:
- command: echo {{ output_dir }}
register: echo_output_dir
- command: echo {{ remote_tmp_dir }}
register: echo_remote_tmp_dir
- name: Run tests integration
block:

View File

@@ -1,3 +1,4 @@
dependencies:
- setup_pkg_mgr
- setup_gnutar
- setup_remote_tmp_dir
