Compare commits

...

71 Commits
1.1.0 ... 1.2.0

Author SHA1 Message Date
Felix Fontein
5d3a2a3bd4 Release 1.2.0. 2020-09-30 21:39:31 +02:00
Felix Fontein
686cdf2a6b Add release summary for 1.2.0. 2020-09-30 21:38:01 +02:00
patchback[bot]
4928810dda Update BOTMETA.yml (#1019) (#1020)
* Update BOTMETA.yml

* Update .github/BOTMETA.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ee34fdb4ac)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-30 15:31:13 +00:00
patchback[bot]
4dc2e14039 Run tests with macOS 10.15. (#971) (#1015)
* Run tests with macOS 10.15.

* Restrict to macOS CI runs for now until they pass.

* Skip tests on macOS that are skipped on OSX.

* Disable consul test for macOS.

* Disable chroot connection tests for macOS.

* Add setup_gnutar role from https://github.com/ansible/ansible/pull/71841.

* Use setup_gnutar for yarn and npm tests.

* Revert "Restrict to macOS CI runs for now until they pass."

This reverts commit d945d0399f.

* hashi_vault lookup tests seem to be always unstable, disabling for now.

* Use homebrew module instead of command.

(cherry picked from commit eba5216be5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-30 16:32:08 +02:00
patchback[bot]
6ec769b051 Add inventory plugin for Stackpath Edge Compute (#856) (#1013)
* Add inventory plugin for Stackpath Edge Compute

* Update comments from PR regarding general issues.

* Convert requests to ansible open_url

* Add types to documentation and replace stack ids with stack names

* Replace stack_ids with stack_slugs for easier readability, fix pagination and separate getting lists to a function

* create initial test

* fix test name

* fix test to look at class variable as that function doesn't return the value

* fix pep line length limit in line 149

* Add validation function for config options.
Add more testing for validation and population functions

* set correct indentation for tests

* fix validate config to expect KeyError,
fix testing to have inventory data,
fix testing to use correct authentication function

* import InventoryData from the correct location

* remove test_authenticate since there's no dns resolution in the CI,
rename some stack_slugs to a more generic name
fix missing hostname_key for populate test

* Fix typo in workloadslug name for testing

* fix group name in assertion

* debug failing test

* fix missing hosts in assertion for group hosts

* fixes for documentation formatting
add commas to last item in all dictionaries

* end documentation description with a period

* fix typo in documentation

* More documentation corrections, remove unused local variable

(cherry picked from commit 951a7e2758)

Co-authored-by: shayrybak <shay.rybak@stackpath.com>
2020-09-30 13:52:47 +02:00
patchback[bot]
e4d3d24b26 Fix xml reports changed when node is not deleted (#1007) (#1012)
* Fix xml reports changed when node is not deleted

* Added changelog fragment

* Added tests for xml no change remove

* Added PR to changelog fragment
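
A minimal sketch of the idempotency fix described above, using the standard library's ElementTree (element names and the helper are illustrative, not the module's actual internals): only report a change when a matching node was actually removed.

```python
import xml.etree.ElementTree as ET

def remove_matching(root, tag):
    """Remove all elements with the given tag; return True only if any were removed."""
    parent_of = {child: parent for parent in root.iter() for child in parent}
    victims = [el for el in root.iter(tag) if el is not root]
    for el in victims:
        parent_of[el].remove(el)
    return bool(victims)  # changed=False when nothing matched

root = ET.fromstring("<config><obsolete/><keep/></config>")
print(remove_matching(root, "obsolete"))  # True: node existed and was deleted
print(remove_matching(root, "obsolete"))  # False: second run is a no-op
```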

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0243eabd30)

Co-authored-by: mklassen <lmklassen@gmail.com>
2020-09-30 13:44:57 +02:00
patchback[bot]
572e3f0814 Fix failing FreeBSD CI (#1010) (#1011)
* Print Python and pip version.

* Try a newer setuptools.

* Dump more info.

* Try to upgrade setuptools outside the venv.

* pip36 -> pip3

(cherry picked from commit 75d1894866)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-30 10:24:22 +02:00
patchback[bot]
e03ade818a pkgutil: add update all, check-mode, squashing and examples (#799) (#1009)
* pkgutil: add update all, check-mode, squashing and examples

Taken from https://github.com/ansible/ansible/pull/51651 by dagwieers, which was taken from https://github.com/ansible/ansible/pull/27866 by scathatheworm.  Let’s have one last attempt to get this merged.

> ##### SUMMARY
>
> Original PR #27866 from scathatheworm
>
> When working with Solaris pkgutil CSW packages, I came across this module being very basic in functionality, in particular, that I could not use it to update all CSW packages.
>
> When going into details into the code I also found it did not incorporate a possibility of doing dry-run from the underlying utility, or supported to specify multiple packages for operations.
>
> This module probably sees very little use, but it seemed like nice functionality to add and make it behave a little more like other package modules.
> ##### ISSUE TYPE
>
>     * Feature Pull Request
>
>
> ##### COMPONENT NAME
>
> pkgutil module
> ##### ANSIBLE VERSION
>
> ```
> ansible 2.3.1.0
>   config file = /etc/ansible/ansible.cfg
>   configured module search path = Default w/o overrides
>   python version = 2.7.5 (default, Aug  2 2016, 04:20:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
> ```
>
> ##### ADDITIONAL INFORMATION
>
>     * Added ability to upgrade all packages:
>
>
> ```yaml
> - pkgutil:
>     name: '*'
>     state: latest
> ```
>
>     * Added ability to modify state of a list of packages:
>
>
> ```yaml
> - pkgutil:
>     name:
>     - CSWtop
>     - CSWwget
>     - CSWlsof
>     state: present
> ```
>
>     * Added ability to have underlying tool perform a dry-run when using check mode, pkgutil -n
>
>     * Added ability to configure force option to force packages to state determined by repository (downgrade for example)
>
>
> ```yaml
> - pkgutil:
>     name: CSWtop
>     state: latest
>     force: yes
> ```
>
>     * Added more examples and documentation to show the new functionality

* Add changelog fragment.

* Observe changelog style guide

https://docs.ansible.com/ansible/devel/community/development_process.html#changelogs

Co-authored-by: Felix Fontein <felix@fontein.de>

* Since the module split, version_added no longer refers to core Ansible

Co-authored-by: Felix Fontein <felix@fontein.de>

* Tweak documentation

* Apply the new `elements` feature for specifying list types

Co-authored-by: Felix Fontein <felix@fontein.de>

* Set version_added

Co-authored-by: Felix Fontein <felix@fontein.de>

* Document `pkg` alias for `name`

* Be explicit about the purpose of states `installed` and `removed`.

* Force the user to specify their desired state.

* Review documentation for pkgutil module.

* Fully qualify svr4pkg module name

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit dd9e999c9f)

Co-authored-by: Peter Oliver <git@mavit.org.uk>
2020-09-30 06:57:10 +02:00
patchback[bot]
54725bea77 nmcli: set C locale when executing nmcli (#992) (#1008)
* Set C locale when calling nmcli.
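
The idea behind the fix, sketched with a generic subprocess call (the real module builds its nmcli invocation differently): overlay a C locale on the environment so the tool's output is not localized and stays parseable.

```python
import os
import subprocess

# Force an unlocalized environment; nmcli here stands in for any CLI
# whose output the module parses.
env = dict(os.environ, LANG="C", LC_ALL="C", LC_MESSAGES="C")
out = subprocess.run(["sh", "-c", "echo $LC_ALL"],
                     env=env, capture_output=True, text=True)
print(out.stdout.strip())  # C
```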

* Add changelog.

(cherry picked from commit e48083e66b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-29 23:13:54 +02:00
patchback[bot]
db24f9857a postgresql_privs: fix the module mistakes a procedure for a function (#996) (#1006)
* postgresql_privs: fix the module mistakes a procedure for a function

* add changelog fragment

* fix

(cherry picked from commit 220051768b)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-29 21:51:41 +03:00
patchback[bot]
c00147e532 ipa_user: Add userauthtype param (#951) (#1004)
* ipa_user: Add userauthtype param

* Add changelog fragment

* Update changelogs/fragments/951-ipa_user-add-userauthtype-param.yaml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/identity/ipa/ipa_user.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* ipa_user: Add example for userauthtype

Co-authored-by: Lina He <lhe@tmamission.com>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit 104f6a3e96)

Co-authored-by: Lina He <lh3su@virginia.edu>
2020-09-29 15:24:30 +00:00
patchback[bot]
0baceda7f6 nagios: force an active service check for all services of a particular host or for the host itself (#998) (#1003)
* Update nagios.py

Force an active service check for all services of a particular host or for the host itself

* Create 998-nagios-added_forced_check_for_all_services_or_host.yml

Added fragment

(cherry picked from commit 9b24b7a969)

Co-authored-by: trumbaut <thomas@rumbaut.be>
2020-09-29 14:31:05 +02:00
patchback[bot]
c563813e4e postgresql modules: fix BOTMETA.yml (#1000) (#1001)
(cherry picked from commit 097c609aab)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-29 14:49:27 +03:00
patchback[bot]
1dbd7d4d00 django_manage module: add new maintainers (#974) (#995)
* django_manage module: add new maintainers

* fix

(cherry picked from commit fbe66994a1)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-29 09:52:12 +02:00
patchback[bot]
41b72c0055 use FQCN when calling a module from action plugin (#967) (#988)
* use FQCN when calling a module from action plugin

https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.10.html#action-plugins-which-execute-modules-should-use-fully-qualified-module-names

* doc: add changelog fragment (minor_changes)

* move to shippable/posix/group1 to run tests with ansible 2.9

* move back to shippable/posix/group4

to not share testbed with docker engine on rhel7, that never releases
the xtables lock and leads 'iptables_state' tests to always fail.

(cherry picked from commit ea1fb83b0c)

Co-authored-by: quidame <quidame@poivron.org>
2020-09-29 06:17:45 +00:00
patchback[bot]
96a8390b5e Nagios: added 'acknowledge' and 'schedule_forced_svc_check' action (#820) (#990)
* bugfix for None-type error

* add acknowledge and service_check options

* fix whitespace issues

* documentation fix

* fix version error

* changelog fix

* Update plugins/modules/monitoring/nagios.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix int convert error

* Update plugins/modules/monitoring/nagios.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* indentation fix

Co-authored-by: Goetheyn Tony <tony.goetheyn@digipolis.gent>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7310a34b55)

Co-authored-by: Tony Goetheyn <13643294+tgoetheyn@users.noreply.github.com>
2020-09-29 06:17:36 +00:00
patchback[bot]
25474f657a Fix changes detection for createcachetable (#699) (#991)
(cherry picked from commit 3d19e15a7d)

Co-authored-by: Mikhail Khvoinitsky <m-khvoinitsky@users.noreply.github.com>
2020-09-29 06:17:27 +00:00
patchback[bot]
d7c4849473 parted: proper fix for change of partition label case (#594) (#986)
* parted: proper fix for change of partition label case
calling mkpart even when partition existed before mklabel call, fixes #522

* changelog fragment for parted fix #522

* Update changelogs/fragments/522-parted_change_label.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* typo in comment

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 4931fb2681)

Co-authored-by: Robert Osowiecki <robert.osowiecki@gmail.com>
2020-09-29 06:08:33 +02:00
patchback[bot]
0d459e5662 Fix zfs snapshot handling on root zvols (#936) (#985)
* Update zfs.py

Added support for snapshots when working with a root zvol that contains no / in its path.
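
The root-zvol case boils down to name parsing: a snapshot of a pool-level dataset like tank@snap has no '/' before the '@'. A hedged sketch of the distinction (helper name is hypothetical, not the module's code):

```python
def pool_of(name):
    """Return the pool for a dataset or snapshot name.

    The pool is everything before the first '/', or the whole name for a
    root zvol such as 'tank' (the old code assumed a '/' was present).
    """
    base = name.split("@", 1)[0]   # drop any snapshot suffix first
    return base.split("/", 1)[0]

print(pool_of("tank/data/vol@nightly"))  # tank
print(pool_of("tank@nightly"))           # tank (root zvol: no '/')
```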

* Added Changelog Fragment

(cherry picked from commit 13fb60f58f)

Co-authored-by: Hobnob <williamhobson62@aol.com>
2020-09-29 06:08:15 +02:00
patchback[bot]
01bbab6b2c Improve plugin sanity (#966) (#984)
* callback_type -> type.

* Mark authors as unknown.

* Add author field forgotten in #627.

* Fix author entries.

* Add author field forgotten in #127.

* Fix some types.

(cherry picked from commit e5da25915d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-29 04:04:05 +00:00
patchback[bot]
59a7064392 docker_container: fix idempotency problem with empty published_ports list (#979) (#982)
* Distinguish between [] and None.

* Add changelog fragment.

* Fix typo.
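
The distinction driving the fix, as a minimal sketch (function name hypothetical): None means "option not specified, leave the container alone", while [] is an explicit request for no published ports.

```python
def ports_differ(requested, current):
    """Return True if the requested published ports differ from the container's."""
    if requested is None:   # option omitted: never a reason to reconfigure
        return False
    # [] is a real value here: an existing port list must be removed.
    return sorted(requested) != sorted(current)

print(ports_differ(None, ["80/tcp"]))  # False
print(ports_differ([], ["80/tcp"]))    # True
print(ports_differ([], []))            # False
```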

(cherry picked from commit 4e1f6683d9)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-28 21:29:42 +02:00
patchback[bot]
8e7b779ec9 PostgreSQL team: add a new maintainer (#976) (#983)
(cherry picked from commit 564a625603)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-28 21:28:17 +02:00
Felix Fontein
1ba5344258 Fix config key/value pairs.
(cherry picked from commit 71bbabb96f)
2020-09-28 21:15:33 +02:00
Sviatoslav Sydorenko
58e9454379 Add initial Patchback config (#981)
(cherry picked from commit c173d4d5bc)
2020-09-28 21:15:33 +02:00
patchback[bot]
af3dec9b97 Add tgoetheyn as maintainer for nagios module. (#969) (#973)
(cherry picked from commit a353202716)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-28 08:49:06 +02:00
patchback[bot]
99a161bd06 Use platform.system for Darwin comparisons (#945) (#972)
* Use platform.system for Darwin comparisons

In Python3, `platform.platform()` returns `macOS-10.15.6-x86_64-i386-64bit` instead of `Darwin-10.15.6-x86_64-i386-64bit`. 

`platform.system()` returns `Darwin` on both py2 and py3.
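
Illustrated (the helper below is a stand-in for the module's check, not its actual code):

```python
import platform

def is_darwin(system_string):
    # platform.system() reports "Darwin" on both Python 2 and 3;
    # platform.platform() changed to "macOS-..." on Python 3, breaking
    # comparisons against the old "Darwin-..." prefix.
    return system_string == "Darwin"

print(is_darwin("Darwin"))                           # True
print(is_darwin("macOS-10.15.6-x86_64-i386-64bit"))  # False: why the old check broke
print(is_darwin(platform.system()))  # the portable check used by the fix
```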

* Add changelog fragment

* Update changelogs/fragments/945-darwin-timezone-py3.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 954fb0a311)

Co-authored-by: Matt Martz <matt@sivel.net>
2020-09-28 08:48:50 +02:00
patchback[bot]
feabad39f4 launchd: fix for user-level services (#899) (#970)
* launchd: fix for user-level services

* fix changelog fragment number

(cherry picked from commit 2794dc7b02)

Co-authored-by: tomaszn <tomaszn@users.noreply.github.com>
2020-09-28 07:15:48 +02:00
Tristan Le Guern
4a5276b589 proxmox_kvm: code cleanup (#934) (#956)
* proxmox_kvm: remove redundant parameters

The functions start_vm() and stop_vm() receive four common parameters:
module, proxmox, vm and vmid.
The last two are redundant, so keep only vm.

I also took the opportunity to remove extra API calls to proxmox.nodes()
by assigning its return value to a variable.

* proxmox_kvm: remove extra calls to status.current

The get_vm() function already returns an array of properties containing
the status so remove extra API calls to retrieve this information.

Example:

    [{'netin': 177232, 'name': 'test-instance', 'maxcpu': 1, 'node': 'prx-01', 'disk': 0, 'template': 0, 'uptime': 267, 'cpu': 0.0410680030805531, 'diskread': 165294744, 'maxdisk': 10737418240, 'vmid': 42, 'status': 'running', 'id': 'qemu/42', 'maxmem': 536870912, 'diskwrite': 18528256, 'netout': 2918, 'type': 'qemu', 'mem': 160284950}]

* proxmox_kvm: kill VZ_TYPE global variable

It reduces readability without providing much value nowadays.

* proxmox_kvm: simplify vmid generation

Forgotten suggestion from Felix Fontein in PR#811.

* proxmox_kvm: add changelog fragment for PR#934

(cherry picked from commit 02e80c610b)
2020-09-25 12:54:14 +02:00
patchback[bot]
e342dfb467 Add headers to ci tests (#954) (#960)
* CI tests: add note to main.yml

* improve

(cherry picked from commit 9d5044ac1a)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-25 09:04:59 +02:00
patchback[bot]
5b425fc297 [aerospike_migrations] - handle exception for unstable-cluster (#900) (#959)
* [aerospike_migrations] - handle exception when unstable-cluster is returned

* fix lint issue

* Update changelogs/fragments/900-aerospike-migration-handle-unstable-cluster.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Kailun Shi <kaishi@adobe.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 62ae120c50)

Co-authored-by: Kailun <kailun.shi@gmail.com>
2020-09-25 08:17:56 +02:00
patchback[bot]
d8328312a1 Add new members to gitlab team (#946) (#950)
* Add a member to gitlab team

* add

(cherry picked from commit 7613e0fb04)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-25 07:01:47 +02:00
patchback[bot]
2ce326ca5b Fix docker test setup. (#957) (#958)
(cherry picked from commit cf450e3a43)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-25 06:16:38 +02:00
patchback[bot]
90ed2fa5c3 postgresql_privs: add usage_on_types option (#941) (#955)
* postgresql_privs: add usage_on_types option

* add CI tests

* add changelog fragment

(cherry picked from commit 77bf8b9a66)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-24 10:33:42 +03:00
patchback[bot]
407d776610 Support same reset actions on Managers as on Systems (#903) (#947)
* bring Manager power cmds to parity with System power commands

* add changelog fragment

* Update changelogs/fragments/903-enhance-redfish-manager-reset-actions.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e382044e42)

Co-authored-by: Bill Dodd <billdodd@gmail.com>
2020-09-23 09:04:51 +02:00
patchback[bot]
951806c888 hashi_vault - Change token_path env var loading precedence (#902) (#938)
* Change how vault token is loaded

* Add changelog for PR #902

* Update changelogs/fragments/902-hashi_vault-token-path.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/hashi_vault.py

Add version_added

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/hashi_vault.py

Add version_added

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ba5b86cf4a)

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
2020-09-21 13:57:32 +02:00
patchback[bot]
0fe7ea63a8 Support use of VAULT_NAMESPACE env var (#929) (#937)
As per https://learn.hashicorp.com/tutorials/vault/namespaces, setting VAULT_NAMESPACE env var is a completely supported mechanism to make all vault command use said namespace, so hashi_vault lookup function should do the same.
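
The precedence this implies, sketched with hypothetical names (the real lookup's option handling is more involved): an explicit lookup option wins, otherwise the env var is honoured the same way the vault CLI honours it.

```python
import os

def resolve_namespace(option_value=None, environ=None):
    environ = os.environ if environ is None else environ
    # An explicit 'namespace' lookup option takes precedence; otherwise
    # fall back to VAULT_NAMESPACE, mirroring the vault CLI's behaviour.
    return option_value or environ.get("VAULT_NAMESPACE")

print(resolve_namespace("team-a", {"VAULT_NAMESPACE": "global"}))  # team-a
print(resolve_namespace(None, {"VAULT_NAMESPACE": "global"}))      # global
print(resolve_namespace(None, {}))                                 # None
```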

Co-authored-by: Holt Wilkins <hwilkins@palantir.com>
(cherry picked from commit 1a5702cf21)

Co-authored-by: holtwilkins <5665043+holtwilkins@users.noreply.github.com>
2020-09-21 13:57:18 +02:00
patchback[bot]
3a95a84963 Create gitlab_group_variable.py (#786) (#942)
* Create gitlab_group_variable.py

* Create gitlab_group_variable.py

* Create gitlab_group_variable.py

* Create gitlab_group_variable tests

* Fix test error E127: continuation line over-indented for visual indent

* Update plugins/modules/gitlab_group_variable.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Create symlink for module

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Apply suggestions from code review

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
Co-authored-by: Felix Fontein <felix@fontein.de>

* Apply suggestions from code review

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Validate all input before starting to do changes

Validate all input before starting to do changes

* Update gitlab_group_variable.py

* Update gitlab_group_variable.py

* Update plugins/modules/source_control/gitlab/gitlab_group_variable.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit 60c9da76e7)

Co-authored-by: S Code Man <30977678+scodeman@users.noreply.github.com>
2020-09-21 13:56:48 +02:00
patchback[bot]
2c3e93cc4d zypper_repository: Proper failure when python-xml is missing (#939) (#940)
* Proper error when python-xml is missing

* use missing_required_lib to output error

* Add changelog

(cherry picked from commit 09d89da0ab)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2020-09-21 13:56:37 +02:00
patchback[bot]
656b25a4a1 Remove amenonsen from maintainer lists (#943) (#944)
(cherry picked from commit 5e8b27a224)

Co-authored-by: Abhijit Menon-Sen <abhijit@menon-sen.com>
2020-09-21 11:56:21 +00:00
Tristan Le Guern
1863694297 [PR #831 backport][stable-1] proxmox_kvm: new function wait_for_task() (#933)
* proxmox_kvm: new function wait_for_task() (#831)

Allows some factorization of redundant code in stop_vm(), start_vm(),
create_vm() and main().
This new function also waits one extra second after a successful task execution as the API can be a bit ahead of Proxmox.

Before:

    TASK [ansible-role-proxmox-instance : Ensure test-instance is created]
    changed: [localhost]

    TASK [ansible-role-proxmox-instance : Ensure test-instance is updated]
    fatal: [localhost]: FAILED! => changed=false
      msg: VM test-instance does not exist in cluster.

After:

    TASK [ansible-role-proxmox-instance : Ensure test-instance is created]
    changed: [localhost]

    TASK [ansible-role-proxmox-instance : Ensure test-instance is updated]
    changed: [localhost]

With suggestions from Felix Fontein <felix@fontein.de>.
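
A sketch of what such a factored-out polling helper could look like (signature and status values are hypothetical; the real function talks to the Proxmox task API):

```python
import time

def wait_for_task(get_status, timeout=30, interval=0.5):
    """Poll a task until it stops, then wait a beat, because the API can
    briefly report completion before Proxmox has fully caught up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == "stopped":
            time.sleep(1)  # the extra second mentioned above
            return True
        time.sleep(interval)
    return False  # the caller turns this into a module failure

statuses = iter(["running", "running", "stopped"])
print(wait_for_task(lambda: next(statuses), timeout=5))  # True
```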

(cherry picked from commit 9a5fe4c9af)

* Update plugins/modules/cloud/misc/proxmox_kvm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-18 10:05:23 +02:00
patchback[bot]
c0f753dd21 postgresql_user: improve documentation (#872) (#928)
* postgresql_user: improve documentation

* fix

* Update plugins/modules/database/postgresql/postgresql_user.py

Co-authored-by: Sandra McCann <samccann@redhat.com>

* Update plugins/modules/database/postgresql/postgresql_user.py

Co-authored-by: Sandra McCann <samccann@redhat.com>

* Update plugins/modules/database/postgresql/postgresql_user.py

Co-authored-by: Sandra McCann <samccann@redhat.com>

* add suggested

* fix

* misc fix

Co-authored-by: Sandra McCann <samccann@redhat.com>
(cherry picked from commit 4c33e2ccb8)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-18 06:11:50 +02:00
patchback[bot]
369cde2320 add message_id (ts) option to slack module to allow editing messages (#843) (#927)
* add ts option to slack module to allow editing messages

* add version_added

Co-authored-by: Felix Fontein <felix@fontein.de>

* use correct API URL when updating

* add changelog fragment

* add diff/changed support for updating slack messages

* add an example on how to edit a message

* rename ts to message_id

* use the changed variable where possible

* correct conversation.history url

* proper formatting in documentation

Co-authored-by: Felix Fontein <felix@fontein.de>

* add channel to example

* correct conversation history url

* allow channel to start with C0

* fetch_url does not construct query parameters

* add missing argument

* return more data when nothing has changed

* use urlencode to construct query string

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6d60d3fa7f)

Co-authored-by: Andreas Lutro <anlutro@gmail.com>
2020-09-18 06:11:42 +02:00
patchback[bot]
e90872b486 [lookup_plugin/hashi_vault] add missing 'mount_point' param for approle (#897) (#926)
* [lookup_plugin/hashi_vault] add missing 'mount_point' param for approle

* [lookup_plugin/hashi_vault] add changelog fragment

* Update changelogs/fragments/897-lookup-plugin-hashivault-add-approle-mount-point.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Benoit Bayszczak <benoit.bayszczak@adevinta.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 51121e54d0)

Co-authored-by: Benoit Bayszczak <bbayszczak@users.noreply.github.com>
2020-09-17 21:38:46 +02:00
patchback[bot]
b52d3504cb BOTMETA.yml: add a new team member to team_virt (#871) (#924)
(cherry picked from commit 10fb2ffe5d)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-17 21:38:37 +02:00
patchback[bot]
1e150cda01 postgresql_user: add note explaining how to work with SCRAM-SHA passwords (#869) (#923)
(cherry picked from commit 7ac6db2490)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-17 21:26:18 +02:00
patchback[bot]
db135b83dc add a custom module for managing group membership in gitlab (#844) (#919)
* add a custom module for managing group membership in gitlab

* add integration test & modify the module

* modify the module

* modify the module

* remove whitespace

* add aliases file & modify the module

* minor and suggested modifications

* suggested modifications

* more minor modifications

* modified the module to use gitlabAuth

* removed api_url from the doc

* remove api_token

* add update access level for an existing user

* remove access level if statement

(cherry picked from commit 905239f530)

Co-authored-by: Zainab Alsaffar <za5775@rit.edu>
2020-09-17 21:19:34 +02:00
patchback[bot]
ad4866bb3b Rollback if nothing changed (#887) (#925)
Since the module unconditionally issues ALTER statements in order to
observe their effect on the postgres catalog - to determine whether the
privileges have changes - a rollback is thus advisable when in fact
nothing has changed.

fix #885
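
The shape of the fix, sketched against a hypothetical cursor wrapper (not the module's real internals): issue the ALTER, diff catalog snapshots to detect change, and roll back the transaction when nothing moved.

```python
def apply_privs(cursor, alter_sql, snapshot):
    """Run an ALTER statement, detect change by comparing catalog
    snapshots, and roll back when the statement was a no-op."""
    before = snapshot(cursor)
    cursor.execute(alter_sql)
    changed = snapshot(cursor) != before
    if not changed:
        cursor.connection.rollback()  # don't commit a pointless ALTER
    return changed
```

`snapshot` here would query the relevant pg_catalog tables; any DB-API cursor with a `connection.rollback()` fits the shape.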

(cherry picked from commit 2b3c8f4582)

Co-authored-by: Georg Sauthoff <mail@georg.so>
2020-09-17 21:17:29 +02:00
patchback[bot]
83339c44b3 Slack: moves to \S+ check instead of \w+-\w+ (#863) (#922)
* Moves to \S+ check instead of \w+-\w+
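
The validation change, illustratively (token strings below are made up, and the anchored patterns are simplified stand-ins for the module's actual check): the old pattern only accepted a single hyphenated word pair, rejecting legitimate tokens containing other characters.

```python
import re

OLD = re.compile(r"^\w+-\w+$")  # letters/digits/underscore around one hyphen only
NEW = re.compile(r"^\S+$")      # any run of non-whitespace characters

token = "T024BE7LD/B123ABC45/abcdefGHIJKL"  # made-up webhook-style token
print(bool(OLD.match(token)))  # False: not a plain word-hyphen-word shape
print(bool(NEW.match(token)))  # True
```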

* Adds changelog fragment

* Update changelogs/fragments/892-slack-token-validation.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Josh VanDeraa <josh.vanderaa@networktocode.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1eb3ab3b27)

Co-authored-by: Josh VanDeraa <josh.vanderaa+github@networktocode.com>
2020-09-17 20:59:42 +02:00
patchback[bot]
71633249c4 postgresql_privs: allow lowercased PUBLIC role (#858) (#921)
* postgresql_privs: allow lowercased PUBLIC role

* add changelog fragment

* improve CI

* fix changelog fragment

(cherry picked from commit bfdb76e60d)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-09-17 20:59:27 +02:00
patchback[bot]
fdf244d488 Reduce ignored sanity tests in cloud/misc modules (#845) (#920)
* Reduce ignored sanity tests in cloud/misc modules

* Reduce ignored sanity tests in cloud/misc modules for proxmox_kvm

* Fix

* Remove in ignore-2.9.txt

* Fix

* Remove unneeded alias

(cherry picked from commit d046dc34bf)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2020-09-17 20:59:12 +02:00
patchback[bot]
5575d454ab Remove duplicate copy of interfaces_file tests (#835) (#917)
* Remove duplicate copy of interfaces_file tests.

* Remove ignore.txt entries.

(cherry picked from commit 6ff6cc96d5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-17 20:02:13 +02:00
patchback[bot]
d4633cfcd5 gem: Fix get_installed_versions: correctly parse "default" version. (#783) (#918)
* Fix get_installed_versions: correctly parse "default" version.

gem query output of

    bundler (default: 2.1.4, 1.17.2)

Gets parsed as:

    ['default:', '1.17.2']

Fix this by skipping "default: " if present in the list of versions - by adding
it as an optional part of the regex, grouped as a non-capturing group to keep
the index of existing group.

This now correctly parses the above input as

    ['2.1.4', '1.17.2']

Fixes #782
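
The optional non-capturing group described above might look like this (the regex is a simplified stand-in for the module's real one):

```python
import re

# '(?:default: )?' consumes the marker without shifting group numbers.
LINE = re.compile(r"^(\S+)\s+\((?:default: )?(.+)\)$")

m = LINE.match("bundler (default: 2.1.4, 1.17.2)")
print(m.group(1))                                  # bundler
print([v.strip() for v in m.group(2).split(",")])  # ['2.1.4', '1.17.2']
```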

* Fix gem get_installed_versions (cont): add changelog fragment

* Update changelogs/fragments/783-fix-gem-installed-versions.yaml as per suggestion

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6dc98c08fd)

Co-authored-by: Vlad Mencl <vladimir.mencl@reannz.co.nz>
2020-09-17 20:01:58 +02:00
patchback[bot]
11315c8c69 disable notifications for jose-delarosa (#832) (#916)
(cherry picked from commit 88893b8204)

Co-authored-by: Bill Dodd <billdodd@gmail.com>
2020-09-17 19:50:33 +02:00
patchback[bot]
6c387f87dd redfish_command: allow setting the BootSourceOverrideEnabled property (#825) (#915)
Issue: 824

Co-authored-by: Scott Seekamp <sseekamp@digitalocean.com>
(cherry picked from commit d7ec65c19c)

Co-authored-by: Scott Seekamp <sylgeist@risei.net>
2020-09-17 19:50:12 +02:00
patchback[bot]
33cf4877f5 proxmox_kvm: fix idempotency issue with state=absent (#811) (#914)
When the `vmid` parameter is not supplied and the module can only rely on
name look-up an early failure can happen if the targeted VM doesn't exist.
In this case a task execution with the parameter `state` set to `absent`
will actually fail instead of being considered ok.

This patch introduces a deferred error-checking for non-existent VMs
by assigning the value -1 to the `vmid` parameter, allowing the actual
verification to be performed in the right code paths.
It also helps to differentiate between a non-existent `vmid` and a non-existent
VM `name`.

Previously:

    TASK [ansible-role-proxmox-instance : Remove instance-test]
    changed: [localhost]
    ...
    TASK [ansible-role-proxmox-instance : Remove instance-test]
    fatal: [localhost]: FAILED! => changed=false
      msg: VM instance-test does not exist in cluster.

Now:

    TASK [ansible-role-proxmox-instance : Remove instance-test]
    ok: [localhost]
    ...
    TASK [ansible-role-proxmox-instance : Remove instance-test]
    ok: [localhost]

Update changelogs/fragments/811-proxmox-kvm-state-absent.yml

With suggestions from Felix Fontein <felix@fontein.de>.
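
The deferred lookup might be sketched like this (data shapes and the helper name are illustrative):

```python
def lookup_vmid(vms, name):
    """Return the vmid for a VM name, or -1 as a sentinel meaning
    'not found': the error is raised later, only in code paths that
    actually need the VM to exist (state=absent treats it as ok)."""
    matches = [vm["vmid"] for vm in vms if vm["name"] == name]
    return matches[0] if matches else -1

vms = [{"name": "web01", "vmid": 100}]
print(lookup_vmid(vms, "web01"))          # 100
print(lookup_vmid(vms, "instance-test"))  # -1: deferred, not fatal
```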

(cherry picked from commit 73f8338980)

Co-authored-by: Tristan Le Guern <tleguern@bouledef.eu>
2020-09-17 19:41:19 +02:00
patchback[bot]
6e2fee77a7 Specify device for Pushover notification (#802) (#913)
* Specify device for Pushover notification

New parameter: device

Example:
- community.general.pushover:
    msg: '{{ inventory_hostname }} has been lost somewhere'
    app_token: wxfdksl
    user_key: baa5fe97f2c5ab3ca8f0bb59
    device: admins-iPhone
  delegate_to: localhost

Using the Pushover API, you can specify a device where the message should be delivered to. Instead of notifying all devices (the default), the message is sent only to the specified device. Multiple devices can be given separated by a comma.

This change is downwards compatible: omitting the device key sends the message to all devices (as before).

* Added changelog fragments file for pushover

File format as specified in https://docs.ansible.com/ansible/devel/community/development_process.html#changelogs-how-to.

* Added version_added information

As suggested by Felix (thanks!).

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit bf41ddc8ef)

Co-authored-by: Bernd Arnold <wopfel@gmail.com>
2020-09-17 19:41:11 +02:00
patchback[bot]
502e5ceb79 Fix for error trying to install cask with '@' in the name (#763) (#910)
* Fix for casks with @ in the name

* Add changelog fragment

* Update changelogs/fragments/homebrew-cask-at-symbol-fix.yaml

Period required at the end of changelog entry

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Use double backticks

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit 8f2b2d9dc6)

Co-authored-by: Brandon Boles <bb@zbeba.com>
2020-09-17 19:41:00 +02:00
patchback[bot]
4685a53f29 postgresql_privs: Fix bug with grant_option (#796) (#912)
(cherry picked from commit f3b82a9470)

Co-authored-by: Milan Ilic <35260522+ilicmilan@users.noreply.github.com>
2020-09-17 19:34:28 +02:00
patchback[bot]
79616f47cb pkg5: wrap 'to modify' package list (#789) (#911)
* pkg5: wrap 'to modify' package list

Moved from https://github.com/ansible/ansible/pull/56378

* Add changelog fragment.

* Correct markup.

* Update changelogs/fragments/789-pkg5-wrap-to-modify-package-list.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit d4e9b7575c)

Co-authored-by: Peter Oliver <github.com@mavit.org.uk>
2020-09-17 19:29:44 +02:00
patchback[bot]
496218b6e6 Scaleway Database Backup : Create new module (#741) (#909)
* Scaleway Database Backup : Create new module

* Add changelog

* Fix typo

* Remove module duplicate

* Fix typo

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Remove changelog

* Improve doc

* Improve documentation

* Improve parameters checking

* Fix blank space

* Change result name (data to metadata)

See https://github.com/ansible-collections/community.general/pull/741#discussion_r468537460

* Fix doc typo

* Update plugins/modules/cloud/scaleway/scaleway_database_backup.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* States explanations

03f58894ff (r470845287)

* Fix documentation typo

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8a16b51202)

Co-authored-by: Guillaume RODRIGUEZ <guiguidu31300@gmail.com>
2020-09-17 19:24:11 +02:00
patchback[bot]
8bd8ccd974 Fix terraform changed status detection test (#561) (#563) (#908)
* Fix terraform changed status detection test (#561)

* Add changelog fragment

* Update changelogs/fragments/563-update-terraform-status-test.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7e6bde2ce1)

Co-authored-by: AdamGoldsmith <adam.goldsmith75@gmail.com>
2020-09-17 19:23:43 +02:00
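The underlying bug (#561) appears to be a classic substring pitfall: searching terraform's summary for ``0 added, 0 changed, 0 destroyed`` also matches counts ending in zero, such as 10. A sketch of the failure mode and an anchored fix (the summary line is illustrative; the module's actual parsing differs in detail):

```python
import re

stdout = "Apply complete! Resources: 10 added, 0 changed, 0 destroyed."

# Naive substring test: "10 added, ..." contains "0 added, ...",
# so the run is wrongly classified as unchanged.
naive_unchanged = "0 added, 0 changed, 0 destroyed" in stdout

# Matching the counts explicitly avoids the false negative.
m = re.search(r"Resources: (\d+) added, (\d+) changed, (\d+) destroyed", stdout)
changed = any(int(n) != 0 for n in m.groups())
```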
patchback[bot]
c802de865a Fix various sanity errors in plugins (#881) (#893)
* Fix deprecation of callables.

* Fix various sanity errors.

* Revert callback_type -> type transform.

* Fix stat_result times: these are float according to https://github.com/python/typeshed/blob/master/stdlib/3/os/__init__.pyi

* Apply suggestions from code review

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit 7cf472855c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-17 16:39:57 +00:00
patchback[bot]
1dfd6e395c Fix lmdb lookup tests (#842) (#907)
* Set LMDB_PURE to prevent lmdb install trying to patch a system library.

* Fix name.

(cherry picked from commit b36f77515c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-17 15:49:41 +00:00
patchback[bot]
25eabb39a6 Fix diy callback tests. (#841) (#906)
(cherry picked from commit e5d15a56c3)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-17 16:21:08 +02:00
patchback[bot]
869e0e60c2 Fix CI (problem with cffi) (#892) (#898)
* Work around old pip versions which install yanked packages.

* Try II

* Proper approach.

* Avoid too old version being installed.

(cherry picked from commit 38996b7544)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-17 16:20:23 +02:00
patchback[bot]
cae5823685 Disable hg tests: these use bitbucket.org, which dropped mercurial support on 2020-08-26. (#839) (#905)
(cherry picked from commit 19b1a0049b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-09-17 14:56:07 +02:00
patchback[bot]
3d0dbc1fb0 New inventory module: Proxmox (#545) (#882)
* This commit adds proxmox inventory module and proxmox_snap for snapshot management

* Fixed pylint errors

* Missed this one..

* This should fix the doc errors

* Remove proxmox_snap to allow for single module per PR

* Changes as suggested by felixfontein in #535

* Reverted back to AnsibleError as module.fail_json broke it. Need to investigate further

* Made importerror behave similar to docker_swarm and gitlab_runner

* FALSE != False

* Added myself as author

* Added a requested feature from a colleague to also sort VMs based on their running state

* Prevent VM templates from being added to the inventory

* Processed feedback

* Updated my email and included version

* Processed doc feedback

* More feedback processed

* Shortened this line of documentation, it is a duplicate and it was causing a sanity error (> 160 characters)

* Added test from PR #736 to check what needs to be changed to make it work

* Changed some tests around

* Remove some tests, first get these working

* Disabled all tests, except the one I am hacking together now

* Added mocker, still trying to figure this out

* Am I looking in the right direction?

* Processed docs feedback

* Fixed bot feedback

* Removed all other tests, started with basic ones (borrowed from cobbler)

* Removed all other tests, started with basic ones (borrowed from cobbler)

* Removed all other tests, started with basic ones (borrowed from cobbler)

* Removed init_cache test as it is implemented on a different way in the original foreman/satellite inventory (and thus also this one)

* This actually passes! Need to check if I need to add asserts as well

* Made bot happy again?

* Added some assertions

* Added note about PVE API version

* Mocked only get_json, the rest functions as-is

* Fixed sanity errors

* Fixed version bump (again...) ;-)

* Processed feedback

(cherry picked from commit 73be912bf7)

Co-authored-by: Jeffrey van Pelt <jeff@vanpelt.one>
2020-09-17 14:34:25 +02:00
Felix Fontein
912583026f Avoid patchback backports to use unnecessary CI resources.
(cherry picked from commit 2b0879cdc4)
2020-09-16 21:43:57 +02:00
Jan-Philipp Litza
748304dadd interfaces_file: re.escape() old value (#880) 2020-09-12 20:10:31 +02:00
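That one-line fix matters because the old value is interpolated into a regular expression; without ``re.escape()``, metacharacters such as ``.`` match more than intended. A minimal illustration (the values are made up):

```python
import re

old_value = "10.0.0.1"  # contains '.', a regex metacharacter

# Unescaped, each dot matches any character, so an unrelated line matches:
assert re.search(old_value, "address 10a0b0c1")

# re.escape() makes the old value match literally:
assert re.search(re.escape(old_value), "address 10.0.0.1")
assert not re.search(re.escape(old_value), "address 10a0b0c1")
```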
Felix Fontein
253c2179de Copy changes from ansible/ansible#71551. (#877)
ci_complete

(cherry picked from commit fcee84b947)
2020-09-10 21:00:53 +02:00
Felix Fontein
fcc72e5af1 Next release will be 1.2.0. 2020-08-18 13:16:24 +02:00
529 changed files with 5852 additions and 4104 deletions

.github/BOTMETA.yml

@@ -9,6 +9,8 @@ files:
$actions/ironware.py:
maintainers: paulquack
labels: ironware networking
$actions/shutdown.py:
authors: nitzmahone samdoran aminvakil
$becomes/:
labels: become
$callbacks/:
@@ -71,6 +73,10 @@ files:
$doc_fragments/xenserver.py:
maintainers: bvitnik
labels: xenserver
$filters/time.py:
authors: resmo
$filters/jc.py:
authors: kellyjonbrazil
$httpapis/:
maintainers: $team_networking
labels: networking
@@ -113,6 +119,10 @@ files:
$lookups/dig.py:
maintainers: jpmens
labels: dig
$lookups/tss.py:
authors: amigus
$lookups/dsv.py:
authors: amigus
$lookups/hashi_vault.py:
labels: hashi_vault
$lookups/manifold.py:
@@ -237,6 +247,9 @@ files:
$modules/cloud/docker/docker_stack.py:
authors: dariko
maintainers: DBendit WojciechowskiPiotr akshay196 danihodovic felixfontein jwitko kassiansun tbouvet
$modules/cloud/docker/docker_stack_task_info.py:
authors: imjoseangel
maintainers: $team_docker
$modules/cloud/docker/docker_swarm.py:
authors: WojciechowskiPiotr tbouvet
maintainers: DBendit akshay196 danihodovic dariko felixfontein jwitko kassiansun
@@ -396,6 +409,9 @@ files:
$modules/cloud/scaleway/:
authors: sieben
maintainers: $team_scaleway
$modules/cloud/scaleway/scaleway_database_backup.py:
authors: guillaume_ro_fr
maintainers: $team_scaleway
$modules/cloud/scaleway/scaleway_image_info.py:
authors: Spredzy sieben
$modules/cloud/scaleway/scaleway_ip_info.py:
@@ -482,6 +498,8 @@ files:
authors: ThePixelDeveloper samdoran
$modules/database/misc/kibana_plugin.py:
authors: barryib
$modules/database/misc/odbc.py:
authors: john-westcott-iv
$modules/database/misc/redis.py:
authors: slok
$modules/database/misc/riak.py:
@@ -495,36 +513,38 @@ files:
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_ext.py:
authors: Andersson007 andytom dschep strk
maintainers: Dorn- amenonsen jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
maintainers: $team_postgresql
$modules/database/postgresql/:
authors: Andersson007
maintainers: Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
keywords: database postgres postgresql
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_lang.py:
authors: andytom jensdepuydt
maintainers: Andersson007 Dorn- amenonsen jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_pg_hba.py:
authors: sebasmannem
maintainers: Andersson007 Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul tcraxs
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_privs.py:
authors: b6d tcraxs
maintainers: Andersson007 Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_publication.py:
authors: Andersson007 nerzhul
maintainers: Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt sebasmannem tcraxs
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_query.py:
authors: Andersson007 archf wrouesnel
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_schema.py:
authors: Dorn- andytom
maintainers: Andersson007 amenonsen jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_sequence.py:
authors: tcraxs
maintainers: Andersson007 Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_slot.py:
authors: Andersson007 jscalia
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_tablespace.py:
authors: Andersson007 Dorn- antoinell
maintainers: amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
maintainers: $team_postgresql
$modules/database/postgresql/postgresql_user.py:
authors: ansible
maintainers: $team_postgresql
@@ -613,6 +633,7 @@ files:
labels: monit
$modules/monitoring/nagios.py:
authors: tbielawa
maintainers: tgoetheyn
$modules/monitoring/newrelic_deployment.py:
authors: mcodd
$modules/monitoring/pagerduty.py:
@@ -671,6 +692,9 @@ files:
$modules/net_tools/ldap/ldap_passwd.py:
authors: KellerFuchs
maintainers: jtyr
$modules/net_tools/ldap/ldap_search.py:
authors: eryx12o45
maintainers: jtyr
$modules/net_tools/lldp.py:
authors: andyhky
labels: lldp
@@ -987,8 +1011,9 @@ files:
$modules/remote_management/oneview/oneview_fcoe_network.py:
authors: fgbulsoni
$modules/remote_management/redfish/:
authors: jose-delarosa
authors: jose-delarosa billdodd
maintainers: $team_redfish
ignore: jose-delarosa
$modules/remote_management/stacki/stacki_host.py:
authors: bbyhuy
maintainers: bsanders
@@ -1018,23 +1043,29 @@ files:
$modules/source_control/gitlab/:
notify: jlozadad
authors: Lunik marwatk
maintainers: Shaps dj-wasabi waheedi
maintainers: $team_gitlab
keywords: gitlab source_control
$modules/source_control/gitlab/gitlab_group.py:
authors: Lunik dj-wasabi
maintainers: Shaps marwatk waheedi
maintainers: $team_gitlab
$modules/source_control/gitlab/gitlab_group_members.py:
authors: zanssa
maintainers: $team_gitlab
$modules/source_control/gitlab/gitlab_group_variable.py:
authors: scodeman
maintainers: $team_gitlab
$modules/source_control/gitlab/gitlab_project.py:
authors: Lunik dj-wasabi
maintainers: Shaps marwatk waheedi
maintainers: $team_gitlab
$modules/source_control/gitlab/gitlab_project_variable.py:
authors: markuman
maintainers: $team_gitlab
$modules/source_control/gitlab/gitlab_runner.py:
authors: Lunik SamyCoenen
maintainers: Shaps dj-wasabi marwatk waheedi
maintainers: $team_gitlab
$modules/source_control/gitlab/gitlab_user.py:
authors: Lunik dj-wasabi
maintainers: Shaps marwatk waheedi
maintainers: $team_gitlab
$modules/source_control/hg.py:
authors: yeukhon
$modules/storage/emc/emc_vnx_sg_member.py:
@@ -1108,6 +1139,8 @@ files:
authors: groks
$modules/system/dconf.py:
authors: azaghal
$modules/system/dpkg_divert.py:
authors: quidame
$modules/system/facter.py:
authors: ansible
labels: facter
@@ -1123,12 +1156,16 @@ files:
authors: hryamzik
maintainers: obourdon
labels: interfaces_file
$modules/system/iptables_state.py:
authors: quidame
$modules/system/java_cert.py:
authors: haad
$modules/system/java_keystore.py:
authors: Mogztter
$modules/system/kernel_blacklist.py:
authors: matze
$modules/system/launchd.py:
authors: martinm82
$modules/system/lbu.py:
authors: kunkku
$modules/system/listen_ports_facts.py:
@@ -1200,6 +1237,8 @@ files:
authors: bcoca
$modules/system/syspatch.py:
authors: precurse
$modules/system/sysupgrade.py:
authors: precurse
$modules/system/timezone.py:
authors: indrajitr jasperla tmshn
$modules/system/ufw.py:
@@ -1224,7 +1263,7 @@ files:
authors: ramondelafuente
$modules/web_infrastructure/django_manage.py:
authors: tastychutney
maintainers: scottanderson42
maintainers: scottanderson42 russoz
labels: django_manage
$modules/web_infrastructure/ejabberd_user.py:
authors: privateip
@@ -1305,7 +1344,7 @@ macros:
team_docker: DBendit WojciechowskiPiotr akshay196 danihodovic dariko felixfontein jwitko kassiansun tbouvet
team_e_spirit: MatrixCrawler getjack
team_extreme: LindsayHill bigmstone ujwalkomarla
team_gitlab: Lunik Shaps dj-wasabi marwatk waheedi
team_gitlab: Lunik Shaps dj-wasabi marwatk waheedi zanssa scodeman
team_google: erjohnso rambleraptor
team_hpux: bcoca davx8342
team_huawei: QijunPan TommyLike edisonxiang freesky-edward hwDCN niuzhenguo xuxiaowei0512 yanzhangi zengchen1024 zhongjun2
@@ -1320,7 +1359,7 @@ macros:
team_netvisor: Qalthos amitsi csharpe-pn pdam preetiparasar
team_networking: NilashishC Qalthos danielmellado ganeshrn justjais trishnaguha
team_oracle: manojmeda mross22 nalsaber
team_postgresql: Andersson007 Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
team_postgresql: Andersson007 Dorn- andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs ilicmilan
team_purestorage: bannaych dnix101 genegr lionmax opslounge raekins sdodsley sile16
team_rabbitmq: chrishoffman manuel-sousa
team_redfish: billdodd mraineri tomasg2012
@@ -1328,4 +1367,4 @@ macros:
team_scaleway: QuentinBrosse abarbare jerome-quere kindermoumoute remyleone
team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l
team_suse: commel dcermak evrardjp lrupp toabctl
team_virt: joshainglis karmab
team_virt: joshainglis karmab Aversiste

.github/patchback.yml (new file)

@@ -0,0 +1,5 @@
---
backport_branch_prefix: patchback/backports/
backport_label_prefix: backport-
target_branch_prefix: stable-
...


@@ -5,6 +5,93 @@ Community General Release Notes
.. contents:: Topics
v1.2.0
======
Release Summary
---------------
Regular bimonthly minor release.
Minor Changes
-------------
- hashi_vault - support ``VAULT_NAMESPACE`` environment variable for namespaced lookups against Vault Enterprise (in addition to the ``namespace=`` flag supported today) (https://github.com/ansible-collections/community.general/pull/929).
- hashi_vault lookup - add ``VAULT_TOKEN_FILE`` as env option to specify ``token_file`` param (https://github.com/ansible-collections/community.general/issues/373).
- hashi_vault lookup - add ``VAULT_TOKEN_PATH`` as env option to specify ``token_path`` param (https://github.com/ansible-collections/community.general/issues/373).
- ipa_user - add ``userauthtype`` option (https://github.com/ansible-collections/community.general/pull/951).
- iptables_state - use FQCN when calling a module from action plugin (https://github.com/ansible-collections/community.general/pull/967).
- nagios - add the ``acknowledge`` action (https://github.com/ansible-collections/community.general/pull/820).
- nagios - add the ``host`` and ``all`` values for the ``forced_check`` action (https://github.com/ansible-collections/community.general/pull/998).
- nagios - add the ``service_check`` action (https://github.com/ansible-collections/community.general/pull/820).
- nagios - rename the ``service_check`` action to ``forced_check`` since it can now check a particular service, all services of a particular host, or the host itself (https://github.com/ansible-collections/community.general/pull/998).
- pkgutil - module can now accept a list of packages (https://github.com/ansible-collections/community.general/pull/799).
- pkgutil - module has a new option, ``force``, equivalent to the ``-f`` option to the `pkgutil <http://pkgutil.net/>`_ command (https://github.com/ansible-collections/community.general/pull/799).
- pkgutil - module now supports check mode (https://github.com/ansible-collections/community.general/pull/799).
- postgresql_privs - add the ``usage_on_types`` option (https://github.com/ansible-collections/community.general/issues/884).
- proxmox_kvm - improve code readability (https://github.com/ansible-collections/community.general/pull/934).
- pushover - add device parameter (https://github.com/ansible-collections/community.general/pull/802).
- redfish_command - add sub-command for ``EnableContinuousBootOverride`` and ``DisableBootOverride`` to allow setting BootSourceOverrideEnabled Redfish property (https://github.com/ansible-collections/community.general/issues/824).
- redfish_command - support same reset actions on Managers as on Systems (https://github.com/ansible-collections/community.general/issues/901).
- slack - add support for updating messages (https://github.com/ansible-collections/community.general/issues/304).
- xml - fixed issue where changed was returned when removing non-existent xpath (https://github.com/ansible-collections/community.general/pull/1007).
- zypper_repository - proper failure when python-xml is missing (https://github.com/ansible-collections/community.general/pull/939).
Bugfixes
--------
- aerospike_migrations - handle exception when unstable-cluster is returned (https://github.com/ansible-collections/community.general/pull/900).
- django_manage - fix idempotence for ``createcachetable`` (https://github.com/ansible-collections/community.general/pull/699).
- docker_container - fix idempotency problem with ``published_ports`` when strict comparison is used and list is empty (https://github.com/ansible-collections/community.general/issues/978).
- gem - fix get_installed_versions: correctly parse ``default`` version (https://github.com/ansible-collections/community.general/pull/783).
- hashi_vault - add missing ``mount_point`` parameter for approle auth (https://github.com/ansible-collections/community.general/pull/897).
- hashi_vault lookup - ``token_path`` in config file overridden by env ``HOME`` (https://github.com/ansible-collections/community.general/issues/373).
- homebrew_cask - fixed issue where a cask with ``@`` in the name is incorrectly reported as invalid (https://github.com/ansible-collections/community.general/issues/733).
- interfaces_file - escape regular expression characters in old value (https://github.com/ansible-collections/community.general/issues/777).
- launchd - fix for user-level services (https://github.com/ansible-collections/community.general/issues/896).
- nmcli - set ``C`` locale when executing ``nmcli`` (https://github.com/ansible-collections/community.general/issues/989).
- parted - fix creating partition when label is changed (https://github.com/ansible-collections/community.general/issues/522).
- pkg5 - now works when Python 3 is used on the target (https://github.com/ansible-collections/community.general/pull/789).
- postgresql_privs - allow passing the ``PUBLIC`` role in lowercase letters (https://github.com/ansible-collections/community.general/issues/857).
- postgresql_privs - fix the module mistaking a procedure for a function (https://github.com/ansible-collections/community.general/issues/994).
- postgresql_privs - rollback if nothing changed (https://github.com/ansible-collections/community.general/issues/885).
- postgresql_privs - the module was attempting to revoke grant options even though ``grant_option`` was not specified (https://github.com/ansible-collections/community.general/pull/796).
- proxmox_kvm - defer error-checking for non-existent VMs in order to fix idempotency of tasks using ``state=absent`` and properly recognize a success (https://github.com/ansible-collections/community.general/pull/811).
- proxmox_kvm - improve handling of long-running tasks by creating a dedicated function (https://github.com/ansible-collections/community.general/pull/831).
- slack - fix ``xox[abp]`` token identification to capture everything after ``xox[abp]``, as the token is the only thing that should be in this argument (https://github.com/ansible-collections/community.general/issues/862).
- terraform - fix incorrectly reporting a status of unchanged when number of resources added or destroyed are multiples of 10 (https://github.com/ansible-collections/community.general/issues/561).
- timezone - support Python3 on macos/darwin (https://github.com/ansible-collections/community.general/pull/945).
- zfs - fixed ``invalid character '@' in pool name`` error when working with snapshots on a root zvol (https://github.com/ansible-collections/community.general/issues/932).
New Plugins
-----------
Inventory
~~~~~~~~~
- proxmox - Proxmox inventory source
- stackpath_compute - StackPath Edge Computing inventory source
New Modules
-----------
Cloud
~~~~~
scaleway
^^^^^^^^
- scaleway_database_backup - Scaleway database backups management module
Source Control
~~~~~~~~~~~~~~
gitlab
^^^^^^
- gitlab_group_members - Manage group members on GitLab Server
- gitlab_group_variable - Creates, updates, or deletes GitLab groups variables
v1.1.0
======


@@ -1104,3 +1104,130 @@ releases:
name: sysupgrade
namespace: system
release_date: '2020-08-18'
1.2.0:
changes:
bugfixes:
- aerospike_migrations - handle exception when unstable-cluster is returned
(https://github.com/ansible-collections/community.general/pull/900).
- django_manage - fix idempotence for ``createcachetable`` (https://github.com/ansible-collections/community.general/pull/699).
- docker_container - fix idempotency problem with ``published_ports`` when strict
comparison is used and list is empty (https://github.com/ansible-collections/community.general/issues/978).
- 'gem - fix get_installed_versions: correctly parse ``default`` version (https://github.com/ansible-collections/community.general/pull/783).'
- hashi_vault - add missing ``mount_point`` parameter for approle auth (https://github.com/ansible-collections/community.general/pull/897).
- hashi_vault lookup - ``token_path`` in config file overridden by env ``HOME``
(https://github.com/ansible-collections/community.general/issues/373).
- homebrew_cask - fixed issue where a cask with ``@`` in the name is incorrectly
reported as invalid (https://github.com/ansible-collections/community.general/issues/733).
- interfaces_file - escape regular expression characters in old value (https://github.com/ansible-collections/community.general/issues/777).
- launchd - fix for user-level services (https://github.com/ansible-collections/community.general/issues/896).
- nmcli - set ``C`` locale when executing ``nmcli`` (https://github.com/ansible-collections/community.general/issues/989).
- parted - fix creating partition when label is changed (https://github.com/ansible-collections/community.general/issues/522).
- pkg5 - now works when Python 3 is used on the target (https://github.com/ansible-collections/community.general/pull/789).
- postgresql_privs - allow passing the ``PUBLIC`` role in lowercase letters
(https://github.com/ansible-collections/community.general/issues/857).
- postgresql_privs - fix the module mistaking a procedure for a function (https://github.com/ansible-collections/community.general/issues/994).
- postgresql_privs - rollback if nothing changed (https://github.com/ansible-collections/community.general/issues/885).
- postgresql_privs - the module was attempting to revoke grant options even
though ``grant_option`` was not specified (https://github.com/ansible-collections/community.general/pull/796).
- proxmox_kvm - defer error-checking for non-existent VMs in order to fix idempotency
of tasks using ``state=absent`` and properly recognize a success (https://github.com/ansible-collections/community.general/pull/811).
- proxmox_kvm - improve handling of long-running tasks by creating a dedicated
function (https://github.com/ansible-collections/community.general/pull/831).
- slack - fix ``xox[abp]`` token identification to capture everything after
``xox[abp]``, as the token is the only thing that should be in this argument
(https://github.com/ansible-collections/community.general/issues/862).
- terraform - fix incorrectly reporting a status of unchanged when number of
resources added or destroyed are multiples of 10 (https://github.com/ansible-collections/community.general/issues/561).
- timezone - support Python3 on macos/darwin (https://github.com/ansible-collections/community.general/pull/945).
- zfs - fixed ``invalid character '@' in pool name`` error when working with
snapshots on a root zvol (https://github.com/ansible-collections/community.general/issues/932).
minor_changes:
- hashi_vault - support ``VAULT_NAMESPACE`` environment variable for namespaced
lookups against Vault Enterprise (in addition to the ``namespace=`` flag supported
today) (https://github.com/ansible-collections/community.general/pull/929).
- hashi_vault lookup - add ``VAULT_TOKEN_FILE`` as env option to specify ``token_file``
param (https://github.com/ansible-collections/community.general/issues/373).
- hashi_vault lookup - add ``VAULT_TOKEN_PATH`` as env option to specify ``token_path``
param (https://github.com/ansible-collections/community.general/issues/373).
- ipa_user - add ``userauthtype`` option (https://github.com/ansible-collections/community.general/pull/951).
- iptables_state - use FQCN when calling a module from action plugin (https://github.com/ansible-collections/community.general/pull/967).
- nagios - add the ``acknowledge`` action (https://github.com/ansible-collections/community.general/pull/820).
- nagios - add the ``host`` and ``all`` values for the ``forced_check`` action
(https://github.com/ansible-collections/community.general/pull/998).
- nagios - add the ``service_check`` action (https://github.com/ansible-collections/community.general/pull/820).
- nagios - rename the ``service_check`` action to ``forced_check`` since it
can now check a particular service, all services of a particular
host, or the host itself (https://github.com/ansible-collections/community.general/pull/998).
- pkgutil - module can now accept a list of packages (https://github.com/ansible-collections/community.general/pull/799).
- pkgutil - module has a new option, ``force``, equivalent to the ``-f`` option
to the `pkgutil <http://pkgutil.net/>`_ command (https://github.com/ansible-collections/community.general/pull/799).
- pkgutil - module now supports check mode (https://github.com/ansible-collections/community.general/pull/799).
- postgresql_privs - add the ``usage_on_types`` option (https://github.com/ansible-collections/community.general/issues/884).
- proxmox_kvm - improve code readability (https://github.com/ansible-collections/community.general/pull/934).
- pushover - add device parameter (https://github.com/ansible-collections/community.general/pull/802).
- redfish_command - add sub-command for ``EnableContinuousBootOverride`` and
``DisableBootOverride`` to allow setting BootSourceOverrideEnabled Redfish
property (https://github.com/ansible-collections/community.general/issues/824).
- redfish_command - support same reset actions on Managers as on Systems (https://github.com/ansible-collections/community.general/issues/901).
- slack - add support for updating messages (https://github.com/ansible-collections/community.general/issues/304).
- xml - fixed issue where changed was returned when removing non-existent xpath
(https://github.com/ansible-collections/community.general/pull/1007).
- zypper_repository - proper failure when python-xml is missing (https://github.com/ansible-collections/community.general/pull/939).
release_summary: Regular bimonthly minor release.
fragments:
- 1.2.0.yml
- 522-parted_change_label.yml
- 563-update-terraform-status-test.yaml
- 699-django_manage-createcachetable-fix-idempotence.yml
- 777-interfaces_file-re-escape.yml
- 783-fix-gem-installed-versions.yaml
- 789-pkg5-wrap-to-modify-package-list.yaml
- 796-postgresql_privs-grant-option-bug.yaml
- 802-pushover-device-parameter.yml
- 811-proxmox-kvm-state-absent.yml
- 820_nagios_added_acknowledge_and_servicecheck.yml
- 825-bootsource-override-option.yaml
- 831-proxmox-kvm-wait.yml
- 843-update-slack-messages.yml
- 858-postgresql_privs_should_allow_public_role_lowercased.yml
- 887-rollback-if-nothing-changed.yml
- 892-slack-token-validation.yml
- 897-lookup-plugin-hashivault-add-approle-mount-point.yaml
- 899_launchd_user_service.yml
- 900-aerospike-migration-handle-unstable-cluster.yaml
- 902-hashi_vault-token-path.yml
- 903-enhance-redfish-manager-reset-actions.yml
- 929-vault-namespace-support.yml
- 939-zypper_repository_proper_failure_on_missing_python-xml.yml
- 941-postgresql_privs_usage_on_types_option.yml
- 943-proxmox-kvm-code-cleanup.yml
- 945-darwin-timezone-py3.yaml
- 951-ipa_user-add-userauthtype-param.yaml
- 967-use-fqcn-when-calling-a-module-from-action-plugin.yml
- 979-docker_container-published_ports-empty-idempotency.yml
- 992-nmcli-locale.yml
- 996-postgresql_privs_fix_function_handling.yml
- 998-nagios-added_forced_check_for_all_services_or_host.yml
- homebrew-cask-at-symbol-fix.yaml
- pkgutil-check-mode-etc.yaml
- xml-remove-changed.yml
- zfs-root-snapshot.yml
modules:
- description: Manage group members on GitLab Server
name: gitlab_group_members
namespace: source_control.gitlab
- description: Creates, updates, or deletes GitLab groups variables
name: gitlab_group_variable
namespace: source_control.gitlab
- description: Scaleway database backups management module
name: scaleway_database_backup
namespace: cloud.scaleway
plugins:
inventory:
- description: Proxmox inventory source
name: proxmox
namespace: null
- description: StackPath Edge Computing inventory source
name: stackpath_compute
namespace: null
release_date: '2020-09-30'


@@ -1,6 +1,6 @@
namespace: community
name: general
version: 1.1.0
version: 1.2.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -740,3 +740,16 @@ plugin_routing:
removal_version: 2.0.0
warning_text: The mysql module_utils has been moved to the community.mysql collection.
redirect: community.mysql.mysql
callback:
actionable:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
full_skip:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
stderr:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details


@@ -48,7 +48,7 @@ class ActionModule(ActionBase):
# At least one iteration is required, even if timeout is 0.
for i in range(max(1, timeout)):
async_result = self._execute_module(
module_name='async_status',
module_name='ansible.builtin.async_status',
module_args=module_args,
task_vars=task_vars,
wrap_async=False)
@@ -177,7 +177,7 @@ class ActionModule(ActionBase):
async_status_args['mode'] = 'cleanup'
garbage = self._execute_module(
module_name='async_status',
module_name='ansible.builtin.async_status',
module_args=async_status_args,
task_vars=task_vars,
wrap_async=False)


@@ -14,7 +14,7 @@ DOCUMENTATION = '''
become_user:
description:
- User you 'become' to execute the task
- This plugin ignores this setting as pfexec uses it's own ``exec_attr`` to figure this out,
- This plugin ignores this setting as pfexec uses it's own C(exec_attr) to figure this out,
but it is supplied here for Ansible to make decisions needed for the task execution, like file permissions.
default: root
ini:
@@ -80,8 +80,8 @@ DOCUMENTATION = '''
- name: ansible_pfexec_wrap_execution
env:
- name: ANSIBLE_PFEXEC_WRAP_EXECUTION
note:
- This plugin ignores ``become_user`` as pfexec uses it's own ``exec_attr`` to figure this out.
notes:
- This plugin ignores I(become_user) as pfexec uses it's own C(exec_attr) to figure this out.
'''
from ansible.plugins.become import BecomeBase

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
cache: memcached
short_description: Use memcached DB for cache
description:

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
cache: redis
short_description: Use Redis DB for cache
description:

View File

@@ -7,6 +7,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: actionable
type: stdout
short_description: shows only items that need attention

View File

@@ -7,8 +7,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: cgroup_memory_recap
callback_type: aggregate
type: aggregate
requirements:
- whitelist in configuration
- cgroups

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: context_demo
type: aggregate
short_description: demo callback that adds play/task context

View File

@@ -6,13 +6,9 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible import constants as C
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
from ansible.template import Templar
from ansible.playbook.task_include import TaskInclude
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: counter_enabled
type: stdout
short_description: adds counters to the output items (tasks and hosts/task)
@@ -26,6 +22,12 @@ DOCUMENTATION = '''
- set as stdout callback in ansible.cfg (stdout_callback = counter_enabled)
'''
from ansible import constants as C
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
from ansible.template import Templar
from ansible.playbook.task_include import TaskInclude
class CallbackModule(CallbackBase):

View File

@@ -8,7 +8,7 @@ __metaclass__ = type
DOCUMENTATION = r'''
callback: diy
callback_type: stdout
type: stdout
short_description: Customize the output
version_added: 0.2.0
description:

View File

@@ -7,6 +7,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: full_skip
type: stdout
short_description: suppresses tasks if all hosts skipped

View File

@@ -6,8 +6,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: hipchat
callback_type: notification
type: notification
requirements:
- whitelist in configuration.
- prettytable (python lib)

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: jabber
type: notification
short_description: post task events to a jabber server

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: log_plays
type: notification
short_description: write playbook output to log file

View File

@@ -5,8 +5,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: logdna
callback_type: aggregate
type: aggregate
short_description: Sends playbook logs to LogDNA
description:
- This callback will report logs from playbook actions, tasks, and events to LogDNA (https://app.logdna.com)

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: logentries
type: notification
short_description: Sends events to Logentries

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: logstash
type: notification
short_description: Sends events to Logstash

View File

@@ -52,7 +52,7 @@ options:
ini:
- section: callback_mail
key: bcc
note:
notes:
- "TODO: expand configuration options now that plugins can leverage Ansible's configuration"
'''

View File

@@ -6,8 +6,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: 'null'
callback_type: stdout
type: stdout
requirements:
- set as main display callback
short_description: Don't display stuff to screen

View File

@@ -7,6 +7,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: say
type: notification
requirements:

View File

@@ -6,8 +6,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: selective
callback_type: stdout
type: stdout
requirements:
- set as main display callback
short_description: only print certain tasks

View File

@@ -7,8 +7,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: slack
callback_type: notification
type: notification
requirements:
- whitelist in configuration
- prettytable (python library)

View File

@@ -21,7 +21,7 @@ DOCUMENTATION = '''
callback: splunk
type: aggregate
short_description: Sends task result events to Splunk HTTP Event Collector
author: "Stuart Hirst <support@convergingdata.com>"
author: "Stuart Hirst (!UNKNOWN) <support@convergingdata.com>"
description:
- This callback plugin will send task results as JSON formatted events to a Splunk HTTP collector.
- The companion Splunk Monitoring & Diagnostics App is available here "https://splunkbase.splunk.com/app/4023/"

View File

@@ -7,8 +7,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: stderr
callback_type: stdout
type: stdout
requirements:
- set as main display callback
short_description: Splits output, sending failed tasks to stderr

View File

@@ -6,8 +6,9 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: syslog_json
callback_type: notification
type: notification
requirements:
- whitelist in configuration
short_description: sends JSON events to syslog

View File

@@ -9,7 +9,7 @@ __metaclass__ = type
DOCUMENTATION = '''
callback: unixy
type: stdout
author: Allyson Bowles <@akatch>
author: Allyson Bowles (@akatch)
short_description: condensed Ansible output
description:
- Consolidated Ansible output in the style of LINUX/UNIX startup logs.

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
callback: yaml
type: stdout
short_description: yaml-ized Ansible screen output

View File

@@ -9,7 +9,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Maykel Moya <mmoya@speedyrails.com>
author: Maykel Moya (!UNKNOWN) <mmoya@speedyrails.com>
connection: chroot
short_description: Interact with local chroot
description:

View File

@@ -11,8 +11,8 @@ __metaclass__ = type
DOCUMENTATION = '''
author:
- Lorin Hochestein
- Leendert Brouwer
- Lorin Hochestein (!UNKNOWN)
- Leendert Brouwer (!UNKNOWN)
connection: docker
short_description: Run tasks in docker containers
description:

View File

@@ -9,7 +9,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Stephan Lohse <dev-github@ploek.org>
author: Stephan Lohse (!UNKNOWN) <dev-github@ploek.org>
connection: iocage
short_description: Run tasks in iocage jails
description:

View File

@@ -6,7 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Joerg Thalheim <joerg@higgsboson.tk>
author: Joerg Thalheim (!UNKNOWN) <joerg@higgsboson.tk>
connection: lxc
short_description: Run tasks in lxc containers via lxc python library
description:

View File

@@ -6,7 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Matt Clay <matt@mystile.com>
author: Matt Clay (@mattclay) <matt@mystile.com>
connection: lxd
short_description: Run tasks in lxc containers via lxc CLI
description:

View File

@@ -22,7 +22,7 @@ __metaclass__ = type
DOCUMENTATION = '''
author:
- xuxinkun
- xuxinkun (!UNKNOWN)
connection: oc

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Orion Poplawski (@opoplawski)
name: cobbler
plugin_type: inventory
short_description: Cobbler inventory source

View File

@@ -10,8 +10,8 @@ __metaclass__ = type
DOCUMENTATION = '''
name: gitlab_runners
plugin_type: inventory
authors:
- Stefan Heitmüller (stefan.heitmueller@gmx.com)
author:
- Stefan Heitmüller (@morph027) <stefan.heitmueller@gmx.com>
short_description: Ansible dynamic inventory plugin for GitLab runners.
requirements:
- python >= 2.7

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
name: nmap
plugin_type: inventory
short_description: Uses nmap to find hosts to target

View File

@@ -0,0 +1,348 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2016 Guido Günther <agx@sigxcpu.org>, Daniel Lobato Garcia <dlobatog@redhat.com>
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: proxmox
plugin_type: inventory
short_description: Proxmox inventory source
version_added: "1.2.0"
author:
- Jeffrey van Pelt (@Thulium-Drake) <jeff@vanpelt.one>
requirements:
- requests >= 1.1
description:
- Get inventory hosts from a Proxmox PVE cluster.
- "Uses a configuration file as an inventory source; it must end in C(.proxmox.yml) or C(.proxmox.yaml)."
- Will retrieve the first network interface with an IP for Proxmox nodes.
- Can retrieve LXC/QEMU configuration as facts.
extends_documentation_fragment:
- inventory_cache
options:
plugin:
description: The name of this plugin. It should always be set to C(community.general.proxmox) for this plugin to recognize it as its own.
required: yes
choices: ['community.general.proxmox']
type: str
url:
description: URL to Proxmox cluster.
default: 'http://localhost:8006'
type: str
user:
description: Proxmox authentication user.
required: yes
type: str
password:
description: Proxmox authentication password.
required: yes
type: str
validate_certs:
description: Verify SSL certificate if using HTTPS.
type: bool
default: yes
group_prefix:
description: Prefix to apply to Proxmox groups.
default: proxmox_
type: str
facts_prefix:
description: Prefix to apply to LXC/QEMU config facts.
default: proxmox_
type: str
want_facts:
description: Gather LXC/QEMU configuration facts.
default: no
type: bool
'''
EXAMPLES = '''
# my.proxmox.yml
plugin: community.general.proxmox
url: http://localhost:8006
user: ansible@pve
password: secure
validate_certs: no
'''
import re
from ansible.module_utils.common._collections_compat import MutableMapping
from distutils.version import LooseVersion
from ansible.errors import AnsibleError
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable
from ansible.module_utils.six.moves.urllib.parse import urlencode
# 3rd party imports
try:
import requests
if LooseVersion(requests.__version__) < LooseVersion('1.1.0'):
raise ImportError
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
class InventoryModule(BaseInventoryPlugin, Cacheable):
''' Host inventory parser for ansible using Proxmox as source. '''
NAME = 'community.general.proxmox'
def __init__(self):
super(InventoryModule, self).__init__()
# from config
self.proxmox_url = None
self.session = None
self.cache_key = None
self.use_cache = None
def verify_file(self, path):
valid = False
if super(InventoryModule, self).verify_file(path):
if path.endswith(('proxmox.yaml', 'proxmox.yml')):
valid = True
else:
self.display.vvv('Skipping due to inventory source not ending in "proxmox.yaml" nor "proxmox.yml"')
return valid
def _get_session(self):
if not self.session:
self.session = requests.session()
self.session.verify = self.get_option('validate_certs')
return self.session
def _get_auth(self):
credentials = urlencode({'username': self.proxmox_user, 'password': self.proxmox_password, })
a = self._get_session()
ret = a.post('%s/api2/json/access/ticket' % self.proxmox_url, data=credentials)
json = ret.json()
self.credentials = {
'ticket': json['data']['ticket'],
'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
}
def _get_json(self, url, ignore_errors=None):
if not self.use_cache or url not in self._cache.get(self.cache_key, {}):
if self.cache_key not in self._cache:
self._cache[self.cache_key] = {'url': ''}
data = []
s = self._get_session()
while True:
headers = {'Cookie': 'PVEAuthCookie={0}'.format(self.credentials['ticket'])}
ret = s.get(url, headers=headers)
if ignore_errors and ret.status_code in ignore_errors:
break
ret.raise_for_status()
json = ret.json()
# process results
# FIXME: This assumes 'return type' matches a specific query,
# it will break if we expand the queries and they don't have different types
if 'data' not in json:
# /hosts/:id does not have a 'data' key
data = json
break
elif isinstance(json['data'], MutableMapping):
# /facts are returned as dict in 'data'
data = json['data']
break
else:
# /hosts returns 'data' as a list of all hosts; the response is paginated
data = data + json['data']
break
self._cache[self.cache_key][url] = data
return self._cache[self.cache_key][url]
def _get_nodes(self):
return self._get_json("%s/api2/json/nodes" % self.proxmox_url)
def _get_pools(self):
return self._get_json("%s/api2/json/pools" % self.proxmox_url)
def _get_lxc_per_node(self, node):
return self._get_json("%s/api2/json/nodes/%s/lxc" % (self.proxmox_url, node))
def _get_qemu_per_node(self, node):
return self._get_json("%s/api2/json/nodes/%s/qemu" % (self.proxmox_url, node))
def _get_members_per_pool(self, pool):
ret = self._get_json("%s/api2/json/pools/%s" % (self.proxmox_url, pool))
return ret['members']
def _get_node_ip(self, node):
ret = self._get_json("%s/api2/json/nodes/%s/network" % (self.proxmox_url, node))
for iface in ret:
try:
return iface['address']
except Exception:
return None
def _get_vm_config(self, node, vmid, vmtype, name):
ret = self._get_json("%s/api2/json/nodes/%s/%s/%s/config" % (self.proxmox_url, node, vmtype, vmid))
vmid_key = 'vmid'
vmid_key = self.to_safe('%s%s' % (self.get_option('facts_prefix'), vmid_key.lower()))
self.inventory.set_variable(name, vmid_key, vmid)
vmtype_key = 'vmtype'
vmtype_key = self.to_safe('%s%s' % (self.get_option('facts_prefix'), vmtype_key.lower()))
self.inventory.set_variable(name, vmtype_key, vmtype)
for config in ret:
key = config
key = self.to_safe('%s%s' % (self.get_option('facts_prefix'), key.lower()))
value = ret[config]
try:
# fixup disk images as they have no key
if config == 'rootfs' or config.startswith(('virtio', 'sata', 'ide', 'scsi')):
value = ('disk_image=' + value)
if isinstance(value, int) or ',' not in value:
value = value
# split off strings with commas to a dict
else:
# skip over any keys that cannot be processed
try:
value = dict(key.split("=") for key in value.split(","))
except Exception:
continue
self.inventory.set_variable(name, key, value)
except NameError:
return None
def _get_vm_status(self, node, vmid, vmtype, name):
ret = self._get_json("%s/api2/json/nodes/%s/%s/%s/status/current" % (self.proxmox_url, node, vmtype, vmid))
status = ret['status']
status_key = 'status'
status_key = self.to_safe('%s%s' % (self.get_option('facts_prefix'), status_key.lower()))
self.inventory.set_variable(name, status_key, status)
def to_safe(self, word):
'''Converts 'bad' characters in a string to underscores so they can be used as Ansible groups
#> ProxmoxInventory.to_safe("foo-bar baz")
'foo_barbaz'
'''
regex = r"[^A-Za-z0-9\_]"
return re.sub(regex, "_", word.replace(" ", ""))
def _populate(self):
self._get_auth()
# gather VMs on nodes
for node in self._get_nodes():
# FIXME: this can probably be cleaner
# create groups
lxc_group = 'all_lxc'
lxc_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), lxc_group.lower()))
self.inventory.add_group(lxc_group)
qemu_group = 'all_qemu'
qemu_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), qemu_group.lower()))
self.inventory.add_group(qemu_group)
nodes_group = 'nodes'
nodes_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), nodes_group.lower()))
self.inventory.add_group(nodes_group)
running_group = 'all_running'
running_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), running_group.lower()))
self.inventory.add_group(running_group)
stopped_group = 'all_stopped'
stopped_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), stopped_group.lower()))
self.inventory.add_group(stopped_group)
if node.get('node'):
self.inventory.add_host(node['node'])
if node['type'] == 'node':
self.inventory.add_child(nodes_group, node['node'])
# get node IP address
ip = self._get_node_ip(node['node'])
self.inventory.set_variable(node['node'], 'ansible_host', ip)
# get LXC containers for this node
node_lxc_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), ('%s_lxc' % node['node']).lower()))
self.inventory.add_group(node_lxc_group)
for lxc in self._get_lxc_per_node(node['node']):
self.inventory.add_host(lxc['name'])
self.inventory.add_child(lxc_group, lxc['name'])
self.inventory.add_child(node_lxc_group, lxc['name'])
# get LXC status when want_facts == True
if self.get_option('want_facts'):
self._get_vm_status(node['node'], lxc['vmid'], 'lxc', lxc['name'])
if lxc['status'] == 'stopped':
self.inventory.add_child(stopped_group, lxc['name'])
elif lxc['status'] == 'running':
self.inventory.add_child(running_group, lxc['name'])
# get LXC config for facts
if self.get_option('want_facts'):
self._get_vm_config(node['node'], lxc['vmid'], 'lxc', lxc['name'])
# get QEMU VMs for this node
node_qemu_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), ('%s_qemu' % node['node']).lower()))
self.inventory.add_group(node_qemu_group)
for qemu in self._get_qemu_per_node(node['node']):
if not qemu['template']:
self.inventory.add_host(qemu['name'])
self.inventory.add_child(qemu_group, qemu['name'])
self.inventory.add_child(node_qemu_group, qemu['name'])
# get QEMU status
self._get_vm_status(node['node'], qemu['vmid'], 'qemu', qemu['name'])
if qemu['status'] == 'stopped':
self.inventory.add_child(stopped_group, qemu['name'])
elif qemu['status'] == 'running':
self.inventory.add_child(running_group, qemu['name'])
# get QEMU config for facts
if self.get_option('want_facts'):
self._get_vm_config(node['node'], qemu['vmid'], 'qemu', qemu['name'])
# gather VMs in pools
for pool in self._get_pools():
if pool.get('poolid'):
pool_group = 'pool_' + pool['poolid']
pool_group = self.to_safe('%s%s' % (self.get_option('group_prefix'), pool_group.lower()))
self.inventory.add_group(pool_group)
for member in self._get_members_per_pool(pool['poolid']):
if member.get('name'):
self.inventory.add_child(pool_group, member['name'])
def parse(self, inventory, loader, path, cache=True):
if not HAS_REQUESTS:
raise AnsibleError('This module requires Python Requests 1.1.0 or higher: '
'https://github.com/psf/requests.')
super(InventoryModule, self).parse(inventory, loader, path)
# read config from file, this sets 'options'
self._read_config_data(path)
# get connection host
self.proxmox_url = self.get_option('url')
self.proxmox_user = self.get_option('user')
self.proxmox_password = self.get_option('password')
self.cache_key = self.get_cache_key(path)
self.use_cache = cache and self.get_option('cache')
# actually populate inventory
self._populate()
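The Proxmox plugin above derives every inventory group name by prepending the configurable C(group_prefix) to a lowercased label and passing the result through C(to_safe), which strips spaces and replaces any character outside C([A-Za-z0-9_]) with an underscore. A minimal standalone sketch of that naming scheme (the helper names C(to_safe) and C(group_name) mirror the plugin; C(group_name) is a hypothetical wrapper added here for illustration):

```python
import re


def to_safe(word):
    """Replicate the plugin's sanitization: drop spaces, then replace
    any character outside [A-Za-z0-9_] with an underscore."""
    return re.sub(r"[^A-Za-z0-9_]", "_", word.replace(" ", ""))


def group_name(prefix, name):
    """Build a group name the way _populate does: prefix plus the
    lowercased label, passed through to_safe()."""
    return to_safe("%s%s" % (prefix, name.lower()))


print(group_name("proxmox_", "all_lxc"))         # proxmox_all_lxc
print(group_name("proxmox_", "pve-node1_qemu"))  # proxmox_pve_node1_qemu
```

This matches the example in the plugin's own docstring, where C(to_safe("foo-bar baz")) yields C('foo_barbaz').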

View File

@@ -0,0 +1,281 @@
# Copyright (c) 2020 Shay Rybak <shay.rybak@stackpath.com>
# Copyright (c) 2020 Ansible Project
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: stackpath_compute
plugin_type: inventory
short_description: StackPath Edge Computing inventory source
version_added: 1.2.0
extends_documentation_fragment:
- inventory_cache
- constructed
description:
- Get inventory hosts from StackPath Edge Computing.
- Uses a YAML configuration file that ends with stackpath_compute.(yml|yaml).
options:
plugin:
description:
- A token that ensures this is a source file for the plugin.
required: true
choices: ['community.general.stackpath_compute']
client_id:
description:
- An OAuth client ID generated from the API Management section of the StackPath customer portal
U(https://control.stackpath.net/api-management).
required: true
type: str
client_secret:
description:
- An OAuth client secret generated from the API Management section of the StackPath customer portal
U(https://control.stackpath.net/api-management).
required: true
type: str
stack_slugs:
description:
- A list of stack slugs to query instances in. If not specified, instances in all stacks on the account are returned.
type: list
elements: str
use_internal_ip:
description:
- Whether or not to use internal IP addresses. If false, external IP addresses are used; otherwise internal ones.
- If an instance doesn't have an external IP it will not be returned when this option is set to false.
type: bool
'''
EXAMPLES = '''
# Example using credentials to fetch all workload instances in a stack.
---
plugin: community.general.stackpath_compute
client_id: my_client_id
client_secret: my_client_secret
stack_slugs:
- my_first_stack_slug
- my_other_stack_slug
use_internal_ip: false
'''
import traceback
import json
from ansible.errors import AnsibleError
from ansible.module_utils.urls import open_url
from ansible.plugins.inventory import (
BaseInventoryPlugin,
Constructable,
Cacheable
)
from ansible.utils.display import Display
display = Display()
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = 'community.general.stackpath_compute'
def __init__(self):
super(InventoryModule, self).__init__()
# credentials
self.client_id = None
self.client_secret = None
self.stack_slug = None
self.api_host = "https://gateway.stackpath.com"
self.group_keys = [
"stackSlug",
"workloadId",
"cityCode",
"countryCode",
"continent",
"target",
"name",
"workloadSlug"
]
def _validate_config(self, config):
if config['plugin'] != 'community.general.stackpath_compute':
raise AnsibleError("plugin doesn't match this plugin")
try:
client_id = config['client_id']
if len(client_id) != 32:
raise AnsibleError("client_id must be 32 characters long")
except KeyError:
raise AnsibleError("config missing client_id, a required option")
try:
client_secret = config['client_secret']
if len(client_secret) != 64:
raise AnsibleError("client_secret must be 64 characters long")
except KeyError:
raise AnsibleError("config missing client_secret, a required option")
return True
def _set_credentials(self):
'''
Read the OAuth client credentials from the plugin options.
'''
self.client_id = self.get_option('client_id')
self.client_secret = self.get_option('client_secret')
def _authenticate(self):
payload = json.dumps(
{
"client_id": self.client_id,
"client_secret": self.client_secret,
"grant_type": "client_credentials",
}
)
headers = {
"Content-Type": "application/json",
}
resp = open_url(
self.api_host + '/identity/v1/oauth2/token',
headers=headers,
data=payload,
method="POST"
)
status_code = resp.code
if status_code == 200:
body = resp.read()
self.auth_token = json.loads(body)["access_token"]
def _query(self):
results = []
workloads = []
self._authenticate()
for stack_slug in self.stack_slugs:
try:
workloads = self._stackpath_query_get_list(self.api_host + '/workload/v1/stacks/' + stack_slug + '/workloads')
except Exception:
raise AnsibleError("Failed to get workloads from the StackPath API: %s" % traceback.format_exc())
for workload in workloads:
try:
workload_instances = self._stackpath_query_get_list(
self.api_host + '/workload/v1/stacks/' + stack_slug + '/workloads/' + workload["id"] + '/instances'
)
except Exception:
raise AnsibleError("Failed to get workload instances from the StackPath API: %s" % traceback.format_exc())
for instance in workload_instances:
if instance["phase"] == "RUNNING":
instance["stackSlug"] = stack_slug
instance["workloadId"] = workload["id"]
instance["workloadSlug"] = workload["slug"]
instance["cityCode"] = instance["location"]["cityCode"]
instance["countryCode"] = instance["location"]["countryCode"]
instance["continent"] = instance["location"]["continent"]
instance["target"] = instance["metadata"]["labels"]["workload.platform.stackpath.net/target-name"]
try:
if instance[self.hostname_key]:
results.append(instance)
except KeyError:
pass
return results
def _populate(self, instances):
for instance in instances:
for group_key in self.group_keys:
group = group_key + "_" + instance[group_key]
group = group.lower().replace(" ", "_").replace("-", "_")
self.inventory.add_group(group)
self.inventory.add_host(instance[self.hostname_key],
group=group)
def _stackpath_query_get_list(self, url):
self._authenticate()
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer " + self.auth_token,
}
next_page = True
result = []
cursor = '-1'
while next_page:
resp = open_url(
url + '?page_request.first=10&page_request.after=%s' % cursor,
headers=headers,
method="GET"
)
status_code = resp.code
if status_code == 200:
body = resp.read()
body_json = json.loads(body)
result.extend(body_json["results"])
next_page = body_json["pageInfo"]["hasNextPage"]
if next_page:
cursor = body_json["pageInfo"]["endCursor"]
return result
def _get_stack_slugs(self, stacks):
self.stack_slugs = [stack["slug"] for stack in stacks]
def verify_file(self, path):
'''
:param path: the path to the inventory config file
:return: True if this is a valid config file for this plugin, False otherwise
'''
if super(InventoryModule, self).verify_file(path):
if path.endswith(('stackpath_compute.yml', 'stackpath_compute.yaml')):
return True
display.debug(
"stackpath_compute inventory filename must end with \
'stackpath_compute.yml' or 'stackpath_compute.yaml'"
)
return False
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
config = self._read_config_data(path)
self._validate_config(config)
self._set_credentials()
# get user specifications
self.use_internal_ip = self.get_option('use_internal_ip')
if self.use_internal_ip:
self.hostname_key = "ipAddress"
else:
self.hostname_key = "externalIpAddress"
self.stack_slugs = self.get_option('stack_slugs')
if not self.stack_slugs:
try:
stacks = self._stackpath_query_get_list(self.api_host + '/stack/v1/stacks')
self._get_stack_slugs(stacks)
except Exception:
raise AnsibleError("Failed to get stack IDs from the StackPath API: %s" % traceback.format_exc())
cache_key = self.get_cache_key(path)
# false when refresh_cache or --flush-cache is used
if cache:
# get the user-specified directive
cache = self.get_option('cache')
# Generate inventory
cache_needs_update = False
if cache:
try:
results = self._cache[cache_key]
except KeyError:
# if cache expires or cache file doesn't exist
cache_needs_update = True
if not cache or cache_needs_update:
results = self._query()
self._populate(results)
# If the cache has expired/doesn't exist or
# if refresh_inventory/flush cache is used
# when the user is using caching, update the cached inventory
try:
if cache_needs_update or (not cache and self.get_option('cache')):
self._cache[cache_key] = results
except Exception:
raise AnsibleError("Failed to populate data: %s" % traceback.format_exc())
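The C(_stackpath_query_get_list) method above implements cursor-based pagination: it starts at cursor C('-1'), accumulates each page's C(results), and keeps requesting while the response's C(pageInfo.hasNextPage) is true, advancing the cursor to C(pageInfo.endCursor). A minimal sketch of that loop with the HTTP call abstracted away (C(fetch_all) and C(get_page) are hypothetical names standing in for the plugin's authenticated C(open_url) request; the dict shape mirrors the StackPath list responses the plugin parses):

```python
def fetch_all(get_page, cursor="-1"):
    """Cursor-based pagination in the style of _stackpath_query_get_list.

    get_page(cursor) stands in for the authenticated HTTP call and returns
    a dict with 'results' and 'pageInfo' keys, as the StackPath API does.
    """
    results = []
    next_page = True
    while next_page:
        body = get_page(cursor)
        results.extend(body["results"])
        next_page = body["pageInfo"]["hasNextPage"]
        if next_page:
            cursor = body["pageInfo"]["endCursor"]
    return results


# Simulated two-page response keyed by cursor value.
pages = {
    "-1": {"results": [{"id": 1}, {"id": 2}],
           "pageInfo": {"hasNextPage": True, "endCursor": "2"}},
    "2": {"results": [{"id": 3}],
          "pageInfo": {"hasNextPage": False, "endCursor": "3"}},
}

print(fetch_all(lambda c: pages[c]))  # [{'id': 1}, {'id': 2}, {'id': 3}]
```

Note the plugin re-authenticates before each listing call and passes the cursor as the C(page_request.after) query parameter; both details are omitted here for brevity.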

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
name: virtualbox
plugin_type: inventory
short_description: virtualbox inventory source

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
lookup: cartesian
short_description: returns the cartesian product of lists
description:
@@ -36,7 +37,8 @@ RETURN = """
_list:
description:
- list of lists composed of elements of the input lists
type: lists
type: list
elements: list
"""
from itertools import product

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
lookup: chef_databag
short_description: fetches data from a Chef Databag
description:
@@ -34,7 +35,9 @@ EXAMPLES = """
RETURN = """
_raw:
description:
- The value from the databag
- The value from the databag.
type: list
elements: dict
"""
from ansible.errors import AnsibleError

View File

@@ -6,6 +6,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
lookup: consul_kv
short_description: Fetch metadata from a Consul key value store.
description:
@@ -98,6 +99,7 @@ RETURN = """
_raw:
description:
- Value(s) stored in consul.
type: dict
"""
import os

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
lookup: credstash
short_description: retrieve secrets from Credstash on AWS
requirements:
@@ -78,7 +79,8 @@ EXAMPLES = """
RETURN = """
_raw:
description:
- value(s) stored in Credstash
- Value(s) stored in Credstash.
type: str
"""
import os

View File

@@ -5,6 +5,7 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author: Unknown (!UNKNOWN)
lookup: cyberarkpassword
short_description: get secrets from CyberArk AIM
requirements:
@@ -30,8 +31,8 @@ DOCUMENTATION = '''
default: 'password'
_extra:
description: for extra_params values please check parameters for clipasswordsdk in CyberArk's "Credential Provider and ASCP Implementation Guide"
note:
- For Ansible on windows, please change the -parameters (-p, -d, and -o) to /parameters (/p, /d, and /o) and change the location of CLIPasswordSDK.exe
notes:
- For Ansible on Windows, please change the -parameters (-p, -d, and -o) to /parameters (/p, /d, and /o) and change the location of CLIPasswordSDK.exe.
'''
EXAMPLES = """

View File

@@ -77,8 +77,10 @@ EXAMPLES = """
RETURN = """
_list:
description:
- list of composed strings or dictionaries with key and value
- List of composed strings or dictionaries with key and value
If a dictionary, fields shows the keys returned depending on query type
type: list
elements: raw
contains:
ALL:
description:

View File

@@ -70,6 +70,8 @@ _list:
description:
- One or more JSON responses to C(GET /secrets/{path}).
- See U(https://dsv.thycotic.com/api/index.html#operation/getSecret).
type: list
elements: dict
"""
EXAMPLES = r"""

View File

@@ -71,7 +71,7 @@ RETURN = '''
description:
- list of values associated with input keys
type: list
elements: strings
elements: str
'''
import json

View File

@@ -8,7 +8,7 @@ __metaclass__ = type
DOCUMENTATION = '''
author:
- Eric Belhomme <ebelhomme@fr.scc.com>
- Eric Belhomme (@eric-belhomme) <ebelhomme@fr.scc.com>
version_added: '0.2.0'
lookup: etcd3
short_description: Get key values from etcd3 server
@@ -94,6 +94,7 @@ DOCUMENTATION = '''
seealso:
- module: community.general.etcd3
- ref: etcd_lookup
description: The etcd v2 lookup.
requirements:
- "etcd3 >= 0.10"

View File

@@ -53,42 +53,60 @@ EXAMPLES = r"""
RETURN = r"""
_raw:
description: list of dictionaries with file information
description: List of dictionaries with file information.
type: list
elements: dict
contains:
src:
description:
- full path to file.
- not returned when C(item.state) is set to C(directory).
- Full path to file.
- Not returned when I(item.state) is set to C(directory).
type: path
root:
description: allows filtering by original location.
description: Allows filtering by original location.
type: path
path:
description: contains the relative path to root.
description: Contains the relative path to root.
type: path
mode:
description: The permissions of the resulting file or directory.
type: str
state:
description: TODO
type: str
owner:
description: Name of the user that owns the file/directory.
type: raw
group:
description: Name of the group that owns the file/directory.
type: raw
seuser:
description: The user part of the SELinux file context.
type: raw
serole:
description: The role part of the SELinux file context.
type: raw
setype:
description: The type part of the SELinux file context.
type: raw
selevel:
description: The level part of the SELinux file context.
type: raw
uid:
description: Owner ID of the file/directory.
type: int
gid:
description: Group ID of the file/directory.
type: int
size:
description: Size of the target.
type: int
mtime:
description: Time of last modification.
type: float
ctime:
description: Time of last metadata update or creation (depends on OS).
type: float
"""
import os
import pwd

View File

@@ -6,7 +6,7 @@ __metaclass__ = type
DOCUMENTATION = '''
lookup: flattened
author: Serge van Ginderachter <serge@vanginderachter.be>
author: Serge van Ginderachter (!UNKNOWN) <serge@vanginderachter.be>
short_description: return single list completely flattened
description:
- given one or more lists, this lookup will flatten any list elements found recursively until only 1 list is left.

View File

@@ -9,7 +9,7 @@ lookup: gcp_storage_file
description:
- This lookup returns the contents from a file residing on Google Cloud Storage
short_description: Return GC Storage content
author: Eric Anderson <eanderson@avinetworks.com>
author: Eric Anderson (!UNKNOWN) <eanderson@avinetworks.com>
requirements:
- python >= 2.6
- requests >= 2.18.4
@@ -40,6 +40,8 @@ RETURN = '''
_raw:
description:
- base64 encoded file content
type: list
elements: str
'''
import base64

View File

@@ -9,7 +9,7 @@ __metaclass__ = type
DOCUMENTATION = """
lookup: hashi_vault
author:
- Jonathan Davila <jdavila(at)ansible.com>
- Jonathan Davila (!UNKNOWN) <jdavila(at)ansible.com>
- Brian Scholer (@briantist)
short_description: Retrieve secrets from HashiCorp's vault
requirements:
@@ -38,13 +38,17 @@ DOCUMENTATION = """
token_path:
description: If no token is specified, will try to read the token file from this path.
env:
- name: HOME
- name: VAULT_TOKEN_PATH
version_added: 1.2.0
ini:
- section: lookup_hashi_vault
key: token_path
version_added: '0.2.0'
token_file:
description: If no token is specified, will try to read the token from this file in C(token_path).
env:
- name: VAULT_TOKEN_FILE
version_added: 1.2.0
ini:
- section: lookup_hashi_vault
key: token_file
@@ -117,6 +121,9 @@ DOCUMENTATION = """
default: True
namespace:
description: Namespace where secrets reside. Requires HVAC 0.7.0+ and Vault 0.11+.
env:
- name: VAULT_NAMESPACE
version_added: 1.2.0
aws_profile:
description: The AWS profile
type: str
@@ -241,6 +248,8 @@ RETURN = """
_raw:
description:
- secrets(s) requested
type: list
elements: dict
"""
import os
@@ -404,7 +413,7 @@ class HashiVault:
self.client.auth_ldap(**params)
def auth_approle(self):
params = self.get_options('role_id', 'secret_id')
params = self.get_options('role_id', 'secret_id', 'mount_point')
self.client.auth_approle(**params)
def auth_aws_iam_login(self):
@@ -532,6 +541,11 @@ class LookupModule(LookupBase):
def validate_auth_token(self, auth_method):
if auth_method == 'token':
if not self.get_option('token_path'):
# generally we want env vars defined in the spec, but in this case we want
# the env var HOME to have lower precedence than any other value source,
# including ini, so we're doing it here after all other processing has taken place
self.set_option('token_path', os.environ.get('HOME'))
if not self.get_option('token') and self.get_option('token_path'):
token_filename = os.path.join(
self.get_option('token_path'),
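The comment in the hunk above explains why C(HOME) is applied after all other option processing: it must lose to every other value source. A minimal Python sketch of that precedence pattern (the C(resolve_token_path) helper is hypothetical, not part of the plugin):

```python
import os

# Hypothetical helper mirroring the lookup's precedence handling: values
# from explicit sources (ini section, VAULT_TOKEN_PATH env var) win; the
# HOME environment variable is consulted only as a last-resort default,
# after all other option processing has taken place.
def resolve_token_path(configured_path=None):
    if configured_path:
        return configured_path
    return os.environ.get('HOME')

os.environ['HOME'] = '/home/demo'
print(resolve_token_path('/etc/vault-token'))  # explicit value wins
print(resolve_token_path())                    # falls back to $HOME
```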

View File

@@ -18,7 +18,7 @@ DOCUMENTATION = '''
description:
- The list of keys to lookup on the Puppetmaster
type: list
element_type: string
elements: str
required: True
_bin_file:
description:
@@ -55,7 +55,8 @@ RETURN = """
_raw:
description:
- a value associated with input key
type: strings
type: list
elements: str
"""
import os

View File

@@ -8,7 +8,7 @@ __metaclass__ = type
DOCUMENTATION = '''
lookup: keyring
author:
- Samuel Boucher <boucher.samuel.c@gmail.com>
- Samuel Boucher (!UNKNOWN) <boucher.samuel.c@gmail.com>
requirements:
- keyring (python library)
short_description: grab secrets from the OS keyring
@@ -29,7 +29,9 @@ EXAMPLES = """
RETURN = """
_raw:
description: secrets stored
description: Secrets stored.
type: list
elements: str
"""
HAS_KEYRING = True

View File

@@ -7,7 +7,7 @@ __metaclass__ = type
DOCUMENTATION = '''
lookup: lastpass
author:
- Andrew Zenk <azenk@umn.edu>
- Andrew Zenk (!UNKNOWN) <azenk@umn.edu>
requirements:
- lpass (command line utility)
- must have already logged into lastpass
@@ -32,6 +32,8 @@ EXAMPLES = """
RETURN = """
_raw:
description: secrets stored
type: list
elements: str
"""
from subprocess import Popen, PIPE

View File

@@ -48,6 +48,8 @@ EXAMPLES = """
RETURN = """
_raw:
description: value(s) stored in LMDB
type: list
elements: raw
"""

View File

@@ -6,7 +6,7 @@ __metaclass__ = type
DOCUMENTATION = '''
author:
- Kyrylo Galanov (galanoff@gmail.com)
- Kyrylo Galanov (!UNKNOWN) <galanoff@gmail.com>
lookup: manifold
short_description: get credentials from Manifold.co
description:

View File

@@ -22,6 +22,7 @@ __metaclass__ = type
DOCUMENTATION = '''
---
author: Unknown (!UNKNOWN)
lookup: nios
short_description: Query Infoblox NIOS objects
description:
@@ -83,8 +84,7 @@ RETURN = """
obj_type:
description:
- The object type specified in the terms argument
returned: always
type: complex
type: dict
contains:
obj_field:
description:

View File

@@ -22,6 +22,7 @@ __metaclass__ = type
DOCUMENTATION = '''
---
author: Unknown (!UNKNOWN)
lookup: nios_next_ip
short_description: Return the next available IP address for a network
description:
@@ -64,7 +65,6 @@ RETURN = """
_list:
description:
- The list of next IP addresses available
returned: always
type: list
"""

View File

@@ -22,6 +22,7 @@ __metaclass__ = type
DOCUMENTATION = '''
---
author: Unknown (!UNKNOWN)
lookup: nios_next_network
short_description: Return the next available network range for a network-container
description:
@@ -74,7 +75,6 @@ RETURN = """
_list:
description:
- The list of next network addresses available
returned: always
type: list
"""

View File

@@ -86,6 +86,8 @@ EXAMPLES = """
RETURN = """
_raw:
description: field data requested
type: list
elements: str
"""
import errno

View File

@@ -62,6 +62,8 @@ EXAMPLES = """
RETURN = """
_raw:
description: field data requested
type: list
elements: dict
"""
import json

View File

@@ -8,7 +8,7 @@ __metaclass__ = type
DOCUMENTATION = '''
lookup: passwordstore
author:
- Patrick Deelman <patrick@patrickdeelman.nl>
- Patrick Deelman (!UNKNOWN) <patrick@patrickdeelman.nl>
short_description: manage passwords with passwordstore.org's pass utility
description:
- Enables Ansible to retrieve, create or update passwords from the passwordstore.org pass utility.
@@ -91,6 +91,8 @@ RETURN = """
_raw:
description:
- a password
type: list
elements: str
"""
import os

View File

@@ -8,7 +8,7 @@ DOCUMENTATION = '''
lookup: redis
author:
- Jan-Piet Mens (@jpmens) <jpmens(at)gmail.com>
- Ansible Core
- Ansible Core Team
short_description: fetch data from Redis
description:
- This lookup returns a list of results from a Redis DB corresponding to a list of items given to it
@@ -67,6 +67,8 @@ EXAMPLES = """
RETURN = """
_raw:
description: value(s) stored in Redis
type: list
elements: str
"""
import os

View File

@@ -6,7 +6,7 @@ __metaclass__ = type
DOCUMENTATION = '''
lookup: shelvefile
author: Alejandro Guirao <lekumberri@gmail.com>
author: Alejandro Guirao (!UNKNOWN) <lekumberri@gmail.com>
short_description: read keys from Python shelve file
description:
- Read keys from Python shelve file.
@@ -29,6 +29,8 @@ EXAMPLES = """
RETURN = """
_list:
description: value(s) of key(s) in shelve file(s)
type: list
elements: str
"""
import shelve

View File

@@ -66,6 +66,8 @@ _list:
description:
- The JSON responses to C(GET /secrets/{id}).
- See U(https://updates.thycotic.net/secretserver/restapiguide/TokenAuth/#operation--secrets--id--get).
type: list
elements: dict
"""
EXAMPLES = r"""

View File

@@ -710,24 +710,6 @@ class RedfishUtils(object):
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def restart_manager_gracefully(self):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#Manager.Reset"]["target"]
payload = {'ResetType': 'GracefulRestart'}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def manage_indicator_led(self, command):
result = {}
key = 'IndicatorLED'
@@ -773,6 +755,14 @@ class RedfishUtils(object):
return reset_type
def manage_system_power(self, command):
return self.manage_power(command, self.systems_uri,
'#ComputerSystem.Reset')
def manage_manager_power(self, command):
return self.manage_power(command, self.manager_uri,
'#Manager.Reset')
def manage_power(self, command, resource_uri, action_name):
key = "Actions"
reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
'GracefulRestart', 'ForceRestart', 'Nmi',
@@ -790,8 +780,8 @@ class RedfishUtils(object):
if reset_type not in reset_type_values:
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
# read the system resource and get the current power state
response = self.get_request(self.root_uri + self.systems_uri)
# read the resource and get the current power state
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
data = response['data']
@@ -803,13 +793,13 @@ class RedfishUtils(object):
if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
return {'ret': True, 'changed': False}
# get the #ComputerSystem.Reset Action and target URI
if key not in data or '#ComputerSystem.Reset' not in data[key]:
return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
reset_action = data[key]['#ComputerSystem.Reset']
# get the reset Action and target URI
if key not in data or action_name not in data[key]:
return {'ret': False, 'msg': 'Action %s not found' % action_name}
reset_action = data[key][action_name]
if 'target' not in reset_action:
return {'ret': False,
'msg': 'target URI missing from Action #ComputerSystem.Reset'}
'msg': 'target URI missing from Action %s' % action_name}
action_uri = reset_action['target']
# get AllowableValues
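The hunks above fold the removed C(restart_manager_gracefully) into a single C(manage_power) parameterized by resource URI and action name. A rough sketch of the resulting shape (URIs and return values here are placeholders, not the real Redfish calls):

```python
# Sketch of the consolidation: one parameterized manage_power() serves
# both systems and managers, replacing the per-resource copy that was
# deleted. The URIs below are illustrative placeholders.
class PowerManager:
    systems_uri = '/redfish/v1/Systems/1'   # placeholder
    manager_uri = '/redfish/v1/Managers/1'  # placeholder

    def manage_power(self, command, resource_uri, action_name):
        # The real code reads the resource, looks up
        # data['Actions'][action_name]['target'], and POSTs a ResetType
        # payload there; this stub only records the dispatch.
        return {'ret': True, 'uri': resource_uri, 'action': action_name}

    def manage_system_power(self, command):
        return self.manage_power(command, self.systems_uri, '#ComputerSystem.Reset')

    def manage_manager_power(self, command):
        return self.manage_power(command, self.manager_uri, '#Manager.Reset')

pm = PowerManager()
print(pm.manage_manager_power('GracefulRestart')['action'])  # #Manager.Reset
```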
@@ -1501,13 +1491,18 @@ class RedfishUtils(object):
return response
return {'ret': True, 'changed': True, 'msg': "Set BIOS to default settings"}
def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
def set_boot_override(self, boot_opts):
result = {}
key = "Boot"
if not bootdevice:
bootdevice = boot_opts.get('bootdevice')
uefi_target = boot_opts.get('uefi_target')
boot_next = boot_opts.get('boot_next')
override_enabled = boot_opts.get('override_enabled')
if not bootdevice and override_enabled != 'Disabled':
return {'ret': False,
'msg': "bootdevice option required for SetOneTimeBoot"}
'msg': "bootdevice option required for temporary boot override"}
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
@@ -1530,21 +1525,27 @@ class RedfishUtils(object):
(bootdevice, allowable_values)}
# read existing values
enabled = boot.get('BootSourceOverrideEnabled')
cur_enabled = boot.get('BootSourceOverrideEnabled')
target = boot.get('BootSourceOverrideTarget')
cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
cur_boot_next = boot.get('BootNext')
if bootdevice == 'UefiTarget':
if override_enabled == 'Disabled':
payload = {
'Boot': {
'BootSourceOverrideEnabled': override_enabled
}
}
elif bootdevice == 'UefiTarget':
if not uefi_target:
return {'ret': False,
'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
if override_enabled == cur_enabled and target == bootdevice and uefi_target == cur_uefi_target:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideEnabled': override_enabled,
'BootSourceOverrideTarget': bootdevice,
'UefiTargetBootSourceOverride': uefi_target
}
@@ -1553,23 +1554,23 @@ class RedfishUtils(object):
if not boot_next:
return {'ret': False,
'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
if cur_enabled == override_enabled and target == bootdevice and boot_next == cur_boot_next:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideEnabled': override_enabled,
'BootSourceOverrideTarget': bootdevice,
'BootNext': boot_next
}
}
else:
if enabled == 'Once' and target == bootdevice:
if cur_enabled == override_enabled and target == bootdevice:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideEnabled': override_enabled,
'BootSourceOverrideTarget': bootdevice
}
}
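The branching above can be condensed: C(Disabled) clears the override outright, and any other persistence level replaces the previously hard-coded C(Once). A hypothetical standalone version of just the payload construction:

```python
# Hypothetical condensation of the payload logic in set_boot_override():
# 'Disabled' short-circuits to clearing the override; otherwise the
# requested persistence level is used instead of the hard-coded 'Once'.
def build_boot_payload(override_enabled, bootdevice=None,
                       uefi_target=None, boot_next=None):
    if override_enabled == 'Disabled':
        return {'Boot': {'BootSourceOverrideEnabled': 'Disabled'}}
    boot = {'BootSourceOverrideEnabled': override_enabled,
            'BootSourceOverrideTarget': bootdevice}
    if bootdevice == 'UefiTarget':
        boot['UefiTargetBootSourceOverride'] = uefi_target
    elif bootdevice == 'UefiBootNext':
        boot['BootNext'] = boot_next
    return {'Boot': boot}

print(build_boot_payload('Disabled'))
print(build_boot_payload('Continuous', 'Pxe'))
```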

View File

@@ -2400,7 +2400,7 @@ class Container(DockerBaseClass):
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
if self.parameters.published_ports is None:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
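This one-line change distinguishes "option not set" (C(None)) from "explicitly empty" (C({})), which are both falsy in Python. A small illustration of why the C(is None) test matters here:

```python
# Why `is None` matters: an empty dict ({} — "publish no ports") is
# falsy, so the old `if not published_ports:` treated it the same as
# "option not set" and skipped the comparison entirely.
def expected_ports(published_ports):
    if published_ports is None:      # option absent: nothing to compare
        return None
    return dict(published_ports)     # empty dict is a real, empty answer

print(expected_ports(None))          # None
print(expected_ports({}))            # {}
print(expected_ports({'80/tcp': [('0.0.0.0', 8080)]}))
```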

View File

@@ -19,6 +19,7 @@ options:
filter:
description:
- Filter facts
type: str
choices: [ status, result ]
notes:
- See http://cloudinit.readthedocs.io/ for more information about cloud-init.

View File

@@ -25,30 +25,37 @@ options:
host:
description:
- Tiller's server host.
type: str
default: "localhost"
port:
description:
- Tiller's server port.
type: int
default: 44134
namespace:
description:
- Kubernetes namespace where the chart should be installed.
type: str
default: "default"
name:
description:
- Release name to manage.
type: str
state:
description:
- Whether to install C(present), remove C(absent), or purge C(purged) a package.
choices: ['absent', 'purged', 'present']
type: str
default: "present"
chart:
description: |
A map describing the chart to install. See examples for available options.
description:
- A map describing the chart to install. See examples for available options.
type: dict
default: {}
values:
description:
- A map of value options for the chart.
type: dict
default: {}
disable_hooks:
description:

View File

@@ -23,116 +23,148 @@ options:
user:
description:
- The user to authenticate with.
type: str
required: true
url:
description:
- The url of the oVirt instance.
type: str
required: true
instance_name:
description:
- The name of the instance to use.
type: str
required: true
aliases: [ vmname ]
password:
description:
- Password of the user to authenticate with.
type: str
required: true
image:
description:
- The template to use for the instance.
type: str
resource_type:
description:
- Whether you want to deploy an image or create an instance from scratch.
type: str
choices: [ new, template ]
zone:
description:
- Deploy the image to this oVirt cluster.
type: str
instance_disksize:
description:
- Size of the instance's disk in GB.
type: str
aliases: [ vm_disksize]
instance_cpus:
description:
- The instance's number of CPUs.
type: str
default: 1
aliases: [ vmcpus ]
instance_nic:
description:
- The name of the network interface in oVirt/RHEV.
type: str
aliases: [ vmnic ]
instance_network:
description:
- The logical network the machine should belong to.
type: str
default: rhevm
aliases: [ vmnetwork ]
instance_mem:
description:
- The instance's amount of memory in MB.
type: str
aliases: [ vmmem ]
instance_type:
description:
- Define whether the instance is a server, desktop or high_performance.
- I(high_performance) is supported since Ansible 2.5 and oVirt/RHV 4.2.
type: str
choices: [ desktop, server, high_performance ]
default: server
aliases: [ vmtype ]
disk_alloc:
description:
- Define whether disk is thin or preallocated.
type: str
choices: [ preallocated, thin ]
default: thin
disk_int:
description:
- Interface type of the disk.
type: str
choices: [ ide, virtio ]
default: virtio
instance_os:
description:
- Type of Operating System.
type: str
aliases: [ vmos ]
instance_cores:
description:
- Define the instance's number of cores.
type: str
default: 1
aliases: [ vmcores ]
sdomain:
description:
- The Storage Domain where you want to create the instance's disk on.
type: str
region:
description:
- The oVirt/RHEV datacenter where you want to deploy to.
type: str
instance_dns:
description:
- Define the instance's Primary DNS server.
type: str
aliases: [ dns ]
instance_domain:
description:
- Define the instance's Domain.
type: str
aliases: [ domain ]
instance_hostname:
description:
- Define the instance's Hostname.
type: str
aliases: [ hostname ]
instance_ip:
description:
- Define the instance's IP.
type: str
aliases: [ ip ]
instance_netmask:
description:
- Define the instance's Netmask.
type: str
aliases: [ netmask ]
instance_gateway:
description:
- Define the instance's Gateway.
type: str
aliases: [ gateway ]
instance_rootpw:
description:
- Define the instance's Root password.
type: str
aliases: [ rootpw ]
instance_key:
description:
- Define the instance's Authorized key.
type: str
aliases: [ key ]
state:
description:
- Create, terminate or remove instances.
choices: [ absent, present, restarted, shutdown, started ]
type: str
choices: [ absent, present, restart, shutdown, started ]
default: present
requirements:
- ovirt-engine-sdk-python

View File

@@ -16,20 +16,24 @@ options:
api_host:
description:
- the host of the Proxmox VE cluster
type: str
required: true
api_user:
description:
- the user to authenticate with
type: str
required: true
api_password:
description:
- the password to authenticate with
- you can use PROXMOX_PASSWORD environment variable
type: str
vmid:
description:
- the instance id
- if not set, the next available VM ID will be fetched from ProxmoxAPI.
- if not set, will be fetched from PromoxAPI based on the hostname
type: str
validate_certs:
description:
- enable / disable https certificate verification
@@ -40,51 +44,64 @@ options:
- Proxmox VE node, when new VM will be created
- required only for C(state=present)
- for another states will be autodiscovered
type: str
pool:
description:
- Proxmox VE resource pool
type: str
password:
description:
- the instance root password
- required only for C(state=present)
type: str
hostname:
description:
- the instance hostname
- required only for C(state=present)
- must be unique if vmid is not passed
type: str
ostemplate:
description:
- the template for VM creating
- required only for C(state=present)
type: str
disk:
description:
- hard disk size in GB for instance
type: str
default: 3
cores:
description:
- Specify number of cores per socket.
type: int
default: 1
cpus:
description:
- numbers of allocated cpus for instance
type: int
default: 1
memory:
description:
- memory size in MB for instance
type: int
default: 512
swap:
description:
- swap memory size in MB for instance
type: int
default: 0
netif:
description:
- specifies network interfaces for the container. As a hash/dictionary defining interfaces.
type: dict
mounts:
description:
- specifies additional mounts (separate disks) for the container. As a hash/dictionary defining mount points
type: dict
ip_address:
description:
- specifies the address the container will be assigned
type: str
onboot:
description:
- specifies whether a VM will be started during system bootup
@@ -93,20 +110,25 @@ options:
storage:
description:
- target storage
type: str
default: 'local'
cpuunits:
description:
- CPU weight for a VM
type: int
default: 1000
nameserver:
description:
- sets DNS server IP address for a container
type: str
searchdomain:
description:
- sets DNS search domain for a container
type: str
timeout:
description:
- timeout for operations
type: int
default: 30
force:
description:
@@ -119,11 +141,13 @@ options:
state:
description:
- Indicate desired state of the instance
type: str
choices: ['present', 'started', 'absent', 'stopped', 'restarted']
default: present
pubkey:
description:
- Public key to add to /root/.ssh/authorized_keys. This was added on Proxmox 4.2, it is ignored for earlier versions
type: str
unprivileged:
description:
- Indicate if the container should be unprivileged

View File

@@ -28,19 +28,23 @@ options:
description:
- Pass arbitrary arguments to kvm.
- This option is for experts only!
type: str
default: "-serial unix:/var/run/qemu-server/VMID.serial,server,nowait"
api_host:
description:
- Specify the target host of the Proxmox VE cluster.
type: str
required: true
api_user:
description:
- Specify the user to authenticate with.
type: str
required: true
api_password:
description:
- Specify the password to authenticate with.
- You can use C(PROXMOX_PASSWORD) environment variable.
type: str
autostart:
description:
- Specify if the VM should be automatically restarted after crash (currently ignored in PVE API).
@@ -50,59 +54,73 @@ options:
description:
- Specify the amount of RAM for the VM in MB.
- Using zero disables the balloon driver.
type: int
default: 0
bios:
description:
- Specify the BIOS implementation.
type: str
choices: ['seabios', 'ovmf']
boot:
description:
- Specify the boot order -> boot on floppy C(a), hard disk C(c), CD-ROM C(d), or network C(n).
- You can combine to set order.
type: str
default: cnd
bootdisk:
description:
- Enable booting from specified disk. C((ide|sata|scsi|virtio)\d+)
type: str
clone:
description:
- Name of VM to be cloned. If C(vmid) is set, C(clone) can take an arbitrary value but is required for initiating the clone.
type: str
cores:
description:
- Specify number of cores per socket.
type: int
default: 1
cpu:
description:
- Specify emulated CPU type.
type: str
default: kvm64
cpulimit:
description:
- Specify if CPU usage will be limited. Value 0 indicates no CPU limit.
- If the computer has 2 CPUs, it has a total of '2' CPU time
type: int
cpuunits:
description:
- Specify CPU weight for a VM.
- You can disable fair-scheduler configuration by setting this to 0
type: int
default: 1000
delete:
description:
- Specify a list of settings you want to delete.
type: str
description:
description:
- Specify the description for the VM. Only used on the configuration web interface.
- This is saved as comment inside the configuration file.
type: str
digest:
description:
- Specify if to prevent changes if current configuration file has different SHA1 digest.
- This can be used to prevent concurrent modifications.
type: str
force:
description:
- Allow to force stop VM.
- Can be used only with states C(stopped), C(restarted).
- Can be used with states C(stopped) and C(restarted).
default: no
type: bool
format:
description:
- Target drive's backing file's data format.
- Used only with clone
type: str
default: qcow2
choices: [ "cloop", "cow", "qcow", "qcow2", "qed", "raw", "vmdk" ]
freeze:
@@ -126,14 +144,17 @@ options:
- C(rombar=boolean) I(default=1) Specify whether or not the device's ROM will be visible in the guest's memory map.
- C(x-vga=boolean) I(default=0) Enable vfio-vga device support.
- /!\ This option allows direct access to host hardware. So it is no longer possible to migrate such machines - use with special care.
type: dict
hotplug:
description:
- Selectively enable hotplug features.
- This is a comma separated list of hotplug features C('network', 'disk', 'cpu', 'memory' and 'usb').
- Value 0 disables hotplug completely and value 1 is an alias for the default C('network,disk,usb').
type: str
hugepages:
description:
- Enable/disable hugepages memory.
type: str
choices: ['any', '2', '1024']
ide:
description:
@@ -143,9 +164,11 @@ options:
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
type: dict
keyboard:
description:
- Sets the keyboard layout for VNC server.
type: str
kvm:
description:
- Enable/disable KVM hardware virtualization.
@@ -159,26 +182,32 @@ options:
lock:
description:
- Lock/unlock the VM.
type: str
choices: ['migrate', 'backup', 'snapshot', 'rollback']
machine:
description:
- Specifies the Qemu machine type.
- type => C((pc|pc(-i440fx)?-\d+\.\d+(\.pxe)?|q35|pc-q35-\d+\.\d+(\.pxe)?))
type: str
memory:
description:
- Memory size in MB for instance.
type: int
default: 512
migrate_downtime:
description:
- Sets maximum tolerated downtime (in seconds) for migrations.
type: int
migrate_speed:
description:
- Sets maximum speed (in MB/s) for migrations.
- A value of 0 is no limit.
type: int
name:
description:
- Specifies the VM name. Only used on the configuration web interface.
- Required only for C(state=present).
type: str
net:
description:
- A hash/dictionary of network interfaces for the VM. C(net='{"key":"value", "key":"value"}').
@@ -189,15 +218,18 @@ options:
- The C(bridge) parameter can be used to automatically add the interface to a bridge device. The Proxmox VE standard bridge is called 'vmbr0'.
- Option C(rate) is used to limit traffic bandwidth from and to this interface. It is specified as floating point number, unit is 'Megabytes per second'.
- If you specify no bridge, we create a kvm 'user' (NATed) network device, which provides DHCP and DNS services.
type: dict
newid:
description:
- VMID for the clone. Used only with clone.
- If newid is not set, the next available VM ID will be fetched from ProxmoxAPI.
type: int
node:
description:
- Proxmox VE node, where the new VM will be created.
- Only required for C(state=present).
- For other states, it will be autodiscovered.
type: str
numa:
description:
- A hash/dictionaries of NUMA topology. C(numa='{"key":"value", "key":"value"}').
@@ -207,6 +239,7 @@ options:
- C(hostnodes) Host NUMA nodes to use.
- C(memory) Amount of memory this NUMA node provides.
- C(policy) NUMA allocation policy.
type: dict
onboot:
description:
- Specifies whether a VM will be started during system bootup.
@@ -216,6 +249,7 @@ options:
description:
- Specifies guest operating system. This is used to enable special optimization/features for specific operating systems.
- The l26 is Linux 2.6/3.X Kernel.
type: str
choices: ['other', 'wxp', 'w2k', 'w2k3', 'w2k8', 'wvista', 'win7', 'win8', 'win10', 'l24', 'l26', 'solaris']
default: l26
parallel:
@@ -223,9 +257,11 @@ options:
- A hash/dictionary of map host parallel devices. C(parallel='{"key":"value", "key":"value"}').
- Keys allowed are - (parallel[n]) where 0 ≤ n ≤ 2.
- Values allowed are - C("/dev/parport\d+|/dev/usb/lp\d+").
type: dict
pool:
description:
- Add the new VM to the specified pool.
type: str
protection:
description:
- Enable/disable the protection flag of the VM. This will enable/disable the remove VM and remove disk operations.
@@ -237,6 +273,7 @@ options:
revert:
description:
- Revert a pending change.
type: str
sata:
description:
- A hash/dictionary of volume used as sata hard disk or CD-ROM. C(sata='{"key":"value", "key":"value"}').
@@ -245,6 +282,7 @@ options:
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
type: dict
scsi:
description:
- A hash/dictionary of volume used as SCSI hard disk or CD-ROM. C(scsi='{"key":"value", "key":"value"}').
@@ -253,9 +291,11 @@ options:
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
type: dict
scsihw:
description:
- Specifies the SCSI controller model.
type: str
choices: ['lsi', 'lsi53c810', 'virtio-scsi-pci', 'virtio-scsi-single', 'megasas', 'pvscsi']
serial:
description:
@@ -263,44 +303,54 @@ options:
- Keys allowed are - serial[n](str; required) where 0 ≤ n ≤ 3.
- Values allowed are - C((/dev/.+|socket)).
- /!\ If you pass through a host serial device, it is no longer possible to migrate such machines - use with special care.
type: dict
shares:
description:
- Sets amount of memory shares for auto-ballooning. (0 - 50000).
- The larger the number is, the more memory this VM gets.
- The number is relative to weights of all other running VMs.
- Using 0 disables auto-ballooning, this means no limit.
type: int
skiplock:
description:
- Ignore locks
- Only root is allowed to use this option.
type: bool
smbios:
description:
- Specifies SMBIOS type 1 fields.
type: str
snapname:
description:
- The name of the snapshot. Used only with clone.
type: str
sockets:
description:
- Sets the number of CPU sockets. (1 - N).
type: int
default: 1
startdate:
description:
- Sets the initial date of the real time clock.
- Valid formats for the date are C('now') or C('2016-09-25T16:01:21') or C('2016-09-25').
type: str
startup:
description:
- Startup and shutdown behavior. C([[order=]\d+] [,up=\d+] [,down=\d+]).
- Order is a non-negative number defining the general startup order.
- Shutdown is done with reverse ordering.
type: str
state:
description:
- Indicates desired state of the instance.
- If C(current), the current state of the VM will be fetched. You can access it with C(results.status)
type: str
choices: ['present', 'started', 'absent', 'stopped', 'restarted','current']
default: present
storage:
description:
- Target storage for full clone.
type: str
tablet:
description:
- Enables/disables the USB tablet device.
@@ -310,6 +360,7 @@ options:
description:
- Target node. Only allowed if the original VM is on shared storage.
- Used only with clone
type: str
tdf:
description:
- Enables/disables time drift fix.
@@ -322,6 +373,7 @@ options:
timeout:
description:
- Timeout for operations.
type: int
default: 30
update:
description:
@@ -338,9 +390,11 @@ options:
vcpus:
description:
- Sets number of hotplugged vcpus.
type: int
vga:
description:
- Select VGA type. If you want to use high resolution modes (>= 1280x1024x16) then you should use option 'std' or 'vmware'.
type: str
choices: ['std', 'cirrus', 'vmware', 'qxl', 'serial0', 'serial1', 'serial2', 'serial3', 'qxl2', 'qxl3', 'qxl4']
default: std
virtio:
@@ -351,13 +405,16 @@ options:
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
type: dict
vmid:
description:
- Specifies the VM ID. Alternatively, the I(name) parameter can be used instead.
- If vmid is not set, the next available VM ID will be fetched from ProxmoxAPI.
type: int
watchdog:
description:
- Creates a virtual hardware watchdog device.
type: str
requirements: [ "proxmoxer", "requests" ]
'''
@@ -588,9 +645,6 @@ from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
VZ_TYPE = 'qemu'
def get_nextvmid(module, proxmox):
try:
vmid = proxmox.cluster.nextid.get()
@@ -652,19 +706,35 @@ def get_vminfo(module, proxmox, node, vmid, **kwargs):
results['vmid'] = int(vmid)
def settings(module, proxmox, vmid, node, name, timeout, **kwargs):
def settings(module, proxmox, vmid, node, name, **kwargs):
proxmox_node = proxmox.nodes(node)
# Sanitize kwargs. Remove not defined args and ensure True and False converted to int.
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
if getattr(proxmox_node, VZ_TYPE)(vmid).config.set(**kwargs) is None:
if proxmox_node.qemu(vmid).config.set(**kwargs) is None:
return True
else:
return False
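The sanitization comment above describes a pattern used throughout the module: drop options the user left unset, then coerce booleans to the 0/1 integers the Proxmox API expects for flag parameters. A minimal standalone sketch of that pattern (not the module's actual helper):

```python
def sanitize_kwargs(**kwargs):
    # Remove not-defined args and ensure True and False are converted to int,
    # mirroring the sanitization step in settings()/create_vm().
    kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
    kwargs.update(dict((k, int(v)) for k, v in kwargs.items() if isinstance(v, bool)))
    return kwargs

print(sanitize_kwargs(onboot=True, protection=False, description=None, cores=2))
# -> {'onboot': 1, 'protection': 0, 'cores': 2}
```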
def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, timeout, update, **kwargs):
def wait_for_task(module, proxmox, node, taskid):
timeout = module.params['timeout']
while timeout:
task = proxmox.nodes(node).tasks(taskid).status.get()
if task['status'] == 'stopped' and task['exitstatus'] == 'OK':
# Wait an extra second as the API can be ahead of the hypervisor
time.sleep(1)
return True
timeout = timeout - 1
if timeout == 0:
break
time.sleep(1)
return False
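The new wait_for_task() helper consolidates several near-identical polling loops that previously lived in create_vm(), start_vm(), stop_vm() and the removal path. Its control flow can be exercised in isolation against a stubbed status callable (the stub and function names below are illustrative, not part of the module):

```python
import time

def wait_for_status(get_status, timeout):
    # Poll once per second until the task reports stopped/OK or the
    # tick counter runs out, mirroring wait_for_task()'s control flow.
    while timeout:
        task = get_status()
        if task['status'] == 'stopped' and task['exitstatus'] == 'OK':
            return True
        timeout = timeout - 1
        if timeout == 0:
            break
        time.sleep(1)
    return False

# A stub that succeeds on the second poll:
calls = {'n': 0}
def fake_status():
    calls['n'] += 1
    if calls['n'] >= 2:
        return {'status': 'stopped', 'exitstatus': 'OK'}
    return {'status': 'running', 'exitstatus': ''}

print(wait_for_status(fake_status, timeout=5))  # True
```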
def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, update, **kwargs):
# Available only in PVE 4
only_v4 = ['force', 'protection', 'skiplock']
@@ -698,6 +768,8 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
del kwargs['ide']
if 'net' in kwargs:
del kwargs['net']
if 'force' in kwargs:
del kwargs['force']
# Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n]
for k in list(kwargs.keys()):
@@ -723,7 +795,7 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
module.fail_json(msg='skiplock parameter requires root@pam user.')
if update:
if getattr(proxmox_node, VZ_TYPE)(vmid).config.set(name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs) is None:
if proxmox_node.qemu(vmid).config.set(name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs) is None:
return True
else:
return False
@@ -734,50 +806,35 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
clone_params.update(dict([k, int(v)] for k, v in clone_params.items() if isinstance(v, bool)))
taskid = proxmox_node.qemu(vmid).clone.post(newid=newid, name=name, **clone_params)
else:
taskid = getattr(proxmox_node, VZ_TYPE).create(vmid=vmid, name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs)
taskid = proxmox_node.qemu.create(vmid=vmid, name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs)
while timeout:
if (proxmox_node.tasks(taskid).status.get()['status'] == 'stopped' and
proxmox_node.tasks(taskid).status.get()['exitstatus'] == 'OK'):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for creating VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
time.sleep(1)
return False
if not wait_for_task(module, proxmox, node, taskid):
module.fail_json(msg='Reached timeout while waiting for creating VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
return False
return True
def start_vm(module, proxmox, vm, vmid, timeout):
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.start.post()
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
return True
timeout -= 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for starting VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def start_vm(module, proxmox, vm):
vmid = vm[0]['vmid']
proxmox_node = proxmox.nodes(vm[0]['node'])
taskid = proxmox_node.qemu(vmid).status.start.post()
if not wait_for_task(module, proxmox, vm[0]['node'], taskid):
module.fail_json(msg='Reached timeout while waiting for starting VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
return False
return True
def stop_vm(module, proxmox, vm, vmid, timeout, force):
if force:
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post(forceStop=1)
else:
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post()
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
return True
timeout -= 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for stopping VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def stop_vm(module, proxmox, vm, force):
vmid = vm[0]['vmid']
proxmox_node = proxmox.nodes(vm[0]['node'])
taskid = proxmox_node.qemu(vmid).status.shutdown.post(forceStop=(1 if force else 0))
if not wait_for_task(module, proxmox, vm[0]['node'], taskid):
module.fail_json(msg='Reached timeout while waiting for stopping VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
return False
return True
def proxmox_version(proxmox):
@@ -807,7 +864,7 @@ def main():
delete=dict(type='str', default=None),
description=dict(type='str'),
digest=dict(type='str'),
force=dict(type='bool', default=None),
force=dict(type='bool', default=False),
format=dict(type='str', default='qcow2', choices=['cloop', 'cow', 'qcow', 'qcow2', 'qed', 'raw', 'vmdk']),
freeze=dict(type='bool'),
full=dict(type='bool', default=True),
@@ -884,7 +941,6 @@ def main():
revert = module.params['revert']
sockets = module.params['sockets']
state = module.params['state']
timeout = module.params['timeout']
update = bool(module.params['update'])
vmid = module.params['vmid']
validate_certs = module.params['validate_certs']
@@ -898,58 +954,60 @@ def main():
try:
proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
global VZ_TYPE
global PVE_MAJOR_VERSION
PVE_MAJOR_VERSION = 3 if proxmox_version(proxmox) < LooseVersion('4.0') else 4
except Exception as e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
# If vmid not set get the Next VM id from ProxmoxAPI
# If vm name is set get the VM id from ProxmoxAPI
# If vmid is not defined then retrieve its value from the vm name,
# the cloned vm name or retrieve the next free VM id from ProxmoxAPI.
if not vmid:
if state == 'present' and (not update and not clone) and (not delete and not revert):
if state == 'present' and not update and not clone and not delete and not revert:
try:
vmid = get_nextvmid(module, proxmox)
except Exception as e:
module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name))
else:
clone_target = clone or name
try:
if not clone:
vmid = get_vmid(proxmox, name)[0]
else:
vmid = get_vmid(proxmox, clone)[0]
vmid = get_vmid(proxmox, clone_target)[0]
except Exception as e:
if not clone:
module.fail_json(msg="VM {0} does not exist in cluster.".format(name))
else:
module.fail_json(msg="VM {0} does not exist in cluster.".format(clone))
vmid = -1
if clone is not None:
if get_vmid(proxmox, name):
module.exit_json(changed=False, msg="VM with name <%s> already exists" % name)
if vmid is not None:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
# If newid is not defined then retrieve the next free id from ProxmoxAPI
if not newid:
try:
newid = get_nextvmid(module, proxmox)
except Exception as e:
module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name))
else:
vm = get_vm(proxmox, newid)
if vm:
module.exit_json(changed=False, msg="vmid %s with VM name %s already exists" % (newid, name))
# Ensure source VM name exists when cloning
if -1 == vmid:
module.fail_json(msg='VM with name = %s does not exist in cluster' % clone)
# Ensure source VM id exists when cloning
if not get_vm(proxmox, vmid):
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
# Ensure the chosen VM name doesn't already exist when cloning
if get_vmid(proxmox, name):
module.exit_json(changed=False, msg="VM with name <%s> already exists" % name)
# Ensure the chosen VM id doesn't already exist when cloning
if get_vm(proxmox, newid):
module.exit_json(changed=False, msg="vmid %s with VM name %s already exists" % (newid, name))
if delete is not None:
try:
settings(module, proxmox, vmid, node, name, timeout, delete=delete)
settings(module, proxmox, vmid, node, name, delete=delete)
module.exit_json(changed=True, msg="Settings have been deleted on VM {0} with vmid {1}".format(name, vmid))
except Exception as e:
module.fail_json(msg='Unable to delete settings on VM {0} with vmid {1}: '.format(name, vmid) + str(e))
elif revert is not None:
if revert is not None:
try:
settings(module, proxmox, vmid, node, name, timeout, revert=revert)
settings(module, proxmox, vmid, node, name, revert=revert)
module.exit_json(changed=True, msg="Settings have been reverted on VM {0} with vmid {1}".format(name, vmid))
except Exception as e:
module.fail_json(msg='Unable to revert settings on VM {0} with vmid {1}: Maybe it is not a pending task... '.format(name, vmid) + str(e))
@@ -965,7 +1023,7 @@ def main():
elif not node_check(proxmox, node):
module.fail_json(msg="node '%s' does not exist in cluster" % node)
create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, timeout, update,
create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, update,
acpi=module.params['acpi'],
agent=module.params['agent'],
autostart=module.params['autostart'],
@@ -1037,44 +1095,52 @@ def main():
elif clone is not None:
module.fail_json(msg="Unable to clone vm {0} from vmid {1}=".format(name, vmid) + str(e))
else:
module.fail_json(msg="creation of %s VM %s with vmid %s failed with exception=%s" % (VZ_TYPE, name, vmid, e))
module.fail_json(msg="creation of qemu VM %s with vmid %s failed with exception=%s" % (name, vmid, e))
elif state == 'started':
try:
if -1 == vmid:
module.fail_json(msg='VM with name = %s does not exist in cluster' % name)
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid <%s> does not exist in cluster' % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running':
if vm[0]['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is already running" % vmid)
if start_vm(module, proxmox, vm, vmid, timeout):
if start_vm(module, proxmox, vm):
module.exit_json(changed=True, msg="VM %s started" % vmid)
except Exception as e:
module.fail_json(msg="starting of VM %s failed with exception: %s" % (vmid, e))
elif state == 'stopped':
try:
if -1 == vmid:
module.fail_json(msg='VM with name = %s does not exist in cluster' % name)
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped':
if vm[0]['status'] == 'stopped':
module.exit_json(changed=False, msg="VM %s is already stopped" % vmid)
if stop_vm(module, proxmox, vm, vmid, timeout, force=module.params['force']):
if stop_vm(module, proxmox, vm, force=module.params['force']):
module.exit_json(changed=True, msg="VM %s is shutting down" % vmid)
except Exception as e:
module.fail_json(msg="stopping of VM %s failed with exception: %s" % (vmid, e))
elif state == 'restarted':
try:
if -1 == vmid:
module.fail_json(msg='VM with name = %s does not exist in cluster' % name)
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped':
if vm[0]['status'] == 'stopped':
module.exit_json(changed=False, msg="VM %s is not running" % vmid)
if stop_vm(module, proxmox, vm, vmid, timeout, force=module.params['force']) and start_vm(module, proxmox, vm, vmid, timeout):
if stop_vm(module, proxmox, vm, force=module.params['force']) and start_vm(module, proxmox, vm):
module.exit_json(changed=True, msg="VM %s is restarted" % vmid)
except Exception as e:
module.fail_json(msg="restarting of VM %s failed with exception: %s" % (vmid, e))
@@ -1083,37 +1149,31 @@ def main():
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.exit_json(changed=False, msg="VM %s does not exist" % vmid)
module.exit_json(changed=False)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running':
proxmox_node = proxmox.nodes(vm[0]['node'])
if vm[0]['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is running. Stop it before deletion." % vmid)
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE).delete(vmid)
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
module.exit_json(changed=True, msg="VM %s removed" % vmid)
timeout -= 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for removing VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
taskid = proxmox_node.qemu.delete(vmid)
if not wait_for_task(module, proxmox, vm[0]['node'], taskid):
module.fail_json(msg='Reached timeout while waiting for removing VM. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
else:
module.exit_json(changed=True, msg="VM %s removed" % vmid)
except Exception as e:
module.fail_json(msg="deletion of VM %s failed with exception: %s" % (vmid, e))
elif state == 'current':
status = {}
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
current = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status']
status['status'] = current
if status:
module.exit_json(changed=False, msg="VM %s with vmid = %s is %s" % (name, vmid, current), **status)
except Exception as e:
module.fail_json(msg="Unable to get vm {0} with vmid = {1} status: ".format(name, vmid) + str(e))
if -1 == vmid:
module.fail_json(msg='VM with name = %s does not exist in cluster' % name)
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
current = proxmox.nodes(vm[0]['node']).qemu(vmid).status.current.get()['status']
status['status'] = current
if status:
module.exit_json(changed=False, msg="VM %s with vmid = %s is %s" % (name, vmid, current), **status)
if __name__ == '__main__':


@@ -18,15 +18,18 @@ options:
api_host:
description:
- the host of the Proxmox VE cluster
type: str
required: true
api_user:
description:
- the user to authenticate with
type: str
required: true
api_password:
description:
- the password to authenticate with
- you can use PROXMOX_PASSWORD environment variable
type: str
validate_certs:
description:
- enable / disable https certificate verification
@@ -35,29 +38,34 @@ options:
node:
description:
- Proxmox VE node, when you will operate with template
type: str
required: true
src:
description:
- path to uploaded file
- required only for C(state=present)
aliases: ['path']
type: path
template:
description:
- the template name
- required only for states C(absent), C(info)
type: str
content_type:
description:
- content type
- required only for C(state=present)
type: str
default: 'vztmpl'
choices: ['vztmpl', 'iso']
storage:
description:
- target storage
type: str
default: 'local'
timeout:
description:
- timeout for operations
type: int
default: 30
force:
description:
@@ -67,6 +75,7 @@ options:
state:
description:
- Indicate desired state of the template
type: str
choices: ['present', 'absent']
default: present
notes:


@@ -29,6 +29,7 @@ options:
description:
- The password for user authentication.
type: str
required: true
server:
description:
- The name/IP of your RHEV-m/oVirt instance.
@@ -111,15 +112,18 @@ options:
description:
- This option uses complex arguments and is a list of disks with the options name, size and domain.
type: list
elements: str
ifaces:
description:
- This option uses complex arguments and is a list of interfaces with the options name and vlan.
type: list
elements: str
aliases: [ interfaces, nics ]
boot_order:
description:
- This option uses complex arguments and is a list of items that specify the bootorder.
type: list
elements: str
default: [ hd, network ]
del_prot:
description:
@@ -1480,14 +1484,14 @@ def main():
vmhost=dict(type='str'),
vmcpu=dict(type='int', default=2),
vmmem=dict(type='int', default=1),
disks=dict(type='list'),
disks=dict(type='list', elements='str'),
osver=dict(type='str', default="rhel_6x64"),
ifaces=dict(type='list', aliases=['interfaces', 'nics']),
ifaces=dict(type='list', elements='str', aliases=['interfaces', 'nics']),
timeout=dict(type='int'),
mempol=dict(type='int', default=1),
vm_ha=dict(type='bool', default=True),
cpu_share=dict(type='int', default=0),
boot_order=dict(type='list', default=['hd', 'network']),
boot_order=dict(type='list', elements='str', default=['hd', 'network']),
del_prot=dict(type='bool', default=True),
cd_drive=dict(type='str'),
),


@@ -39,6 +39,7 @@ options:
- A list of specific functions to deploy.
- If this is not provided, all functions in the service will be deployed.
type: list
elements: str
default: []
region:
description:
@@ -166,7 +167,7 @@ def main():
argument_spec=dict(
service_path=dict(type='path', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
functions=dict(type='list'),
functions=dict(type='list', elements='str'),
region=dict(type='str', default=''),
stage=dict(type='str', default=''),
deploy=dict(type='bool', default=True),


@@ -20,19 +20,23 @@ options:
choices: ['planned', 'present', 'absent']
description:
- Goal state of given stage/project
type: str
default: present
binary_path:
description:
- The path of a terraform binary to use, relative to the 'service_path'
unless you supply an absolute path.
type: path
project_path:
description:
- The path to the root of the Terraform directory with the
vars.tf/main.tf/etc to use.
type: path
required: true
workspace:
description:
- The terraform workspace to work with.
type: str
default: default
purge_workspace:
description:
@@ -46,11 +50,13 @@ options:
- The path to an existing Terraform plan file to apply. If this is not
specified, Ansible will build a new TF plan and execute it.
Note that this option is required if 'state' has the 'planned' value.
type: path
state_file:
description:
- The path to an existing Terraform state file to use when building plan.
If this is not specified, the default `terraform.tfstate` will be used.
- This option is ignored when plan is specified.
type: path
variables_files:
description:
- The path to a variables file for Terraform to fill into the TF
@@ -63,19 +69,24 @@ options:
description:
- A group of key-values to override template variables or those in
variables files.
type: dict
targets:
description:
- A list of specific resources to target in this plan/application. The
resources selected here will also auto-include any dependencies.
type: list
elements: str
lock:
description:
- Enable statefile locking, if you use a service that accepts locks (such
as S3+DynamoDB) to store your statefile.
type: bool
default: true
lock_timeout:
description:
- How long to maintain the lock on the statefile, if you use a service
that accepts locks (such as S3+DynamoDB).
type: int
force_init:
description:
- To avoid duplicating infra, if a state file can't be found this will
@@ -86,6 +97,7 @@ options:
backend_config:
description:
- A group of key-values to provide at init stage to the -backend-config parameter.
type: dict
backend_config_files:
description:
- The path to a configuration file to provide at init state to the -backend-config parameter.
@@ -281,7 +293,7 @@ def main():
variables_files=dict(aliases=['variables_file'], type='list', elements='path', default=None),
plan_file=dict(type='path'),
state_file=dict(type='path'),
targets=dict(type='list', default=[]),
targets=dict(type='list', elements='str', default=[]),
lock=dict(type='bool', default=True),
lock_timeout=dict(type='int',),
force_init=dict(type='bool', default=False),
@@ -368,7 +380,7 @@ def main():
if needs_application and not module.check_mode and not state == 'planned':
rc, out, err = module.run_command(command, cwd=project_path)
# checks out to decide if changes were made during execution
if '0 added, 0 changed' not in out and not state == "absent" or '0 destroyed' not in out:
if ' 0 added, 0 changed' not in out and not state == "absent" or ' 0 destroyed' not in out:
changed = True
if rc != 0:
module.fail_json(


@@ -0,0 +1,370 @@
#!/usr/bin/python
#
# Scaleway database backups management module
#
# Copyright (C) 2020 Guillaume Rodriguez (g.rodriguez@opendecide.com).
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: scaleway_database_backup
short_description: Scaleway database backups management module
version_added: 1.2.0
author: Guillaume Rodriguez (@guillaume_ro_fr)
description:
- This module manages database backups on Scaleway account U(https://developer.scaleway.com).
extends_documentation_fragment:
- community.general.scaleway
options:
state:
description:
- Indicate desired state of the database backup.
- C(present) creates a backup.
- C(absent) deletes the backup.
- C(exported) creates a download link for the backup.
- C(restored) restores the backup to a new database.
type: str
default: present
choices:
- present
- absent
- exported
- restored
region:
description:
- Scaleway region to use (for example C(fr-par)).
type: str
required: true
choices:
- fr-par
- nl-ams
id:
description:
- UUID used to identify the database backup.
- Required for C(absent), C(exported) and C(restored) states.
type: str
name:
description:
- Name used to identify the database backup.
- Required for C(present) state.
- Ignored when C(state=absent), C(state=exported) or C(state=restored).
type: str
required: false
database_name:
description:
- Name used to identify the database.
- Required for C(present) and C(restored) states.
- Ignored when C(state=absent) or C(state=exported).
type: str
required: false
instance_id:
description:
- UUID of the instance associated to the database backup.
- Required for C(present) and C(restored) states.
- Ignored when C(state=absent) or C(state=exported).
type: str
required: false
expires_at:
description:
- Expiration datetime of the database backup (ISO 8601 format).
- Ignored when C(state=absent), C(state=exported) or C(state=restored).
type: str
required: false
wait:
description:
- Wait for the instance to reach its desired state before returning.
type: bool
default: false
wait_timeout:
description:
- Time to wait for the backup to reach the expected state.
type: int
required: false
default: 300
wait_sleep_time:
description:
- Time to wait before every attempt to check the state of the backup.
type: int
required: false
default: 3
'''
EXAMPLES = '''
- name: Create a backup
community.general.scaleway_database_backup:
name: 'my_backup'
state: present
region: 'fr-par'
database_name: 'my-database'
instance_id: '50968a80-2909-4e5c-b1af-a2e19860dddb'
- name: Export a backup
community.general.scaleway_database_backup:
id: '6ef1125a-037e-494f-a911-6d9c49a51691'
state: exported
region: 'fr-par'
- name: Restore a backup
community.general.scaleway_database_backup:
id: '6ef1125a-037e-494f-a911-6d9c49a51691'
state: restored
region: 'fr-par'
database_name: 'my-new-database'
instance_id: '50968a80-2909-4e5c-b1af-a2e19860dddb'
- name: Remove a backup
community.general.scaleway_database_backup:
id: '6ef1125a-037e-494f-a911-6d9c49a51691'
state: absent
region: 'fr-par'
'''
RETURN = '''
metadata:
description: Backup metadata.
returned: when C(state=present), C(state=exported) or C(state=restored)
type: dict
sample: {
"metadata": {
"created_at": "2020-08-06T12:42:05.631049Z",
"database_name": "my-database",
"download_url": null,
"download_url_expires_at": null,
"expires_at": null,
"id": "a15297bd-0c4a-4b4f-8fbb-b36a35b7eb07",
"instance_id": "617be32e-6497-4ed7-b4c7-0ee5a81edf49",
"instance_name": "my-instance",
"name": "backup_name",
"region": "fr-par",
"size": 600000,
"status": "ready",
"updated_at": "2020-08-06T12:42:10.581649Z"
}
}
'''
import datetime
import time
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.scaleway import (
Scaleway,
scaleway_argument_spec,
SCALEWAY_REGIONS,
)
stable_states = (
'ready',
'deleting',
)
def wait_to_complete_state_transition(module, account_api, backup=None):
wait_timeout = module.params['wait_timeout']
wait_sleep_time = module.params['wait_sleep_time']
if backup is None or backup['status'] in stable_states:
return backup
start = datetime.datetime.utcnow()
end = start + datetime.timedelta(seconds=wait_timeout)
while datetime.datetime.utcnow() < end:
module.debug('We are going to wait for the backup to finish its transition')
response = account_api.get('/rdb/v1/regions/%s/backups/%s' % (module.params.get('region'), backup['id']))
if not response.ok:
module.fail_json(msg='Error getting backup [{0}: {1}]'.format(response.status_code, response.json))
break
response_json = response.json
if response_json['status'] in stable_states:
module.debug('It seems that the backup is not in transition anymore.')
module.debug('Backup in state: %s' % response_json['status'])
return response_json
time.sleep(wait_sleep_time)
else:
module.fail_json(msg='Backup takes too long to finish its transition')
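Unlike the Proxmox module's tick-counter loop, wait_to_complete_state_transition() above computes a datetime deadline and relies on Python's while/else: the else branch (the timeout failure) runs only when the loop condition goes false without an early return. A condensed sketch of that idiom (names are illustrative, not the module's API):

```python
import datetime

def wait_until(check, timeout_seconds, sleep=lambda: None):
    # Deadline-based polling: the while/else 'else' only executes when the
    # deadline passes without the loop returning a result.
    end = datetime.datetime.utcnow() + datetime.timedelta(seconds=timeout_seconds)
    while datetime.datetime.utcnow() < end:
        result = check()
        if result is not None:
            return result
        sleep()
    else:
        raise TimeoutError('takes too long to finish its transition')

print(wait_until(lambda: 'ready', 5))  # ready
```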
def present_strategy(module, account_api, backup):
name = module.params['name']
database_name = module.params['database_name']
instance_id = module.params['instance_id']
expiration_date = module.params['expires_at']
if backup is not None:
if (backup['name'] == name or name is None) and (
backup['expires_at'] == expiration_date or expiration_date is None):
wait_to_complete_state_transition(module, account_api, backup)
module.exit_json(changed=False)
if module.check_mode:
module.exit_json(changed=True)
payload = {}
if name is not None:
payload['name'] = name
if expiration_date is not None:
payload['expires_at'] = expiration_date
response = account_api.patch('/rdb/v1/regions/%s/backups/%s' % (module.params.get('region'), backup['id']),
payload)
if response.ok:
result = wait_to_complete_state_transition(module, account_api, response.json)
module.exit_json(changed=True, metadata=result)
module.fail_json(msg='Error modifying backup [{0}: {1}]'.format(response.status_code, response.json))
if module.check_mode:
module.exit_json(changed=True)
payload = {'name': name, 'database_name': database_name, 'instance_id': instance_id}
if expiration_date is not None:
payload['expires_at'] = expiration_date
response = account_api.post('/rdb/v1/regions/%s/backups' % module.params.get('region'), payload)
if response.ok:
result = wait_to_complete_state_transition(module, account_api, response.json)
module.exit_json(changed=True, metadata=result)
module.fail_json(msg='Error creating backup [{0}: {1}]'.format(response.status_code, response.json))
def absent_strategy(module, account_api, backup):
if backup is None:
module.exit_json(changed=False)
if module.check_mode:
module.exit_json(changed=True)
response = account_api.delete('/rdb/v1/regions/%s/backups/%s' % (module.params.get('region'), backup['id']))
if response.ok:
result = wait_to_complete_state_transition(module, account_api, response.json)
module.exit_json(changed=True, metadata=result)
module.fail_json(msg='Error deleting backup [{0}: {1}]'.format(response.status_code, response.json))
def exported_strategy(module, account_api, backup):
if backup is None:
module.fail_json(msg=('Backup "%s" not found' % module.params['id']))
if backup['download_url'] is not None:
module.exit_json(changed=False, metadata=backup)
if module.check_mode:
module.exit_json(changed=True)
backup = wait_to_complete_state_transition(module, account_api, backup)
response = account_api.post(
'/rdb/v1/regions/%s/backups/%s/export' % (module.params.get('region'), backup['id']), {})
if response.ok:
result = wait_to_complete_state_transition(module, account_api, response.json)
module.exit_json(changed=True, metadata=result)
module.fail_json(msg='Error exporting backup [{0}: {1}]'.format(response.status_code, response.json))
def restored_strategy(module, account_api, backup):
if backup is None:
module.fail_json(msg=('Backup "%s" not found' % module.params['id']))
database_name = module.params['database_name']
instance_id = module.params['instance_id']
if module.check_mode:
module.exit_json(changed=True)
backup = wait_to_complete_state_transition(module, account_api, backup)
payload = {'database_name': database_name, 'instance_id': instance_id}
response = account_api.post('/rdb/v1/regions/%s/backups/%s/restore' % (module.params.get('region'), backup['id']),
payload)
if response.ok:
result = wait_to_complete_state_transition(module, account_api, response.json)
module.exit_json(changed=True, metadata=result)
module.fail_json(msg='Error restoring backup [{0}: {1}]'.format(response.status_code, response.json))
state_strategy = {
'present': present_strategy,
'absent': absent_strategy,
'exported': exported_strategy,
'restored': restored_strategy,
}
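The state_strategy dict above routes each validated C(state) value to its handler, so core() needs no if/elif chain. The same dispatch pattern in miniature (handlers here are stand-ins, not the module's functions):

```python
def present(name):
    # Stand-in for present_strategy(): create or update the resource.
    return 'created %s' % name

def absent(name):
    # Stand-in for absent_strategy(): delete the resource if it exists.
    return 'deleted %s' % name

state_strategy = {'present': present, 'absent': absent}

# core() looks the handler up by the requested state and calls it:
print(state_strategy['absent']('backup-42'))  # deleted backup-42
```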
def core(module):
state = module.params['state']
backup_id = module.params['id']
account_api = Scaleway(module)
if backup_id is None:
backup_by_id = None
else:
response = account_api.get('/rdb/v1/regions/%s/backups/%s' % (module.params.get('region'), backup_id))
status_code = response.status_code
backup_json = response.json
backup_by_id = None
if status_code == 404:
backup_by_id = None
elif response.ok:
backup_by_id = backup_json
else:
module.fail_json(msg='Error getting backup [{0}: {1}]'.format(status_code, response.json['message']))
state_strategy[state](module, account_api, backup_by_id)
def main():
argument_spec = scaleway_argument_spec()
argument_spec.update(dict(
state=dict(default='present', choices=['absent', 'present', 'exported', 'restored']),
region=dict(required=True, choices=SCALEWAY_REGIONS),
id=dict(),
name=dict(type='str'),
database_name=dict(required=False),
instance_id=dict(required=False),
expires_at=dict(),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300),
wait_sleep_time=dict(type='int', default=3),
))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_together=[
['database_name', 'instance_id'],
],
required_if=[
['state', 'present', ['name', 'database_name', 'instance_id']],
['state', 'absent', ['id']],
['state', 'exported', ['id']],
['state', 'restored', ['id', 'database_name', 'instance_id']],
],
)
core(module)
if __name__ == '__main__':
main()


@@ -439,7 +439,12 @@ class Migrations:
if target_cluster_size is not None:
cmd = cmd + "size=" + str(target_cluster_size) + ";"
for node in self._nodes:
cluster_key.add(self._info_cmd_helper(cmd, node))
try:
cluster_key.add(self._info_cmd_helper(cmd, node))
except aerospike.exception.ServerError as e: # unstable-cluster is returned in form of Exception
if 'unstable-cluster' in e.msg:
return False
raise e
if len(cluster_key) == 1:
return True
return False
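The hunk above collects each node's cluster key into a set and treats the cluster as stable only when every node reports the same key, while a server error containing `unstable-cluster` short-circuits to `False` instead of failing. A self-contained sketch of that logic (`ServerError` here stands in for `aerospike.exception.ServerError`):

```python
# Sketch of the cluster-stability check: gather every node's cluster key
# into a set; exactly one distinct key means all nodes agree. A server
# response signalling 'unstable-cluster' means "not stable yet", not an error.

class ServerError(Exception):
    # Stand-in for aerospike.exception.ServerError, which carries .msg.
    def __init__(self, msg):
        super().__init__(msg)
        self.msg = msg

def cluster_stable(node_key_getters):
    cluster_key = set()
    for get_key in node_key_getters:
        try:
            cluster_key.add(get_key())
        except ServerError as e:
            if 'unstable-cluster' in e.msg:
                return False  # cluster still converging; retry later
            raise
    return len(cluster_key) == 1
```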


@@ -161,6 +161,15 @@ options:
type: bool
default: yes
version_added: '0.2.0'
usage_on_types:
description:
- When adding default privileges, the module always implicitly adds ``USAGE ON TYPES``.
- To avoid this behavior, set I(usage_on_types) to C(no).
- Added to preserve backwards compatibility.
- Used only when adding default privileges, ignored otherwise.
type: bool
default: yes
version_added: '1.2.0'
notes:
- Parameters that accept comma separated lists (I(privs), I(objs), I(roles))
@@ -169,6 +178,7 @@ notes:
C(present) and I(grant_option) to C(no) (see examples).
- Note that when revoking privileges from a role R, this role may still have
access via privileges granted to any role R is a member of including C(PUBLIC).
- Note that when you use C(PUBLIC) role, the module always reports that the state has been changed.
- Note that when revoking privileges from a role R, you do so as the user
specified via I(login). If R has been granted the same privileges by
another user also, R can still access database objects via these privileges.
@@ -504,6 +514,7 @@ class Connection(object):
self.connection = psycopg2.connect(**kw)
self.cursor = self.connection.cursor()
self.pg_version = self.connection.server_version
def commit(self):
self.connection.commit()
@@ -551,10 +562,15 @@ class Connection(object):
def get_all_functions_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT p.proname, oidvectortypes(p.proargtypes)
FROM pg_catalog.pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE nspname = %s"""
query = ("SELECT p.proname, oidvectortypes(p.proargtypes) "
"FROM pg_catalog.pg_proc p "
"JOIN pg_namespace n ON n.oid = p.pronamespace "
"WHERE nspname = %s")
if self.pg_version >= 110000:
query += " and p.prokind = 'f'"
self.cursor.execute(query, (schema,))
return ["%s(%s)" % (t[0], t[1]) for t in self.cursor.fetchall()]
@@ -657,7 +673,7 @@ class Connection(object):
# Manipulating privileges
def manipulate_privs(self, obj_type, privs, objs, roles, target_roles,
state, grant_option, schema_qualifier=None, fail_on_role=True):
state, grant_option, schema_qualifier=None, fail_on_role=True, usage_on_types=True):
"""Manipulate database object privileges.
:param obj_type: Type of database object to grant/revoke
@@ -779,10 +795,14 @@ class Connection(object):
.for_schema(schema_qualifier) \
.set_what(set_what) \
.for_objs(objs) \
.usage_on_types(usage_on_types) \
.build()
executed_queries.append(query)
self.cursor.execute(query)
if roles == 'PUBLIC':
return True
status_after = get_status(objs)
def nonesorted(e):
@@ -807,6 +827,7 @@ class QueryBuilder(object):
self._state = state
self._schema = None
self._objs = None
self._usage_on_types = None
self.query = []
def for_objs(self, objs):
@@ -825,6 +846,10 @@ class QueryBuilder(object):
self._for_whom = who
return self
def usage_on_types(self, usage_on_types):
self._usage_on_types = usage_on_types
return self
def as_who(self, target_roles):
self._as_who = target_roles
return self
@@ -864,12 +889,14 @@ class QueryBuilder(object):
self.query[-1] += ' WITH ADMIN OPTION;'
else:
self.query[-1] += ' WITH GRANT OPTION;'
else:
elif self._grant_option is False:
self.query[-1] += ';'
if self._obj_type == 'group':
self.query.append('REVOKE ADMIN OPTION FOR {0} FROM {1};'.format(self._set_what, self._for_whom))
elif not self._obj_type == 'default_privs':
self.query.append('REVOKE GRANT OPTION FOR {0} FROM {1};'.format(self._set_what, self._for_whom))
else:
self.query[-1] += ';'
def add_default_priv(self):
for obj in self._objs:
@@ -887,14 +914,16 @@ class QueryBuilder(object):
obj,
self._for_whom))
self.add_grant_option()
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} GRANT USAGE ON TYPES TO {2}'.format(self._as_who,
self._schema,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT USAGE ON TYPES TO {1}'.format(self._schema, self._for_whom))
if self._usage_on_types:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} GRANT USAGE ON TYPES TO {2}'.format(self._as_who,
self._schema,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT USAGE ON TYPES TO {1}'.format(self._schema, self._for_whom))
self.add_grant_option()
def build_present(self):
@@ -954,6 +983,7 @@ def main():
password=dict(default='', aliases=['login_password'], no_log=True),
fail_on_role=dict(type='bool', default=True),
trust_input=dict(type='bool', default=True),
usage_on_types=dict(type='bool', default=True),
)
module = AnsibleModule(
@@ -962,6 +992,7 @@ def main():
)
fail_on_role = module.params['fail_on_role']
usage_on_types = module.params['usage_on_types']
# Create type object as namespace for module params
p = type('Params', (), module.params)
@@ -1051,7 +1082,7 @@ def main():
objs = [obj.replace(':', ',') for obj in objs]
# roles
if p.roles == 'PUBLIC':
if p.roles.upper() == 'PUBLIC':
roles = 'PUBLIC'
else:
roles = p.roles.split(',')
@@ -1086,6 +1117,7 @@ def main():
grant_option=p.grant_option,
schema_qualifier=p.schema,
fail_on_role=fail_on_role,
usage_on_types=usage_on_types,
)
except Error as e:
@@ -1094,9 +1126,9 @@ def main():
except psycopg2.Error as e:
conn.rollback()
module.fail_json(msg=to_native(e.message))
module.fail_json(msg=to_native(e))
if module.check_mode:
if module.check_mode or not changed:
conn.rollback()
else:
conn.commit()
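The new I(usage_on_types) option gates the implicit `GRANT USAGE ON TYPES` statement that `add_default_priv` appends. A simplified, hypothetical sketch of that gating (not the module's actual QueryBuilder, and the generated SQL is illustrative):

```python
# Sketch of the usage_on_types gate: when building default privileges,
# the implicit "GRANT USAGE ON TYPES" statement is appended only when
# usage_on_types is True, preserving the old behavior by default.

def default_priv_queries(schema, grantee, privs, usage_on_types=True):
    queries = [
        'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT {1} ON TABLES TO {2};'.format(
            schema, priv, grantee)
        for priv in privs
    ]
    if usage_on_types:
        # The implicit extra grant the option can now suppress.
        queries.append(
            'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT USAGE ON TYPES TO {1};'.format(
                schema, grantee))
    return queries
```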


@@ -10,23 +10,17 @@ __metaclass__ = type
DOCUMENTATION = r'''
---
module: postgresql_user
short_description: Add or remove a user (role) from a PostgreSQL server instance
short_description: Create, alter, or remove a user (role) from a PostgreSQL server instance
description:
- Adds or removes a user (role) from a PostgreSQL server instance
- Creates, alters, or removes a user (role) from a PostgreSQL server instance
("cluster" in PostgreSQL terminology) and, optionally,
grants the user access to an existing database or tables.
- A user is a role with login privilege.
- The fundamental function of the module is to create, or delete, users from
a PostgreSQL instances. Privilege assignment, or removal, is an optional
step, which works on one database at a time. This allows for the module to
be called several times in the same module to modify the permissions on
different databases, or to grant permissions to already existing users.
- A user cannot be removed until all the privileges have been stripped from
the user. In such situation, if the module tries to remove the user it
will fail. To avoid this from happening the fail_on_user option signals
the module to try to remove the user, but if not possible keep going; the
module will report if changes happened and separately if the user was
removed or not.
- You can also use it to grant or revoke user's privileges in a particular database.
- You cannot remove a user while it still has any privileges granted to it in any database.
- Set I(fail_on_user) to C(no) to make the module ignore failures when trying to remove a user.
In this case, the module reports if changes happened as usual and separately reports
whether the user has been removed or not.
options:
name:
description:
@@ -39,33 +33,33 @@ options:
description:
- Set the user's password, before 1.4 this was required.
- Password can be passed unhashed or hashed (MD5-hashed).
- Unhashed password will automatically be hashed when saved into the
database if C(encrypted) parameter is set, otherwise it will be save in
- An unhashed password is automatically hashed when saved into the
database if I(encrypted) is set, otherwise it is saved in
plain text format.
- When passing an MD5-hashed password it must be generated with the format
- When passing an MD5-hashed password, you must generate it with the format
C('str["md5"] + md5[ password + username ]'), resulting in a total of
35 characters. An easy way to do this is C(echo "md5$(echo -n
'verysecretpasswordJOE' | md5sum | awk '{print $1}')").
- Note that if the provided password string is already in MD5-hashed
format, then it is used as-is, regardless of C(encrypted) parameter.
format, then it is used as-is, regardless of I(encrypted) option.
type: str
db:
description:
- Name of database to connect to and where user's permissions will be granted.
- Name of database to connect to and where user's permissions are granted.
type: str
aliases:
- login_db
fail_on_user:
description:
- If C(yes), fail when user (role) can't be removed. Otherwise just log and continue.
default: 'yes'
- If C(yes), fails when the user (role) cannot be removed. Otherwise just log and continue.
default: yes
type: bool
aliases:
- fail_on_role
priv:
description:
- "Slash-separated PostgreSQL privileges string: C(priv1/priv2), where
privileges can be defined for database ( allowed options - 'CREATE',
you can define the user's privileges for the database ( allowed options - 'CREATE',
'CONNECT', 'TEMPORARY', 'TEMP', 'ALL'. For example C(CONNECT) ) or
for table ( allowed options - 'SELECT', 'INSERT', 'UPDATE', 'DELETE',
'TRUNCATE', 'REFERENCES', 'TRIGGER', 'ALL'. For example
@@ -82,9 +76,10 @@ options:
'[NO]INHERIT', '[NO]LOGIN', '[NO]REPLICATION', '[NO]BYPASSRLS' ]
session_role:
description:
- Switch to session_role after connecting.
- The specified session_role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though the session_role were the one that had logged in originally.
- Switch to session role after connecting.
- The specified session role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though the session role
were the one that had logged in originally.
type: str
state:
description:
@@ -95,24 +90,26 @@ options:
encrypted:
description:
- Whether the password is stored hashed in the database.
- Passwords can be passed already hashed or unhashed, and postgresql
ensures the stored password is hashed when C(encrypted) is set.
- "Note: Postgresql 10 and newer doesn't support unhashed passwords."
- You can specify an unhashed password, and PostgreSQL ensures
the stored password is hashed when I(encrypted=yes) is set.
If you specify a hashed password, the module uses it as-is,
regardless of the setting of I(encrypted).
- "Note: Postgresql 10 and newer does not support unhashed passwords."
- Previous to Ansible 2.6, this was C(no) by default.
default: 'yes'
default: yes
type: bool
expires:
description:
- The date at which the user's password is to expire.
- If set to C('infinity'), user's password never expire.
- Note that this value should be a valid SQL date and time type.
- If set to C('infinity'), user's password never expires.
- Note that this value must be a valid SQL date and time type.
type: str
no_password_changes:
description:
- If C(yes), don't inspect database for password changes. Effective when
C(pg_authid) is not accessible (such as AWS RDS). Otherwise, make
password changes as necessary.
default: 'no'
- If C(yes), does not inspect the database for password changes.
Useful when C(pg_authid) is not accessible (such as in AWS RDS).
Otherwise, makes password changes as necessary.
default: no
type: bool
conn_limit:
description:
@@ -120,7 +117,7 @@ options:
type: int
ssl_mode:
description:
- Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server.
- Determines how an SSL session is negotiated with the server.
- See U(https://www.postgresql.org/docs/current/static/libpq-ssl.html) for more information on the modes.
- Default of C(prefer) matches libpq default.
type: str
@@ -129,34 +126,37 @@ options:
ca_cert:
description:
- Specifies the name of a file containing SSL certificate authority (CA) certificate(s).
- If the file exists, the server's certificate will be verified to be signed by one of these authorities.
- If the file exists, verifies that the server's certificate is signed by one of these authorities.
type: str
aliases: [ ssl_rootcert ]
groups:
description:
- The list of groups (roles) that need to be granted to the user.
- The list of groups (roles) that you want to grant to the user.
type: list
elements: str
comment:
description:
- Add a comment on the user (equal to the COMMENT ON ROLE statement result).
- Adds a comment on the user (equivalent to the C(COMMENT ON ROLE) statement).
type: str
version_added: '0.2.0'
trust_input:
description:
- If C(no), check whether values of parameters I(name), I(password), I(privs), I(expires),
- If C(no), checks whether values of options I(name), I(password), I(privs), I(expires),
I(role_attr_flags), I(groups), I(comment), I(session_role) are potentially dangerous.
- It makes sense to use C(yes) only when SQL injections via the parameters are possible.
- It makes sense to use C(yes) only when SQL injections through the options are possible.
type: bool
default: yes
version_added: '0.2.0'
notes:
- The module creates a user (role) with login privilege by default.
Use NOLOGIN role_attr_flags to change this behaviour.
- If you specify PUBLIC as the user (role), then the privilege changes will apply to all users (roles).
You may not specify password or role_attr_flags when the PUBLIC user is specified.
Use C(NOLOGIN) I(role_attr_flags) to change this behaviour.
- If you specify C(PUBLIC) as the user (role), then the privilege changes apply to all users (roles).
You may not specify password or role_attr_flags when the C(PUBLIC) user is specified.
- SCRAM-SHA-256-hashed passwords (SASL Authentication) require PostgreSQL version 10 or newer.
On the previous versions the whole hashed string will be used as a password.
On the previous versions the whole hashed string is used as a password.
- 'Working with SCRAM-SHA-256-hashed passwords, be sure you use the I(environment:) variable
C(PGOPTIONS: "-c password_encryption=scram-sha-256") (see the provided example).'
- Supports ``check_mode``.
seealso:
- module: community.general.postgresql_privs
- module: community.general.postgresql_membership
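The MD5 password format described in the I(password) option above (`str["md5"] + md5[password + username]`, 35 characters total) can be reproduced with Python's standard library:

```python
import hashlib

def pg_md5_password(password, username):
    # PostgreSQL's legacy MD5 format: the literal prefix "md5" followed by
    # the hex digest of md5(password || username) -- 3 + 32 = 35 characters.
    digest = hashlib.md5((password + username).encode('utf-8')).hexdigest()
    return 'md5' + digest
```

This matches the shell one-liner from the docs, `echo "md5$(echo -n 'verysecretpasswordJOE' | md5sum | awk '{print $1}')"`.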


@@ -411,8 +411,10 @@ def xpath_matches(tree, xpath, namespaces):
def delete_xpath_target(module, tree, xpath, namespaces):
""" Delete an attribute or element from a tree """
changed = False
try:
for result in tree.xpath(xpath, namespaces=namespaces):
changed = True
# Get the xpath for this result
if is_attribute(tree, xpath, namespaces):
# Delete an attribute
@@ -429,7 +431,7 @@ def delete_xpath_target(module, tree, xpath, namespaces):
except Exception as e:
module.fail_json(msg="Couldn't delete xpath target: %s (%s)" % (xpath, e))
else:
finish(module, tree, xpath, namespaces, changed=True)
finish(module, tree, xpath, namespaces, changed=changed)
def replace_children_of(children, match):
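The fix above makes the module report `changed` only when the XPath actually matched something, instead of unconditionally passing `changed=True` to `finish()`. The same pattern, stripped of the lxml specifics (an illustrative stand-in, not the module code):

```python
# Sketch of the changed-flag fix: only report a change if the match
# predicate actually selected something to delete.

def delete_matching(items, predicate):
    changed = False
    kept = []
    for item in items:
        if predicate(item):
            changed = True  # at least one deletion happened
        else:
            kept.append(item)
    return kept, changed
```

With an empty match set this now correctly reports no change, which matters for idempotence and check mode.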


@@ -0,0 +1 @@
./source_control/gitlab/gitlab_group_members.py


@@ -0,0 +1 @@
source_control/gitlab/gitlab_group_variable.py


@@ -90,6 +90,12 @@ options:
- Default home directory of the user.
type: str
version_added: '0.2.0'
userauthtype:
description:
- The authentication type to use for the user.
choices: ["password", "radius", "otp", "pkinit", "hardened"]
type: str
version_added: '1.2.0'
extends_documentation_fragment:
- community.general.ipa.documentation
@@ -139,6 +145,15 @@ EXAMPLES = r'''
ipa_user: admin
ipa_pass: topsecret
update_password: on_create
- name: Ensure pinky is present and using one time password authentication
community.general.ipa_user:
name: pinky
state: present
userauthtype: otp
ipa_host: ipa.example.com
ipa_user: admin
ipa_pass: topsecret
'''
RETURN = r'''
@@ -182,7 +197,8 @@ class UserIPAClient(IPAClient):
def get_user_dict(displayname=None, givenname=None, krbpasswordexpiration=None, loginshell=None,
mail=None, nsaccountlock=False, sn=None, sshpubkey=None, telephonenumber=None,
title=None, userpassword=None, gidnumber=None, uidnumber=None, homedirectory=None):
title=None, userpassword=None, gidnumber=None, uidnumber=None, homedirectory=None,
userauthtype=None):
user = {}
if displayname is not None:
user['displayname'] = displayname
@@ -211,6 +227,8 @@ def get_user_dict(displayname=None, givenname=None, krbpasswordexpiration=None,
user['uidnumber'] = uidnumber
if homedirectory is not None:
user['homedirectory'] = homedirectory
if userauthtype is not None:
user['ipauserauthtype'] = userauthtype
return user
@@ -293,7 +311,8 @@ def ensure(module, client):
telephonenumber=module.params['telephonenumber'], title=module.params['title'],
userpassword=module.params['password'],
gidnumber=module.params.get('gidnumber'), uidnumber=module.params.get('uidnumber'),
homedirectory=module.params.get('homedirectory'))
homedirectory=module.params.get('homedirectory'),
userauthtype=module.params.get('userauthtype'))
update_password = module.params.get('update_password')
ipa_user = client.user_find(name=name)
@@ -340,7 +359,9 @@ def main():
choices=['present', 'absent', 'enabled', 'disabled']),
telephonenumber=dict(type='list', elements='str'),
title=dict(type='str'),
homedirectory=dict(type='str'))
homedirectory=dict(type='str'),
userauthtype=dict(type='str',
choices=['password', 'radius', 'otp', 'pkinit', 'hardened']))
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)


@@ -22,10 +22,10 @@ description:
- All actions require the I(host) parameter to be given explicitly. In playbooks you can use the C({{inventory_hostname}}) variable to refer
to the host the playbook is currently running on.
- You can specify multiple services at once by separating them with commas, e.g., C(services=httpd,nfs,puppet).
- When specifying what service to handle there is a special service value, I(host), which will handle alerts/downtime for the I(host itself),
- When specifying what service to handle there is a special service value, I(host), which will handle alerts/downtime/acknowledge for the I(host itself),
e.g., C(service=host). This keyword may not be given with other services at the same time.
I(Setting alerts/downtime for a host does not affect alerts/downtime for any of the services running on it.) To schedule downtime for all
services on particular host use keyword "all", e.g., C(service=all).
I(Setting alerts/downtime/acknowledge for a host does not affect alerts/downtime/acknowledge for any of the services running on it.)
To schedule downtime for all services on particular host use keyword "all", e.g., C(service=all).
- When using the C(nagios) module you will need to specify your Nagios server using the C(delegate_to) parameter.
options:
action:
@@ -33,10 +33,11 @@ options:
- Action to take.
- servicegroup options were added in 2.0.
- delete_downtime options were added in 2.2.
- The C(acknowledge) and C(forced_check) actions were added in community.general 1.2.0.
required: true
choices: [ "downtime", "delete_downtime", "enable_alerts", "disable_alerts", "silence", "unsilence",
"silence_nagios", "unsilence_nagios", "command", "servicegroup_service_downtime",
"servicegroup_host_downtime" ]
"servicegroup_host_downtime", "acknowledge", "forced_check" ]
host:
description:
- Host to operate on in Nagios.
@@ -48,11 +49,11 @@ options:
author:
description:
- Author to leave downtime comments as.
Only usable with the C(downtime) action.
Only usable with the C(downtime) and C(acknowledge) actions.
default: Ansible
comment:
description:
- Comment for C(downtime) action.
- Comment for the C(downtime) and C(acknowledge) actions.
default: Scheduling downtime
start:
description:
@@ -68,7 +69,7 @@ options:
description:
- What to manage downtime/alerts for. Separate multiple services with commas.
C(service) is an alias for C(services).
B(Required) option when using the C(downtime), C(enable_alerts), and C(disable_alerts) actions.
B(Required) option when using the C(downtime), C(acknowledge), C(forced_check), C(enable_alerts), and C(disable_alerts) actions.
aliases: [ "service" ]
required: true
servicegroup:
@@ -156,6 +157,44 @@ EXAMPLES = '''
service: host
comment: Planned maintenance
- name: Acknowledge a host with a particular comment
community.general.nagios:
action: acknowledge
service: host
host: '{{ inventory_hostname }}'
comment: 'power outage - see casenr 12345'
- name: Acknowledge an active service problem for the httpd service with a particular comment
community.general.nagios:
action: acknowledge
service: httpd
host: '{{ inventory_hostname }}'
comment: 'service crashed - see casenr 12345'
- name: Reset a passive service check for snmp trap
community.general.nagios:
action: forced_check
service: snmp
host: '{{ inventory_hostname }}'
- name: Force an active service check for the httpd service
community.general.nagios:
action: forced_check
service: httpd
host: '{{ inventory_hostname }}'
- name: Force an active service check for all services of a particular host
community.general.nagios:
action: forced_check
service: all
host: '{{ inventory_hostname }}'
- name: Force an active service check for a particular host
community.general.nagios:
action: forced_check
service: host
host: '{{ inventory_hostname }}'
- name: Enable SMART disk alerts
community.general.nagios:
action: enable_alerts
@@ -256,6 +295,8 @@ def main():
'command',
'servicegroup_host_downtime',
'servicegroup_service_downtime',
'acknowledge',
'forced_check',
]
module = AnsibleModule(
@@ -284,6 +325,7 @@ def main():
##################################################################
# Required args per action:
# downtime = (minutes, service, host)
# acknowledge = (service, host)
# (un)silence = (host)
# (enable/disable)_alerts = (service, host)
# command = command
@@ -322,6 +364,18 @@ def main():
if action in ['command']:
if not command:
module.fail_json(msg='no command passed for command action')
######################################################################
if action == 'acknowledge':
# Make sure there's an actual service selected
if not services:
module.fail_json(msg='no service selected to acknowledge')
##################################################################
if action == 'forced_check':
# Make sure there's an actual service selected
if not services:
module.fail_json(msg='no service selected to check')
##################################################################
if not cmdfile:
module.fail_json(msg='unable to locate nagios.cfg')
@@ -358,7 +412,10 @@ class Nagios(object):
self.comment = kwargs['comment']
self.host = kwargs['host']
self.servicegroup = kwargs['servicegroup']
self.start = int(kwargs['start'])
if kwargs['start'] is not None:
self.start = int(kwargs['start'])
else:
self.start = None
self.minutes = kwargs['minutes']
self.cmdfile = kwargs['cmdfile']
self.command = kwargs['command']
@@ -448,6 +505,44 @@ class Nagios(object):
return dt_str
def _fmt_ack_str(self, cmd, host, author=None,
comment=None, svc=None, sticky=0, notify=1, persistent=0):
"""
Format an external-command acknowledge string.
cmd - Nagios command ID
host - Host to acknowledge the problem on
author - Name to file the acknowledgement as
comment - Reason for acknowledging the problem (root cause, case number, etc)
svc - Service to acknowledge, omit for a host acknowledgement
sticky - if set to 1, the acknowledgement remains until the host returns to an UP state
notify - a notification will be sent out to contacts
persistent - survive across restarts of the Nagios process
Syntax: [submitted] COMMAND;<host_name>;[<service_description>]
<sticky>;<notify>;<persistent>;<author>;<comment>
"""
entry_time = self._now()
hdr = "[%s] %s;%s;" % (entry_time, cmd, host)
if not author:
author = self.author
if not comment:
comment = self.comment
if svc is not None:
ack_args = [svc, str(sticky), str(notify), str(persistent), author, comment]
else:
# Acknowledge a host problem if no svc is specified
ack_args = [str(sticky), str(notify), str(persistent), author, comment]
ack_arg_str = ";".join(ack_args)
ack_str = hdr + ack_arg_str + "\n"
return ack_str
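The external-command line that `_fmt_ack_str` builds follows the syntax in its docstring: the bracketed submission time, the command ID, the host, an optional service, then the sticky/notify/persistent flags, author, and comment, all semicolon-separated. A standalone sketch with an explicit timestamp (the real method derives it from `self._now()` and falls back to `self.author`/`self.comment`):

```python
# Sketch of the acknowledge command formatting:
# [time] COMMAND;<host>[;<service>];<sticky>;<notify>;<persistent>;<author>;<comment>

def fmt_ack_str(cmd, host, author, comment, entry_time, svc=None,
                sticky=0, notify=1, persistent=0):
    args = [str(sticky), str(notify), str(persistent), author, comment]
    if svc is not None:
        # Service acknowledgements carry the service name right after the host.
        args.insert(0, svc)
    return '[%s] %s;%s;%s\n' % (entry_time, cmd, host, ';'.join(args))
```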
def _fmt_dt_del_str(self, cmd, host, svc=None, start=None, comment=None):
"""
Format an external-command downtime deletion string.
@@ -489,6 +584,34 @@ class Nagios(object):
return dt_del_str
def _fmt_chk_str(self, cmd, host, svc=None, start=None):
"""
Format an external-command forced host or service check string.
cmd - Nagios command ID
host - Host to run the check on
svc - Service to check
start - check time
Syntax: [submitted] COMMAND;<host_name>;[<service_description>];<check_time>
"""
entry_time = self._now()
hdr = "[%s] %s;%s;" % (entry_time, cmd, host)
if start is None:
start = entry_time + 3
if svc is None:
chk_args = [str(start)]
else:
chk_args = [svc, str(start)]
chk_arg_str = ";".join(chk_args)
chk_str = hdr + chk_arg_str + "\n"
return chk_str
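`_fmt_chk_str` follows the same external-command shape, with the scheduled check time as the last field and a default of "shortly after submission" (`entry_time + 3`). A standalone sketch with an explicit timestamp:

```python
# Sketch of the forced-check command formatting:
# [time] COMMAND;<host>[;<service>];<check_time>

def fmt_chk_str(cmd, host, entry_time, svc=None, start=None):
    if start is None:
        # Default check time: a few seconds after submission,
        # mirroring the entry_time + 3 logic above.
        start = entry_time + 3
    args = [str(start)] if svc is None else [svc, str(start)]
    return '[%s] %s;%s;%s\n' % (entry_time, cmd, host, ';'.join(args))
```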
def _fmt_notif_str(self, cmd, host=None, svc=None):
"""
Format an external-command notification string.
@@ -552,6 +675,85 @@ class Nagios(object):
dt_cmd_str = self._fmt_dt_str(cmd, host, minutes, start=start)
self._write_command(dt_cmd_str)
def acknowledge_svc_problem(self, host, services=None):
"""
This command is used to acknowledge a particular
service problem.
By acknowledging the current problem, future notifications
for the same service state are disabled
Syntax: ACKNOWLEDGE_SVC_PROBLEM;<host_name>;<service_description>;
<sticky>;<notify>;<persistent>;<author>;<comment>
"""
cmd = "ACKNOWLEDGE_SVC_PROBLEM"
if services is None:
services = []
for service in services:
ack_cmd_str = self._fmt_ack_str(cmd, host, svc=service)
self._write_command(ack_cmd_str)
def acknowledge_host_problem(self, host):
"""
This command is used to acknowledge a particular
host problem.
By acknowledging the current problem, future notifications
for the same host state are disabled
Syntax: ACKNOWLEDGE_HOST_PROBLEM;<host_name>;<sticky>;<notify>;
<persistent>;<author>;<comment>
"""
cmd = "ACKNOWLEDGE_HOST_PROBLEM"
ack_cmd_str = self._fmt_ack_str(cmd, host)
self._write_command(ack_cmd_str)
def schedule_forced_host_check(self, host):
"""
This command schedules a forced active check for a particular host.
Syntax: SCHEDULE_FORCED_HOST_CHECK;<host_name>;<check_time>
"""
cmd = "SCHEDULE_FORCED_HOST_CHECK"
chk_cmd_str = self._fmt_chk_str(cmd, host, svc=None)
self._write_command(chk_cmd_str)
def schedule_forced_host_svc_check(self, host):
"""
This command schedules a forced active check for all services
associated with a particular host.
Syntax: SCHEDULE_FORCED_HOST_SVC_CHECKS;<host_name>;<check_time>
"""
cmd = "SCHEDULE_FORCED_HOST_SVC_CHECKS"
chk_cmd_str = self._fmt_chk_str(cmd, host, svc=None)
self._write_command(chk_cmd_str)
def schedule_forced_svc_check(self, host, services=None):
"""
This command schedules a forced active check for a particular
service.
Syntax: SCHEDULE_FORCED_SVC_CHECK;<host_name>;<service_description>;<check_time>
"""
cmd = "SCHEDULE_FORCED_SVC_CHECK"
if services is None:
services = []
for service in services:
chk_cmd_str = self._fmt_chk_str(cmd, host, svc=service)
self._write_command(chk_cmd_str)
def schedule_host_svc_downtime(self, host, minutes=30, start=None):
"""
This command is used to schedule downtime for
@@ -1020,6 +1222,12 @@ class Nagios(object):
minutes=self.minutes,
start=self.start)
elif self.action == 'acknowledge':
if self.services == 'host':
self.acknowledge_host_problem(self.host)
else:
self.acknowledge_svc_problem(self.host, services=self.services)
elif self.action == 'delete_downtime':
if self.services == 'host':
self.delete_host_downtime(self.host)
@@ -1028,6 +1236,14 @@ class Nagios(object):
else:
self.delete_host_downtime(self.host, services=self.services)
elif self.action == 'forced_check':
if self.services == 'host':
self.schedule_forced_host_check(self.host)
elif self.services == 'all':
self.schedule_forced_host_svc_check(self.host)
else:
self.schedule_forced_svc_check(self.host, services=self.services)
elif self.action == "servicegroup_host_downtime":
if self.servicegroup:
self.schedule_servicegroup_host_downtime(servicegroup=self.servicegroup, minutes=self.minutes, start=self.start)


@@ -1760,6 +1760,7 @@ def main():
),
supports_check_mode=True,
)
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
if not HAVE_DBUS:
module.fail_json(msg=missing_required_lib('dbus'), exception=DBUS_IMP_ERR)


@@ -1,6 +1,7 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2012, Jim Richardson <weaselkeeper@gmail.com>
# Copyright (c) 2019, Bernd Arnold <wopfel@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
@@ -38,8 +39,15 @@ options:
description:
- Message priority (see U(https://pushover.net) for details).
required: false
device:
description:
- The device the message should be sent to. Multiple devices can be specified, separated by commas.
required: false
version_added: 1.2.0
author: "Jim Richardson (@weaselkeeper)"
author:
- "Jim Richardson (@weaselkeeper)"
- "Bernd Arnold (@wopfel)"
'''
EXAMPLES = '''
@@ -58,6 +66,14 @@ EXAMPLES = '''
app_token: wxfdksl
user_key: baa5fe97f2c5ab3ca8f0bb59
delegate_to: localhost
- name: Send notifications via pushover.net to a specific device
community.general.pushover:
msg: '{{ inventory_hostname }} has been lost somewhere'
app_token: wxfdksl
user_key: baa5fe97f2c5ab3ca8f0bb59
device: admins-iPhone
delegate_to: localhost
'''
from ansible.module_utils.basic import AnsibleModule
@@ -74,7 +90,7 @@ class Pushover(object):
self.user = user
self.token = token
def run(self, priority, msg, title):
def run(self, priority, msg, title, device):
''' Do, whatever it is, we do. '''
url = '%s/1/messages.json' % (self.base_uri)
@@ -89,6 +105,10 @@ class Pushover(object):
options = dict(options,
title=title)
if device is not None:
options = dict(options,
device=device)
data = urlencode(options)
headers = {"Content-type": "application/x-www-form-urlencoded"}
@@ -108,12 +128,13 @@ def main():
app_token=dict(required=True, no_log=True),
user_key=dict(required=True, no_log=True),
pri=dict(required=False, default='0', choices=['-2', '-1', '0', '1', '2']),
device=dict(type='str'),
),
)
msg_object = Pushover(module, module.params['user_key'], module.params['app_token'])
try:
response = msg_object.run(module.params['pri'], module.params['msg'], module.params['title'])
response = msg_object.run(module.params['pri'], module.params['msg'], module.params['title'], module.params['device'])
except Exception:
module.fail_json(msg='Unable to send msg via pushover')
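The new I(device) option is threaded through `run()` and added to the URL-encoded POST body only when set, like the existing optional `title`. A sketch of the body construction (field names follow the Pushover API as used by the module):

```python
from urllib.parse import urlencode  # Python 3 stdlib; the module uses six.moves

def build_pushover_body(token, user, msg, priority='0', title=None, device=None):
    # Required fields first; optional fields are only included when set,
    # matching the incremental dict() pattern in run() above.
    options = {
        'user': user,
        'token': token,
        'priority': priority,
        'message': msg,
    }
    if title is not None:
        options['title'] = title
    if device is not None:
        options['device'] = device
    return urlencode(options)
```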


@@ -61,6 +61,12 @@ options:
description:
- Optional. Timestamp of parent message to thread this message. https://api.slack.com/docs/message-threading
type: str
message_id:
description:
- Optional. Message ID to edit, instead of posting a new message.
Corresponds to C(ts) in the Slack API (U(https://api.slack.com/messaging/modifying)).
type: str
version_added: 1.2.0
username:
description:
- This is the sender of the message.
@@ -204,16 +210,31 @@ EXAMPLES = """
thread_id: "{{ slack_response['ts'] }}"
color: good
msg: 'And this is my threaded response!'
- name: Send a message to be edited later on
community.general.slack:
token: thetoken/generatedby/slack
channel: '#ansible'
msg: Deploying something...
register: slack_response
- name: Edit message
community.general.slack:
token: thetoken/generatedby/slack
channel: "{{ slack_response.channel }}"
msg: Deployment complete!
message_id: "{{ slack_response.ts }}"
"""
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.urls import fetch_url
OLD_SLACK_INCOMING_WEBHOOK = 'https://%s/services/hooks/incoming-webhook?token=%s'
SLACK_INCOMING_WEBHOOK = 'https://hooks.slack.com/services/%s'
SLACK_POSTMESSAGE_WEBAPI = 'https://slack.com/api/chat.postMessage'
SLACK_UPDATEMESSAGE_WEBAPI = 'https://slack.com/api/chat.update'
SLACK_CONVERSATIONS_HISTORY_WEBAPI = 'https://slack.com/api/conversations.history'
# Escaping quotes and apostrophes to avoid ending string prematurely in ansible call.
# We do not escape other characters used as Slack metacharacters (e.g. &, <, >).
@@ -251,7 +272,7 @@ def recursive_escape_quotes(obj, keys):
def build_payload_for_slack(module, text, channel, thread_id, username, icon_url, icon_emoji, link_names,
parse, color, attachments, blocks):
parse, color, attachments, blocks, message_id):
payload = {}
if color == "normal" and text is not None:
payload = dict(text=escape_quotes(text))
@@ -259,7 +280,7 @@ def build_payload_for_slack(module, text, channel, thread_id, username, icon_url
# With a custom color we have to set the message as attachment, and explicitly turn markdown parsing on for it.
payload = dict(attachments=[dict(text=escape_quotes(text), color=color, mrkdwn_in=["text"])])
if channel is not None:
if (channel[0] == '#') or (channel[0] == '@'):
if channel.startswith(('#', '@', 'C0')):
payload['channel'] = channel
else:
payload['channel'] = '#' + channel
@@ -275,6 +296,8 @@ def build_payload_for_slack(module, text, channel, thread_id, username, icon_url
payload['link_names'] = link_names
if parse is not None:
payload['parse'] = parse
if message_id is not None:
payload['ts'] = message_id
if attachments is not None:
if 'attachments' not in payload:
@@ -309,13 +332,37 @@ def build_payload_for_slack(module, text, channel, thread_id, username, icon_url
return payload
def get_slack_message(module, domain, token, channel, ts):
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json',
'Authorization': 'Bearer ' + token
}
qs = urlencode({
'channel': channel,
'ts': ts,
'limit': 1,
'inclusive': 'true',
})
url = SLACK_CONVERSATIONS_HISTORY_WEBAPI + '?' + qs
response, info = fetch_url(module=module, url=url, headers=headers, method='GET')
if info['status'] != 200:
module.fail_json(msg="failed to get slack message")
data = module.from_json(response.read())
if len(data['messages']) < 1:
module.fail_json(msg="no messages matching ts: %s" % ts)
if len(data['messages']) > 1:
module.fail_json(msg="more than 1 message matching ts: %s" % ts)
return data['messages'][0]
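The conversations.history lookup this hunk adds can be sketched as follows (channel and ts values are made up): with limit=1 and inclusive=true, Slack returns exactly the message whose ts matches.

```python
from urllib.parse import urlencode

# A sketch of the query this hunk builds: limit=1 plus inclusive=true
# pins the history lookup to the single message with the given ts
# (channel and ts values here are made up).
qs = urlencode({
    'channel': 'C012AB3CD',
    'ts': '1605723456.000100',
    'limit': 1,
    'inclusive': 'true',
})
url = 'https://slack.com/api/conversations.history?' + qs
print(url)
```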
def do_notify_slack(module, domain, token, payload):
use_webapi = False
if token.count('/') >= 2:
# New style webhook token
slack_uri = SLACK_INCOMING_WEBHOOK % (token)
elif re.match(r'^xox[abp]-\w+-\w+$', token):
slack_uri = SLACK_POSTMESSAGE_WEBAPI
elif re.match(r'^xox[abp]-\S+$', token):
slack_uri = SLACK_UPDATEMESSAGE_WEBAPI if 'ts' in payload else SLACK_POSTMESSAGE_WEBAPI
use_webapi = True
else:
if not domain:
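The endpoint routing this hunk changes can be sketched as follows (token values are made up): a Web API token is sent to chat.update only when the payload already carries a ts, otherwise to chat.postMessage, while a token containing two slashes is treated as a webhook path.

```python
import re

# A sketch of the routing logic, using the same regex and constants as
# the diff; the token values passed in below are made up.
SLACK_INCOMING_WEBHOOK = 'https://hooks.slack.com/services/%s'
SLACK_POSTMESSAGE_WEBAPI = 'https://slack.com/api/chat.postMessage'
SLACK_UPDATEMESSAGE_WEBAPI = 'https://slack.com/api/chat.update'

def pick_endpoint(token, payload):
    if token.count('/') >= 2:
        # New style webhook token
        return SLACK_INCOMING_WEBHOOK % token
    if re.match(r'^xox[abp]-\S+$', token):
        # Web API token: update when editing an existing message
        return SLACK_UPDATEMESSAGE_WEBAPI if 'ts' in payload else SLACK_POSTMESSAGE_WEBAPI
    raise ValueError('unrecognized token format')

print(pick_endpoint('xoxb-1234-abcd', {'ts': '1605723456.000100'}))
```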
@@ -363,7 +410,9 @@ def main():
color=dict(type='str', default='normal'),
attachments=dict(type='list', required=False, default=None),
blocks=dict(type='list', elements='dict'),
)
message_id=dict(type='str', default=None),
),
supports_check_mode=True,
)
domain = module.params['domain']
@@ -379,20 +428,38 @@ def main():
color = module.params['color']
attachments = module.params['attachments']
blocks = module.params['blocks']
message_id = module.params['message_id']
color_choices = ['normal', 'good', 'warning', 'danger']
if color not in color_choices and not is_valid_hex_color(color):
module.fail_json(msg="Color value specified should be either one of %r "
"or any valid hex value with length 3 or 6." % color_choices)
changed = True
# if updating an existing message, we can check if there's anything to update
if message_id is not None:
changed = False
msg = get_slack_message(module, domain, token, channel, message_id)
for key in ('icon_url', 'icon_emoji', 'link_names', 'color', 'attachments', 'blocks'):
if msg.get(key) != module.params.get(key):
changed = True
break
# if check mode is active, we shouldn't do anything regardless.
# if changed=False, we don't need to do anything, so don't do it.
if module.check_mode or not changed:
module.exit_json(changed=changed, ts=msg['ts'], channel=msg['channel'])
elif module.check_mode:
module.exit_json(changed=changed)
payload = build_payload_for_slack(module, text, channel, thread_id, username, icon_url, icon_emoji, link_names,
parse, color, attachments, blocks)
parse, color, attachments, blocks, message_id)
slack_response = do_notify_slack(module, domain, token, payload)
if 'ok' in slack_response:
# Evaluate WebAPI response
if slack_response['ok']:
module.exit_json(changed=True, ts=slack_response['ts'], channel=slack_response['channel'],
module.exit_json(changed=changed, ts=slack_response['ts'], channel=slack_response['channel'],
api=slack_response, payload=payload)
else:
module.fail_json(msg="Slack API error", error=slack_response['error'])
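The idempotency check this diff adds can be sketched as follows (field values are made up): the fetched message's fields are compared against the requested parameters, and only a difference triggers an update.

```python
# A sketch of the changed-detection loop from the diff: iterate over the
# comparable message fields and report a change only when one differs
# (the sample dicts below are made up).
def needs_update(existing, params):
    for key in ('icon_url', 'icon_emoji', 'link_names', 'color', 'attachments', 'blocks'):
        if existing.get(key) != params.get(key):
            return True
    return False

print(needs_update({'color': 'good'}, {'color': 'good'}))    # False
print(needs_update({'color': 'good'}, {'color': 'danger'}))  # True
```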

View File

@@ -157,7 +157,7 @@ def get_installed_versions(module, remote=False):
(rc, out, err) = module.run_command(cmd, environ_update=environ, check_rc=True)
installed_versions = []
for line in out.splitlines():
match = re.match(r"\S+\s+\((.+)\)", line)
match = re.match(r"\S+\s+\((?:default: )?(.+)\)", line)
if match:
versions = match.group(1)
for version in versions.split(', '):
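The tightened regex in this hunk can be exercised as follows (the sample lines are made up, but follow the `gem list` format where bundled gems print a "default: " prefix):

```python
import re

# The new pattern makes the "default: " prefix optional, so bundled-gem
# lines parse the same as ordinary ones (sample lines are made up).
lines = [
    "rake (default: 13.0.1, 12.3.3)",
    "json (2.3.0)",
]
found = []
for line in lines:
    match = re.match(r"\S+\s+\((?:default: )?(.+)\)", line)
    if match:
        found.extend(match.group(1).split(', '))
print(found)  # ['13.0.1', '12.3.3', '2.3.0']
```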

View File

@@ -186,6 +186,7 @@ class HomebrewCask(object):
. # dots
/ # slash (for taps)
- # dashes
@ # at symbol
'''
INVALID_PATH_REGEX = _create_regex_group(VALID_PATH_CHARS)
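The effect of adding `@` to the valid path characters can be sketched as follows; the character class below is a hypothetical stand-in for what `_create_regex_group` builds, not the module's exact output:

```python
import re

# A minimal sketch, assuming a complement character class similar to the
# one _create_regex_group produces; with '@' in the valid set, versioned
# paths like node@10 no longer trip the validation.
INVALID_PATH_REGEX = re.compile(r'[^\w\s:./@-]')

print(bool(INVALID_PATH_REGEX.search('/usr/local/Caskroom/node@10')))  # False
print(bool(INVALID_PATH_REGEX.search('/usr/local/bad|path')))          # True
```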

View File

@@ -148,7 +148,7 @@ def ensure(module, state, packages, params):
else:
no_refresh = ['--no-refresh']
to_modify = filter(behaviour[state]['filter'], packages)
to_modify = list(filter(behaviour[state]['filter'], packages))
if to_modify:
rc, out, err = module.run_command(['pkg', behaviour[state]['subcommand']] + dry_run + accept_licenses + beadm + no_refresh + ['-q', '--'] + to_modify)
response['rc'] = rc

View File

@@ -1,7 +1,7 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013, Alexander Winkler <mail () winkler-alexander.de>
# Copyright: (c) 2013, Alexander Winkler <mail () winkler-alexander.de>
# based on svr4pkg by
# Boyd Adamson <boyd () boydadamson.com> (2012)
#
@@ -11,46 +11,55 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: pkgutil
short_description: Manage CSW-Packages on Solaris
short_description: OpenCSW package management on Solaris
description:
- Manages CSW packages (SVR4 format) on Solaris 10 and 11.
- These were the native packages on Solaris <= 10 and are available
as a legacy feature in Solaris 11.
- Pkgutil is an advanced packaging system, which resolves dependency on installation.
It is designed for CSW packages.
author: "Alexander Winkler (@dermute)"
- This module installs, updates and removes packages from the OpenCSW project for Solaris.
- Unlike the M(community.general.svr4pkg) module, it will resolve and download dependencies.
- See U(https://www.opencsw.org/) for more information about the project.
author:
- Alexander Winkler (@dermute)
- David Ponessa (@scathatheworm)
options:
name:
description:
- Package name, e.g. (C(CSWnrpe))
- The name of the package.
- When using C(state=latest), this can be C('*'), which updates all installed packages managed by pkgutil.
type: list
required: true
type: str
elements: str
aliases: [ pkg ]
site:
description:
- Specifies the repository path to install the package from.
- Its global definition is done in C(/etc/opt/csw/pkgutil.conf).
- The repository path to install the package from.
- Its global definition is in C(/etc/opt/csw/pkgutil.conf).
required: false
type: str
state:
description:
- Whether to install (C(present)), or remove (C(absent)) a package.
- The upgrade (C(latest)) operation will update/install the package to the latest version available.
- "Note: The module has a limitation that (C(latest)) only works for one package, not lists of them."
required: true
choices: ["present", "absent", "latest"]
- Whether to install (C(present)/C(installed)), or remove (C(absent)/C(removed)) packages.
- The upgrade (C(latest)) operation will update/install the packages to the latest version available.
type: str
required: true
choices: [ absent, installed, latest, present, removed ]
update_catalog:
description:
- If you want to refresh your catalog from the mirror, set this to (C(yes)).
required: false
default: no
- If you always want to refresh your catalog from the mirror, even when it's not stale, set this to C(yes).
type: bool
default: no
force:
description:
- To allow the update process to downgrade packages to match what is present in the repository, set this to C(yes).
- This is useful for rolling back to stable from testing, or similar operations.
type: bool
version_added: 1.2.0
notes:
- In order to check the availability of packages, the catalog cache under C(/var/opt/csw/pkgutil) may be refreshed even in check mode.
'''
EXAMPLES = '''
EXAMPLES = r'''
- name: Install a package
community.general.pkgutil:
name: CSWcommon
@@ -59,35 +68,80 @@ EXAMPLES = '''
- name: Install a package from a specific repository
community.general.pkgutil:
name: CSWnrpe
site: 'ftp://myinternal.repo/opencsw/kiel'
site: ftp://myinternal.repo/opencsw/kiel
state: latest
- name: Remove a package
community.general.pkgutil:
name: CSWtop
state: absent
- name: Install several packages
community.general.pkgutil:
name:
- CSWsudo
- CSWtop
state: present
- name: Update all packages
community.general.pkgutil:
name: '*'
state: latest
- name: Update all packages and force versions to match latest in catalog
community.general.pkgutil:
name: '*'
state: latest
force: yes
'''
RETURN = r''' # '''
from ansible.module_utils.basic import AnsibleModule
def package_installed(module, name):
cmd = ['pkginfo']
cmd.append('-q')
cmd.append(name)
rc, out, err = run_command(module, cmd)
if rc == 0:
return True
else:
return False
def packages_not_installed(module, names):
''' Check if each package is installed and return list of the ones absent '''
pkgs = []
for pkg in names:
rc, out, err = run_command(module, ['pkginfo', '-q', pkg])
if rc != 0:
pkgs.append(pkg)
return pkgs
def package_latest(module, name, site):
# Only supports one package
cmd = ['pkgutil', '-U', '--single', '-c']
def packages_installed(module, names):
''' Check if each package is installed and return list of the ones present '''
pkgs = []
for pkg in names:
if not pkg.startswith('CSW'):
continue
rc, out, err = run_command(module, ['pkginfo', '-q', pkg])
if rc == 0:
pkgs.append(pkg)
return pkgs
def packages_not_latest(module, names, site, update_catalog):
''' Check status of each package and return list of the ones with an upgrade available '''
cmd = ['pkgutil']
if update_catalog:
cmd.append('-U')
cmd.append('-c')
if site is not None:
cmd += ['-t', site]
cmd.append(name)
cmd += ['-t', site]

if names != ['*']:
cmd.extend(names)
rc, out, err = run_command(module, cmd)
# replace | tail -1 | grep -v SAME
# use -2, because splitting on \n creates an empty line
# at the end of the list
return 'SAME' in out.split('\n')[-2]
# Find packages in the catalog which are not up to date
packages = []
for line in out.split('\n')[1:-1]:
if 'catalog' not in line and 'SAME' not in line:
packages.append(line.split(' ')[0])
# Remove duplicates
return list(set(packages))
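The new parsing logic can be sketched as follows (the sample output is made up, but follows the `pkgutil -c` column layout the code assumes): skip the header and trailing empty line, drop rows marked SAME, and keep the first column of everything else.

```python
# A sketch of the packages_not_latest parsing loop; the sample pkgutil -c
# output below is made up but matches the assumed column layout.
out = """package             installed           catalog
CSWcommon           1.5,REV=2010        SAME
CSWtop              3.8,REV=2011        3.9,REV=2020
"""
packages = []
for line in out.split('\n')[1:-1]:
    if 'catalog' not in line and 'SAME' not in line:
        packages.append(line.split(' ')[0])
# Remove duplicates, as the diff does with list(set(...))
print(sorted(set(packages)))  # ['CSWtop']
```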
def run_command(module, cmd, **kwargs):
@@ -96,117 +150,129 @@ def run_command(module, cmd, **kwargs):
return module.run_command(cmd, **kwargs)
def package_install(module, state, name, site, update_catalog):
cmd = ['pkgutil', '-iy']
def package_install(module, state, pkgs, site, update_catalog, force):
cmd = ['pkgutil']
if module.check_mode:
cmd.append('-n')
cmd.append('-iy')
if update_catalog:
cmd += ['-U']
cmd.append('-U')
if site is not None:
cmd += ['-t', site]
if state == 'latest':
cmd += ['-f']
cmd.append(name)
(rc, out, err) = run_command(module, cmd)
return (rc, out, err)
cmd += ['-t', site]
if force:
cmd.append('-f')
cmd.extend(pkgs)
return run_command(module, cmd)
def package_upgrade(module, name, site, update_catalog):
cmd = ['pkgutil', '-ufy']
def package_upgrade(module, pkgs, site, update_catalog, force):
cmd = ['pkgutil']
if module.check_mode:
cmd.append('-n')
cmd.append('-uy')
if update_catalog:
cmd += ['-U']
cmd.append('-U')
if site is not None:
cmd += ['-t', site]
cmd.append(name)
(rc, out, err) = run_command(module, cmd)
return (rc, out, err)
cmd += ['-t', site]
if force:
cmd.append('-f')
cmd += pkgs
return run_command(module, cmd)
def package_uninstall(module, name):
cmd = ['pkgutil', '-ry', name]
(rc, out, err) = run_command(module, cmd)
return (rc, out, err)
def package_uninstall(module, pkgs):
cmd = ['pkgutil']
if module.check_mode:
cmd.append('-n')
cmd.append('-ry')
cmd.extend(pkgs)
return run_command(module, cmd)
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(required=True, choices=['present', 'absent', 'latest']),
site=dict(default=None),
update_catalog=dict(required=False, default=False, type='bool'),
name=dict(type='list', elements='str', required=True, aliases=['pkg']),
state=dict(type='str', required=True, choices=['absent', 'installed', 'latest', 'present', 'removed']),
site=dict(type='str'),
update_catalog=dict(type='bool', default=False),
force=dict(type='bool', default=False),
),
supports_check_mode=True
supports_check_mode=True,
)
name = module.params['name']
state = module.params['state']
site = module.params['site']
update_catalog = module.params['update_catalog']
force = module.params['force']
rc = None
out = ''
err = ''
result = {}
result['name'] = name
result['state'] = state
result = dict(
name=name,
state=state,
)
if state == 'present':
if not package_installed(module, name):
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = package_install(module, state, name, site, update_catalog)
# Stdout is normally empty but for some packages can be
# very long and is not often useful
if len(out) > 75:
out = out[:75] + '...'
if state in ['installed', 'present']:
# Fail with an explicit error when trying to "install" '*'
if name == ['*']:
module.fail_json(msg="Cannot use 'state: present' with name: '*'")
# Build list of packages that are actually not installed from the ones requested
pkgs = packages_not_installed(module, name)
# If the package list is empty then all packages are already present
if pkgs == []:
module.exit_json(changed=False)
(rc, out, err) = package_install(module, state, pkgs, site, update_catalog, force)
if rc != 0:
module.fail_json(msg=(err or out))
elif state in ['latest']:
# When using latest for *
if name == ['*']:
# Check for packages that are actually outdated
pkgs = packages_not_latest(module, name, site, update_catalog)
# If the package list comes up empty, everything is already up to date
if pkgs == []:
module.exit_json(changed=False)
# If there are packages to update, just empty the list and run the command without it
# pkgutil logic is to update all when run without package names
pkgs = []
(rc, out, err) = package_upgrade(module, pkgs, site, update_catalog, force)
if rc != 0:
if err:
msg = err
else:
msg = out
module.fail_json(msg=msg)
elif state == 'latest':
if not package_installed(module, name):
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = package_install(module, state, name, site, update_catalog)
if len(out) > 75:
out = out[:75] + '...'
if rc != 0:
if err:
msg = err
else:
msg = out
module.fail_json(msg=msg)
module.fail_json(msg=(err or out))
else:
if not package_latest(module, name, site):
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = package_upgrade(module, name, site, update_catalog)
if len(out) > 75:
out = out[:75] + '...'
if rc != 0:
if err:
msg = err
else:
msg = out
module.fail_json(msg=msg)
# Build list of packages that are either outdated or not installed
pkgs = packages_not_installed(module, name)
pkgs += packages_not_latest(module, name, site, update_catalog)
elif state == 'absent':
if package_installed(module, name):
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = package_uninstall(module, name)
if len(out) > 75:
out = out[:75] + '...'
# If the package list is empty that means all packages are installed and up to date
if pkgs == []:
module.exit_json(changed=False)
(rc, out, err) = package_upgrade(module, pkgs, site, update_catalog, force)
if rc != 0:
if err:
msg = err
else:
msg = out
module.fail_json(msg=msg)
module.fail_json(msg=(err or out))
elif state in ['absent', 'removed']:
# Build list of packages requested for removal that are actually present
pkgs = packages_installed(module, name)
# If the list is empty, no packages need to be removed
if pkgs == []:
module.exit_json(changed=False)
(rc, out, err) = package_uninstall(module, pkgs)
if rc != 0:
module.fail_json(msg=(err or out))
if rc is None:
# pkgutil was not executed because the package was already present/absent
# pkgutil was not executed because the package was already present/absent/up to date
result['changed'] = False
elif rc == 0:
result['changed'] = True
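The reporting pattern the rewrite converges on can be sketched as follows: prefer stderr for the failure message, and treat "no pkgutil command ran" (rc still None) as unchanged.

```python
# A sketch of the two reporting idioms from the rewrite: `err or out`
# for failure messages, and rc-based changed detection where rc is None
# means nothing was executed.
def failure_msg(err, out):
    return err or out

def task_changed(rc):
    return False if rc is None else rc == 0

print(failure_msg('', 'stdout text'))        # stdout text
print(task_changed(None), task_changed(0))   # False True
```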

View File

@@ -123,9 +123,19 @@ EXAMPLES = '''
runrefresh: yes
'''
import traceback
XML_IMP_ERR = None
try:
from xml.dom.minidom import parseString as parseXML
HAS_XML = True
except ImportError:
XML_IMP_ERR = traceback.format_exc()
HAS_XML = False
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
REPO_OPTS = ['alias', 'name', 'priority', 'enabled', 'autorefresh', 'gpgcheck']
@@ -143,7 +153,8 @@ def _parse_repos(module):
"""parses the output of zypper --xmlout repos and return a parse repo dictionary"""
cmd = _get_cmd('--xmlout', 'repos')
from xml.dom.minidom import parseString as parseXML
if not HAS_XML:
module.fail_json(msg=missing_required_lib("python-xml"), exception=XML_IMP_ERR)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
repos = []
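The deferred-import pattern this hunk applies is a standard Ansible module idiom: capture the traceback at import time and fail with `missing_required_lib` only when the library is actually needed, instead of importing inside `_parse_repos`.

```python
import traceback

# The deferred-import idiom from the diff: record the failure at import
# time, check HAS_XML at the point of use.
XML_IMP_ERR = None
try:
    from xml.dom.minidom import parseString as parseXML
    HAS_XML = True
except ImportError:
    XML_IMP_ERR = traceback.format_exc()
    HAS_XML = False

# When HAS_XML is True, the parser works as before (sample XML made up).
doc = parseXML('<repos><repo alias="oss" enabled="1"/></repos>')
print(doc.documentElement.tagName)  # repos
```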

View File

@@ -206,6 +206,36 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
- name: Turn system power off
community.general.redfish_command:
category: Systems
command: PowerForceOff
resource_id: 437XR1138R2
- name: Restart system power forcefully
community.general.redfish_command:
category: Systems
command: PowerForceRestart
resource_id: 437XR1138R2
- name: Shutdown system power gracefully
community.general.redfish_command:
category: Systems
command: PowerGracefulShutdown
resource_id: 437XR1138R2
- name: Turn system power on
community.general.redfish_command:
category: Systems
command: PowerOn
resource_id: 437XR1138R2
- name: Reboot system power
community.general.redfish_command:
category: Systems
command: PowerReboot
resource_id: 437XR1138R2
- name: Set one-time boot device to {{ bootdevice }}
community.general.redfish_command:
category: Systems
@@ -238,6 +268,21 @@ EXAMPLES = '''
username: "{{ username }}"
password: "{{ password }}"
- name: Set persistent boot device override
community.general.redfish_command:
category: Systems
command: EnableContinuousBootOverride
resource_id: 437XR1138R2
bootdevice: "{{ bootdevice }}"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Disable persistent boot device override
community.general.redfish_command:
category: Systems
command: DisableBootOverride
- name: Set chassis indicator LED to blink
community.general.redfish_command:
category: Chassis
@@ -424,6 +469,51 @@ EXAMPLES = '''
virtual_media:
image_url: 'http://example.com/images/SomeLinux-current.iso'
resource_id: BMC
- name: Restart manager power gracefully
community.general.redfish_command:
category: Manager
command: GracefulRestart
resource_id: BMC
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Restart manager power gracefully
community.general.redfish_command:
category: Manager
command: PowerGracefulRestart
resource_id: BMC
- name: Turn manager power off
community.general.redfish_command:
category: Manager
command: PowerForceOff
resource_id: BMC
- name: Restart manager power forcefully
community.general.redfish_command:
category: Manager
command: PowerForceRestart
resource_id: BMC
- name: Shutdown manager power gracefully
community.general.redfish_command:
category: Manager
command: PowerGracefulShutdown
resource_id: BMC
- name: Turn manager power on
community.general.redfish_command:
category: Manager
command: PowerOn
resource_id: BMC
- name: Reboot manager power
community.general.redfish_command:
category: Manager
command: PowerReboot
resource_id: BMC
'''
RETURN = '''
@@ -442,14 +532,15 @@ from ansible.module_utils._text import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["PowerOn", "PowerForceOff", "PowerForceRestart", "PowerGracefulRestart",
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot"],
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot", "EnableContinuousBootOverride", "DisableBootOverride"],
"Chassis": ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",
"UpdateUserRole", "UpdateUserPassword", "UpdateUserName",
"UpdateAccountServiceProperties"],
"Sessions": ["ClearSessions"],
"Manager": ["GracefulRestart", "ClearLogs", "VirtualMediaInsert",
"VirtualMediaEject"],
"VirtualMediaEject", "PowerOn", "PowerForceOff", "PowerForceRestart",
"PowerGracefulRestart", "PowerGracefulShutdown", "PowerReboot"],
"Update": ["SimpleUpdate"]
}
@@ -530,6 +621,13 @@ def main():
'update_creds': module.params['update_creds']
}
# Boot override options
boot_opts = {
'bootdevice': module.params['bootdevice'],
'uefi_target': module.params['uefi_target'],
'boot_next': module.params['boot_next']
}
# VirtualMedia options
virtual_media = module.params['virtual_media']
@@ -576,13 +674,17 @@ def main():
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
if "Power" in command:
if command.startswith('Power'):
result = rf_utils.manage_system_power(command)
elif command == "SetOneTimeBoot":
result = rf_utils.set_one_time_boot_device(
module.params['bootdevice'],
module.params['uefi_target'],
module.params['boot_next'])
boot_opts['override_enabled'] = 'Once'
result = rf_utils.set_boot_override(boot_opts)
elif command == "EnableContinuousBootOverride":
boot_opts['override_enabled'] = 'Continuous'
result = rf_utils.set_boot_override(boot_opts)
elif command == "DisableBootOverride":
boot_opts['override_enabled'] = 'Disabled'
result = rf_utils.set_boot_override(boot_opts)
elif category == "Chassis":
result = rf_utils._find_chassis_resource()
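The boot-override dispatch this hunk introduces can be sketched as a small mapping: the three commands differ only in the override mode handed to `set_boot_override` (the mode strings follow the Redfish BootSourceOverrideEnabled enum).

```python
# A sketch of the command-to-override-mode dispatch from the diff; the
# boot_opts values below are made up.
OVERRIDE_FOR_COMMAND = {
    'SetOneTimeBoot': 'Once',
    'EnableContinuousBootOverride': 'Continuous',
    'DisableBootOverride': 'Disabled',
}

boot_opts = {'bootdevice': 'Pxe', 'uefi_target': None, 'boot_next': None}
boot_opts['override_enabled'] = OVERRIDE_FOR_COMMAND['EnableContinuousBootOverride']
print(boot_opts['override_enabled'])  # Continuous
```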
@@ -617,8 +719,13 @@ def main():
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
# standardize on the Power* commands, but allow the legacy
# GracefulRestart command
if command == 'GracefulRestart':
result = rf_utils.restart_manager_gracefully()
command = 'PowerGracefulRestart'
if command.startswith('Power'):
result = rf_utils.manage_manager_power(command)
elif command == 'ClearLogs':
result = rf_utils.clear_logs()
elif command == 'VirtualMediaInsert':

Some files were not shown because too many files have changed in this diff Show More