Compare commits


61 Commits
1.3.0 ... 1.3.2

Author SHA1 Message Date
Felix Fontein
b56539f17e Release 1.3.2. 2021-01-04 18:26:01 +01:00
Felix Fontein
167d4bae90 Add release summary. 2021-01-04 13:36:21 +01:00
Felix Fontein
de85c11bd1 [stable-1] Add OC, hashi_vault and Google removal announcements (#1560)
* Add removal announcements.

* Remove Latin abbreviation.

* Add hashi_vault removal announcement.
2021-01-04 11:06:49 +01:00
patchback[bot]
d0731b111c Re-enable nomad tests (#1582) (#1586)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit d12951b9c7)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-01-04 09:18:37 +01:00
patchback[bot]
7cd96ef3b6 changed make parameter from --question to -q (#1574) (#1581)
* changed make parameter from --question to -q

* changelog fragment

* Update changelogs/fragments/1574-make-question.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0bd4b3cbc9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-01-03 13:55:58 +01:00
patchback[bot]
b7a44a593e fix passwordstore.py to be compatible with gopass versions (#1493) (#1580)
* Be compatible with the latest gopass versions.
`gopass show` is deprecated.

* add changelog fragment

* Update changelogs/fragments/1493-fix_passwordstore.py_to_be_compatible_with_gopass_versions.yml

Co-authored-by: Eike Waldt <git@yog.wtf>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 491b622041)

Co-authored-by: Eike Waldt <git@yeoldegrove.de>
2021-01-03 11:48:37 +00:00
patchback[bot]
c413963ecb pamd - fixed bug (#1538) (#1579)
* Fixed bug

- The module was searching back (and forward, in the ``after`` state) for lines that were not comments, assuming it would be a valid rule or an include.

* remove the line, make yamllint happy

* Update changelogs/fragments/1394-pamd-removing-comments.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 325a19d88a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-01-03 11:48:07 +00:00
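The traversal bug described in the pamd entry above can be illustrated in isolation (the helper name below is hypothetical, not the module's code): when looking for a neighbouring rule, comment lines must be skipped instead of being treated as rules.

```python
lines = [
    "# locally added comment",
    "auth required pam_env.so",
    "account required pam_unix.so",
]

def previous_rule(lines, index):
    """Walk backwards from index to the nearest non-comment line."""
    for i in range(index - 1, -1, -1):
        if not lines[i].lstrip().startswith("#"):
            return lines[i]
    return None  # nothing but comments before this line

print(previous_rule(lines, 1))  # the comment at index 0 is skipped, not matched
print(previous_rule(lines, 2))  # the real rule at index 1 is found
```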
patchback[bot]
4f7d44aa10 monit: add support for all monit services when checking process state (#1532) (#1578)
* add support for all monit service types

* ignore case when performing check

* add changelog

* Escape special characters before matching

Co-authored-by: Felix Fontein <felix@fontein.de>

* escape each element individually

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit bed1dc479f)

Co-authored-by: Graham Herceg <g.a.herceg@gmail.com>
2021-01-03 11:47:54 +00:00
patchback[bot]
56055d4f1e Remove bridge-slave from list of IP based connections (#1517) (#1577)
* Removed bridge-slave from the list of IP-based connections, since nmcli does not accept IP options for bridge-slave connections.

* Update changelogs/fragments/1517-bridge-slave-from-list-of-ip-based-connections.yml

Thanks for the tip.

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit fd741ed663)

Co-authored-by: momcilo78 <momcilo@majic.rs>
2021-01-03 12:43:56 +01:00
patchback[bot]
3fa4a9c073 Legacy Python certificate validation fixed (#470) (#1576)
* Legacy Python certificate validation fixed

* added changelog fragment

* removed blank line for sanity checks

* Update changelogs/fragments/470-spacewalk-legacy-python-certificate-validation.yaml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Update changelogs/fragments/470-spacewalk-legacy-python-certificate-validation.yaml

Co-authored-by: jpe <petz.johannes@afb.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
(cherry picked from commit df9f0741b5)

Co-authored-by: Johannes Petz <PetzJohannes@users.noreply.github.com>
2021-01-03 11:37:40 +01:00
patchback[bot]
1552bae77b syslogger - update syslog.openlog API call for older Python (#1572) (#1573)
* syslogger - update syslog.openlog API call for older Python

Fixes: #953

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>

* Update changelogs/fragments/953_syslogger.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ce83bde742)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-01-02 21:05:43 +01:00
patchback[bot]
a9cad80a36 lxc_container: update docs (#1544) (#1570)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit eacbf45632)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-01-01 17:37:11 +01:00
patchback[bot]
fc79283662 snmp_facts: doc update (#1569) (#1571)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit ba50d114d4)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-01-01 17:36:50 +01:00
patchback[bot]
0d8ea31781 Scaleway: Update documentation (#1567) (#1568)
Online SAS was rebranded as Scaleway in 2015. Updated the
inventory documentation accordingly.

Fixes: #814

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 7b529c72b3)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2021-01-01 13:44:09 +01:00
patchback[bot]
7ac14f964b updated deprecated homebrew cask commands (#1481) (#1565)
* updated deprecated homebrew cask commands

* added methods for brew version deprecation check

* added comments and changelog fragment

* added unit test for version comparison

* switch to use distutils LooseVersion for version comparison

* updated changelog message and minor refactor for building brew command based on version

* added caching logic for retrieval of brew version and updated PR changelog yaml

* Update changelogs/fragments/1481-deprecated-brew-cask-command.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/packaging/os/homebrew_cask.py

* Update plugins/modules/packaging/os/homebrew_cask.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/packaging/os/homebrew_cask.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* switch to use subprocess.check_output instead of subprocess.run

* replace subprocess with run_command

* removed unused subprocess import

* removed error handling logic to depend on check_rc=True instead

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ed813176ce)

Co-authored-by: Jianhao Tan <jianhao@shopback.com>
2020-12-31 23:07:39 +00:00
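The homebrew_cask change above gates the command form on the installed brew version. A minimal standalone sketch of that idea (the commit uses distutils' LooseVersion; this sketch uses a plain tuple comparison, and the 2.6.0 threshold is illustrative):

```python
def parse_version(text):
    """Turn a dotted version like '2.7.2' into (2, 7, 2) for comparison.

    Simplification: assumes purely numeric components.
    """
    return tuple(int(part) for part in text.split("."))

def cask_command(brew_version):
    """Pick the cask invocation style for a given Homebrew version."""
    if parse_version(brew_version) >= parse_version("2.6.0"):
        return ["brew", "install", "--cask"]   # current syntax
    return ["brew", "cask", "install"]         # deprecated syntax

print(cask_command("2.7.2"))
```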
patchback[bot]
95d725a3cc launchd: Handle deprecated APIs in plistlib (#1554) (#1563)
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6c88b69d6f)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-12-30 09:00:33 +01:00
patchback[bot]
95de8bd39d sendgrid: Update docs (#1557) (#1558)
* Updated docs
* Warn the user about the required Sendgrid Python library version, i.e. <=1.6.22

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 818cafc580)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-12-29 10:25:40 +01:00
Felix Fontein
ecbdaca971 Backport of 5eef093e99 (#1548) 2020-12-27 15:50:21 +01:00
patchback[bot]
54754f7e81 Add hnakamur to ignore list for LXD modules (see #1543). (#1545) (#1546)
(cherry picked from commit 09e2699d1c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-27 14:47:40 +01:00
patchback[bot]
bd15741647 jira - some improvements to the module (#1536) (#1547)
* Some improvements to the module

- Fixed examples documentation ().
- Module no longer incorrectly reports change for information gathering operations ().
- Replaced custom parameter validation with ``required_if`` ().
- Added the traceback output to ``fail_json()`` calls deriving from exceptions ().

* added PR URL to changelog frag

* Update changelogs/fragments/jira_improvements.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* mentioned the issue required for transition in the changelog

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 5016f402a5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-12-27 13:47:33 +00:00
Felix Fontein
fa05ca3f63 Backport of 117f132213 (#1542) 2020-12-26 18:30:07 +00:00
patchback[bot]
29992f1fbf nios_member: fix nios api member_normalize error with python 3 (#1527) (#1534)
* nios_member: fix nios api member_normalize error with python 3

Force a copy of the keys to allow changes during iteration.

* Update - add changelog fragment

* Update - add changelog fragment

* Update changelogs/fragments/1527-fix-nios-api-member-normalize.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c63f3f9956)

Co-authored-by: neatherweb <35084494+neatherweb@users.noreply.github.com>
2020-12-23 08:49:29 +01:00
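The nios_member fix above is an instance of a general Python 3 pitfall: `dict.keys()` is a live view, so deleting entries while iterating over it raises `RuntimeError`; iterating over a copy of the keys makes the mutation safe. A standalone illustration (the data is made up, not Infoblox API output):

```python
def drop_none(d):
    """Remove keys whose value is None, mutating d safely."""
    for key in list(d):     # list(...) copies the keys up front
        if d[key] is None:
            del d[key]
    return d

print(drop_none({"host_name": "member01", "vip_setting": None}))
```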
patchback[bot]
34ab07865f Ensured `changed returns False`. (#1530) (#1531)
* Ensured ``changed`` returns ``False``.

- Added small improvement on the ``_load_scope()`` method.

* yamllint caught it

* Rephrased changelog fragment

(cherry picked from commit 1faf8ef08b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-12-22 16:15:50 +01:00
patchback[bot]
5fa1fc65ca YAML callback: do not remove non-ASCII Unicode from multi-line string output (#1522) (#1529)
* Do not remove non-ASCII Unicode from multi-line string output.

* Added basic tests.

* Add Unicode test.

* Simplify tests, avoid later Jinja features.

* Refactor.

* Make the diy tests use the callback test framework as well.

* Remove color codes.

* Work around stable-2.9 bug.

* Simplify again.

(cherry picked from commit 0a7ed3b019)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-22 09:59:57 +00:00
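The class of bug fixed in the YAML callback above can be shown in isolation (this is an illustration, not the callback's actual code): filtering output against `string.printable`, which is ASCII-only, silently drops non-ASCII characters from multi-line strings.

```python
import string

text = "héllo wörld\nsecond line"
# ASCII-only filter: every non-ASCII character is silently removed.
ascii_only = "".join(ch for ch in text if ch in string.printable)
print(ascii_only)
```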
Felix Fontein
c0bb56c454 Next release will be 1.3.2. 2020-12-21 14:57:57 +01:00
Felix Fontein
332ba8166c Release 1.3.1. 2020-12-21 14:25:46 +01:00
Felix Fontein
725450e57a Add release summary. 2020-12-21 14:24:51 +01:00
patchback[bot]
f4311e08aa Some adjustments/improvements (#1516) (#1520)
* Some adjustments/improvements

- Added doc details for parameters ``description`` and ``objectClass``
- Added type details to argument_spec of parameters ``description`` and ``objectClass``.
- Removed unused import
- Simplified logic of ``LdapEntry._load_attrs()``
- Replaced parameter validation test with ``required_if``.

* Added changelog frag

(cherry picked from commit 5ee5c004b4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-12-21 14:20:35 +01:00
patchback[bot]
9e7b067904 Raises error for non-existent repo path (#1512) (#1518)
* Raises error for non-existent repo path, requires Ansible 2.10.4 or higher.

* Changes from suggestions in the PR

* Suggestion from PR

* Update changelogs/fragments/630-git_config-handling-invalid-dir.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit e9dafb3467)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-12-21 13:11:45 +01:00
patchback[bot]
d29db3ecf9 saltstack: fix put_file to preserve checksum (#1472) (#1514)
* saltstack: fix put_file to preserve checksum

Use hashutil.base64_decodefile to ensure that the file checksum
is preserved, since file.write only supports text files.

Signed-off-by: Zac Medico <zmedico@gmail.com>

* Update changelogs/fragments/1472-saltstack-fix-put_file-to-preserve-checksum.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 47b940fc63)

Co-authored-by: Zac Medico <zmedico@gmail.com>
2020-12-19 18:55:57 +00:00
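Why base64 preserves the checksum in the saltstack fix above: a text-only write can mangle raw bytes, while base64-encoding before transfer and decoding on arrival is a pure-ASCII round trip that reproduces the exact bytes. A standalone illustration (not Salt's hashutil API):

```python
import base64
import hashlib

payload = bytes(range(256))                          # arbitrary binary data
encoded = base64.b64encode(payload).decode("ascii")  # safe to ship as text
restored = base64.b64decode(encoded)                 # decoded on the target

# The round trip is byte-exact, so the checksum is preserved.
print(hashlib.sha256(restored).hexdigest() == hashlib.sha256(payload).hexdigest())
```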
patchback[bot]
4aba7d5b87 jira: Provide useful error message to user (#1509) (#1513)
* Code refactor
* Handle exception
* Provide useful error message to user when exception is raised

Fixes: #1504

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit eb79c14e9c)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-12-19 18:11:11 +01:00
Felix Fontein
88d00c32db Use stable-1 branch for AZP CI badge. 2020-12-18 17:26:10 +01:00
Felix Fontein
f1e1b46ce2 Use AZP badge instead of Shippable badge for CI.
(cherry picked from commit 1ed5a36a81)
2020-12-18 17:24:42 +01:00
John R Barker
c4256d8674 [stable-1] AZP Bootstrap (#1505)
* AZP Bootstrap

(cherry picked from commit 33126b7267)

* AZP: Correct Cloud jobs

(cherry picked from commit 2fea31b292377e17ba5447dcb79c4b753a6fad58)

* Fix AZP CI (#1508)

* Fix 2.9/2.10 cloud

* Fix splunk callback tests.

* ansible_virtualization_type on AZP can be one of container/containerd instead of docker for dockerized tests.

* Disable nomad tests.

* Work around AZP bugs.

(cherry picked from commit dd55c3c3bb)

* Run tests on all groups.

* Reduce 2.9 coverage to decrease large number of jobs.

* Try to fix test.

* Revert "Try to fix test."

This reverts commit 23f51451c6.

* Other target selection for 2.9.

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-18 16:07:09 +00:00
patchback[bot]
0bfed46136 bitbucket_pipeline_variable: Change pagination logic (#1498) (#1502)
Bitbucket's pagination for pipeline variables is flawed.
Refactor bitbucket_pipeline_variable code to correct this logic.

Fixes: #1425

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 4c14df6d88)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-12-17 11:12:39 +01:00
patchback[bot]
a04912dec0 [PR #645/c3ef9bf6 backport][stable-1] cobbler: Add Python 3 support (#1499)
* cobbler: Add Python 3 support (#645)


(cherry picked from commit c3ef9bf668)

* Fix changelog fragment.

(cherry picked from commit d495d3969b)

Co-authored-by: Dag Wieers <dag@wieers.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-16 23:04:23 +01:00
patchback[bot]
7f92aa0854 Icinga doc fix (#1495) (#1496)
* Document it is a dictionary to reduce confusion.

* Add variable example

(cherry picked from commit 757427cadf)

Co-authored-by: Erinn Looney-Triggs <erinn@users.noreply.github.com>
2020-12-16 08:06:43 +01:00
Felix Fontein
a16164cb72 Prepare docker tests for AZP. (#1482)
Backport of important parts of https://github.com/ansible-collections/community.docker/pull/48 to stable-1.
2020-12-15 20:04:09 +00:00
patchback[bot]
3960153f70 fix so module nios_host_record can remove aliases (#1470) (#1490)
* fix for https://github.com/ansible-collections/community.general/issues/1335

* added changelog fragment

* Update changelogs/fragments/nios_host_record-fix-aliases-removal.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* extend changelog to specify CNAMES

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 496be77a2b)

Co-authored-by: Pablo Escobar Lopez <pescobar001@gmail.com>
2020-12-15 20:49:52 +01:00
Felix Fontein
6d4760eb20 Fix docker CI. (#1494)
Backport of ansible-collections/community.docker#50 to stable-1.
2020-12-15 20:20:43 +01:00
patchback[bot]
777a741d4d keycloak: Provide meaningful error message to user (#1487) (#1489)
When the user provides an auth URL value that does not start with the
http or https protocol scheme, provide a meaningful error message
stating so.

Fixes: #331

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit f37eb12580)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-12-15 06:39:47 +01:00
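The keycloak validation above amounts to a simple scheme check done up front, so the failure is clear instead of surfacing later as an obscure connection error. A minimal sketch (the function name and message wording are illustrative, not the module's code):

```python
def check_auth_url(url):
    """Fail early when the auth URL lacks an http/https scheme."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("auth URL must start with http:// or https://")
    return url

print(check_auth_url("https://sso.example.com/auth"))
```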
patchback[bot]
aaf42f3646 Fix property name typo in get_memory_inventory() (#1484) (#1486)
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 51dfc1f288)

Co-authored-by: Bill Dodd <billdodd@gmail.com>
2020-12-14 07:57:07 +01:00
patchback[bot]
c167ac10e0 xfconf: add return values and expand test coverage (#1419) (#1479)
* xfconf: add return values and expand test coverage

* fix pep8

* fix pylint

* fix returns yaml docs

* Add changelog fragment

* revert docs for `returned`

* Update changelogs/fragments/1419-xfconf-return-values.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/system/xfconf.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* update return values to raw for scalar/lists

* another doc tweak: None -> none

* Break newline for pep8

* Fix merge mistake

* Back to list of strings

* fix yaml syntax

* Fall back to old way, deprecate returns, add ignores for errors

* add a note about deprecating facts

* Add deprecation messages and fix docstring error

* remove deprecation of return values.

* Update plugins/modules/system/xfconf.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* drop the deprecation message too

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 8e53b3df6f)

Co-authored-by: Matthew Campbell <calvinmc@gmail.com>
2020-12-12 15:29:20 +01:00
Felix Fontein
154d8a313c [stable-1] Fix docker_image tests (#1476)
* Backport of https://github.com/ansible-collections/community.docker/pull/47 to stable-1.

* Also fix old-options.
2020-12-12 08:33:09 +01:00
patchback[bot]
b76492687b BOTMETA.yml: Add a new maintainer of osx_defaults module (#1469) (#1471)
(cherry picked from commit 8d9fd52d3d)

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-12-11 06:44:28 +01:00
Felix Fontein
a118bb8d05 Backport of https://github.com/ansible-collections/community.docker/pull/43 (518e99411a9ce47c249a045b346cf28b63b512e2). (#1468) 2020-12-09 11:05:30 +03:00
patchback[bot]
d2b1df49c1 Bugfix: Fix parsing array values from osx_defaults (#358) (#1467)
* Bugfix: Fix parsing array values in osx_defaults

Unquote values and unescape double quotes when reading array values from defaults.

* Fix fragments: fix_parsing_array_values_in_osx_defaults

Co-authored-by: Felix Fontein <felix@fontein.de>

* add test code for Bugfix: Fix parsing array values from osx_defaults

* handle spaces after the comma

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 1110e93c5d)

Co-authored-by: Kazufumi NOTO <noto.kazufumi@gmail.com>
2020-12-09 08:12:28 +01:00
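The osx_defaults array fix above boils down to cleaning each entry that `defaults read` prints: stripping the surrounding quotes and unescaping embedded `\"`. A standalone sketch of that cleanup (parsing details here are illustrative, not the module's exact code):

```python
def clean_entry(value):
    """Strip trailing commas, surrounding quotes, and unescape \\" in one entry."""
    value = value.strip().rstrip(",")
    if value.startswith('"') and value.endswith('"'):
        value = value[1:-1].replace('\\"', '"')
    return value

print([clean_entry(v) for v in ['"foo",', ' "say \\"hi\\""']])
```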
patchback[bot]
fb3085e78d Add ignore-2.10.txt entry analogous to ignore-2.11.txt entry. (#1465) (#1466)
(cherry picked from commit e1bf23d27d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-08 22:33:17 +01:00
patchback[bot]
ad163ed3af Fix new sanity errors. (#1451) (#1452)
(cherry picked from commit 4566812591)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-06 10:37:17 +01:00
patchback[bot]
15257e9a64 zypper_repository: fix broken tests (#1449) (#1450)
* Change broken repository

* Re-enable integration tests

* Fix integration test

(cherry picked from commit 65d4fe2f4f)

Co-authored-by: Amin Vakil <info@aminvakil.com>
2020-12-05 16:27:16 +01:00
patchback[bot]
c642ee9157 Fix #1435: mas : Fix "invalid literal" when no app (#1436) (#1448)
* Fix #1435: mas : Fix "invalid literal" when no app

* Add changelog fragment

* Update changelogs/fragments/1436-mas-fix-no-app-installed.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit b80854ff50)

Co-authored-by: Jean-Pierre Matsumoto <jpmat296@gmail.com>
2020-12-04 09:36:03 +01:00
patchback[bot]
7e89bc6f61 Temporarily disable zypper_repository tests. (#1445) (#1447)
(cherry picked from commit e1ca4ce1e8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-04 08:12:29 +00:00
patchback[bot]
9defd1aca1 json_query: Handle AnsibleUnicode, AnsibleUnsafeText (#1434) (#1443)
The jmespath library does not understand custom string types
such as AnsibleUnicode and AnsibleUnsafeText,
so users had to apply the ``to_json | from_json`` filter when using
functions like ``starts_with``, ``contains``, etc.
This hack lets users get rid of that filter.

Fixes: #320

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 5319437bc2)

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-12-04 08:06:09 +01:00
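The workaround being retired by the json_query change above looked like this in a playbook (variable and query names here are illustrative):

```yaml
# Before the fix: pipe through to_json | from_json so jmespath sees
# plain strings instead of AnsibleUnicode/AnsibleUnsafeText.
- debug:
    msg: "{{ users | to_json | from_json | json_query(wanted) }}"
  vars:
    wanted: "[?starts_with(name, `web`)].name"
```

With the fix, the ``to_json | from_json`` round trip is no longer needed.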
patchback[bot]
ff1a8415bd Remove adejoux as maintainer from aix_lvol. (#1432) (#1438)
See https://github.com/ansible-collections/community.general/pull/1330 for details.

(cherry picked from commit d1acf52906)

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-12-03 22:18:20 +01:00
patchback[bot]
961011891b [icinga2_host.py] Actually return codes instead of data (#335) (#1431)
* [icinga2_host.py] Actually return codes instead of data

Currently the module tries to return the `data`, which can result in a blank message instead of the code being shown.

```
 "msg": "bad return code creating host: "
```

* add changelog fragment

* Update changelogs/fragments/335-icinga2_host-return-error-code.yaml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* return code and data on fail

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: John R Barker <john@johnrbarker.com>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
Co-authored-by: Deric Crago <deric.crago@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 70ba401602)

Co-authored-by: Martin <spleefer90@gmail.com>
2020-12-02 08:17:26 +01:00
Felix Fontein
6470d3defe Tidy up validate-modules:no-default-for-required-parameter and other cases (#1423) (#1429)
* Fixed validate-modules:mutually_exclusive-unknown for plugins/modules/packaging/os/redhat_subscription.py

* fixed validation-modules for plugins/modules/cloud/lxd/lxd_container.py

* fixed validation-modules for plugins/modules/web_infrastructure/sophos_utm/utm_network_interface_address.py

* fixed validation-modules for plugins/modules/cloud/opennebula/one_host.py

* fixed validation-modules for plugins/modules/cloud/opennebula/one_image_info.py

* fixed validation-modules for plugins/modules/cloud/opennebula/one_image.py

* fixed validation-modules for plugins/modules/cloud/opennebula/one_service.py

* fixed validation-modules for plugins/modules/cloud/opennebula/one_vm.py

* fixed validation-modules for plugins/modules/net_tools/cloudflare_dns.py

* fixed validation-modules for plugins/modules/net_tools/ip_netns.py

* fixed validation-modules for plugins/modules/net_tools/ipinfoio_facts.py

* fixed validation-modules for plugins/modules/net_tools/netcup_dns.py

* fixed validation-modules for plugins/modules/remote_management/wakeonlan.py

* added types to plugins/modules/remote_management/stacki/stacki_host.py but still cannot remove ignore line

* added a couple of FIXME comments

* fixed validation-modules for plugins/modules/remote_management/manageiq/manageiq_provider.py

* fixed validation-modules for plugins/modules/notification/rocketchat.py

* fixed validation-modules for plugins/modules/monitoring/bigpanda.py

* fixed validation-modules for plugins/modules/identity/keycloak/keycloak_client.py

* fixed validation-modules for plugins/modules/identity/keycloak/keycloak_clienttemplate.py

* fixed validation-modules for plugins/modules/cloud/univention/udm_user.py

* fixed validation-modules for plugins/modules/cloud/univention/udm_group.py

* fixed validation-modules for plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py

* fixed validation-modules for plugins/modules/cloud/smartos/imgadm.py

* fixed validation-modules for plugins/modules/cloud/profitbricks/profitbricks_nic.py

* fixed validation-modules for plugins/modules/cloud/ovirt/ovirt_external_provider_facts.py

* Tidy up validate-modules ignores no-default-for-required-parameter + couple of other cases

* Added changelog frag

* fixed validation-modules for plugins/modules/cloud/centurylink/clc_alert_policy.py

* fixed validation-modules for plugins/modules/cloud/centurylink/clc_firewall_policy.py

* fixed validation-modules for plugins/modules/cloud/lxd/lxd_profile.py

* Typos and small fixes

* fixed validation-modules for plugins/modules/net_tools/ldap/ldap_passwd.py

* Typos and small fixes, part 2

* Fixes from PR comments

* Update plugins/modules/cloud/profitbricks/profitbricks_nic.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Rolled back the mutually-exclusive-unknown in redhat_subscription

* Update changelogs/fragments/1423-valmod_multiple_cases.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ae0d3cb090)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-12-01 21:13:54 +00:00
patchback[bot]
32ac93fb16 mas : Add example using PATH: /usr/local/bin (#1424) (#1427)
Use of "environment" is required when mas is installed with
Homebrew. I suppose most people use Homebrew.

(cherry picked from commit db61a899d5)

Co-authored-by: Jean-Pierre Matsumoto <jpmat296@gmail.com>
2020-12-01 20:57:09 +01:00
Felix Fontein
1dfe7963cf Tidy up validate-modules:doc-required-mismatch (#1415) (#1417)
* Tidy up validate-modules ignores doc-required-mismatch

* Tidy up validate-modules ignores doc-required-mismatch - update on 2.11

* Fixed changelog frag

* rolledback removal of parameter from cloud/smartos/vmadm.py

* removed changelog frag for the rollback

* Update plugins/modules/cloud/smartos/vmadm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Revert "removed changelog frag for the rollback"

This reverts commit 56a02ead3b.

* suggestion from PR

* yet another PR suggestion

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit b69ea1dfd9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-11-28 15:41:36 +01:00
Felix Fontein
a46fb7bcae Tidy up validate-modules:doc-choices-do-not-match-spec II: The Rebase (#1409) (#1416)
* fixed validation-modules for plugins/modules/cloud/lxc/lxc_container.py

* fixed validation-modules for plugins/modules/cloud/smartos/vmadm.py

* fixed validation-modules for plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py

* fixed validation-modules for plugins/modules/cloud/univention/udm_dns_record.py

* fixed validation-modules for plugins/modules/cloud/univention/udm_dns_zone.py

* fixed validation-modules for plugins/modules/cloud/lxc/lxc_container.py

* fixed validation-modules for plugins/modules/cloud/univention/udm_user.py

* fixed validation-modules for plugins/modules/clustering/etcd3.py

* fixed validation-modules for plugins/modules/clustering/znode.py

* fixed validation-modules for plugins/modules/remote_management/hpilo/hpilo_boot.py

* fixed validation-modules for plugins/modules/remote_management/ipmi/ipmi_boot.py

* fixed validation-modules for plugins/modules/remote_management/ipmi/ipmi_power.py

* fixed validation-modules for plugins/modules/remote_management/manageiq/manageiq_provider.py

* fixed validation-modules for plugins/modules/remote_management/stacki/stacki_host.py

* fixed validation-modules for plugins/modules/cloud/univention/udm_share.py

* Removed validate-modules:doc-choices-do-not-match-spec from ignore files

* fixed alias samba_inherit_permissions in udm_share.py

* Rolled back a couple of lines

* Removed duplicate key in docs

* Rolled back a couple of troublesome lines

* Removed no-longer necessary ignore lines

* Removed no-longer necessary ignore lines on 2.11 as well

* Removed no-longer necessary ignore lines on 2.9 this time

(cherry picked from commit cff8463882)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-11-28 12:38:48 +01:00
patchback[bot]
70a8ca6ac3 Tidy up validate-modules:doc-elements-mismatch (#1399) (#1407)
* fixed validation-modules for plugins/modules/cloud/xenserver/xenserver_guest.py

* fixed validation-modules for plugins/modules/identity/ipa/ipa_hbacrule.py

* fixed validation-modules for plugins/modules/identity/keycloak/keycloak_client.py

* fixed validation-modules for plugins/modules/identity/keycloak/keycloak_clienttemplate.py

* fixed validation-modules for plugins/modules/net_tools/nios/nios_fixed_address.py

* fixed validation-modules for plugins/modules/net_tools/nios/nios_host_record.py

* fixed validation-modules for plugins/modules/net_tools/nios/nios_member.py

* fixed validation-modules for plugins/modules/net_tools/nios/nios_network.py

* fixed validation-modules for plugins/modules/net_tools/nios/nios_nsgroup.py

* fixed validation-modules for plugins/modules/remote_management/redfish/redfish_config.py

* fixed validation-modules for plugins/modules/source_control/github/github_webhook.py

* fixed validation-modules for plugins/modules/web_infrastructure/sophos_utm/utm_proxy_exception.py

* Tidy up validate-modules ignores doc-elements-mismatch

* Added changelog frag for utm_proxy_exception

* Update changelogs/fragments/1399-fixed-wrong-elements-type.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fixed couple of missing docs

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 47c456f740)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2020-11-27 08:25:08 +01:00
Felix Fontein
17f598fdc2 Bump version for next release. 2020-11-26 14:13:09 +01:00
180 changed files with 2999 additions and 1421 deletions


@@ -0,0 +1,3 @@
## Azure Pipelines Configuration
Please see the [Documentation](https://github.com/ansible/community/wiki/Testing:-Azure-Pipelines) for more information.


@@ -0,0 +1,329 @@
trigger:
  batch: true
  branches:
    include:
      - main
      - stable-*
pr:
  autoCancel: true
  branches:
    include:
      - main
      - stable-*
schedules:
  - cron: 0 9 * * *
    displayName: Nightly
    always: true
    branches:
      include:
        - main
        - stable-*
variables:
  - name: checkoutPath
    value: ansible_collections/community/general
  - name: coverageBranches
    value: main
  - name: pipelinesCoverage
    value: coverage
  - name: entryPoint
    value: tests/utils/shippable/shippable.sh
  - name: fetchDepth
    value: 0
resources:
  containers:
    - container: default
      image: quay.io/ansible/azure-pipelines-test-container:1.7.1
pool: Standard
stages:
### Sanity
  - stage: Sanity_devel
    displayName: Sanity devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: devel/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
            - test: extra
  - stage: Sanity_2_10
    displayName: Sanity 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.10/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
  - stage: Sanity_2_9
    displayName: Sanity 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Test {0}
          testFormat: 2.9/sanity/{0}
          targets:
            - test: 1
            - test: 2
            - test: 3
            - test: 4
### Units
  - stage: Units_devel
    displayName: Units devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: devel/units/{0}/1
          targets:
            - test: 2.6
            - test: 2.7
            - test: 3.5
            - test: 3.6
            - test: 3.7
            - test: 3.8
            - test: 3.9
  - stage: Units_2_10
    displayName: Units 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.10/units/{0}/1
          targets:
            - test: 2.6
            - test: 2.7
            - test: 3.5
            - test: 3.6
            - test: 3.7
            - test: 3.8
            - test: 3.9
  - stage: Units_2_9
    displayName: Units 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.9/units/{0}/1
          targets:
            - test: 2.6
            - test: 2.7
            - test: 3.5
            - test: 3.6
            - test: 3.7
            - test: 3.8
## Remote
  - stage: Remote_devel
    displayName: Remote devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          testFormat: devel/{0}
          targets:
            - name: OS X 10.11
              test: osx/10.11
            - name: macOS 10.15
              test: macos/10.15
            - name: RHEL 7.8
              test: rhel/7.8
            - name: RHEL 8.2
              test: rhel/8.2
            - name: FreeBSD 11.1
              test: freebsd/11.1
            - name: FreeBSD 12.1
              test: freebsd/12.1
          groups:
            - 1
            - 2
            - 3
            - 4
            - 5
  - stage: Remote_2_10
    displayName: Remote 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          testFormat: 2.10/{0}
          targets:
            - name: OS X 10.11
              test: osx/10.11
            - name: RHEL 8.2
              test: rhel/8.2
            - name: FreeBSD 12.1
              test: freebsd/12.1
          groups:
            - 1
            - 2
            - 3
            - 4
            - 5
  - stage: Remote_2_9
    displayName: Remote 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          testFormat: 2.9/{0}
          targets:
            - name: RHEL 8.2
              test: rhel/8.2
            #- name: FreeBSD 12.0
            #  test: freebsd/12.0
          groups:
            - 1
            - 2
            - 3
            - 4
            - 5
### Docker
  - stage: Docker_devel
    displayName: Docker devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          testFormat: devel/linux/{0}
          targets:
            - name: CentOS 6
              test: centos6
            - name: CentOS 7
              test: centos7
            - name: CentOS 8
              test: centos8
            - name: Fedora 31
              test: fedora31
            - name: Fedora 32
              test: fedora32
            - name: openSUSE 15 py2
              test: opensuse15py2
            - name: openSUSE 15 py3
              test: opensuse15
            - name: Ubuntu 16.04
              test: ubuntu1604
            - name: Ubuntu 18.04
              test: ubuntu1804
          groups:
            - 1
            - 2
            - 3
            - 4
            - 5
  - stage: Docker_2_10
    displayName: Docker 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          testFormat: 2.10/linux/{0}
          targets:
            #- name: CentOS 8
            #  test: centos8
            - name: Fedora 32
              test: fedora32
            - name: openSUSE 15 py3
              test: opensuse15
            - name: Ubuntu 18.04
              test: ubuntu1804
          groups:
            - 1
            - 2
            - 3
            - 4
            - 5
  - stage: Docker_2_9
    displayName: Docker 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          testFormat: 2.9/linux/{0}
          targets:
            #- name: CentOS 8
            #  test: centos8
            #- name: Fedora 31
            #  test: fedora31
            #- name: openSUSE 15 py3
            #  test: opensuse15
            - name: Ubuntu 18.04
              test: ubuntu1804
          groups:
            - 1
            - 2
            - 3
            - 4
            - 5
### Cloud
  - stage: Cloud_devel
    displayName: Cloud devel
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: devel/cloud/{0}/1
          targets:
            - test: 2.7
            - test: 3.6
  - stage: Cloud_2_10
    displayName: Cloud 2.10
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.10/cloud/{0}/1
          targets:
            - test: 3.6
  - stage: Cloud_2_9
    displayName: Cloud 2.9
    dependsOn: []
    jobs:
      - template: templates/matrix.yml
        parameters:
          nameFormat: Python {0}
          testFormat: 2.9/cloud/{0}/1
          targets:
            - test: 2.7
  - stage: Summary
    condition: succeededOrFailed()
    dependsOn:
      - Sanity_devel
      - Sanity_2_9
      - Sanity_2_10
- Units_devel
- Units_2_9
- Units_2_10
- Remote_devel
- Remote_2_9
- Remote_2_10
- Docker_devel
- Docker_2_9
- Docker_2_10
- Cloud_devel
- Cloud_2_9
- Cloud_2_10
jobs:
- template: templates/coverage.yml


@@ -0,0 +1,20 @@
#!/usr/bin/env bash
# Aggregate code coverage results for later processing.
set -o pipefail -eu
agent_temp_directory="$1"
PATH="${PWD}/bin:${PATH}"
mkdir "${agent_temp_directory}/coverage/"
options=(--venv --venv-system-site-packages --color -v)
ansible-test coverage combine --export "${agent_temp_directory}/coverage/" "${options[@]}"
if ansible-test coverage analyze targets generate --help >/dev/null 2>&1; then
# Only analyze coverage if the installed version of ansible-test supports it.
# Doing so allows this script to work unmodified for multiple Ansible versions.
ansible-test coverage analyze targets generate "${agent_temp_directory}/coverage/coverage-analyze-targets.json" "${options[@]}"
fi
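The `--help` probe above is a general feature-detection pattern: instead of parsing version numbers, run the optional subcommand harmlessly and branch on its exit status. A minimal standalone sketch (the probed commands below are illustrative):

```shell
#!/usr/bin/env bash
# Feature-detect an optional subcommand by probing its --help exit status,
# mirroring how the script probes "ansible-test coverage analyze targets generate".
probe() {
    if "$@" --help >/dev/null 2>&1; then
        echo "supported"
    else
        echo "unsupported"
    fi
}

probe grep                 # present on virtually every Linux system
probe no-such-command-xyz  # deliberately missing
```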


@@ -0,0 +1,60 @@
#!/usr/bin/env python
"""
Combine coverage data from multiple jobs, keeping the data only from the most recent attempt from each job.
Coverage artifacts must be named using the format: "Coverage $(System.JobAttempt) {StableUniqueNameForEachJob}"
The recommended coverage artifact name format is: Coverage $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)
Keep in mind that Azure Pipelines does not enforce unique job display names (only names).
It is up to pipeline authors to avoid name collisions when deviating from the recommended format.
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import shutil
import sys
def main():
"""Main program entry point."""
source_directory = sys.argv[1]
if '/ansible_collections/' in os.getcwd():
output_path = "tests/output"
else:
output_path = "test/results"
destination_directory = os.path.join(output_path, 'coverage')
if not os.path.exists(destination_directory):
os.makedirs(destination_directory)
jobs = {}
count = 0
for name in os.listdir(source_directory):
match = re.search('^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$', name)
label = match.group('label')
attempt = int(match.group('attempt'))
jobs[label] = max(attempt, jobs.get(label, 0))
for label, attempt in jobs.items():
name = 'Coverage {attempt} {label}'.format(label=label, attempt=attempt)
source = os.path.join(source_directory, name)
source_files = os.listdir(source)
for source_file in source_files:
source_path = os.path.join(source, source_file)
destination_path = os.path.join(destination_directory, source_file + '.' + label)
print('"%s" -> "%s"' % (source_path, destination_path))
shutil.copyfile(source_path, destination_path)
count += 1
print('Coverage file count: %d' % count)
print('##vso[task.setVariable variable=coverageFileCount]%d' % count)
print('##vso[task.setVariable variable=outputPath]%s' % output_path)
if __name__ == '__main__':
main()
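The attempt-deduplication logic can be seen in isolation on made-up artifact names (the job labels below are illustrative, not real pipeline jobs):

```python
import re

# Made-up artifact names following the "Coverage <attempt> <label>" convention.
names = [
    'Coverage 1 Units 2.10 Python 3.6',
    'Coverage 2 Units 2.10 Python 3.6',  # a retried job: attempt 2 wins
    'Coverage 1 Remote devel RHEL 8.2',
]

# Keep only the highest attempt seen per label, as the script does.
jobs = {}
for name in names:
    match = re.search(r'^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$', name)
    label = match.group('label')
    jobs[label] = max(int(match.group('attempt')), jobs.get(label, 0))

print(jobs)  # {'Units 2.10 Python 3.6': 2, 'Remote devel RHEL 8.2': 1}
```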


@@ -0,0 +1,24 @@
#!/usr/bin/env bash
# Check the test results and set variables for use in later steps.
set -o pipefail -eu
if [[ "$PWD" =~ /ansible_collections/ ]]; then
output_path="tests/output"
else
output_path="test/results"
fi
echo "##vso[task.setVariable variable=outputPath]${output_path}"
if compgen -G "${output_path}"'/junit/*.xml' > /dev/null; then
echo "##vso[task.setVariable variable=haveTestResults]true"
fi
if compgen -G "${output_path}"'/bot/ansible-test-*' > /dev/null; then
echo "##vso[task.setVariable variable=haveBotResults]true"
fi
if compgen -G "${output_path}"'/coverage/*' > /dev/null; then
echo "##vso[task.setVariable variable=haveCoverageData]true"
fi
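`compgen -G` expands a glob and exits nonzero when nothing matches, which is why it works as a file-existence test without spawning `ls` or `find`. A self-contained sketch (paths are illustrative):

```shell
#!/usr/bin/env bash
# compgen -G fails when the glob matches no files, making it a convenient
# "does anything match this pattern?" test.
demo_dir="$(mktemp -d)"
mkdir -p "${demo_dir}/junit"
touch "${demo_dir}/junit/results.xml"

if compgen -G "${demo_dir}/junit/*.xml" > /dev/null; then
    echo "have xml"
fi
if ! compgen -G "${demo_dir}/junit/*.json" > /dev/null; then
    echo "no json"
fi
rm -r "${demo_dir}"
```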


@@ -0,0 +1,27 @@
#!/usr/bin/env bash
# Upload code coverage reports to codecov.io.
# Multiple coverage files from multiple languages are accepted and aggregated after upload.
# Python coverage, as well as PowerShell and Python stubs can all be uploaded.
set -o pipefail -eu
output_path="$1"
curl --silent --show-error https://codecov.io/bash > codecov.sh
for file in "${output_path}"/reports/coverage*.xml; do
name="${file}"
name="${name##*/}" # remove path
name="${name##coverage=}" # remove 'coverage=' prefix if present
name="${name%.xml}" # remove '.xml' suffix
bash codecov.sh \
-f "${file}" \
-n "${name}" \
-X coveragepy \
-X gcov \
-X fix \
-X search \
-X xcode \
|| echo "Failed to upload code coverage report to codecov.io: ${file}"
done
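The upload name is derived purely with bash parameter expansion; traced on a sample report path (the path is illustrative):

```shell
#!/usr/bin/env bash
# Trace the three expansions on a sample report path.
file="tests/output/reports/coverage=units-py36.xml"

name="${file}"
name="${name##*/}"          # -> coverage=units-py36.xml  (directories stripped)
name="${name##coverage=}"   # -> units-py36.xml           ('coverage=' prefix stripped)
name="${name%.xml}"         # -> units-py36               ('.xml' suffix stripped)

echo "${name}"  # units-py36
```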


@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# Generate code coverage reports for uploading to Azure Pipelines and codecov.io.
set -o pipefail -eu
PATH="${PWD}/bin:${PATH}"
if ! ansible-test --help >/dev/null 2>&1; then
# Install the devel version of ansible-test for generating code coverage reports.
# This is only used by Ansible Collections, which are typically tested against multiple Ansible versions (in separate jobs).
# Since a version of ansible-test is required that can work with the output from multiple older releases, the devel version is used.
pip install https://github.com/ansible/ansible/archive/devel.tar.gz --disable-pip-version-check
fi
ansible-test coverage xml --stub --venv --venv-system-site-packages --color -v


@@ -0,0 +1,34 @@
#!/usr/bin/env bash
# Configure the test environment and run the tests.
set -o pipefail -eu
entry_point="$1"
test="$2"
read -r -a coverage_branches <<< "$3" # space separated list of branches to run code coverage on for scheduled builds
export COMMIT_MESSAGE
export COMPLETE
export COVERAGE
export IS_PULL_REQUEST
if [ "${SYSTEM_PULLREQUEST_TARGETBRANCH:-}" ]; then
IS_PULL_REQUEST=true
COMMIT_MESSAGE=$(git log --format=%B -n 1 HEAD^2)
else
IS_PULL_REQUEST=
COMMIT_MESSAGE=$(git log --format=%B -n 1 HEAD)
fi
COMPLETE=
COVERAGE=
if [ "${BUILD_REASON}" = "Schedule" ]; then
COMPLETE=yes
if printf '%s\n' "${coverage_branches[@]}" | grep -q "^${BUILD_SOURCEBRANCHNAME}$"; then
COVERAGE=yes
fi
fi
"${entry_point}" "${test}" 2>&1 | "$(dirname "$0")/time-command.py"


@@ -0,0 +1,25 @@
#!/usr/bin/env python
"""Prepends a relative timestamp to each input line from stdin and writes it to stdout."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import time
def main():
"""Main program entry point."""
start = time.time()
sys.stdin.reconfigure(errors='surrogateescape')
sys.stdout.reconfigure(errors='surrogateescape')
for line in sys.stdin:
seconds = time.time() - start
sys.stdout.write('%02d:%02d %s' % (seconds // 60, seconds % 60, line))
sys.stdout.flush()
if __name__ == '__main__':
main()
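The `%02d:%02d` formatting splits the elapsed seconds into zero-padded minutes and seconds; on a fixed value:

```python
# The same minute:second split used by the script, on a fixed elapsed time.
seconds = 125.7
stamp = '%02d:%02d' % (seconds // 60, seconds % 60)  # %d truncates the fraction
print(stamp)  # 02:05
```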


@@ -0,0 +1,39 @@
# This template adds a job for processing code coverage data.
# It will upload results to Azure Pipelines and codecov.io.
# Use it from a job stage that completes after all other jobs have completed.
# This can be done by placing it in a separate summary stage that runs after the test stage(s) have completed.
jobs:
- job: Coverage
displayName: Code Coverage
container: default
workspace:
clean: all
steps:
- checkout: self
fetchDepth: $(fetchDepth)
path: $(checkoutPath)
- task: DownloadPipelineArtifact@2
displayName: Download Coverage Data
inputs:
path: coverage/
patterns: "Coverage */*=coverage.combined"
- bash: .azure-pipelines/scripts/combine-coverage.py coverage/
displayName: Combine Coverage Data
- bash: .azure-pipelines/scripts/report-coverage.sh
displayName: Generate Coverage Report
condition: gt(variables.coverageFileCount, 0)
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
# Azure Pipelines only accepts a single coverage data file.
# That means only Python or PowerShell coverage can be uploaded, but not both.
# Set the "pipelinesCoverage" variable to determine which type is uploaded.
# Use "coverage" for Python and "coverage-powershell" for PowerShell.
summaryFileLocation: "$(outputPath)/reports/$(pipelinesCoverage).xml"
displayName: Publish to Azure Pipelines
condition: gt(variables.coverageFileCount, 0)
- bash: .azure-pipelines/scripts/publish-codecov.sh "$(outputPath)"
displayName: Publish to codecov.io
condition: gt(variables.coverageFileCount, 0)
continueOnError: true


@@ -0,0 +1,55 @@
# This template uses the provided targets and optional groups to generate a matrix which is then passed to the test template.
# If this matrix template does not provide the required functionality, consider using the test template directly instead.
parameters:
# A required list of dictionaries, one per test target.
# Each item in the list must contain a "test" or "name" key.
# Both may be provided. If one is omitted, the other will be used.
- name: targets
type: object
# An optional list of values which will be used to multiply the targets list into a matrix.
# Values can be strings or numbers.
- name: groups
type: object
default: []
# An optional format string used to generate the job name.
# - {0} is the name of an item in the targets list.
- name: nameFormat
type: string
default: "{0}"
# An optional format string used to generate the test name.
# - {0} is the name of an item in the targets list.
- name: testFormat
type: string
default: "{0}"
# An optional format string used to add the group to the job name.
# {0} is the formatted name of an item in the targets list.
# {{1}} is the group -- be sure to include the double "{{" and "}}".
- name: nameGroupFormat
type: string
default: "{0} - {{1}}"
# An optional format string used to add the group to the test name.
# {0} is the formatted test of an item in the targets list.
# {{1}} is the group -- be sure to include the double "{{" and "}}".
- name: testGroupFormat
type: string
default: "{0}/{{1}}"
jobs:
- template: test.yml
parameters:
jobs:
- ${{ if eq(length(parameters.groups), 0) }}:
- ${{ each target in parameters.targets }}:
- name: ${{ format(parameters.nameFormat, coalesce(target.name, target.test)) }}
test: ${{ format(parameters.testFormat, coalesce(target.test, target.name)) }}
- ${{ if not(eq(length(parameters.groups), 0)) }}:
- ${{ each group in parameters.groups }}:
- ${{ each target in parameters.targets }}:
- name: ${{ format(format(parameters.nameGroupFormat, parameters.nameFormat), coalesce(target.name, target.test), group) }}
test: ${{ format(format(parameters.testGroupFormat, parameters.testFormat), coalesce(target.test, target.name), group) }}
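The nested `format()` calls apply the escaping described in the parameter comments in two passes: the first substitutes the name/test format into the group format (where `{{1}}` survives as a literal `{1}`), and the second fills in the target and group. Python's `str.format` shares this brace-escaping behavior, so the mechanics can be sketched outside the template:

```python
# Two-pass formatting as in the matrix template. {{1}} in the group format
# survives the first pass as a literal {1} placeholder for the second pass.
name_format = 'Python {0}'
name_group_format = '{0} - {{1}}'

inner = name_group_format.format(name_format)  # 'Python {0} - {1}'
final = inner.format('3.6', 2)                 # 'Python 3.6 - 2'
print(final)
```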


@@ -0,0 +1,45 @@
# This template uses the provided list of jobs to create one or more test jobs.
# It can be used directly if needed, or through the matrix template.
parameters:
# A required list of dictionaries, one per test job.
# Each item in the list must contain a "test" and "name" key.
- name: jobs
type: object
jobs:
- ${{ each job in parameters.jobs }}:
- job: test_${{ replace(replace(replace(job.test, '/', '_'), '.', '_'), '-', '_') }}
displayName: ${{ job.name }}
container: default
workspace:
clean: all
steps:
- checkout: self
fetchDepth: $(fetchDepth)
path: $(checkoutPath)
- bash: .azure-pipelines/scripts/run-tests.sh "$(entryPoint)" "${{ job.test }}" "$(coverageBranches)"
displayName: Run Tests
- bash: .azure-pipelines/scripts/process-results.sh
condition: succeededOrFailed()
displayName: Process Results
- bash: .azure-pipelines/scripts/aggregate-coverage.sh "$(Agent.TempDirectory)"
condition: eq(variables.haveCoverageData, 'true')
displayName: Aggregate Coverage Data
- task: PublishTestResults@2
condition: eq(variables.haveTestResults, 'true')
inputs:
testResultsFiles: "$(outputPath)/junit/*.xml"
displayName: Publish Test Results
- task: PublishPipelineArtifact@1
condition: eq(variables.haveBotResults, 'true')
displayName: Publish Bot Results
inputs:
targetPath: "$(outputPath)/bot/"
artifactName: "Bot $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)"
- task: PublishPipelineArtifact@1
condition: eq(variables.haveCoverageData, 'true')
displayName: Publish Coverage Data
inputs:
targetPath: "$(Agent.TempDirectory)/coverage/"
artifactName: "Coverage $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)"

.github/BOTMETA.yml

@@ -259,7 +259,7 @@ files:
$modules/cloud/lxc/lxc_container.py:
maintainers: cloudnull
$modules/cloud/lxd/:
maintainers: hnakamur
ignore: hnakamur
$modules/cloud/memset/:
maintainers: glitchcrab
$modules/cloud/misc/cloud_init_data_facts.py:
@@ -893,8 +893,6 @@ files:
maintainers: $team_aix
labels: aix
keywords: aix efix lpar wpar
$modules/system/aix_lvol.py:
maintainers: adejoux
$modules/system/alternatives.py:
maintainers: mulby
labels: alternatives
@@ -967,7 +965,7 @@ files:
maintainers: agaffney
$modules/system/osx_defaults.py:
notify: chris-short
maintainers: $team_macos
maintainers: $team_macos notok
labels: macos osx_defaults
keywords: brew cask darwin homebrew macosx macports osx
$modules/system/pam_limits.py:


@@ -5,6 +5,103 @@ Community General Release Notes
.. contents:: Topics
v1.3.2
======
Release Summary
---------------
Regular bugfix release.
Major Changes
-------------
- For community.general 2.0.0, the Google modules will be moved to the `community.google <https://galaxy.ansible.com/community/google>`_ collection.
A redirection will be inserted so that users using ansible-base 2.10 or newer do not have to change anything.
If you use Ansible 2.9 and explicitly use Google modules from this collection, you will need to adjust your playbooks and roles to use FQCNs starting with ``community.google.`` instead of ``community.general.``,
for example replace ``community.general.gcpubsub`` in a task by ``community.google.gcpubsub``.
If you use ansible-base and installed ``community.general`` manually and rely on the Google modules, you have to make sure to install the ``community.google`` collection as well.
If you are using FQCNs, for example ``community.general.gcpubsub`` instead of ``gcpubsub``, it will continue working, but we still recommend to adjust the FQCNs as well.
- For community.general 2.0.0, the OC connection plugin will be moved to the `community.okd <https://galaxy.ansible.com/community/okd>`_ collection.
A redirection will be inserted so that users using ansible-base 2.10 or newer do not have to change anything.
If you use Ansible 2.9 and explicitly use OC connection plugin from this collection, you will need to adjust your playbooks and roles to use FQCNs ``community.okd.oc`` instead of ``community.general.oc``.
If you use ansible-base and installed ``community.general`` manually and rely on the OC connection plugin, you have to make sure to install the ``community.okd`` collection as well.
If you are using FQCNs, in other words ``community.general.oc`` instead of ``oc``, it will continue working, but we still recommend to adjust this FQCN as well.
- For community.general 2.0.0, the hashi_vault lookup plugin will be moved to the `community.hashi_vault <https://galaxy.ansible.com/community/hashi_vault>`_ collection.
A redirection will be inserted so that users using ansible-base 2.10 or newer do not have to change anything.
If you use Ansible 2.9 and explicitly use hashi_vault lookup plugin from this collection, you will need to adjust your playbooks and roles to use FQCNs ``community.hashi_vault.hashi_vault`` instead of ``community.general.hashi_vault``.
If you use ansible-base and installed ``community.general`` manually and rely on the hashi_vault lookup plugin, you have to make sure to install the ``community.hashi_vault`` collection as well.
If you are using FQCNs, in other words ``community.general.hashi_vault`` instead of ``hashi_vault``, it will continue working, but we still recommend to adjust this FQCN as well.
Minor Changes
-------------
- homebrew_cask - Homebrew will be deprecating use of ``brew cask`` commands as of version 2.6.0, see https://brew.sh/2020/12/01/homebrew-2.6.0/. Added logic to stop using ``brew cask`` for brew version >= 2.6.0 (https://github.com/ansible-collections/community.general/pull/1481).
- jira - added the traceback output to ``fail_json()`` calls deriving from exceptions (https://github.com/ansible-collections/community.general/pull/1536).
Bugfixes
--------
- docker_image - if ``push=true`` is used with ``repository``, and the image does not need to be tagged, still push. This can happen if ``repository`` and ``name`` are equal (https://github.com/ansible-collections/community.docker/issues/52, https://github.com/ansible-collections/community.docker/pull/53).
- docker_image - report error when loading a broken archive that contains no image (https://github.com/ansible-collections/community.docker/issues/46, https://github.com/ansible-collections/community.docker/pull/55).
- docker_image - report error when the loaded archive does not contain the specified image (https://github.com/ansible-collections/community.docker/issues/41, https://github.com/ansible-collections/community.docker/pull/55).
- jira - ``fetch`` and ``search`` no longer indicate that something changed (https://github.com/ansible-collections/community.general/pull/1536).
- jira - ensured parameter ``issue`` is mandatory for operation ``transition`` (https://github.com/ansible-collections/community.general/pull/1536).
- jira - module no longer incorrectly reports change for information gathering operations (https://github.com/ansible-collections/community.general/pull/1536).
- jira - replaced custom parameter validation with ``required_if`` (https://github.com/ansible-collections/community.general/pull/1536).
- launchd - handle deprecated APIs like ``readPlist`` and ``writePlist`` in ``plistlib`` (https://github.com/ansible-collections/community.general/issues/1552).
- ldap_search - the module no longer incorrectly reports a change (https://github.com/ansible-collections/community.general/issues/1040).
- make - fixed ``make`` parameter used for check mode when running a non-GNU ``make`` (https://github.com/ansible-collections/community.general/pull/1574).
- monit - add support for all monit service checks (https://github.com/ansible-collections/community.general/pull/1532).
- nios_member - fix Python 3 compatibility with nios api ``member_normalize`` function (https://github.com/ansible-collections/community.general/issues/1526).
- nmcli - remove ``bridge-slave`` from list of IP based connections (https://github.com/ansible-collections/community.general/issues/1500).
- pamd - added logic to retain the comment line (https://github.com/ansible-collections/community.general/issues/1394).
- passwordstore lookup plugin - always use explicit ``show`` command to retrieve password. This ensures compatibility with ``gopass`` and avoids problems when password names equal ``pass`` commands (https://github.com/ansible-collections/community.general/pull/1493).
- rhn_channel - Python 2.7.5 fails if the certificate should not be validated. Fixed this by creating the correct ``ssl_context`` (https://github.com/ansible-collections/community.general/pull/470).
- sendgrid - update documentation and warn user about sendgrid Python library version (https://github.com/ansible-collections/community.general/issues/1553).
- syslogger - update ``syslog.openlog`` API call for older Python versions, and improve error handling (https://github.com/ansible-collections/community.general/issues/953).
- yaml callback plugin - do not remove non-ASCII Unicode characters from multiline string output (https://github.com/ansible-collections/community.general/issues/1519).
v1.3.1
======
Release Summary
---------------
Regular bugfix release.
Bugfixes
--------
- bigpanda - removed the dynamic default for ``host`` param (https://github.com/ansible-collections/community.general/pull/1423).
- bitbucket_pipeline_variable - change pagination logic for pipeline variable get API (https://github.com/ansible-collections/community.general/issues/1425).
- cobbler inventory script - add Python 3 support (https://github.com/ansible-collections/community.general/issues/638).
- docker_container - the validation for ``capabilities`` in ``device_requests`` was incorrect (https://github.com/ansible-collections/community.docker/issues/42, https://github.com/ansible-collections/community.docker/pull/43).
- git_config - now raises an error for non-existent repository paths (https://github.com/ansible-collections/community.general/issues/630).
- icinga2_host - fix returning error codes (https://github.com/ansible-collections/community.general/pull/335).
- jira - provide error message raised from exception (https://github.com/ansible-collections/community.general/issues/1504).
- json_query - handle ``AnsibleUnicode`` and ``AnsibleUnsafeText`` (https://github.com/ansible-collections/community.general/issues/320).
- keycloak module_utils - provide meaningful error message to user when auth URL does not start with http or https (https://github.com/ansible-collections/community.general/issues/331).
- ldap_entry - improvements in documentation, simplifications and replaced code with better ``AnsibleModule`` arguments (https://github.com/ansible-collections/community.general/pull/1516).
- mas - fix ``invalid literal`` when no app can be found (https://github.com/ansible-collections/community.general/pull/1436).
- nios_host_record - fix to remove ``aliases`` (CNAMES) for configuration comparison (https://github.com/ansible-collections/community.general/issues/1335).
- osx_defaults - unquote values and unescape double quotes when reading array values (https://github.com/ansible-collections/community.general/pull/358).
- profitbricks_nic - removed the dynamic default for ``name`` param (https://github.com/ansible-collections/community.general/pull/1423).
- profitbricks_nic - replaced code with ``required`` and ``required_if`` (https://github.com/ansible-collections/community.general/pull/1423).
- redfish_info module, redfish_utils module utils - correct ``PartNumber`` property name in Redfish ``GetMemoryInventory`` command (https://github.com/ansible-collections/community.general/issues/1483).
- saltstack connection plugin - use ``hashutil.base64_decodefile`` to ensure that the file checksum is preserved (https://github.com/ansible-collections/community.general/pull/1472).
- udm_user - removed the dynamic default for ``userexpiry`` param (https://github.com/ansible-collections/community.general/pull/1423).
- utm_network_interface_address - changed param type from invalid 'boolean' to valid 'bool' (https://github.com/ansible-collections/community.general/pull/1423).
- utm_proxy_exception - four parameters had elements types set as 'string' (invalid), changed to 'str' (https://github.com/ansible-collections/community.general/pull/1399).
- vmadm - simplification of code (https://github.com/ansible-collections/community.general/pull/1415).
- xfconf - add in missing return values that are specified in the documentation (https://github.com/ansible-collections/community.general/issues/1418).
v1.3.0
======


@@ -1,6 +1,6 @@
# Community General Collection
[![Run Status](https://api.shippable.com/projects/5e664a167c32620006c9fa50/badge?branch=main)](https://app.shippable.com/github/ansible-collections/community.general/dashboard)
[![Build Status](https://dev.azure.com/ansible/community.general/_apis/build/status/CI?branchName=stable-1)](https://dev.azure.com/ansible/community.general/_build?definitionId=31)
[![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.general)](https://codecov.io/gh/ansible-collections/community.general)
This repo contains the `community.general` Ansible Collection. The collection includes many modules and plugins supported by the Ansible community which are not part of more specialized community collections.


@@ -1568,3 +1568,199 @@ releases:
name: proxmox_user_info
namespace: cloud.misc
release_date: '2020-11-26'
1.3.1:
changes:
bugfixes:
- bigpanda - removed the dynamic default for ``host`` param (https://github.com/ansible-collections/community.general/pull/1423).
- bitbucket_pipeline_variable - change pagination logic for pipeline variable
get API (https://github.com/ansible-collections/community.general/issues/1425).
- cobbler inventory script - add Python 3 support (https://github.com/ansible-collections/community.general/issues/638).
- docker_container - the validation for ``capabilities`` in ``device_requests``
was incorrect (https://github.com/ansible-collections/community.docker/issues/42,
https://github.com/ansible-collections/community.docker/pull/43).
- git_config - now raises an error for non-existent repository paths (https://github.com/ansible-collections/community.general/issues/630).
- icinga2_host - fix returning error codes (https://github.com/ansible-collections/community.general/pull/335).
- jira - provide error message raised from exception (https://github.com/ansible-collections/community.general/issues/1504).
- json_query - handle ``AnsibleUnicode`` and ``AnsibleUnsafeText`` (https://github.com/ansible-collections/community.general/issues/320).
- keycloak module_utils - provide meaningful error message to user when auth
URL does not start with http or https (https://github.com/ansible-collections/community.general/issues/331).
- ldap_entry - improvements in documentation, simplifications and replaced code
with better ``AnsibleModule`` arguments (https://github.com/ansible-collections/community.general/pull/1516).
- mas - fix ``invalid literal`` when no app can be found (https://github.com/ansible-collections/community.general/pull/1436).
- nios_host_record - fix to remove ``aliases`` (CNAMES) for configuration comparison
(https://github.com/ansible-collections/community.general/issues/1335).
- osx_defaults - unquote values and unescape double quotes when reading array
values (https://github.com/ansible-collections/community.general/pull/358).
- profitbricks_nic - removed the dynamic default for ``name`` param (https://github.com/ansible-collections/community.general/pull/1423).
- profitbricks_nic - replaced code with ``required`` and ``required_if`` (https://github.com/ansible-collections/community.general/pull/1423).
- redfish_info module, redfish_utils module utils - correct ``PartNumber`` property
name in Redfish ``GetMemoryInventory`` command (https://github.com/ansible-collections/community.general/issues/1483).
- saltstack connection plugin - use ``hashutil.base64_decodefile`` to ensure
that the file checksum is preserved (https://github.com/ansible-collections/community.general/pull/1472).
- udm_user - removed the dynamic default for ``userexpiry`` param (https://github.com/ansible-collections/community.general/pull/1423).
- utm_network_interface_address - changed param type from invalid 'boolean'
to valid 'bool' (https://github.com/ansible-collections/community.general/pull/1423).
- utm_proxy_exception - four parameters had elements types set as 'string' (invalid),
changed to 'str' (https://github.com/ansible-collections/community.general/pull/1399).
- vmadm - simplification of code (https://github.com/ansible-collections/community.general/pull/1415).
- xfconf - add in missing return values that are specified in the documentation
(https://github.com/ansible-collections/community.general/issues/1418).
release_summary: Regular bugfix release.
fragments:
- 1.3.1.yml
- 1399-fixed-wrong-elements-type.yaml
- 1415-valmod_req_mismatch.yml
- 1419-xfconf-return-values.yaml
- 1423-valmod_multiple_cases.yml
- 1425_bitbucket_pipeline_variable.yml
- 1436-mas-fix-no-app-installed.yml
- 1472-saltstack-fix-put_file-to-preserve-checksum.yml
- 1484-fix-property-name-in-redfish-memory-inventory.yml
- 1504_jira.yml
- 1516-ldap_entry-improvements.yaml
- 320_unsafe_text.yml
- 331_keycloak.yml
- 335-icinga2_host-return-error-code.yaml
- 630-git_config-handling-invalid-dir.yaml
- 638_cobbler_py3.yml
- community.docker-43-docker_container-device_requests.yml
- fix_parsing_array_values_in_osx_defaults.yml
- nios_host_record-fix-aliases-removal.yml
release_date: '2020-12-21'
1.3.2:
changes:
bugfixes:
- docker_image - if ``push=true`` is used with ``repository``, and the image
does not need to be tagged, still push. This can happen if ``repository``
and ``name`` are equal (https://github.com/ansible-collections/community.docker/issues/52,
https://github.com/ansible-collections/community.docker/pull/53).
- docker_image - report error when loading a broken archive that contains no
image (https://github.com/ansible-collections/community.docker/issues/46,
https://github.com/ansible-collections/community.docker/pull/55).
- docker_image - report error when the loaded archive does not contain the specified
image (https://github.com/ansible-collections/community.docker/issues/41,
https://github.com/ansible-collections/community.docker/pull/55).
- jira - ``fetch`` and ``search`` no longer indicate that something changed
(https://github.com/ansible-collections/community.general/pull/1536).
- jira - ensured parameter ``issue`` is mandatory for operation ``transition``
(https://github.com/ansible-collections/community.general/pull/1536).
- jira - module no longer incorrectly reports change for information gathering
operations (https://github.com/ansible-collections/community.general/pull/1536).
- jira - replaced custom parameter validation with ``required_if`` (https://github.com/ansible-collections/community.general/pull/1536).
- launchd - handle deprecated APIs like ``readPlist`` and ``writePlist`` in
``plistlib`` (https://github.com/ansible-collections/community.general/issues/1552).
- ldap_search - the module no longer incorrectly reports a change (https://github.com/ansible-collections/community.general/issues/1040).
- make - fixed ``make`` parameter used for check mode when running a non-GNU
``make`` (https://github.com/ansible-collections/community.general/pull/1574).
- monit - add support for all monit service checks (https://github.com/ansible-collections/community.general/pull/1532).
- nios_member - fix Python 3 compatibility with nios api ``member_normalize``
function (https://github.com/ansible-collections/community.general/issues/1526).
    - nmcli - remove ``bridge-slave`` from list of IP based connections (https://github.com/ansible-collections/community.general/issues/1500).
- pamd - added logic to retain the comment line (https://github.com/ansible-collections/community.general/issues/1394).
- passwordstore lookup plugin - always use explicit ``show`` command to retrieve
password. This ensures compatibility with ``gopass`` and avoids problems when
password names equal ``pass`` commands (https://github.com/ansible-collections/community.general/pull/1493).
- rhn_channel - fixed a failure on Python 2.7.5 when certificate validation is
disabled, by creating the correct ``ssl_context`` (https://github.com/ansible-collections/community.general/pull/470).
- sendgrid - update documentation and warn user about sendgrid Python library
version (https://github.com/ansible-collections/community.general/issues/1553).
- syslogger - update ``syslog.openlog`` API call for older Python versions,
and improve error handling (https://github.com/ansible-collections/community.general/issues/953).
- yaml callback plugin - do not remove non-ASCII Unicode characters from multiline
string output (https://github.com/ansible-collections/community.general/issues/1519).
major_changes:
- 'For community.general 2.0.0, the Google modules will be moved to the `community.google
<https://galaxy.ansible.com/community/google>`_ collection.
A redirection will be inserted so that users using ansible-base 2.10 or newer
do not have to change anything.
If you use Ansible 2.9 and explicitly use Google modules from this collection,
you will need to adjust your playbooks and roles to use FQCNs starting with
``community.google.`` instead of ``community.general.``,
for example replace ``community.general.gcpubsub`` in a task by ``community.google.gcpubsub``.
If you use ansible-base and installed ``community.general`` manually and rely
on the Google modules, you have to make sure to install the ``community.google``
collection as well.
If you are using FQCNs, for example ``community.general.gcpubsub`` instead
of ``gcpubsub``, it will continue working, but we still recommend adjusting
the FQCNs as well.
'
- 'For community.general 2.0.0, the OC connection plugin will be moved to the
`community.okd <https://galaxy.ansible.com/community/okd>`_ collection.
A redirection will be inserted so that users using ansible-base 2.10 or newer
do not have to change anything.
If you use Ansible 2.9 and explicitly use the OC connection plugin from this
collection, you will need to adjust your playbooks and roles to use the FQCN
``community.okd.oc`` instead of ``community.general.oc``.
If you use ansible-base and installed ``community.general`` manually and rely
on the OC connection plugin, you have to make sure to install the ``community.okd``
collection as well.
If you are using FQCNs, in other words ``community.general.oc`` instead of
``oc``, it will continue working, but we still recommend adjusting this FQCN
as well.
'
- 'For community.general 2.0.0, the hashi_vault lookup plugin will be moved
to the `community.hashi_vault <https://galaxy.ansible.com/community/hashi_vault>`_
collection.
A redirection will be inserted so that users using ansible-base 2.10 or newer
do not have to change anything.
If you use Ansible 2.9 and explicitly use the hashi_vault lookup plugin from
this collection, you will need to adjust your playbooks and roles to use the
FQCN ``community.hashi_vault.hashi_vault`` instead of ``community.general.hashi_vault``.
If you use ansible-base and installed ``community.general`` manually and rely
on the hashi_vault lookup plugin, you have to make sure to install the ``community.hashi_vault``
collection as well.
If you are using FQCNs, in other words ``community.general.hashi_vault`` instead
of ``hashi_vault``, it will continue working, but we still recommend adjusting
this FQCN as well.
'
minor_changes:
- homebrew_cask - Homebrew will be deprecating use of ``brew cask`` commands
as of version 2.6.0, see https://brew.sh/2020/12/01/homebrew-2.6.0/. Added
logic to stop using ``brew cask`` for brew version >= 2.6.0 (https://github.com/ansible-collections/community.general/pull/1481).
- jira - added the traceback output to ``fail_json()`` calls deriving from exceptions
(https://github.com/ansible-collections/community.general/pull/1536).
release_summary: Regular bugfix release.
fragments:
- 1.3.2.yml
- 1040-ldap_search-changed-must-be-false.yaml
- 1394-pamd-removing-comments.yaml
- 1481-deprecated-brew-cask-command.yaml
- 1493-fix_passwordstore.py_to_be_compatible_with_gopass_versions.yml
- 1517-bridge-slave-from-list-of-ip-based-connections.yml
- 1522-yaml-callback-unicode.yml
- 1527-fix-nios-api-member-normalize.yaml
- 1532-monit-support-all-services.yaml
- 1552_launchd.yml
- 1553_sendgrid.yml
- 1574-make-question.yaml
- 470-spacewalk-legacy-python-certificate-validation.yaml
- 953_syslogger.yml
- community.docker-53-docker_image-tag-push.yml
- community.docker-55-docker_image-loading.yml
- google-migration.yml
- hashi_vault-migration.yml
- jira_improvements.yaml
- oc-migration.yml
release_date: '2021-01-04'
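The launchd entry above concerns ``plistlib`` APIs that newer Python versions removed. A minimal sketch of that kind of fallback (``read_plist`` is a hypothetical helper for illustration, not the module's actual code):

```python
import plistlib

def read_plist(path):
    # readPlist()/writePlist() were deprecated in Python 3.4 and removed in
    # 3.9; prefer load()/dump() and keep the legacy call only as a fallback.
    if hasattr(plistlib, 'load'):
        with open(path, 'rb') as f:
            return plistlib.load(f)
    return plistlib.readPlist(path)  # very old interpreters only
```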


@@ -1,6 +1,6 @@
namespace: community
name: general
version: 1.3.0
version: 1.3.2
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@@ -50,7 +50,7 @@ def my_represent_scalar(self, tag, value, style=None):
# ...no trailing space
value = value.rstrip()
# ...and non-printable characters
value = ''.join(x for x in value if x in string.printable)
value = ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
# ...tabs prevent blocks from expanding
value = value.expandtabs()
# ...and odd bits of whitespace
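This hunk widens the printable-character filter so the callback keeps legitimate Unicode. The behaviour change in isolation (a standalone sketch, not the plugin's full scrubbing logic):

```python
import string

def scrub_old(value):
    # Old filter: only ASCII string.printable survives, so accented and
    # non-Latin characters were silently dropped.
    return ''.join(x for x in value if x in string.printable)

def scrub_new(value):
    # New filter: additionally keep any code point >= U+00A0, so only
    # ASCII control characters are removed.
    return ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
```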


@@ -19,6 +19,7 @@ DOCUMENTATION = '''
import re
import os
import pty
import codecs
import subprocess
from ansible.module_utils._text import to_bytes, to_text
@@ -85,9 +86,9 @@ class Connection(ConnectionBase):
out_path = self._normalize_path(out_path, '/')
self._display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.host)
with open(in_path) as in_fh:
with open(in_path, 'rb') as in_fh:
content = in_fh.read()
self.client.cmd(self.host, 'file.write', [out_path, content])
self.client.cmd(self.host, 'hashutil.base64_decodefile', [codecs.encode(content, 'base64'), out_path])
# TODO test it
def fetch_file(self, in_path, out_path):
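The hunk above reads the file as bytes and ships it base64-encoded so binary content survives the salt transport (``hashutil.base64_decodefile`` decodes it on the minion). The encoding side in isolation (hypothetical helper name):

```python
import codecs

def encode_for_transfer(in_path):
    # Open in binary mode -- a text-mode read breaks on non-UTF-8 bytes --
    # then base64-encode the payload for text-safe transport.
    with open(in_path, 'rb') as in_fh:
        return codecs.encode(in_fh.read(), 'base64')
```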


@@ -24,7 +24,6 @@ options:
- Value can also be specified using C(INFOBLOX_HOST) environment
variable.
type: str
required: true
username:
description:
- Configures the username to use to authenticate the connection to


@@ -15,6 +15,7 @@ options:
description:
- Online OAuth token.
type: str
required: true
aliases: [ oauth_token ]
api_url:
description:


@@ -35,6 +35,9 @@ def json_query(data, expr):
raise AnsibleError('You need to install "jmespath" prior to running '
'json_query filter')
# Hack to handle Ansible String Types
# See issue: https://github.com/ansible-collections/community.general/issues/320
jmespath.functions.REVERSE_TYPES_MAP['string'] = jmespath.functions.REVERSE_TYPES_MAP['string'] + ('AnsibleUnicode', 'AnsibleUnsafeText', )
try:
return jmespath.search(expr, data)
except jmespath.exceptions.JMESPathError as e:
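The hack above is needed because jmespath type-checks function arguments by class *name*, so Ansible's ``str`` subclasses fail the lookup until they are registered. A self-contained illustration (``AnsibleUnicode`` here is a stand-in class, and the map is a simplified copy of jmespath's ``REVERSE_TYPES_MAP``):

```python
class AnsibleUnicode(str):
    """Stand-in for Ansible's tagged string subclass."""

def accepts_string(value, reverse_types_map):
    # jmespath compares type(value).__name__ against the registered names,
    # so subclasses of str are rejected unless their names are added.
    return type(value).__name__ in reverse_types_map['string']
```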


@@ -2,18 +2,16 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
name: online
plugin_type: inventory
author:
- Remy Leone (@sieben)
short_description: Online inventory source
short_description: Scaleway (previously Online SAS or Online.net) inventory source
description:
- Get inventory hosts from Online
- Get inventory hosts from Scaleway (previously Online SAS or Online.net).
options:
plugin:
description: token that ensures this is a source file for the 'online' plugin.
@@ -45,7 +43,7 @@ DOCUMENTATION = '''
- rpn
'''
EXAMPLES = '''
EXAMPLES = r'''
# online_inventory.yml file in YAML format
# Example command line: ansible-inventory --list -i online_inventory.yml


@@ -204,7 +204,7 @@ class LookupModule(LookupBase):
def check_pass(self):
try:
self.passoutput = to_text(
check_output2(["pass", self.passname], env=self.env),
check_output2(["pass", "show", self.passname], env=self.env),
errors='surrogate_or_strict'
).splitlines()
self.password = self.passoutput[0]
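The one-line change above pins the explicit subcommand; sketched as a tiny command builder (hypothetical helper, the plugin builds the argument list inline):

```python
def build_pass_command(passname):
    # `pass <name>` is ambiguous when the entry name matches a subcommand
    # (e.g. "insert"), and recent gopass deprecates the bare form, so the
    # lookup always issues an explicit `show`.
    return ["pass", "show", passname]
```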


@@ -75,6 +75,8 @@ class KeycloakError(Exception):
def get_token(base_url, validate_certs, auth_realm, client_id,
auth_username, auth_password, client_secret):
if not base_url.lower().startswith(('http', 'https')):
raise KeycloakError("auth_url '%s' should either start with 'http' or 'https'." % base_url)
auth_url = URL_TOKEN.format(url=base_url, realm=auth_realm)
temp_payload = {
'grant_type': 'password',

View File

@@ -144,7 +144,7 @@ def member_normalize(member_spec):
'pre_provisioning', 'network_setting', 'v6_network_setting',
'ha_port_setting', 'lan_port_setting', 'lan2_physical_setting',
'lan_ha_port_setting', 'mgmt_network_setting', 'v6_mgmt_network_setting']
for key in member_spec.keys():
for key in list(member_spec.keys()):
if key in member_elements and member_spec[key] is not None:
member_spec[key] = member_spec[key][0]
if isinstance(member_spec[key], dict):
@@ -455,6 +455,9 @@ class WapiModule(WapiBase):
return False
elif isinstance(proposed_item, list):
if key == 'aliases':
if set(current_item) != set(proposed_item):
return False
for subitem in proposed_item:
if not self.issubset(subitem, current_item):
return False
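The first hunk above wraps ``member_spec.keys()`` in ``list()`` because the loop mutates the dict it is iterating, which Python 3 rejects. A minimal reproduction of the pattern (simplified, not the module's real normalization):

```python
def prune_none_values(spec):
    # Iterating the live keys() view while deleting entries raises
    # "RuntimeError: dictionary changed size during iteration" on Python 3;
    # materializing the keys with list() first makes the mutation safe.
    for key in list(spec.keys()):
        if spec[key] is None:
            del spec[key]
    return spec
```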


@@ -1886,7 +1886,7 @@ class RedfishUtils(object):
memory_results = []
key = "Memory"
# Get these entries, but does not fail if not found
properties = ['SerialNumber', 'MemoryDeviceType', 'PartNuber',
properties = ['SerialNumber', 'MemoryDeviceType', 'PartNumber',
'MemoryLocation', 'RankCount', 'CapacityMiB', 'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']
# Search for 'key' entry and extract URI from it


@@ -217,18 +217,18 @@ class ClcAlertPolicy:
:return: argument spec dictionary
"""
argument_spec = dict(
name=dict(default=None), # @FIXME default=None is redundant - remove all
id=dict(default=None),
alias=dict(required=True, default=None),
alert_recipients=dict(type='list', default=None),
name=dict(),
id=dict(),
alias=dict(required=True),
alert_recipients=dict(type='list'),
metric=dict(
choices=[
'cpu',
'memory',
'disk'],
default=None),
duration=dict(type='str', default=None),
threshold=dict(type='int', default=None),
duration=dict(type='str'),
threshold=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])
)
mutually_exclusive = [


@@ -214,12 +214,12 @@ class ClcFirewallPolicy:
"""
argument_spec = dict(
location=dict(required=True),
source_account_alias=dict(required=True, default=None), # @FIXME remove default=None
destination_account_alias=dict(default=None),
firewall_policy_id=dict(default=None),
ports=dict(default=None, type='list'),
source=dict(default=None, type='list'),
destination=dict(default=None, type='list'),
source_account_alias=dict(required=True),
destination_account_alias=dict(),
firewall_policy_id=dict(),
ports=dict(type='list'),
source=dict(type='list'),
destination=dict(type='list'),
wait=dict(default=True), # @FIXME type=bool
state=dict(default='present', choices=['present', 'absent']),
enabled=dict(default=True, choices=[True, False])


@@ -1451,13 +1451,12 @@ class TaskParameters(DockerBaseClass):
# Make sure that capabilities are lists of lists of strings
if dr['capabilities']:
for or_index, or_list in enumerate(dr['capabilities']):
for and_index, and_list in enumerate(or_list):
for term_index, term in enumerate(and_list):
if not isinstance(term, string_types):
self.fail(
"device_requests[{0}].capabilities[{1}][{2}][{3}] is not a string".format(
dr_index, or_index, and_index, term_index))
and_list[term_index] = to_native(term)
for and_index, and_term in enumerate(or_list):
if not isinstance(and_term, string_types):
self.fail(
"device_requests[{0}].capabilities[{1}][{2}] is not a string".format(
dr_index, or_index, and_index))
or_list[and_index] = to_native(and_term)
# Make sure that options is a dictionary mapping strings to strings
if dr['options']:
dr['options'] = clean_dict_booleans_for_docker_api(dr['options'])
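The capabilities hunk drops one level of nesting: the field is an OR-list of AND-lists of strings, not three levels deep. A simplified validator for the corrected shape (hypothetical helper name):

```python
def normalize_capabilities(capabilities, where="device_requests[0]"):
    # Expected shape: [["gpu", "nvidia"], ["compute"]] -- an OR-list of
    # AND-lists whose terms are plain strings.
    result = []
    for or_index, or_list in enumerate(capabilities):
        and_terms = []
        for and_index, term in enumerate(or_list):
            if not isinstance(term, str):
                raise ValueError("%s.capabilities[%d][%d] is not a string"
                                 % (where, or_index, and_index))
            and_terms.append(str(term))
        result.append(and_terms)
    return result
```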


@@ -698,8 +698,8 @@ class ImageManager(DockerBaseClass):
if image and image['Id'] == self.results['image']['Id']:
self.results['changed'] = False
if push:
self.push_image(repo, repo_tag)
if push:
self.push_image(repo, repo_tag)
def build_image(self):
'''
@@ -749,7 +749,7 @@ class ImageManager(DockerBaseClass):
# line = json.loads(line)
self.log(line, pretty_print=True)
if "stream" in line or "status" in line:
build_line = line.get("stream") or line.get("status")
build_line = line.get("stream") or line.get("status") or ''
build_output.append(build_line)
if line.get('error'):
@@ -774,17 +774,45 @@ class ImageManager(DockerBaseClass):
:return: image dict
'''
# Load image(s) from file
load_output = []
try:
self.log("Opening image %s" % self.load_path)
with open(self.load_path, 'rb') as image_tar:
self.log("Loading image from %s" % self.load_path)
self.client.load_image(image_tar)
for line in self.client.load_image(image_tar):
self.log(line, pretty_print=True)
if "stream" in line or "status" in line:
load_line = line.get("stream") or line.get("status") or ''
load_output.append(load_line)
except EnvironmentError as exc:
if exc.errno == errno.ENOENT:
self.fail("Error opening image %s - %s" % (self.load_path, str(exc)))
self.fail("Error loading image %s - %s" % (self.name, str(exc)))
self.client.fail("Error opening image %s - %s" % (self.load_path, str(exc)))
self.client.fail("Error loading image %s - %s" % (self.name, str(exc)), stdout='\n'.join(load_output))
except Exception as exc:
self.fail("Error loading image %s - %s" % (self.name, str(exc)))
self.client.fail("Error loading image %s - %s" % (self.name, str(exc)), stdout='\n'.join(load_output))
# Collect loaded images
loaded_images = set()
for line in load_output:
if line.startswith('Loaded image:'):
loaded_images.add(line[len('Loaded image:'):].strip())
if not loaded_images:
self.client.fail("Detected no loaded images. Archive potentially corrupt?", stdout='\n'.join(load_output))
expected_image = '%s:%s' % (self.name, self.tag)
if expected_image not in loaded_images:
self.client.fail(
"The archive did not contain image '%s'. Instead, found %s." % (
expected_image, ', '.join(["'%s'" % image for image in sorted(loaded_images)])),
stdout='\n'.join(load_output))
loaded_images.remove(expected_image)
if loaded_images:
self.client.module.warn(
"The archive contained more images than specified: %s" % (
', '.join(["'%s'" % image for image in sorted(loaded_images)]), ))
return self.client.find_image(self.name, self.tag)
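The loading hunk above collects the ``Loaded image:`` status lines so the module can verify what the archive actually contained. The parsing step in isolation (hypothetical helper name):

```python
def collect_loaded_images(load_output):
    # `docker load` emits one "Loaded image: <name>:<tag>" status line per
    # image; an empty result suggests a corrupt or image-less archive.
    loaded_images = set()
    for line in load_output:
        if line.startswith('Loaded image:'):
            loaded_images.add(line[len('Loaded image:'):].strip())
    return loaded_images
```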


@@ -107,8 +107,8 @@ options:
private_ip:
description:
- Add private IPv4 address when Linode is created.
- Default is C(false).
type: bool
default: "no"
ssh_pub_key:
description:
- SSH public key applied to root user


@@ -1,24 +1,25 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
# Copyright: (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: lxc_container
short_description: Manage LXC Containers
description:
- Management of LXC containers
- Management of LXC containers.
author: "Kevin Carter (@cloudnull)"
options:
name:
description:
- Name of a container.
type: str
required: true
backing_store:
choices:
@@ -30,93 +31,105 @@ options:
- zfs
description:
- Backend storage type for the container.
type: str
default: dir
template:
description:
- Name of the template to use within an LXC create.
type: str
default: ubuntu
template_options:
description:
- Template options when building the container.
type: str
config:
description:
- Path to the LXC configuration file.
type: path
lv_name:
description:
- Name of the logical volume, defaults to the container name.
default: $CONTAINER_NAME
- If not specified, it defaults to C($CONTAINER_NAME).
type: str
vg_name:
description:
- If Backend store is lvm, specify the name of the volume group.
- If backend store is lvm, specify the name of the volume group.
type: str
default: lxc
thinpool:
description:
- Use LVM thin pool called TP.
type: str
fs_type:
description:
- Create fstype TYPE.
type: str
default: ext4
fs_size:
description:
- File system Size.
type: str
default: 5G
directory:
description:
- Place rootfs directory under DIR.
type: path
zfs_root:
description:
- Create zfs under given zfsroot.
type: str
container_command:
description:
- Run a command within a container.
type: str
lxc_path:
description:
- Place container under PATH
- Place container under PATH.
type: path
container_log:
choices:
- true
- false
description:
- Enable a container log for host actions to the container.
type: bool
default: 'no'
container_log_level:
choices:
- Info
- info
- INFO
- Error
- error
- ERROR
- Debug
- debug
- DEBUG
description:
- Set the log level for a container where *container_log* was set.
type: str
required: false
default: INFO
clone_name:
description:
- Name of the new cloned server. This is only used when state is
clone.
- Name of the new cloned server.
- This is only used when state is clone.
type: str
clone_snapshot:
choices:
- true
- false
description:
- Create a snapshot a container when cloning. This is not supported
by all container storage backends. Enabling this may fail if the
backing store does not support snapshots.
- Create a snapshot a container when cloning.
- This is not supported by all container storage backends.
- Enabling this may fail if the backing store does not support snapshots.
type: bool
default: 'no'
archive:
choices:
- true
- false
description:
- Create an archive of a container. This will create a tarball of the
running container.
- Create an archive of a container.
- This will create a tarball of the running container.
type: bool
default: 'no'
archive_path:
description:
- Path the save the archived container. If the path does not exist
the archive method will attempt to create it.
- Path the save the archived container.
- If the path does not exist the archive method will attempt to create it.
type: path
archive_compression:
choices:
- gzip
@@ -125,6 +138,7 @@ options:
description:
- Type of compression to use when creating an archive of a running
container.
type: str
default: gzip
state:
choices:
@@ -133,16 +147,19 @@ options:
- restarted
- absent
- frozen
- clone
description:
- Define the state of a container. If you clone a container using
`clone_name` the newly cloned container created in a stopped state.
The running container will be stopped while the clone operation is
- Define the state of a container.
- If you clone a container using I(clone_name) the newly cloned
container created in a stopped state.
- The running container will be stopped while the clone operation is
happening and upon completion of the clone the original container
state will be restored.
type: str
default: started
container_config:
description:
- list of 'key=value' options to use when configuring a container.
- A list of C(key=value) options to use when configuring a container.
type: list
elements: str
requirements:
@@ -172,7 +189,7 @@ notes:
name lxc-python2.
'''
EXAMPLES = """
EXAMPLES = r"""
- name: Create a started container
community.general.lxc_container:
name: test-container-started
@@ -355,7 +372,7 @@ EXAMPLES = """
- test-container-new-archive-destroyed-clone
"""
RETURN = """
RETURN = r"""
lxc_container:
description: container information
returned: success
@@ -912,8 +929,7 @@ class LxcContainerManagement(object):
if self._container_exists(container_name=self.container_name, lxc_path=self.lxc_path):
return str(self.container.state).lower()
else:
return str('absent')
return str('absent')
def _execute_command(self):
"""Execute a shell command."""
@@ -1695,7 +1711,7 @@ def main():
),
container_log=dict(
type='bool',
default='false'
default=False
),
container_log_level=dict(
choices=[n for i in LXC_LOGGING_LEVELS.values() for n in i],
@@ -1711,7 +1727,7 @@ def main():
),
archive=dict(
type='bool',
default='false'
default=False
),
archive_path=dict(
type='path',


@@ -19,11 +19,13 @@ options:
name:
description:
- Name of a container.
type: str
required: true
architecture:
description:
- The architecture for the container (e.g. "x86_64" or "i686").
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1)
type: str
required: false
config:
description:
@@ -37,12 +39,18 @@ options:
- The key starts with 'volatile.' are ignored for this comparison.
- Not all config values are supported to apply the existing container.
Maybe you need to delete and recreate a container.
type: dict
required: false
profiles:
description:
- Profile to be used by the container
type: list
devices:
description:
- 'The devices for the container
(e.g. { "rootfs": { "path": "/dev/kvm", "type": "unix-char" }).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1)'
type: dict
required: false
ephemeral:
description:
@@ -61,6 +69,7 @@ options:
- 'See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1) for complete API documentation.'
- 'Note that C(protocol) accepts two choices: C(lxd) or C(simplestreams)'
required: false
type: dict
state:
choices:
- started
@@ -72,6 +81,7 @@ options:
- Define the state of a container.
required: false
default: started
type: str
target:
description:
- For cluster deployments. Will attempt to create a container on a target node.
@@ -88,6 +98,7 @@ options:
starting or restarting.
required: false
default: 30
type: int
wait_for_ipv4_addresses:
description:
- If this is true, the C(lxd_container) waits until IPv4 addresses
@@ -108,23 +119,27 @@ options:
- The unix domain socket path or the https URL for the LXD server.
required: false
default: unix:/var/lib/lxd/unix.socket
type: str
snap_url:
description:
- The unix domain socket path when LXD is installed by snap package manager.
required: false
default: unix:/var/snap/lxd/common/lxd/unix.socket
type: str
client_key:
description:
- The client certificate key file path.
- If not specified, it defaults to C(${HOME}/.config/lxc/client.key).
required: false
default: '"{}/.config/lxc/client.key" .format(os.environ["HOME"])'
aliases: [ key_file ]
type: str
client_cert:
description:
- The client certificate file path.
- If not specified, it defaults to C(${HOME}/.config/lxc/client.crt).
required: false
default: '"{}/.config/lxc/client.crt" .format(os.environ["HOME"])'
aliases: [ cert_file ]
type: str
trust_password:
description:
- The client trusted password.
@@ -135,6 +150,7 @@ options:
- If trust_password is set, this module send a request for
authentication before sending any requests.
required: false
type: str
notes:
- Containers must have a unique name. If you attempt to create a container
with a name that already existed in the users namespace the module will
@@ -356,8 +372,12 @@ class LXDContainerManagement(object):
self.addresses = None
self.target = self.module.params['target']
self.key_file = self.module.params.get('client_key', None)
self.cert_file = self.module.params.get('client_cert', None)
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = '{0}/.config/lxc/client.key'.format(os.environ['HOME'])
self.cert_file = self.module.params.get('client_cert')
if self.cert_file is None:
self.cert_file = '{0}/.config/lxc/client.crt'.format(os.environ['HOME'])
self.debug = self.module._verbosity >= 4
try:
@@ -671,12 +691,10 @@ def main():
),
client_key=dict(
type='str',
default='{0}/.config/lxc/client.key'.format(os.environ['HOME']),
aliases=['key_file']
),
client_cert=dict(
type='str',
default='{0}/.config/lxc/client.crt'.format(os.environ['HOME']),
aliases=['cert_file']
),
trust_password=dict(type='str', no_log=True)
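Both lxd hunks move the ``client_key``/``client_cert`` defaults out of the argument spec and resolve them at runtime, so the documented default can stay symbolic (``${HOME}/.config/lxc/client.key``). The pattern in isolation (hypothetical helper):

```python
import os

def resolve_client_key(params):
    # Fall back to the per-user default path only when the parameter was
    # not supplied, instead of baking os.environ['HOME'] into the spec.
    key_file = params.get('client_key')
    if key_file is None:
        key_file = '{0}/.config/lxc/client.key'.format(os.environ['HOME'])
    return key_file
```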


@@ -20,9 +20,11 @@ options:
description:
- Name of a profile.
required: true
type: str
description:
description:
- Description of the profile.
type: str
config:
description:
- 'The config for the container (e.g. {"limits.memory": "4GB"}).
@@ -35,18 +37,21 @@ options:
- Not all config values are supported to apply the existing profile.
Maybe you need to delete and recreate a profile.
required: false
type: dict
devices:
description:
- 'The devices for the profile
(e.g. {"rootfs": {"path": "/dev/kvm", "type": "unix-char"}).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#patch-3)'
required: false
type: dict
new_name:
description:
- A new name of a profile.
- If this parameter is specified a profile will be renamed to this name.
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-11)
required: false
type: str
state:
choices:
- present
@@ -55,28 +60,33 @@ options:
- Define the state of a profile.
required: false
default: present
type: str
url:
description:
- The unix domain socket path or the https URL for the LXD server.
required: false
default: unix:/var/lib/lxd/unix.socket
type: str
snap_url:
description:
- The unix domain socket path when LXD is installed by snap package manager.
required: false
default: unix:/var/snap/lxd/common/lxd/unix.socket
type: str
client_key:
description:
- The client certificate key file path.
- If not specified, it defaults to C($HOME/.config/lxc/client.key).
required: false
default: '"{}/.config/lxc/client.key" .format(os.environ["HOME"])'
aliases: [ key_file ]
type: str
client_cert:
description:
- The client certificate file path.
- If not specified, it defaults to C($HOME/.config/lxc/client.crt).
required: false
default: '"{}/.config/lxc/client.crt" .format(os.environ["HOME"])'
aliases: [ cert_file ]
type: str
trust_password:
description:
- The client trusted password.
@@ -87,6 +97,7 @@ options:
- If trust_password is set, this module send a request for
authentication before sending any requests.
required: false
type: str
notes:
- Profiles must have a unique name. If you attempt to create a profile
with a name that already existed in the users namespace the module will
@@ -201,8 +212,12 @@ class LXDProfileManagement(object):
self.state = self.module.params['state']
self.new_name = self.module.params.get('new_name', None)
self.key_file = self.module.params.get('client_key', None)
self.cert_file = self.module.params.get('client_cert', None)
self.key_file = self.module.params.get('client_key')
if self.key_file is None:
self.key_file = '{0}/.config/lxc/client.key'.format(os.environ['HOME'])
self.cert_file = self.module.params.get('client_cert')
if self.cert_file is None:
self.cert_file = '{0}/.config/lxc/client.crt'.format(os.environ['HOME'])
self.debug = self.module._verbosity >= 4
try:
@@ -370,12 +385,10 @@ def main():
),
client_key=dict(
type='str',
default='{0}/.config/lxc/client.key'.format(os.environ['HOME']),
aliases=['key_file']
),
client_cert=dict(
type='str',
default='{0}/.config/lxc/client.crt'.format(os.environ['HOME']),
aliases=['cert_file']
),
trust_password=dict(type='str', no_log=True)


@@ -26,6 +26,7 @@ options:
description:
- Hostname of the machine to manage.
required: true
type: str
state:
description:
- Takes the host to the desired lifecycle state.
@@ -41,29 +42,36 @@ options:
- disabled
- offline
default: present
type: str
im_mad_name:
description:
- The name of the information manager, this values are taken from the oned.conf with the tag name IM_MAD (name)
default: kvm
type: str
vmm_mad_name:
description:
- The name of the virtual machine manager mad name, this values are taken from the oned.conf with the tag name VM_MAD (name)
default: kvm
type: str
cluster_id:
description:
- The cluster ID.
default: 0
type: int
cluster_name:
description:
- The cluster specified by name.
type: str
labels:
description:
- The labels for this host.
type: list
template:
description:
- The template or attribute changes to merge into the host template.
aliases:
- attributes
type: dict
extends_documentation_fragment:
- community.general.opennebula


@@ -39,20 +39,25 @@ options:
- It is recommended to use HTTPS so that the username/password are not
- transferred over the network unencrypted.
- If not set then the value of the C(ONE_URL) environment variable is used.
type: str
api_username:
description:
- Name of the user to login into the OpenNebula RPC server. If not set
- then the value of the C(ONE_USERNAME) environment variable is used.
type: str
api_password:
description:
- Password of the user to login into OpenNebula RPC server. If not set
- then the value of the C(ONE_PASSWORD) environment variable is used.
type: str
id:
description:
- A C(id) of the image you would like to manage.
type: int
name:
description:
- A C(name) of the image you would like to manage.
type: str
state:
description:
- C(present) - state that is used to manage the image
@@ -61,6 +66,7 @@ options:
- C(renamed) - rename the image to the C(new_name)
choices: ["present", "absent", "cloned", "renamed"]
default: present
type: str
enabled:
description:
- Whether the image should be enabled or disabled.
@@ -69,6 +75,7 @@ options:
description:
- A name that will be assigned to the existing or new image.
- In the case of cloning, by default C(new_name) will take the name of the origin image with the prefix 'Copy of'.
type: str
author:
- "Milan Ilic (@ilicmilan)"
'''


@@ -40,18 +40,22 @@ options:
- It is recommended to use HTTPS so that the username/password are not
- transferred over the network unencrypted.
- If not set then the value of the C(ONE_URL) environment variable is used.
type: str
api_username:
description:
- Name of the user to login into the OpenNebula RPC server. If not set
- then the value of the C(ONE_USERNAME) environment variable is used.
type: str
api_password:
description:
- Password of the user to login into OpenNebula RPC server. If not set
- then the value of the C(ONE_PASSWORD) environment variable is used.
type: str
ids:
description:
- A list of images ids whose facts you want to gather.
aliases: ['id']
type: list
name:
description:
- A C(name) of the image whose facts will be gathered.
@@ -59,6 +63,7 @@ options:
- which restricts the list of images (whose facts will be returned) whose names match specified regex.
- Also, if the C(name) begins with '~*' case-insensitive matching will be performed.
- See examples for more details.
type: str
author:
- "Milan Ilic (@ilicmilan)"
- "Jan Meerkamp (@meerkampdvv)"


@@ -36,24 +36,31 @@ options:
- URL of the OpenNebula OneFlow API server.
- It is recommended to use HTTPS so that the username/password are not transferred over the network unencrypted.
- If not set then the value of the ONEFLOW_URL environment variable is used.
type: str
api_username:
description:
- Name of the user to login into the OpenNebula OneFlow API server. If not set then the value of the C(ONEFLOW_USERNAME) environment variable is used.
type: str
api_password:
description:
- Password of the user to login into OpenNebula OneFlow API server. If not set then the value of the C(ONEFLOW_PASSWORD) environment variable is used.
type: str
template_name:
description:
- Name of service template to use to create a new instance of a service
type: str
template_id:
description:
- ID of a service template to use to create a new instance of a service
type: int
service_id:
description:
- ID of a service instance that you would like to manage
type: int
service_name:
description:
- Name of a service instance that you would like to manage
type: str
unique:
description:
- Setting C(unique=yes) will make sure that there is only one service instance running with a name set with C(service_name) when
@@ -66,15 +73,19 @@ options:
- C(absent) - terminate an instance of a service specified with C(service_id)/C(service_name).
choices: ["present", "absent"]
default: present
type: str
mode:
description:
- Set permission mode of a service instance in octet format, e.g. C(600) to give owner C(use) and C(manage) and nothing to group and others.
type: str
owner_id:
description:
- ID of the user which will be set as the owner of the service
type: int
group_id:
description:
- ID of the group which will be set as the group of the service
type: int
wait:
description:
- Wait for the instance to reach RUNNING state after DEPLOYING or COOLDOWN state after SCALING
@@ -84,16 +95,20 @@ options:
description:
- How long before wait gives up, in seconds
default: 300
type: int
custom_attrs:
description:
- Dictionary of key/value custom attributes which will be used when instantiating a new service.
default: {}
type: dict
role:
description:
- Name of the role whose cardinality should be changed
type: str
cardinality:
description:
- Number of VMs for the specified role
type: int
force:
description:
- Force the new cardinality even if it is outside the limits

View File

@@ -40,10 +40,12 @@ options:
- It is recommended to use HTTPS so that the username/password are not
- transferred over the network unencrypted.
- If not set then the value of the C(ONE_URL) environment variable is used.
type: str
api_username:
description:
- Name of the user to login into the OpenNebula RPC server. If not set
- then the value of the C(ONE_USERNAME) environment variable is used.
type: str
api_password:
description:
- Password of the user to login into OpenNebula RPC server. If not set
@@ -51,20 +53,25 @@ options:
- if both I(api_username) or I(api_password) are not set, then it will try
- authenticate with ONE auth file. Default path is "~/.one/one_auth".
- Set environment variable C(ONE_AUTH) to override this path.
type: str
template_name:
description:
- Name of VM template to use to create a new instance
type: str
template_id:
description:
- ID of a VM template to use to create a new instance
type: int
vm_start_on_hold:
description:
- Set to true to put vm on hold while creating
default: False
type: bool
instance_ids:
description:
- A list of instance ids used for states':' C(absent), C(running), C(rebooted), C(poweredoff)
aliases: ['ids']
type: list
state:
description:
- C(present) - create instances from a template specified with C(template_id)/C(template_name).
@@ -74,6 +81,7 @@ options:
- C(absent) - terminate instances
choices: ["present", "absent", "running", "rebooted", "poweredoff"]
default: present
type: str
hard:
description:
- Reboot, power-off or terminate instances C(hard)
@@ -92,6 +100,7 @@ options:
description:
- How long before wait gives up, in seconds
default: 300
type: int
attributes:
description:
- A dictionary of key/value attributes to add to new instances, or for
@@ -104,61 +113,75 @@ options:
- When used with C(count_attributes) and C(exact_count) the module will
- match the base name without the index part.
default: {}
type: dict
labels:
description:
- A list of labels to associate with new instances, or for setting
- C(state) of instances with these labels.
default: []
type: list
count_attributes:
description:
- A dictionary of key/value attributes that can only be used with
- C(exact_count) to determine how many nodes based on a specific
- attributes criteria should be deployed. This can be expressed in
- multiple ways and is shown in the EXAMPLES section.
type: dict
count_labels:
description:
- A list of labels that can only be used with C(exact_count) to determine
- how many nodes based on a specific labels criteria should be deployed.
- This can be expressed in multiple ways and is shown in the EXAMPLES
- section.
type: list
count:
description:
- Number of instances to launch
default: 1
type: int
exact_count:
description:
- Indicates how many instances that match C(count_attributes) and
- C(count_labels) parameters should be deployed. Instances are either
- created or terminated based on this value.
- NOTE':' Instances with the least IDs will be terminated first.
type: int
mode:
description:
- Set permission mode of the instance in octet format, e.g. C(600) to give owner C(use) and C(manage) and nothing to group and others.
type: str
owner_id:
description:
- ID of the user which will be set as the owner of the instance
type: int
group_id:
description:
- ID of the group which will be set as the group of the instance
type: int
memory:
description:
- The size of the memory for new instances (in MB, GB, ...)
type: str
disk_size:
description:
- The size of the disk created for new instances (in MB, GB, TB,...).
- NOTE':' If the template has multiple disks, the order of the sizes is
- matched against the order specified in C(template_id)/C(template_name).
type: list
cpu:
description:
- Percentage of CPU divided by 100 required for the new instance. Half a
- processor is written 0.5.
type: float
vcpu:
description:
- Number of CPUs (cores) new VM will have.
type: int
networks:
description:
- A list of dictionaries with network parameters. See examples for more details.
default: []
type: list
disk_saveas:
description:
- Creates an image from a VM disk.
@@ -167,6 +190,7 @@ options:
- I(NOTE)':' This operation will only be performed on the first VM (if more than one VM ID is passed)
- and the VM has to be in the C(poweredoff) state.
- Also this operation will fail if an image with specified C(name) already exists.
type: dict
persistent:
description:
- Create a private persistent copy of the template plus any image defined in DISK, and instantiate that copy.
@@ -177,10 +201,12 @@ options:
description:
- Name of Datastore to use to create a new instance
version_added: '0.2.0'
type: int
datastore_name:
description:
- Name of Datastore to use to create a new instance
version_added: '0.2.0'
type: str
author:
- "Milan Ilic (@ilicmilan)"
- "Jan Meerkamp (@meerkampdvv)"

View File

@@ -43,9 +43,11 @@ options:
- "Type of the external provider."
choices: ['os_image', 'os_network', 'os_volume', 'foreman']
required: true
type: str
name:
description:
- "Name of the external provider, can be used as glob expression."
type: str
extends_documentation_fragment:
- community.general.ovirt_facts
@@ -110,11 +112,8 @@ def main():
argument_spec = ovirt_info_full_argument_spec(
name=dict(default=None, required=False),
type=dict(
default=None,
required=True,
choices=[
'os_image', 'os_network', 'os_volume', 'foreman',
],
choices=['os_image', 'os_network', 'os_volume', 'foreman'],
aliases=['provider'],
),
)

View File

@@ -41,7 +41,6 @@ options:
id:
description:
- "ID of the scheduling policy."
required: true
name:
description:
- "Name of the scheduling policy, can be used as glob expression."
@@ -77,7 +76,6 @@ ovirt_scheduling_policies:
import fnmatch
import traceback
from ansible.module_utils.common.removed import removed_module
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils._ovirt import (
check_sdk,

View File

@@ -48,7 +48,6 @@ options:
project_id:
description:
- UUID of a project of the device to/from which to assign/remove a subnet.
required: True
type: str
device_count:
@@ -63,6 +62,7 @@ options:
- IPv4 or IPv6 subnet which you want to manage. It must come from a reserved block for your project in the Packet Host.
aliases: [name]
type: str
required: true
state:
description:

View File

@@ -45,6 +45,7 @@ options:
description:
- UUID of the project to which the device and volume belong.
type: str
required: true
volume:
description:
@@ -52,6 +53,7 @@ options:
- It can be a UUID, an API-generated volume name, or user-defined description string.
- 'Example values: 4a347482-b546-4f67-8300-fb5018ef0c5, volume-4a347482, "my volume"'
type: str
required: true
device:
description:

View File

@@ -17,13 +17,16 @@ options:
description:
- The datacenter in which to operate.
type: str
required: true
server:
description:
- The server name or ID.
type: str
required: true
name:
description:
- The name or ID of the NIC. This is only required on deletes, but not on create.
- If not specified, it defaults to a value based on UUID4.
type: str
lan:
description:
@@ -33,12 +36,12 @@ options:
description:
- The ProfitBricks username. Overrides the PB_SUBSCRIPTION_ID environment variable.
type: str
required: false
required: true
subscription_password:
description:
- The ProfitBricks password. Overrides the PB_PASSWORD environment variable.
type: str
required: false
required: true
wait:
description:
- wait for the operation to complete before returning
@@ -97,6 +100,10 @@ uuid_match = re.compile(
r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _make_default_name():
return str(uuid.uuid4()).replace('-', '')[:10]
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
@@ -134,6 +141,8 @@ def create_nic(module, profitbricks):
server = module.params.get('server')
lan = module.params.get('lan')
name = module.params.get('name')
if name is None:
name = _make_default_name()
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
@@ -184,6 +193,8 @@ def delete_nic(module, profitbricks):
datacenter = module.params.get('datacenter')
server = module.params.get('server')
name = module.params.get('name')
if name is None:
name = _make_default_name()
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
@@ -230,30 +241,25 @@ def delete_nic(module, profitbricks):
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
server=dict(),
name=dict(default=str(uuid.uuid4()).replace('-', '')[:10]), # @FIXME please do not do that
datacenter=dict(required=True),
server=dict(required=True),
name=dict(),
lan=dict(),
subscription_user=dict(),
subscription_password=dict(no_log=True),
subscription_user=dict(required=True),
subscription_password=dict(required=True, no_log=True),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
state=dict(default='present'),
),
required_if=(
('state', 'absent', ['name']),
('state', 'present', ['lan']),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
if not module.params.get('subscription_user'): # @ FIXME use required in argument_spec, same for lines below
module.fail_json(msg='subscription_user parameter is required')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is required')
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required')
if not module.params.get('server'):
module.fail_json(msg='server parameter is required')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
@@ -264,9 +270,6 @@ def main():
state = module.params.get('state')
if state == 'absent':
if not module.params.get('name'):
module.fail_json(msg='name parameter is required')
try:
(changed) = delete_nic(module, profitbricks)
module.exit_json(changed=changed)
@@ -274,9 +277,6 @@ def main():
module.fail_json(msg='failed to set nic state: %s' % str(e))
elif state == 'present':
if not module.params.get('lan'):
module.fail_json(msg='lan parameter is required')
try:
(nic_dict) = create_nic(module, profitbricks)
module.exit_json(nics=nic_dict) # @FIXME changed not calculated?

View File

@@ -26,10 +26,12 @@ options:
default: zones
description:
- zpool to import to or delete images from.
type: str
source:
required: false
description:
- URI for the image source.
type: str
state:
required: true
choices: [ present, absent, deleted, imported, updated, vacuumed ]
@@ -37,16 +39,22 @@ options:
- State the object operated on should be in. C(imported) is an alias
for C(present) and C(deleted) for C(absent). When set to C(vacuumed)
and C(uuid) to C(*), it will remove all unused images.
type: str
type:
required: false
choices: [ imgapi, docker, dsapi ]
default: imgapi
description:
- Type for image sources.
type: str
uuid:
required: false
description:
- Image UUID. Can either be a full UUID or C(*) for all images.
type: str
requirements:
- python >= 2.6
'''
@@ -260,12 +268,12 @@ class Imgadm(object):
def main():
module = AnsibleModule(
argument_spec=dict(
force=dict(default=None, type='bool'),
force=dict(type='bool'),
pool=dict(default='zones'),
source=dict(default=None),
state=dict(default=None, required=True, choices=['present', 'absent', 'deleted', 'imported', 'updated', 'vacuumed']),
source=dict(),
state=dict(required=True, choices=['present', 'absent', 'deleted', 'imported', 'updated', 'vacuumed']),
type=dict(default='imgapi', choices=['imgapi', 'docker', 'dsapi']),
uuid=dict(default=None)
uuid=dict()
),
# This module relies largely on imgadm(1M) to enforce idempotency, which does not
# provide a "noop" (or equivalent) mode to do a dry-run.

View File

@@ -21,193 +21,235 @@ options:
description:
- When enabled, the zone dataset will be mounted on C(/zones/archive)
upon removal.
type: bool
autoboot:
required: false
description:
- Whether or not a VM is booted when the system is rebooted.
type: bool
brand:
required: true
choices: [ joyent, joyent-minimal, lx, kvm, bhyve ]
default: joyent
description:
- Type of virtual machine. The C(bhyve) option was added in community.general 0.2.0.
type: str
boot:
required: false
description:
- Set the boot order for KVM VMs.
type: str
cpu_cap:
required: false
description:
- Sets a limit on the amount of CPU time that can be used by a VM.
Use C(0) for no cap.
type: int
cpu_shares:
required: false
description:
- Sets a limit on the number of fair share scheduler (FSS) CPU shares for
a VM. This limit is relative to all other VMs on the system.
type: int
cpu_type:
required: false
choices: [ qemu64, host ]
default: qemu64
description:
- Control the type of virtual CPU exposed to KVM VMs.
type: str
customer_metadata:
required: false
description:
- Metadata to be set and associated with this VM, this contains customer
modifiable keys.
type: dict
delegate_dataset:
required: false
description:
- Whether to delegate a ZFS dataset to an OS VM.
type: bool
disk_driver:
required: false
description:
- Default value for a virtual disk model for KVM guests.
type: str
disks:
required: false
description:
- A list of disks to add, valid properties are documented in vmadm(1M).
type: list
dns_domain:
required: false
description:
- Domain value for C(/etc/hosts).
type: str
docker:
required: false
description:
- Docker images need this flag enabled along with the I(brand) set to C(lx).
type: bool
filesystems:
required: false
description:
- Mount additional filesystems into an OS VM.
type: list
firewall_enabled:
required: false
description:
- Enables the firewall, allowing fwadm(1M) rules to be applied.
type: bool
force:
required: false
description:
- Force a particular action (i.e. stop or delete a VM).
type: bool
fs_allowed:
required: false
description:
- Comma separated list of filesystem types this zone is allowed to mount.
type: str
hostname:
required: false
description:
- Zone/VM hostname.
type: str
image_uuid:
required: false
description:
- Image UUID.
type: str
indestructible_delegated:
required: false
description:
- Adds an C(@indestructible) snapshot to delegated datasets.
type: bool
indestructible_zoneroot:
required: false
description:
- Adds an C(@indestructible) snapshot to zoneroot.
type: bool
internal_metadata:
required: false
description:
- Metadata to be set and associated with this VM, this contains operator
generated keys.
type: dict
internal_metadata_namespace:
required: false
description:
- List of namespaces to be set as I(internal_metadata-only); these namespaces
will come from I(internal_metadata) rather than I(customer_metadata).
type: str
kernel_version:
required: false
description:
- Kernel version to emulate for LX VMs.
type: str
limit_priv:
required: false
description:
- Set (comma separated) list of privileges the zone is allowed to use.
type: str
maintain_resolvers:
required: false
description:
- Resolvers in C(/etc/resolv.conf) will be updated when updating
the I(resolvers) property.
type: bool
max_locked_memory:
required: false
description:
- Total amount of memory (in MiBs) on the host that can be locked by this VM.
type: int
max_lwps:
required: false
description:
- Maximum number of lightweight processes this VM is allowed to have running.
type: int
max_physical_memory:
required: false
description:
- Maximum amount of memory (in MiBs) on the host that the VM is allowed to use.
type: int
max_swap:
required: false
description:
- Maximum amount of virtual memory (in MiBs) the VM is allowed to use.
type: int
mdata_exec_timeout:
required: false
description:
- Timeout in seconds (or 0 to disable) for the C(svc:/smartdc/mdata:execute) service
that runs user-scripts in the zone.
type: int
name:
required: false
aliases: [ alias ]
description:
- Name of the VM. vmadm(1M) uses this as an optional name.
type: str
nic_driver:
required: false
description:
- Default value for a virtual NIC model for KVM guests.
type: str
nics:
required: false
description:
- A list of nics to add, valid properties are documented in vmadm(1M).
type: list
nowait:
required: false
description:
- Consider the provisioning complete when the VM first starts, rather than
when the VM has rebooted.
type: bool
qemu_opts:
required: false
description:
- Additional qemu arguments for KVM guests. This overwrites the default arguments
provided by vmadm(1M) and should only be used for debugging.
type: str
qemu_extra_opts:
required: false
description:
- Additional qemu cmdline arguments for KVM guests.
type: str
quota:
required: false
description:
- Quota on zone filesystems (in MiBs).
type: int
ram:
required: false
description:
- Amount of virtual RAM for a KVM guest (in MiBs).
type: int
resolvers:
required: false
description:
- List of resolvers to be put into C(/etc/resolv.conf).
type: list
routes:
required: false
description:
- Dictionary that maps destinations to gateways, these will be set as static
routes in the VM.
type: dict
spice_opts:
required: false
description:
- Additional options for SPICE-enabled KVM VMs.
type: str
spice_password:
required: false
description:
- Password required to connect to SPICE. By default no password is set.
Please note this can be read from the Global Zone.
type: str
state:
required: true
choices: [ present, absent, stopped, restarted ]
choices: [ present, running, absent, deleted, stopped, created, restarted, rebooted ]
default: running
description:
- States for the VM to be in. Please note that C(present), C(stopped) and C(restarted)
operate on a VM that is currently provisioned. C(present) means that the VM will be
@@ -215,74 +257,91 @@ options:
shutdown the zone before removing it.
C(stopped) means the zone will be created if it doesn't exist already, before shutting
it down.
type: str
tmpfs:
required: false
description:
- Amount of memory (in MiBs) that will be available in the VM for the C(/tmp) filesystem.
type: int
uuid:
required: false
description:
- UUID of the VM. Can either be a full UUID or C(*) for all VMs.
type: str
vcpus:
required: false
description:
- Number of virtual CPUs for a KVM guest.
type: int
vga:
required: false
description:
- Specify VGA emulation used by KVM VMs.
type: str
virtio_txburst:
required: false
description:
- Number of packets that can be sent in a single flush of the tx queue of virtio NICs.
type: int
virtio_txtimer:
required: false
description:
- Timeout (in nanoseconds) for the TX timer of virtio NICs.
type: int
vnc_password:
required: false
description:
- Password required to connect to VNC. By default no password is set.
Please note this can be read from the Global Zone.
type: str
vnc_port:
required: false
description:
- TCP port for the VNC server to listen on. Set C(0) for random,
or C(-1) to disable.
type: int
zfs_data_compression:
required: false
description:
- Specifies compression algorithm used for this VM's data dataset. This option
only has effect on delegated datasets.
type: str
zfs_data_recsize:
required: false
description:
- Suggested block size (power of 2) for files in the delegated dataset's filesystem.
type: int
zfs_filesystem_limit:
required: false
description:
- Maximum number of filesystems the VM can have.
type: int
zfs_io_priority:
required: false
description:
- IO throttle priority value relative to other VMs.
type: int
zfs_root_compression:
required: false
description:
- Specifies compression algorithm used for this VM's root dataset. This option
only has effect on the zoneroot dataset.
type: str
zfs_root_recsize:
required: false
description:
- Suggested block size (power of 2) for files in the zoneroot dataset's filesystem.
type: int
zfs_snapshot_limit:
required: false
description:
- Number of snapshots the VM can have.
type: int
zpool:
required: false
description:
- ZFS pool the VM's zone dataset will be created in.
type: str
requirements:
- python >= 2.6
'''
@@ -497,17 +556,11 @@ def set_vm_state(module, vm_uuid, vm_state):
def create_payload(module, uuid):
# Create the JSON payload (vmdef) and return the filename.
p = module.params
# Filter out the few options that are not valid VM properties.
module_options = ['debug', 'force', 'state']
vmattrs = filter(lambda prop: prop not in module_options, p)
vmdef = {}
for attr in vmattrs:
if p[attr]:
vmdef[attr] = p[attr]
# @TODO make this a simple {} comprehension as soon as py2 is ditched
# @TODO {k: v for k, v in p.items() if k not in module_options}
vmdef = dict([(k, v) for k, v in module.params.items() if k not in module_options and v])
try:
vmdef_json = json.dumps(vmdef)
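The `@TODO` in the diff points at a plain dict comprehension once Python 2 support is dropped; a self-contained sketch of the same filtering (the parameter names here are made up for illustration):

```python
import json

# Options consumed by the module itself, not valid vmadm(1M) VM properties.
MODULE_OPTIONS = ('debug', 'force', 'state')

def build_vmdef(params):
    """Keep only truthy params that are real VM properties."""
    return {k: v for k, v in params.items()
            if k not in MODULE_OPTIONS and v}

params = {'state': 'running', 'force': False, 'ram': 1024,
          'autoboot': True, 'boot': None}
vmdef = build_vmdef(params)
payload = json.dumps(vmdef, sort_keys=True)
```

Note the truthiness test (`and v`) also drops falsy values such as C(0) or C(false), a quirk inherited from the original `if p[attr]` loop rather than a deliberate change here.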

View File

@@ -23,22 +23,22 @@ options:
credentials_path:
description:
- (String) Optional parameter that allows to set a non-default credentials path.
Default is ~/.spotinst/credentials
- (Path) Optional parameter that allows to set a non-default credentials path.
default: ~/.spotinst/credentials
type: path
account_id:
description:
- (String) Optional parameter that allows setting an account-id inside the module configuration.
By default this is retrieved from the credentials path.
type: str
availability_vs_cost:
choices:
- availabilityOriented
- costOriented
- balanced
description:
- (String) The strategy orientation.
- "The choices available are: C(availabilityOriented), C(costOriented), C(balanced)."
required: true
type: str
availability_zones:
description:
@@ -49,6 +49,7 @@ options:
subnet_id (String),
placement_group_name (String),
required: true
type: list
block_device_mappings:
description:
@@ -66,6 +67,7 @@ options:
snapshot_id(Integer),
volume_type(String),
volume_size(Integer))
type: list
chef:
description:
@@ -75,10 +77,12 @@ options:
user (String),
pem_key (String),
chef_version (String)
type: dict
draining_timeout:
description:
- (Integer) Time for instance to be drained from incoming requests and deregistered from ELB before termination.
type: int
ebs_optimized:
description:
@@ -93,67 +97,72 @@ options:
keys allowed are -
volume_ids (List of Strings),
device_name (String)
type: list
ecs:
description:
- (Object) The ECS integration configuration.;
Expects the following key -
cluster_name (String)
type: dict
elastic_ips:
description:
- (List of Strings) List of ElasticIps Allocation Ids (Example C(eipalloc-9d4e16f8)) to associate to the group instances
type: list
fallback_to_od:
description:
- (Boolean) In case of no spots available, Elastigroup will launch an On-demand instance instead
type: bool
health_check_grace_period:
description:
- (Integer) The amount of time, in seconds, after the instance has launched to start and check its health.
default: 300
- If not specified, it defaults to C(300).
type: int
health_check_unhealthy_duration_before_replacement:
description:
- (Integer) Minimal amount of time instance should be unhealthy for us to consider it unhealthy.
type: int
health_check_type:
choices:
- ELB
- HCS
- TARGET_GROUP
- MLB
- EC2
description:
- (String) The service to use for the health check.
- "The choices available are: C(ELB), C(HCS), C(TARGET_GROUP), C(MLB), C(EC2)."
type: str
iam_role_name:
description:
- (String) The instance profile iamRole name
- Only use iam_role_arn, or iam_role_name
type: str
iam_role_arn:
description:
- (String) The instance profile iamRole arn
- Only use iam_role_arn, or iam_role_name
type: str
id:
description:
- (String) The group id if it already exists and you want to update, or delete it.
This will not work unless the uniqueness_by field is set to id.
When this is set, and the uniqueness_by field is set, the group will either be updated or deleted, but not created.
type: str
image_id:
description:
- (String) The image Id used to launch the instance.;
In case of conflict between Instance type and image type, an error will be returned
required: true
type: str
key_pair:
description:
- (String) Specify a Key Pair to attach to the instances
required: true
type: str
kubernetes:
description:
@@ -161,40 +170,47 @@ options:
Expects the following keys -
api_server (String),
token (String)
type: dict
lifetime_period:
description:
- (String) lifetime period
- (Integer) lifetime period
type: int
load_balancers:
description:
- (List of Strings) List of classic ELB names
type: list
max_size:
description:
- (Integer) The upper limit number of instances that you can scale up to
required: true
type: int
mesosphere:
description:
- (Object) The Mesosphere integration configuration.
Expects the following key -
api_server (String)
type: dict
min_size:
description:
- (Integer) The lower limit number of instances that you can scale down to
required: true
type: int
monitoring:
description:
- (Boolean) Describes whether instance Enhanced Monitoring is enabled
required: true
- (String) Describes whether instance Enhanced Monitoring is enabled
type: str
name:
description:
- (String) Unique name for elastigroup to be created, updated or deleted
required: true
type: str
network_interfaces:
description:
@@ -212,23 +228,26 @@ options:
subnet_id (String),
associate_ipv6_address (Boolean),
private_ip_addresses (List of Objects, Keys are privateIpAddress (String, required) and primary (Boolean))
type: list
on_demand_count:
description:
- (Integer) Required if risk is not set
- Number of on demand instances to launch. All other instances will be spot instances.;
Either set this parameter or the risk parameter
type: int
on_demand_instance_type:
description:
- (String) On-demand instance type that will be provisioned
required: true
type: str
opsworks:
description:
- (Object) The elastigroup OpsWorks integration configuration.;
Expects the following key -
layer_id (String)
type: dict
persistence:
description:
@@ -237,18 +256,14 @@ options:
should_persist_root_device (Boolean),
should_persist_block_devices (Boolean),
should_persist_private_ip (Boolean)
type: dict
product:
choices:
- Linux/UNIX
- SUSE Linux
- Windows
- Linux/UNIX (Amazon VPC)
- SUSE Linux (Amazon VPC)
- Windows
description:
- (String) Operation system type._
- (String) Operation system type.
- "Available choices are: C(Linux/UNIX), C(SUSE Linux), C(Windows), C(Linux/UNIX (Amazon VPC)), C(SUSE Linux (Amazon VPC))."
required: true
type: str
rancher:
description:
@@ -258,6 +273,7 @@ options:
access_key (String),
secret_key (String),
master_host (String)
type: dict
right_scale:
description:
@@ -265,10 +281,12 @@ options:
Expects the following keys -
account_id (String),
refresh_token (String)
type: dict
risk:
description:
- (Integer) required if on demand is not set. The percentage of Spot instances to launch (0 - 100).
type: int
roll_config:
description:
@@ -278,6 +296,7 @@ options:
batch_size_percentage(Integer, Required),
grace_period - (Integer, Required),
health_check_type(String, Optional)
type: dict
scheduled_tasks:
description:
@@ -295,17 +314,20 @@ options:
grace_period (Integer),
task_type (String, required),
is_enabled (Boolean)
type: list
security_group_ids:
description:
- (List of Strings) One or more security group IDs. ;
In case of update it will override the existing Security Group with the new given array
required: true
type: list
shutdown_script:
description:
- (String) The Base64-encoded shutdown script that executes prior to instance termination.
Encode before setting.
type: str
signals:
description:
@@ -313,15 +335,18 @@ options:
keys allowed are -
name (String, required),
timeout (Integer)
type: list
spin_up_time:
description:
- (Integer) spin up time, in seconds, for the instance
type: int
spot_instance_types:
description:
- (List of Strings) Spot instance type that will be provisioned.
required: true
type: list
state:
choices:
@@ -329,38 +354,41 @@ options:
- absent
description:
- (String) create or delete the elastigroup
default: present
type: str
tags:
description:
- (List of tagKey:tagValue paris) a list of tags to configure in the elastigroup. Please specify list of keys and values (key colon value);
- (List of tagKey:tagValue pairs) a list of tags to configure in the elastigroup. Please specify list of keys and values (key colon value);
type: list
target:
description:
- (Integer) The number of instances to launch
required: true
type: int
target_group_arns:
description:
- (List of Strings) List of target group arns instances should be registered to
type: list
tenancy:
choices:
- default
- dedicated
description:
- (String) dedicated vs shared tenancy
- (String) dedicated vs shared tenancy.
- "The available choices are: C(default), C(dedicated)."
type: str
terminate_at_end_of_billing_hour:
description:
- (Boolean) terminate at the end of billing hour
type: bool
unit:
choices:
- instance
- weight
description:
- (String) The capacity unit to launch instances by.
required: true
- "The available choices are: C(instance), C(weight)."
type: str
up_scaling_policies:
description:
@@ -384,7 +412,7 @@ options:
target (String),
maximum (String),
minimum (String)
type: list
down_scaling_policies:
description:
@@ -408,6 +436,7 @@ options:
target (String),
maximum (String),
minimum (String)
type: list
target_tracking_policies:
description:
@@ -422,6 +451,7 @@ options:
unit (String, required),
cooldown (String, required),
target (String, required)
type: list
uniqueness_by:
choices:
@@ -430,12 +460,13 @@ options:
description:
- (String) If your group names are not unique, you may use this feature to update or delete a specific group.
Whenever this property is set, you must set a group_id in order to update or delete a group, otherwise a group will be created.
default: name
type: str
user_data:
description:
- (String) Base64-encoded MIME user data. Encode before setting the value.
type: str
utilize_reserved_instances:
description:
@@ -447,11 +478,13 @@ options:
description:
- (Boolean) Whether or not the elastigroup creation / update actions should wait for the instances to spin
type: bool
default: false
wait_timeout:
description:
- (Integer) How long the module should wait for instances before failing the action.;
Only works if wait_for_instances is True.
type: int
'''
EXAMPLES = '''
@@ -899,7 +932,6 @@ multai_fields = ('multai_token',)
def handle_elastigroup(client, module):
has_changed = False
should_create = False
group_id = None
message = 'None'
@@ -992,7 +1024,7 @@ def retrieve_group_instances(client, module, group_id):
healthy_instances = client.get_instance_healthiness(group_id=group_id)
for healthy_instance in healthy_instances:
if(healthy_instance.get('healthStatus') == 'HEALTHY'):
if healthy_instance.get('healthStatus') == 'HEALTHY':
amount_of_fulfilled_instances += 1
instances.append(healthy_instance)
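The loop above, minus the redundant parentheses the diff removes, reduces to a simple filter; a sketch using hypothetical instance dicts shaped like the health-status records the loop consumes:

```python
def fulfilled_instances(healthy_instances):
    """Collect instances whose reported healthStatus is HEALTHY."""
    instances = [i for i in healthy_instances
                 if i.get('healthStatus') == 'HEALTHY']
    return len(instances), instances

count, healthy = fulfilled_instances([
    {'instanceId': 'i-1', 'healthStatus': 'HEALTHY'},
    {'instanceId': 'i-2', 'healthStatus': 'UNHEALTHY'},
    {'instanceId': 'i-3'},  # no status reported yet
])
```

Using `.get()` keeps instances without a reported status from raising a `KeyError`, matching the original loop's behavior.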

View File

@@ -39,11 +39,11 @@ options:
- Corresponding DNS zone for this record, e.g. example.com.
type:
required: true
choices: [ host_record, alias, ptr_record, srv_record, txt_record ]
description:
- "Define the record type. C(host_record) is a A or AAAA record,
C(alias) is a CNAME, C(ptr_record) is a PTR record, C(srv_record)
is a SRV record and C(txt_record) is a TXT record."
- "The available choices are: C(host_record), C(alias), C(ptr_record), C(srv_record), C(txt_record)."
data:
required: false
default: []

View File

@@ -29,9 +29,9 @@ options:
- Whether the dns zone is present or not.
type:
required: true
choices: [ forward_zone, reverse_zone ]
description:
- Define if the zone is a forward or reverse DNS zone.
- "The available choices are: C(forward_zone), C(reverse_zone)."
zone:
required: true
description:

View File

@@ -27,27 +27,34 @@ options:
choices: [ present, absent ]
description:
- Whether the group is present or not.
type: str
name:
required: true
description:
- Name of the posix group.
type: str
description:
required: false
description:
- Group description.
type: str
position:
required: false
description:
- Define the whole LDAP position of the group, e.g.
C(cn=g123m-1A,cn=classes,cn=schueler,cn=groups,ou=schule,dc=example,dc=com).
type: str
ou:
required: false
description:
- LDAP OU, e.g. school for LDAP OU C(ou=school,dc=example,dc=com).
type: str
subpath:
required: false
description:
- Subpath inside the OU, e.g. C(cn=classes,cn=students,cn=groups).
type: str
default: "cn=groups"
'''

View File

@@ -27,267 +27,298 @@ options:
choices: [ present, absent ]
description:
- Whether the share is present or not.
type: str
name:
required: true
description:
- Name
type: str
host:
required: false
description:
- Host FQDN (server which provides the share), e.g. C({{
ansible_fqdn }}). Required if C(state=present).
type: str
path:
required: false
description:
- Directory on the providing server, e.g. C(/home). Required if C(state=present).
samba_name:
type: path
sambaName:
required: false
description:
- Windows name. Required if C(state=present).
aliases: [ sambaName ]
type: str
aliases: [ samba_name ]
ou:
required: true
description:
- Organisational unit, inside the LDAP Base DN.
type: str
owner:
default: 0
default: '0'
description:
- Directory owner of the share's root directory.
type: str
group:
default: '0'
description:
- Directory owner group of the share's root directory.
type: str
directorymode:
default: '00755'
description:
- Permissions for the share's root directory.
type: str
root_squash:
default: '1'
choices: [ '0', '1' ]
default: true
description:
- Modify user ID for root user (root squashing).
type: bool
subtree_checking:
default: '1'
choices: [ '0', '1' ]
default: true
description:
- Subtree checking.
type: bool
sync:
default: 'sync'
description:
- NFS synchronisation.
type: str
writeable:
default: '1'
choices: [ '0', '1' ]
default: true
description:
- NFS write access.
samba_block_size:
type: bool
sambaBlockSize:
description:
- Blocking size.
aliases: [ sambaBlockSize ]
samba_blocking_locks:
default: '1'
choices: [ '0', '1' ]
type: str
aliases: [ samba_block_size ]
sambaBlockingLocks:
default: true
description:
- Blocking locks.
aliases: [ sambaBlockingLocks ]
type: bool
aliases: [ samba_blocking_locks ]
sambaBrowseable:
description:
- Show in Windows network environment.
type: bool
default: True
aliases: [ samba_browsable ]
samba_create_mode:
sambaCreateMode:
default: '0744'
description:
- File mode.
aliases: [ sambaCreateMode ]
samba_csc_policy:
type: str
aliases: [ samba_create_mode ]
sambaCscPolicy:
default: 'manual'
description:
- Client-side caching policy.
aliases: [ sambaCscPolicy ]
samba_custom_settings:
type: str
aliases: [ samba_csc_policy ]
sambaCustomSettings:
default: []
description:
- Option name in smb.conf and its value.
aliases: [ sambaCustomSettings ]
samba_directory_mode:
type: list
aliases: [ samba_custom_settings ]
sambaDirectoryMode:
default: '0755'
description:
- Directory mode.
aliases: [ sambaDirectoryMode ]
samba_directory_security_mode:
type: str
aliases: [ samba_directory_mode ]
sambaDirectorySecurityMode:
default: '0777'
description:
- Directory security mode.
aliases: [ sambaDirectorySecurityMode ]
samba_dos_filemode:
default: '0'
choices: [ '0', '1' ]
type: str
aliases: [ samba_directory_security_mode ]
sambaDosFilemode:
default: false
description:
- Users with write access may modify permissions.
aliases: [ sambaDosFilemode ]
samba_fake_oplocks:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_dos_filemode ]
sambaFakeOplocks:
default: false
description:
- Fake oplocks.
aliases: [ sambaFakeOplocks ]
samba_force_create_mode:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_fake_oplocks ]
sambaForceCreateMode:
default: false
description:
- Force file mode.
aliases: [ sambaForceCreateMode ]
samba_force_directory_mode:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_force_create_mode ]
sambaForceDirectoryMode:
default: false
description:
- Force directory mode.
aliases: [ sambaForceDirectoryMode ]
samba_force_directory_security_mode:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_force_directory_mode ]
sambaForceDirectorySecurityMode:
default: false
description:
- Force directory security mode.
aliases: [ sambaForceDirectorySecurityMode ]
samba_force_group:
type: bool
aliases: [ samba_force_directory_security_mode ]
sambaForceGroup:
description:
- Force group.
aliases: [ sambaForceGroup ]
samba_force_security_mode:
default: '0'
choices: [ '0', '1' ]
type: str
aliases: [ samba_force_group ]
sambaForceSecurityMode:
default: false
description:
- Force security mode.
aliases: [ sambaForceSecurityMode ]
samba_force_user:
type: bool
aliases: [ samba_force_security_mode ]
sambaForceUser:
description:
- Force user.
aliases: [ sambaForceUser ]
samba_hide_files:
type: str
aliases: [ samba_force_user ]
sambaHideFiles:
description:
- Hide files.
aliases: [ sambaHideFiles ]
samba_hide_unreadable:
default: '0'
choices: [ '0', '1' ]
type: str
aliases: [ samba_hide_files ]
sambaHideUnreadable:
default: false
description:
- Hide unreadable files/directories.
aliases: [ sambaHideUnreadable ]
samba_hosts_allow:
type: bool
aliases: [ samba_hide_unreadable ]
sambaHostsAllow:
default: []
description:
- Allowed host/network.
aliases: [ sambaHostsAllow ]
samba_hosts_deny:
type: list
aliases: [ samba_hosts_allow ]
sambaHostsDeny:
default: []
description:
- Denied host/network.
aliases: [ sambaHostsDeny ]
samba_inherit_acls:
default: '1'
choices: [ '0', '1' ]
type: list
aliases: [ samba_hosts_deny ]
sambaInheritAcls:
default: true
description:
- Inherit ACLs.
aliases: [ sambaInheritAcls ]
samba_inherit_owner:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_inherit_acls ]
sambaInheritOwner:
default: false
description:
- Create files/directories with the owner of the parent directory.
aliases: [ sambaInheritOwner ]
samba_inherit_permissions:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_inherit_owner ]
sambaInheritPermissions:
default: false
description:
- Create files/directories with permissions of the parent directory.
aliases: [ sambaInheritPermissions ]
samba_invalid_users:
type: bool
aliases: [ samba_inherit_permissions ]
sambaInvalidUsers:
description:
- Invalid users or groups.
aliases: [ sambaInvalidUsers ]
samba_level_2_oplocks:
default: '1'
choices: [ '0', '1' ]
type: str
aliases: [ samba_invalid_users ]
sambaLevel2Oplocks:
default: true
description:
- Level 2 oplocks.
aliases: [ sambaLevel2Oplocks ]
samba_locking:
default: '1'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_level_2_oplocks ]
sambaLocking:
default: true
description:
- Locking.
aliases: [ sambaLocking ]
samba_msdfs_root:
default: '0'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_locking ]
sambaMSDFSRoot:
default: false
description:
- MSDFS root.
aliases: [ sambaMSDFSRoot ]
samba_nt_acl_support:
default: '1'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_msdfs_root ]
sambaNtAclSupport:
default: true
description:
- NT ACL support.
aliases: [ sambaNtAclSupport ]
samba_oplocks:
default: '1'
choices: [ '0', '1' ]
type: bool
aliases: [ samba_nt_acl_support ]
sambaOplocks:
default: true
description:
- Oplocks.
aliases: [ sambaOplocks ]
samba_postexec:
type: bool
aliases: [ samba_oplocks ]
sambaPostexec:
description:
- Postexec script.
aliases: [ sambaPostexec ]
samba_preexec:
type: str
aliases: [ samba_postexec ]
sambaPreexec:
description:
- Preexec script.
aliases: [ sambaPreexec ]
samba_public:
default: '0'
choices: [ '0', '1' ]
type: str
aliases: [ samba_preexec ]
sambaPublic:
default: false
description:
- Allow anonymous read-only access with a guest user.
aliases: [ sambaPublic ]
samba_security_mode:
type: bool
aliases: [ samba_public ]
sambaSecurityMode:
default: '0777'
description:
- Security mode.
aliases: [ sambaSecurityMode ]
samba_strict_locking:
type: str
aliases: [ samba_security_mode ]
sambaStrictLocking:
default: 'Auto'
description:
- Strict locking.
aliases: [ sambaStrictLocking ]
samba_vfs_objects:
type: str
aliases: [ samba_strict_locking ]
sambaVFSObjects:
description:
- VFS objects.
aliases: [ sambaVFSObjects ]
samba_valid_users:
type: str
aliases: [ samba_vfs_objects ]
sambaValidUsers:
description:
- Valid users or groups.
aliases: [ sambaValidUsers ]
samba_write_list:
type: str
aliases: [ samba_valid_users ]
sambaWriteList:
description:
- Restrict write access to these users/groups.
aliases: [ sambaWriteList ]
samba_writeable:
default: '1'
choices: [ '0', '1' ]
type: str
aliases: [ samba_write_list ]
sambaWriteable:
default: true
description:
- Samba write access.
aliases: [ sambaWriteable ]
type: bool
aliases: [ samba_writeable ]
nfs_hosts:
default: []
description:
- Only allow access for this host, IP address or network.
nfs_custom_settings:
type: list
nfsCustomSettings:
default: []
description:
- Option name in exports file.
aliases: [ nfsCustomSettings ]
type: list
aliases: [ nfs_custom_settings ]
'''

View File

@@ -27,202 +27,250 @@ options:
choices: [ present, absent ]
description:
- Whether the user is present or not.
type: str
username:
required: true
description:
- User name
aliases: ['name']
type: str
firstname:
description:
- First name. Required if C(state=present).
type: str
lastname:
description:
- Last name. Required if C(state=present).
type: str
password:
description:
- Password. Required if C(state=present).
type: str
birthday:
description:
- Birthday
type: str
city:
description:
- City of users business address.
type: str
country:
description:
- Country of users business address.
type: str
department_number:
description:
- Department number of users business address.
aliases: [ departmentNumber ]
type: str
description:
description:
- Description (not gecos)
type: str
display_name:
description:
- Display name (not gecos)
aliases: [ displayName ]
type: str
email:
default: []
default: ['']
description:
- A list of e-mail addresses.
type: list
employee_number:
description:
- Employee number
aliases: [ employeeNumber ]
type: str
employee_type:
description:
- Employee type
aliases: [ employeeType ]
type: str
gecos:
description:
- GECOS
type: str
groups:
default: []
description:
- "POSIX groups, the LDAP DNs of the groups will be found with the
LDAP filter for each group as $GROUP:
C((&(objectClass=posixGroup)(cn=$GROUP)))."
type: list
home_share:
description:
- "Home NFS share. Must be a LDAP DN, e.g.
C(cn=home,cn=shares,ou=school,dc=example,dc=com)."
aliases: [ homeShare ]
type: str
home_share_path:
description:
- Path to home NFS share, inside the homeShare.
aliases: [ homeSharePath ]
type: str
home_telephone_number:
default: []
description:
- List of private telephone numbers.
aliases: [ homeTelephoneNumber ]
type: list
homedrive:
description:
- Windows home drive, e.g. C("H:").
type: str
mail_alternative_address:
default: []
description:
- List of alternative e-mail addresses.
aliases: [ mailAlternativeAddress ]
type: list
mail_home_server:
description:
- FQDN of mail server
aliases: [ mailHomeServer ]
type: str
mail_primary_address:
description:
- Primary e-mail address
aliases: [ mailPrimaryAddress ]
type: str
mobile_telephone_number:
default: []
description:
- Mobile phone number
aliases: [ mobileTelephoneNumber ]
type: list
organisation:
description:
- Organisation
aliases: [ organization ]
override_pw_history:
type: str
overridePWHistory:
type: bool
default: 'no'
description:
- Override password history
aliases: [ overridePWHistory ]
override_pw_length:
aliases: [ override_pw_history ]
overridePWLength:
type: bool
default: 'no'
description:
- Override password check
aliases: [ overridePWLength ]
aliases: [ override_pw_length ]
pager_telephonenumber:
default: []
description:
- List of pager telephone numbers.
aliases: [ pagerTelephonenumber ]
type: list
phone:
description:
- List of telephone numbers.
type: list
postcode:
description:
- Postal code of users business address.
type: str
primary_group:
default: cn=Domain Users,cn=groups,$LDAP_BASE_DN
description:
- Primary group. This must be the group LDAP DN.
- If not specified, it defaults to C(cn=Domain Users,cn=groups,$LDAP_BASE_DN).
aliases: [ primaryGroup ]
type: str
profilepath:
description:
- Windows profile directory
type: str
pwd_change_next_login:
choices: [ '0', '1' ]
description:
- Change password on next login.
aliases: [ pwdChangeNextLogin ]
type: str
room_number:
description:
- Room number of users business address.
aliases: [ roomNumber ]
type: str
samba_privileges:
description:
- "Samba privilege, like allow printer administration, do domain
join."
aliases: [ sambaPrivileges ]
type: list
samba_user_workstations:
description:
- Allow the authentication only on this Microsoft Windows host.
aliases: [ sambaUserWorkstations ]
type: list
sambahome:
description:
- Windows home path, e.g. C('\\$FQDN\$USERNAME').
type: str
scriptpath:
description:
- Windows logon script.
type: str
secretary:
default: []
description:
- A list of superiors as LDAP DNs.
type: list
serviceprovider:
default: []
default: ['']
description:
- Enable user for the following service providers.
type: list
shell:
default: '/bin/bash'
description:
- Login shell
type: str
street:
description:
- Street of users business address.
type: str
title:
description:
- Title, e.g. C(Prof.).
type: str
unixhome:
default: '/home/$USERNAME'
description:
- Unix home directory
- If not specified, it defaults to C(/home/$USERNAME).
type: str
userexpiry:
default: Today + 1 year
description:
- Account expiry date, e.g. C(1999-12-31).
- If not specified, it defaults to the current day plus one year.
type: str
position:
default: ''
description:
- "Define the whole position of users object inside the LDAP tree,
e.g. C(cn=employee,cn=users,ou=school,dc=example,dc=com)."
type: str
update_password:
default: always
choices: [ always, on_create ]
description:
- "C(always) will update passwords if they differ.
C(on_create) will only set the password for newly created users."
type: str
ou:
default: ''
description:
- "Organizational Unit inside the LDAP Base DN, e.g. C(school) for
LDAP OU C(ou=school,dc=example,dc=com)."
type: str
subpath:
default: 'cn=users'
description:
- "LDAP subpath inside the organizational unit, e.g.
C(cn=teachers,cn=users) for LDAP container
C(cn=teachers,cn=users,dc=example,dc=com)."
type: str
'''
@@ -272,61 +320,44 @@ def main():
expiry = date.strftime(date.today() + timedelta(days=365), "%Y-%m-%d")
module = AnsibleModule(
argument_spec=dict(
birthday=dict(default=None,
type='str'),
city=dict(default=None,
type='str'),
country=dict(default=None,
type='str'),
department_number=dict(default=None,
type='str',
birthday=dict(type='str'),
city=dict(type='str'),
country=dict(type='str'),
department_number=dict(type='str',
aliases=['departmentNumber']),
description=dict(default=None,
type='str'),
display_name=dict(default=None,
type='str',
description=dict(type='str'),
display_name=dict(type='str',
aliases=['displayName']),
email=dict(default=[''],
type='list'),
employee_number=dict(default=None,
type='str',
employee_number=dict(type='str',
aliases=['employeeNumber']),
employee_type=dict(default=None,
type='str',
employee_type=dict(type='str',
aliases=['employeeType']),
firstname=dict(default=None,
type='str'),
gecos=dict(default=None,
type='str'),
firstname=dict(type='str'),
gecos=dict(type='str'),
groups=dict(default=[],
type='list'),
home_share=dict(default=None,
type='str',
home_share=dict(type='str',
aliases=['homeShare']),
home_share_path=dict(default=None,
type='str',
home_share_path=dict(type='str',
aliases=['homeSharePath']),
home_telephone_number=dict(default=[],
type='list',
aliases=['homeTelephoneNumber']),
homedrive=dict(default=None,
type='str'),
lastname=dict(default=None,
type='str'),
homedrive=dict(type='str'),
lastname=dict(type='str'),
mail_alternative_address=dict(default=[],
type='list',
aliases=['mailAlternativeAddress']),
mail_home_server=dict(default=None,
type='str',
mail_home_server=dict(type='str',
aliases=['mailHomeServer']),
mail_primary_address=dict(default=None,
type='str',
mail_primary_address=dict(type='str',
aliases=['mailPrimaryAddress']),
mobile_telephone_number=dict(default=[],
type='list',
aliases=['mobileTelephoneNumber']),
organisation=dict(default=None,
type='str',
organisation=dict(type='str',
aliases=['organization']),
overridePWHistory=dict(default=False,
type='bool',
@@ -337,24 +368,18 @@ def main():
pager_telephonenumber=dict(default=[],
type='list',
aliases=['pagerTelephonenumber']),
password=dict(default=None,
type='str',
password=dict(type='str',
no_log=True),
phone=dict(default=[],
type='list'),
postcode=dict(default=None,
type='str'),
primary_group=dict(default=None,
type='str',
postcode=dict(type='str'),
primary_group=dict(type='str',
aliases=['primaryGroup']),
profilepath=dict(default=None,
type='str'),
pwd_change_next_login=dict(default=None,
type='str',
profilepath=dict(type='str'),
pwd_change_next_login=dict(type='str',
choices=['0', '1'],
aliases=['pwdChangeNextLogin']),
room_number=dict(default=None,
type='str',
room_number=dict(type='str',
aliases=['roomNumber']),
samba_privileges=dict(default=[],
type='list',
@@ -362,24 +387,18 @@ def main():
samba_user_workstations=dict(default=[],
type='list',
aliases=['sambaUserWorkstations']),
sambahome=dict(default=None,
type='str'),
scriptpath=dict(default=None,
type='str'),
sambahome=dict(type='str'),
scriptpath=dict(type='str'),
secretary=dict(default=[],
type='list'),
serviceprovider=dict(default=[''],
type='list'),
shell=dict(default='/bin/bash',
type='str'),
street=dict(default=None,
type='str'),
title=dict(default=None,
type='str'),
unixhome=dict(default=None,
type='str'),
userexpiry=dict(default=expiry,
type='str'),
street=dict(type='str'),
title=dict(type='str'),
unixhome=dict(type='str'),
userexpiry=dict(type='str'),
username=dict(required=True,
aliases=['name'],
type='str'),
@@ -450,6 +469,8 @@ def main():
obj[k] = module.params[k]
# handle some special values
obj['e-mail'] = module.params['email']
if 'userexpiry' in obj and obj.get('userexpiry') is None:
obj['userexpiry'] = expiry
password = module.params['password']
if obj['password'] is None:
obj['password'] = password
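The change above moves the C(userexpiry) default out of the argument spec and applies it at runtime instead, so the documented default can describe the behaviour ("today plus one year") rather than baking in a computed date. The default itself can be sketched as (a hedged illustration of the expression used in main()):

```python
from datetime import date, timedelta

def default_expiry(today=None):
    """Account expiry one year out, in the YYYY-MM-DD form the module expects."""
    today = today or date.today()
    return (today + timedelta(days=365)).strftime('%Y-%m-%d')

print(default_expiry(date(1999, 1, 1)))  # 2000-01-01
```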

View File

@@ -122,6 +122,7 @@ options:
- ' - C(sr) (string): Storage Repository to create disk on. If not specified, will use default SR. Cannot be used for moving disk to other SR.'
- ' - C(sr_uuid) (string): UUID of a SR to create disk on. Use if SR name is not unique.'
type: list
elements: dict
aliases: [ disk ]
cdrom:
description:
@@ -151,6 +152,7 @@ options:
- ' - C(ip6) (string): Static IPv6 address (implies C(type6: static)) with prefix in format <IPv6 address>/<prefix>.'
- ' - C(gateway6) (string): Static IPv6 gateway.'
type: list
elements: dict
aliases: [ network ]
home_server:
description:
@@ -163,6 +165,7 @@ options:
- Useful for advanced users familiar with managing VM params through the xe CLI.
- A custom value object takes two fields C(key) and C(value) (see example below).
type: list
elements: dict
wait_for_ip_address:
description:
- Wait until XenServer detects an IP address for the VM. If C(state) is set to C(absent), this parameter is ignored.

View File

@@ -21,6 +21,7 @@ options:
mgmt_token:
description:
- A management token is required to manipulate the ACL lists
required: true
state:
description:
- whether the ACL pair should be present or absent

View File

@@ -49,7 +49,6 @@ options:
- The value should be associated with the given key, required if C(state)
is C(present).
type: str
required: yes
recurse:
description:
- If the key represents a prefix, each entry with the prefix can be

View File

@@ -38,6 +38,7 @@ options:
- the state of the value for the key.
- can be present or absent
required: true
choices: [ present, absent ]
user:
description:
- The etcd user to authenticate with.

View File

@@ -21,15 +21,17 @@ options:
description:
- Indicate desired state of the cluster
choices: [ cleanup, offline, online, restart ]
required: yes
type: str
node:
description:
- Specify which node of the cluster you want to manage. None == the
cluster status itself, 'all' == check the status of all nodes.
type: str
timeout:
description:
- Timeout after which the module should consider that the action has failed
default: 300
type: int
force:
description:
- Force the change of the cluster state

View File

@@ -27,9 +27,11 @@ options:
op:
description:
- An operation to perform. Mutually exclusive with state.
choices: [ get, wait, list ]
state:
description:
- The state to enforce. Mutually exclusive with op.
choices: [ present, absent ]
timeout:
description:
- The amount of time to wait for a node to appear.

View File

@@ -20,47 +20,57 @@ options:
- C(config) (new in 1.6), ensures a configuration setting on an instance.
- C(flush) flushes all the instance or a specified db.
- C(slave) sets a redis instance in slave or master mode.
required: true
choices: [ config, flush, slave ]
type: str
login_password:
description:
- The password used to authenticate with (usually not used)
type: str
login_host:
description:
- The host running the database
default: localhost
type: str
login_port:
description:
- The port to connect to
default: 6379
type: int
master_host:
description:
- The host of the master instance [slave command]
type: str
master_port:
description:
- The port of the master instance [slave command]
type: int
slave_mode:
description:
- the mode of the redis instance [slave command]
default: slave
choices: [ master, slave ]
type: str
db:
description:
- The database to flush (used in db mode) [flush command]
type: int
flush_mode:
description:
- Type of flush (all the dbs in a redis instance or a specific one)
[flush command]
default: all
choices: [ all, db ]
type: str
name:
description:
- A redis config key.
type: str
value:
description:
- A redis config value. When memory size is needed, it is possible
to specify it in the usual form of 1KB, 2M, 400MB where the base is 1024.
Units are case insensitive, i.e. 1m = 1mb = 1M = 1MB.
type: str
notes:
- Requires the redis-py Python package on the remote host. You can

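The unit handling described for C(value) above (base-1024 sizes with case-insensitive suffixes, 1m = 1mb = 1M = 1MB) can be sketched as follows. This is a hypothetical helper following the doc text, not code from the module; real Redis distinguishes C(1k) (1000) from C(1kb) (1024), but the description here treats everything as base 1024:

```python
import re

# All suffixes treated as base-1024, per the option description above.
_UNITS = {'': 1, 'b': 1,
          'k': 1024, 'kb': 1024,
          'm': 1024 ** 2, 'mb': 1024 ** 2,
          'g': 1024 ** 3, 'gb': 1024 ** 3}

def to_bytes(value):
    """Convert a size string like '400MB' or '1m' to a byte count."""
    match = re.fullmatch(r'(\d+)\s*([a-zA-Z]*)', value.strip())
    if not match:
        raise ValueError('not a memory size: %r' % value)
    number, unit = match.groups()
    if unit.lower() not in _UNITS:
        raise ValueError('unknown unit: %r' % unit)
    return int(number) * _UNITS[unit.lower()]

print(to_bytes('400MB'))  # 419430400
print(to_bytes('1m') == to_bytes('1MB'))  # True
```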
View File

@@ -21,28 +21,36 @@ options:
- name of the database to add or remove
required: true
aliases: [ db ]
type: str
login_user:
description:
- The username used to authenticate with
type: str
login_password:
description:
- The password used to authenticate with
type: str
login_host:
description:
- Host running the database
type: str
required: true
login_port:
description:
- Port of the MSSQL server. Requires login_host to be defined as something other than localhost if login_port is used
default: 1433
default: '1433'
type: str
state:
description:
- The database state
default: present
choices: [ "present", "absent", "import" ]
type: str
target:
description:
- Location, on the remote host, of the dump file to read from or write to. Uncompressed SQL
files (C(.sql)) are supported.
type: str
autocommit:
description:
- Automatically commit the change only if the import succeeds. Sometimes it is necessary to use autocommit=true, since some content can't be changed

View File

@@ -19,28 +19,34 @@ options:
- Name of the parameter to update.
required: true
aliases: [parameter]
type: str
value:
description:
- Value of the parameter to be set.
required: true
type: str
db:
description:
- Name of the Vertica database.
type: str
cluster:
description:
- Name of the Vertica cluster.
default: localhost
type: str
port:
description:
- Vertica cluster port to connect to.
default: 5433
default: '5433'
type: str
login_user:
description:
- The username used to authenticate with.
default: dbadmin
type: str
login_password:
description:
- The password used to authenticate with.
type: str
notes:
- The default authentication assumes that you are either logging in as or sudo'ing
to the C(dbadmin) account on the host.

View File

@@ -23,14 +23,12 @@ options:
- This file must exist ahead of time.
- This parameter is required, unless C(xmlstring) is given.
type: path
required: yes
aliases: [ dest, file ]
xmlstring:
description:
- A string containing XML on which to operate.
- This parameter is required, unless C(path) is given.
type: str
required: yes
xpath:
description:
- A valid XPath expression describing the item(s) you want to manipulate.

View File

@@ -101,6 +101,7 @@ options:
- If an empty list if passed all assigned user groups will be removed from the rule.
- If option is omitted user groups will not be checked or changed.
type: list
elements: str
extends_documentation_fragment:
- community.general.ipa.documentation

View File

@@ -37,10 +37,13 @@ options:
- On C(absent), the client will be removed if it exists
choices: ['present', 'absent']
default: 'present'
type: str
realm:
description:
- The realm to create the client in.
type: str
default: master
client_id:
description:
@@ -49,19 +52,23 @@ options:
This is 'clientId' in the Keycloak REST API.
aliases:
- clientId
type: str
id:
description:
- Id of client to be worked on. This is usually an UUID. Either this or I(client_id)
is required. If you specify both, this takes precedence.
type: str
name:
description:
- Name of the client (this is not the same as I(client_id))
type: str
description:
description:
- Description of the client in Keycloak
type: str
root_url:
description:
@@ -69,6 +76,7 @@ options:
This is 'rootUrl' in the Keycloak REST API.
aliases:
- rootUrl
type: str
admin_url:
description:
@@ -76,6 +84,7 @@ options:
This is 'adminUrl' in the Keycloak REST API.
aliases:
- adminUrl
type: str
base_url:
description:
@@ -83,6 +92,7 @@ options:
This is 'baseUrl' in the Keycloak REST API.
aliases:
- baseUrl
type: str
enabled:
description:
@@ -100,6 +110,7 @@ options:
choices: ['client-secret', 'client-jwt']
aliases:
- clientAuthenticatorType
type: str
secret:
description:
@@ -107,6 +118,7 @@ options:
specify a secret here (otherwise one will be generated if it does not exit). If
changing this secret, the module will not register a change currently (but the
changed secret will be saved).
type: str
registration_access_token:
description:
@@ -115,6 +127,7 @@ options:
This is 'registrationAccessToken' in the Keycloak REST API.
aliases:
- registrationAccessToken
type: str
default_roles:
description:
@@ -123,6 +136,7 @@ options:
This is 'defaultRoles' in the Keycloak REST API.
aliases:
- defaultRoles
type: list
redirect_uris:
description:
@@ -130,6 +144,7 @@ options:
This is 'redirectUris' in the Keycloak REST API.
aliases:
- redirectUris
type: list
web_origins:
description:
@@ -137,11 +152,13 @@ options:
This is 'webOrigins' in the Keycloak REST API.
aliases:
- webOrigins
type: list
not_before:
description:
- Revoke any tokens issued before this date for this client (this is a UNIX timestamp).
This is 'notBefore' in the Keycloak REST API.
type: int
aliases:
- notBefore
@@ -220,6 +237,7 @@ options:
protocol:
description:
- Type of client (either C(openid-connect) or C(saml)).
type: str
choices: ['openid-connect', 'saml']
full_scope_allowed:
@@ -234,6 +252,7 @@ options:
description:
- Cluster node re-registration timeout for this client.
This is 'nodeReRegistrationTimeout' in the Keycloak REST API.
type: int
aliases:
- nodeReRegistrationTimeout
@@ -242,6 +261,7 @@ options:
- dict of registered cluster nodes (with C(nodename) as the key and last registration
time as the value).
This is 'registeredNodes' in the Keycloak REST API.
type: dict
aliases:
- registeredNodes
@@ -250,6 +270,7 @@ options:
- Client template to use for this client. If it does not exist this field will silently
be dropped.
This is 'clientTemplate' in the Keycloak REST API.
type: str
aliases:
- clientTemplate
@@ -290,6 +311,7 @@ options:
- a data structure defining the authorization settings for this client. For reference,
please see the Keycloak API docs at U(https://www.keycloak.org/docs-api/8.0/rest-api/index.html#_resourceserverrepresentation).
This is 'authorizationSettings' in the Keycloak REST API.
type: dict
aliases:
- authorizationSettings
@@ -299,28 +321,35 @@ options:
This is 'protocolMappers' in the Keycloak REST API.
aliases:
- protocolMappers
type: list
elements: dict
suboptions:
consentRequired:
description:
- Specifies whether a user needs to provide consent to a client for this mapper to be active.
type: bool
consentText:
description:
- The human-readable name of the consent the user is presented to accept.
type: str
id:
description:
- Usually a UUID specifying the internal ID of this protocol mapper instance.
type: str
name:
description:
- The name of this protocol mapper.
type: str
protocol:
description:
- This is either C(openid-connect) or C(saml); this specifies for which protocol this protocol mapper
is active.
choices: ['openid-connect', 'saml']
type: str
protocolMapper:
description:
@@ -352,6 +381,7 @@ options:
- An exhaustive list of available mappers on your installation can be obtained on
the admin console by going to Server Info -> Providers and looking under
'protocol-mapper'.
type: str
config:
description:
@@ -360,6 +390,7 @@ options:
other than by the source of the mappers and its parent class(es). An example is given
below. It is easiest to obtain valid config values by dumping an already-existing
protocol mapper configuration through check-mode in the I(existing) field.
type: dict
attributes:
description:
@@ -368,6 +399,7 @@ options:
permissible options is not available; possible options as of Keycloak 3.4 are listed below. The Keycloak
API does not validate whether a given option is appropriate for the protocol used; if specified
anyway, Keycloak will simply not use it.
type: dict
suboptions:
saml.authnstatement:
description:

View File

@@ -36,27 +36,34 @@ options:
- On C(absent), the client template will be removed if it exists
choices: ['present', 'absent']
default: 'present'
type: str
id:
description:
- Id of client template to be worked on. This is usually a UUID.
type: str
realm:
description:
- Realm this client template is found in.
type: str
default: master
name:
description:
- Name of the client template
type: str
description:
description:
- Description of the client template in Keycloak
type: str
protocol:
description:
- Type of client template (either C(openid-connect) or C(saml)).
choices: ['openid-connect', 'saml']
type: str
full_scope_allowed:
description:
@@ -68,28 +75,35 @@ options:
description:
- a list of dicts defining protocol mappers for this client template.
This is 'protocolMappers' in the Keycloak REST API.
type: list
elements: dict
suboptions:
consentRequired:
description:
- Specifies whether a user needs to provide consent to a client for this mapper to be active.
type: bool
consentText:
description:
- The human-readable name of the consent the user is presented to accept.
type: str
id:
description:
- Usually a UUID specifying the internal ID of this protocol mapper instance.
type: str
name:
description:
- The name of this protocol mapper.
type: str
protocol:
description:
- Is either C(openid-connect) or C(saml); this specifies for which protocol this protocol mapper
is active.
choices: ['openid-connect', 'saml']
type: str
protocolMapper:
description:
@@ -121,6 +135,7 @@ options:
- An exhaustive list of available mappers on your installation can be obtained on
the admin console by going to Server Info -> Providers and looking under
'protocol-mapper'.
type: str
config:
description:
@@ -129,12 +144,14 @@ options:
other than by the source of the mappers and its parent class(es). An example is given
below. It is easiest to obtain valid config values by dumping an already-existing
protocol mapper configuration through check-mode in the "existing" field.
type: dict
attributes:
description:
- A dict of further attributes for this client template. This can contain various
configuration settings, though in the default installation of Keycloak as of 3.4, none
are documented or known, so this is usually empty.
type: dict
notes:
- The Keycloak REST API defines further fields (namely I(bearerOnly), I(consentRequired), I(standardFlowEnabled),


@@ -41,8 +41,8 @@ options:
type: str
description:
- Name of the affected host. Can be a list.
- If not specified, it defaults to the remote system's hostname.
required: false
default: machine's hostname
aliases: ['host']
env:
type: str
@@ -144,7 +144,7 @@ def main():
version=dict(required=True),
token=dict(required=True, no_log=True),
state=dict(required=True, choices=['started', 'finished', 'failed']),
hosts=dict(required=False, default=[socket.gethostname()], aliases=['host']), # @FIXME
hosts=dict(required=False, aliases=['host']),
env=dict(required=False),
owner=dict(required=False),
description=dict(required=False),
@@ -168,6 +168,8 @@ def main():
v = module.params[k]
if v is not None:
body[k] = v
if body.get('hosts') is None:
body['hosts'] = [socket.gethostname()]
if not isinstance(body['hosts'], list):
body['hosts'] = [body['hosts']]
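The change above moves the C(hosts) default out of the argument spec, so C(socket.gethostname()) is evaluated when the module runs rather than when it is loaded, and scalars are normalized to a list. A minimal self-contained sketch of that logic (C(build_body) is an illustrative name, not part of the module):

```python
import socket

def build_body(params):
    # Illustrative helper (not from the module): drop unset parameters,
    # default 'hosts' lazily to the local hostname, and normalize a scalar
    # value to a single-element list.
    body = {k: v for k, v in params.items() if v is not None}
    if body.get('hosts') is None:
        body['hosts'] = [socket.gethostname()]
    if not isinstance(body['hosts'], list):
        body['hosts'] = [body['hosts']]
    return body
```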


@@ -102,7 +102,7 @@ options:
variables:
type: dict
description:
- List of variables.
- Dictionary of variables.
extends_documentation_fragment:
- url
'''
@@ -116,6 +116,8 @@ EXAMPLES = '''
state: present
name: "{{ ansible_fqdn }}"
ip: "{{ ansible_default_ipv4.address }}"
variables:
foo: "bar"
delegate_to: 127.0.0.1
'''
@@ -289,7 +291,7 @@ def main():
if ret['code'] == 200:
changed = True
else:
module.fail_json(msg="bad return code deleting host: %s" % (ret['data']))
module.fail_json(msg="bad return code (%s) deleting host: '%s'" % (ret['code'], ret['data']))
except Exception as e:
module.fail_json(msg="exception deleting host: " + str(e))
@@ -305,7 +307,7 @@ def main():
if ret['code'] == 200:
changed = True
else:
module.fail_json(msg="bad return code modifying host: %s" % (ret['data']))
module.fail_json(msg="bad return code (%s) modifying host: '%s'" % (ret['code'], ret['data']))
else:
if state == "present":
@@ -317,7 +319,7 @@ def main():
if ret['code'] == 200:
changed = True
else:
module.fail_json(msg="bad return code creating host: %s" % (ret['data']))
module.fail_json(msg="bad return code (%s) creating host: '%s'" % (ret['code'], ret['data']))
except Exception as e:
module.fail_json(msg="exception creating host: " + str(e))


@@ -62,6 +62,9 @@ STATE_COMMAND_MAP = {
'restarted': 'restart'
}
MONIT_SERVICES = ['Process', 'File', 'Fifo', 'Filesystem', 'Directory', 'Remote host', 'System', 'Program',
'Network']
@python_2_unicode_compatible
class StatusValue(namedtuple("Status", "value, is_pending")):
@@ -151,7 +154,9 @@ class Monit(object):
return self._parse_status(out, err)
def _parse_status(self, output, err):
if "Process '%s'" % self.process_name not in output:
escaped_monit_services = '|'.join([re.escape(x) for x in MONIT_SERVICES])
pattern = "(%s) '%s'" % (escaped_monit_services, re.escape(self.process_name))
if not re.search(pattern, output, re.IGNORECASE):
return Status.MISSING
status_val = re.findall(r"^\s*status\s*([\w\- ]+)", output, re.MULTILINE)
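The fix replaces the literal C(Process '<name>') substring check with a case-insensitive regex that covers every monit service type. The new check can be sketched standalone (C(service_present) is an illustrative name):

```python
import re

MONIT_SERVICES = ['Process', 'File', 'Fifo', 'Filesystem', 'Directory',
                  'Remote host', 'System', 'Program', 'Network']

def service_present(output, process_name):
    # Build an alternation of escaped service types, then look for
    # "<Type> '<name>'" anywhere in the monit status output, ignoring case.
    escaped = '|'.join(re.escape(s) for s in MONIT_SERVICES)
    pattern = "(%s) '%s'" % (escaped, re.escape(process_name))
    return re.search(pattern, output, re.IGNORECASE) is not None
```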


@@ -74,8 +74,8 @@ options:
handle:
description:
- Whether the check should be handled or not
- Default is C(false).
type: bool
default: false
subdue_begin:
type: str
description:
@@ -99,14 +99,14 @@ options:
description:
- Whether the check should be scheduled by the sensu client or server
- This option obviates the need for specifying the I(subscribers) option
- Default is C(false).
type: bool
default: 'no'
publish:
description:
- Whether the check should be scheduled at all.
- You can still issue it via the sensu api
- Default is C(false).
type: bool
default: false
occurrences:
type: int
description:
@@ -120,8 +120,8 @@ options:
description:
- Classifies the check as an aggregate check,
- making it available via the aggregate API
- Default is C(false).
type: bool
default: 'no'
low_flap_threshold:
type: int
description:


@@ -66,8 +66,8 @@ options:
deregister:
description:
- If a deregistration event should be created upon Sensu client process stop.
- Default is C(false).
type: bool
default: 'no'
deregistration:
type: dict
description:


@@ -71,6 +71,7 @@ options:
- Record priority.
- Required for C(type=MX) and C(type=SRV)
default: 1
type: int
proto:
description:
- Service protocol. Required for C(type=SRV) and C(type=TLSA).
@@ -100,6 +101,7 @@ options:
description:
- Record service.
- Required for C(type=SRV)
type: str
solo:
description:
- Whether the record should be the only one for that record type and record name.


@@ -21,17 +21,20 @@ options:
description:
- Account API Key.
required: true
type: str
account_secret:
description:
- Account Secret Key.
required: true
type: str
domain:
description:
- Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNS Made Easy (e.g. "839989") for faster
resolution
required: true
type: str
sandbox:
description:
@@ -43,11 +46,13 @@ options:
description:
- Record name to get/create/delete/update. If record_name is not specified, all records for the domain will be returned in "result" regardless
of the state argument.
type: str
record_type:
description:
- Record type.
choices: [ 'A', 'AAAA', 'CNAME', 'ANAME', 'HTTPRED', 'MX', 'NS', 'PTR', 'SRV', 'TXT' ]
type: str
record_value:
description:
@@ -57,17 +62,20 @@ options:
- >
If record_value is not specified, no changes will be made and the record will be returned in 'result'
(in other words, this module can be used to fetch a record's current id, type, and ttl)
type: str
record_ttl:
description:
- Record's "Time to live". Number of seconds the record remains cached in DNS servers.
default: 1800
type: int
state:
description:
- Whether the record should exist or not.
required: true
choices: [ 'present', 'absent' ]
type: str
validate_certs:
description:
@@ -85,53 +93,56 @@ options:
systemDescription:
description:
- Description used by the monitor.
required: true
default: ''
type: str
maxEmails:
description:
- Number of emails sent to the contact list by the monitor.
required: true
default: 1
type: int
protocol:
description:
- Protocol used by the monitor.
required: true
default: 'HTTP'
choices: ['TCP', 'UDP', 'HTTP', 'DNS', 'SMTP', 'HTTPS']
type: str
port:
description:
- Port used by the monitor.
required: true
default: 80
type: int
sensitivity:
description:
- Number of checks the monitor performs before a failover occurs, where Low = 8, Medium = 5, and High = 3.
required: true
default: 'Medium'
choices: ['Low', 'Medium', 'High']
type: str
contactList:
description:
- Name or id of the contact list that the monitor will notify.
- The default C('') means the Account Owner.
required: true
default: ''
type: str
httpFqdn:
description:
- The fully qualified domain name used by the monitor.
type: str
httpFile:
description:
- The file at the Fqdn that the monitor queries for HTTP or HTTPS.
type: str
httpQueryString:
description:
- The string in the httpFile that the monitor queries for HTTP or HTTPS.
type: str
failover:
description:
@@ -150,23 +161,28 @@ options:
description:
- Primary IP address for the failover.
- Required if adding or changing the monitor or failover.
type: str
ip2:
description:
- Secondary IP address for the failover.
- Required if adding or changing the failover.
type: str
ip3:
description:
- Tertiary IP address for the failover.
type: str
ip4:
description:
- Quaternary IP address for the failover.
type: str
ip5:
description:
- Quinary IP address for the failover.
type: str
notes:
- The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone set. Be sure you are within a few


@@ -31,12 +31,14 @@ options:
required: false
description:
- Name of the namespace
type: str
state:
required: false
default: "present"
choices: [ present, absent ]
description:
- Whether the namespace should exist
type: str
'''
EXAMPLES = '''


@@ -21,11 +21,13 @@ options:
- HTTP connection timeout in seconds
required: false
default: 10
type: int
http_agent:
description:
- Set http user agent
required: false
default: "ansible-ipinfoio-module/0.0.1"
type: str
notes:
- "Check http://ipinfo.io/ for more information"
'''


@@ -38,11 +38,14 @@ options:
- If I(state=present), attributes necessary to create an entry. Existing
entries are never modified. To assert specific attribute values on an
existing entry, use M(community.general.ldap_attr) module instead.
type: dict
objectClass:
description:
- If I(state=present), value or list of values to use when creating
the entry. It can either be a string or an actual list of
strings.
type: list
elements: str
state:
description:
- The target state of the entry.
@@ -103,7 +106,6 @@ RETURN = """
import traceback
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_native, to_bytes
from ansible_collections.community.general.plugins.module_utils.ldap import LdapGeneric, gen_specs
@@ -137,13 +139,10 @@ class LdapEntry(LdapGeneric):
attrs = {}
for name, value in self.module.params['attributes'].items():
if name not in attrs:
attrs[name] = []
if isinstance(value, list):
attrs[name] = list(map(to_bytes, value))
else:
attrs[name].append(to_bytes(value))
attrs[name] = [to_bytes(value)]
return attrs
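The simplified loop above maps list values and wraps scalars into single-element lists in one step. A standalone sketch (C(normalize_attrs) is an illustrative name; the inline C(to_bytes) stands in for C(ansible.module_utils._text.to_bytes)):

```python
def normalize_attrs(attributes):
    # Stand-in for ansible.module_utils._text.to_bytes.
    def to_bytes(v):
        return v.encode('utf-8') if isinstance(v, str) else v

    attrs = {}
    for name, value in attributes.items():
        if isinstance(value, list):
            # A list of values: encode each element.
            attrs[name] = [to_bytes(v) for v in value]
        else:
            # A scalar: wrap it as a single-element list.
            attrs[name] = [to_bytes(value)]
    return attrs
```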
@@ -187,10 +186,11 @@ def main():
module = AnsibleModule(
argument_spec=gen_specs(
attributes=dict(default={}, type='dict'),
objectClass=dict(type='raw'),
objectClass=dict(type='list', elements='str'),
params=dict(type='dict'),
state=dict(default='present', choices=['present', 'absent']),
),
required_if=[('state', 'present', ['objectClass'])],
supports_check_mode=True,
)
@@ -203,17 +203,6 @@ def main():
state = module.params['state']
# Check if objectClass is present when needed
if state == 'present' and module.params['objectClass'] is None:
module.fail_json(msg="At least one objectClass must be provided.")
# Check if objectClass is of the correct type
if (
module.params['objectClass'] is not None and not (
isinstance(module.params['objectClass'], string_types) or
isinstance(module.params['objectClass'], list))):
module.fail_json(msg="objectClass must be either a string or a list.")
# Instantiate the LdapEntry object
ldap = LdapEntry(module)


@@ -29,9 +29,9 @@ requirements:
- python-ldap
options:
passwd:
required: true
description:
- The (plaintext) password to be set for I(dn).
type: str
extends_documentation_fragment:
- community.general.ldap.documentation


@@ -112,7 +112,7 @@ def main():
except Exception as exception:
module.fail_json(msg="Attribute action failed.", details=to_native(exception))
module.exit_json(changed=True)
module.exit_json(changed=False)
def _extract_entry(dn, attrs):
@@ -144,24 +144,20 @@ class LdapSearch(LdapGeneric):
self.attrsonly = 0
def _load_scope(self):
scope = self.module.params['scope']
if scope == 'base':
self.scope = ldap.SCOPE_BASE
elif scope == 'onelevel':
self.scope = ldap.SCOPE_ONELEVEL
elif scope == 'subordinate':
self.scope = ldap.SCOPE_SUBORDINATE
elif scope == 'children':
self.scope = ldap.SCOPE_SUBTREE
else:
raise AssertionError('Implementation error')
spec = dict(
base=ldap.SCOPE_BASE,
onelevel=ldap.SCOPE_ONELEVEL,
subordinate=ldap.SCOPE_SUBORDINATE,
children=ldap.SCOPE_SUBTREE,
)
self.scope = spec[self.module.params['scope']]
def _load_attrs(self):
self.attrlist = self.module.params['attrs'] or None
def main(self):
results = self.perform_search()
self.module.exit_json(changed=True, results=results)
self.module.exit_json(changed=False, results=results)
def perform_search(self):
try:

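The refactored C(_load_scope) replaces the if/elif chain with a dict lookup; an unknown value now raises C(KeyError) instead of reaching the explicit C(AssertionError) branch. A standalone sketch, with placeholder constants standing in for C(ldap.SCOPE_*) from python-ldap:

```python
# Stand-in constants; the real module uses ldap.SCOPE_* from python-ldap.
SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBORDINATE, SCOPE_SUBTREE = range(4)

def load_scope(scope_name):
    # Dict-based dispatch, as in the refactored _load_scope.
    spec = dict(
        base=SCOPE_BASE,
        onelevel=SCOPE_ONELEVEL,
        subordinate=SCOPE_SUBORDINATE,
        children=SCOPE_SUBTREE,
    )
    return spec[scope_name]
```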

@@ -20,32 +20,39 @@ options:
description:
- API key for authentication, must be obtained via the netcup CCP (U(https://ccp.netcup.net))
required: True
type: str
api_password:
description:
- API password for authentication, must be obtained via the netcup CCP (https://ccp.netcup.net)
required: True
type: str
customer_id:
description:
- Netcup customer id
required: True
type: int
domain:
description:
- Domain name the records should be added to / removed from
required: True
type: str
record:
description:
- Record to add or delete, supports wildcard (*). Default is C(@) (i.e. the zone name)
default: "@"
aliases: [ name ]
type: str
type:
description:
- Record type
choices: ['A', 'AAAA', 'MX', 'CNAME', 'CAA', 'SRV', 'TXT', 'TLSA', 'NS', 'DS']
required: True
type: str
value:
description:
- Record value
required: true
type: str
solo:
type: bool
default: False
@@ -56,12 +63,14 @@ options:
description:
- Record priority. Required for C(type=MX)
required: False
type: int
state:
description:
- Whether the record should exist or not
required: False
default: present
choices: [ 'present', 'absent' ]
type: str
requirements:
- "nc-dnsapi >= 0.1.3"
author: "Nicolai Buchwitz (@nbuchwitz)"


@@ -25,22 +25,23 @@ options:
- Specifies the fully qualified hostname to add or remove from
the system
required: true
type: str
view:
description:
- Sets the DNS view to associate this A record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
type: str
ipv4addr:
description:
- Configures the IPv4 address for this A record. Users can dynamically
allocate an IPv4 address to the A record by passing a dictionary containing
I(nios_next_ip) and I(CIDR network range). See example
required: true
aliases:
- ipv4
type: str
ttl:
description:
- Configures the TTL to be associated with this A record


@@ -29,14 +29,12 @@ options:
description:
- Sets the DNS view to associate this AAAA record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
ipv6addr:
description:
- Configures the IPv6 address for this AAAA record.
required: true
aliases:
- ipv6
ttl:


@@ -29,14 +29,12 @@ options:
description:
- Sets the DNS view to associate this CNAME record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
canonical:
description:
- Configures the canonical name for this CNAME record.
required: true
aliases:
- cname
ttl:


@@ -34,7 +34,6 @@ options:
- Specifies the name of the network view to assign the configured
DNS view to. The network view must already be configured on the
target system.
required: true
default: default
extattrs:
description:


@@ -25,7 +25,7 @@ options:
description:
- Specifies the hostname with which the fixed DHCP IP address is stored
for the respective MAC.
required: false
required: true
ipaddr:
description:
- IPV4/V6 address of the fixed address.
@@ -37,6 +37,7 @@ options:
network:
description:
- Specifies the network range in which ipaddr exists.
required: true
network_view:
description:
- Configures the name of the network view to associate with this
@@ -49,6 +50,8 @@ options:
the configured network instance. This argument accepts a list
of values (see suboptions). When configuring suboptions at
least one of C(name) or C(num) must be specified.
type: list
elements: dict
suboptions:
name:
description:


@@ -31,7 +31,6 @@ options:
description:
- Sets the DNS view to associate this host record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
@@ -50,6 +49,8 @@ options:
accepts a list of values (see suboptions)
aliases:
- ipv4
type: list
elements: dict
suboptions:
ipv4addr:
description:
@@ -95,6 +96,8 @@ options:
accepts a list of values (see options)
aliases:
- ipv6
type: list
elements: dict
suboptions:
ipv6addr:
description:


@@ -28,7 +28,8 @@ options:
vip_setting:
description:
- Configures the network settings for the grid member.
required: true
type: list
elements: dict
suboptions:
address:
description:
@@ -42,7 +43,8 @@ options:
ipv6_setting:
description:
- Configures the IPv6 settings for the grid member.
required: true
type: list
elements: dict
suboptions:
virtual_ip:
description:
@@ -77,6 +79,8 @@ options:
lan2_port_setting:
description:
- Settings for the Grid member LAN2 port if 'lan2_enabled' is set to "true".
type: list
elements: dict
suboptions:
enabled:
description:
@@ -85,6 +89,8 @@ options:
network_setting:
description:
- If the 'enable' field is set to True, this defines IPv4 network settings for LAN2.
type: list
elements: dict
suboptions:
address:
description:
@@ -98,6 +104,8 @@ options:
v6_network_setting:
description:
- If the 'enable' field is set to True, this defines IPv6 network settings for LAN2.
type: list
elements: dict
suboptions:
virtual_ip:
description:
@@ -115,10 +123,14 @@ options:
node_info:
description:
- Configures the node information list with detailed status report on the operations of the Grid Member.
type: list
elements: dict
suboptions:
lan2_physical_setting:
description:
- Physical port settings for the LAN2 interface.
type: list
elements: dict
suboptions:
auto_port_setting_enabled:
description:
@@ -133,6 +145,8 @@ options:
lan_ha_port_setting:
description:
- LAN/HA port settings for the node.
type: list
elements: dict
suboptions:
ha_ip_address:
description:
@@ -140,6 +154,8 @@ options:
ha_port_setting:
description:
- Physical port settings for the HA interface.
type: list
elements: dict
suboptions:
auto_port_setting_enabled:
description:
@@ -154,6 +170,8 @@ options:
lan_port_setting:
description:
- Physical port settings for the LAN interface.
type: list
elements: dict
suboptions:
auto_port_setting_enabled:
description:
@@ -174,6 +192,8 @@ options:
mgmt_network_setting:
description:
- Network settings for the MGMT port of the node.
type: list
elements: dict
suboptions:
address:
description:
@@ -187,6 +207,8 @@ options:
v6_mgmt_network_setting:
description:
- The network settings for the IPv6 MGMT port of the node.
type: list
elements: dict
suboptions:
virtual_ip:
description:
@@ -200,6 +222,8 @@ options:
mgmt_port_setting:
description:
- Settings for the member MGMT port.
type: list
elements: dict
suboptions:
enabled:
description:
@@ -228,6 +252,8 @@ options:
syslog_servers:
description:
- The list of external syslog servers.
type: list
elements: dict
suboptions:
address:
description:
@@ -266,10 +292,14 @@ options:
pre_provisioning:
description:
- Pre-provisioning information.
type: list
elements: dict
suboptions:
hardware_info:
description:
- An array of structures that describe the hardware being pre-provisioned.
type: list
elements: dict
suboptions:
hwmodel:
description:


@@ -29,20 +29,17 @@ options:
description:
- Sets the DNS view to associate this a record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
mail_exchanger:
description:
- Configures the mail exchanger FQDN for this MX record.
required: true
aliases:
- mx
preference:
description:
- Configures the preference (0-65535) for this MX record.
required: true
ttl:
description:
- Configures the TTL to be associated with this host record


@@ -29,7 +29,6 @@ options:
description:
- Sets the DNS view to associate this a record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
@@ -38,19 +37,16 @@ options:
- Configures the order (0-65535) for this NAPTR record. This parameter
specifies the order in which the NAPTR rules are applied when
multiple rules are present.
required: true
preference:
description:
- Configures the preference (0-65535) for this NAPTR record. The
preference field determines the order NAPTR records are processed
when multiple records with the same order parameter are present.
required: true
replacement:
description:
- Configures the replacement field for this NAPTR record.
For nonterminal NAPTR records, this field specifies the
next domain name to look up.
required: true
services:
description:
- Configures the services field (128 characters maximum) for this


@@ -41,6 +41,8 @@ options:
the configured network instance. This argument accepts a list
of values (see suboptions). When configuring suboptions at
least one of C(name) or C(num) must be specified.
type: list
elements: dict
suboptions:
name:
description:


@@ -31,6 +31,8 @@ options:
description:
- This host is to be used as primary server in this nameserver group. It must be a grid member.
This option is required when setting I(use_external_primaries) to C(false).
type: list
elements: dict
suboptions:
name:
description:
@@ -56,10 +58,17 @@ options:
- Configure the external nameserver as stealth server (without NS record) in the zones.
type: bool
default: false
preferred_primaries:
description:
- Provide a list of elements like in I(external_primaries) to set the precedence of preferred primary nameservers.
type: list
elements: dict
grid_secondaries:
description:
- Configures the list of grid member hosts that act as secondary nameservers.
This option is required when setting I(use_external_primaries) to C(true).
type: list
elements: dict
suboptions:
name:
description:
@@ -88,6 +97,8 @@ options:
preferred_primaries:
description:
- Provide a list of elements like in I(external_primaries) to set the precedence of preferred primary nameservers.
type: list
elements: dict
is_grid_default:
description:
- If set to C(True) this nsgroup will become the default nameserver group for new zones.
@@ -105,6 +116,8 @@ options:
description:
- Configures a list of external nameservers (non-members of the grid).
This option is required when setting I(use_external_primaries) to C(true).
type: list
elements: dict
suboptions:
address:
description:
@@ -134,6 +147,8 @@ options:
external_secondaries:
description:
- Allows one to provide a list of external secondary nameservers that are not members of the grid.
type: list
elements: dict
suboptions:
address:
description:


@@ -37,19 +37,16 @@ options:
ipv4addr:
description:
- The IPv4 Address of the record. Mutually exclusive with the ipv6addr.
required: true
aliases:
- ipv4
ipv6addr:
description:
- The IPv6 Address of the record. Mutually exclusive with the ipv4addr.
required: true
aliases:
- ipv6
ptrdname:
description:
- The domain name of the DNS PTR record in FQDN format.
required: true
ttl:
description:
- Time To Live (TTL) value for the record.


@@ -29,26 +29,21 @@ options:
description:
- Sets the DNS view to associate this a record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
port:
description:
- Configures the port (0-65535) of this SRV record.
required: true
priority:
description:
- Configures the priority (0-65535) for this SRV record.
required: true
target:
description:
- Configures the target FQDN for this SRV record.
required: true
weight:
description:
- Configures the weight (0-65535) for this SRV record.
required: true
ttl:
description:
- Configures the TTL to be associated with this host record


@@ -29,7 +29,6 @@ options:
description:
- Sets the DNS view to associate this tst record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
@@ -39,7 +38,6 @@ options:
per substring, up to a total of 512 bytes. To enter leading,
trailing, or embedded spaces in the text, add quotes around the
text to preserve the spaces.
required: true
ttl:
description:
- Configures the TTL to be associated with this tst record


@@ -32,24 +32,29 @@ options:
- Configures the DNS view name for the configured resource. The
specified DNS zone must already exist on the running NIOS instance
prior to configuring zones.
required: true
default: default
aliases:
- dns_view
grid_primary:
description:
- Configures the grid primary servers for this zone.
type: list
elements: dict
suboptions:
name:
description:
- The name of the grid primary server
required: true
grid_secondaries:
description:
- Configures the grid secondary servers for this zone.
type: list
elements: dict
suboptions:
name:
description:
- The name of the grid secondary server
required: true
ns_group:
description:
- Configures the name server group for this zone. Name server group is


@@ -739,7 +739,6 @@ class Nmcli(object):
return self.type in (
'bond',
'bridge',
'bridge-slave',
'ethernet',
'generic',
'team',


@@ -21,51 +21,51 @@ requirements:
options:
host:
description:
- Set to target snmp server (normally C({{ inventory_hostname }})).
- Set to target SNMP server (normally C({{ inventory_hostname }})).
type: str
required: true
version:
description:
- SNMP Version to use, v2/v2c or v3.
- SNMP Version to use, C(v2), C(v2c) or C(v3).
type: str
required: true
choices: [ v2, v2c, v3 ]
community:
description:
- The SNMP community string, required if version is v2/v2c.
- The SNMP community string, required if I(version) is C(v2) or C(v2c).
type: str
level:
description:
- Authentication level.
- Required if version is v3.
- Required if I(version) is C(v3).
type: str
choices: [ authNoPriv, authPriv ]
username:
description:
- Username for SNMPv3.
- Required if version is v3.
- Required if I(version) is C(v3).
type: str
integrity:
description:
- Hashing algorithm.
- Required if version is v3.
- Required if I(version) is C(v3).
type: str
choices: [ md5, sha ]
authkey:
description:
- Authentication key.
- Required if version is v3.
- Required if I(version) is C(v3).
type: str
privacy:
description:
- Encryption algorithm.
- Required if level is authPriv.
- Required if I(level) is C(authPriv).
type: str
choices: [ aes, des ]
privkey:
description:
- Encryption key.
- Required if version is authPriv.
- Required if I(level) is C(authPriv).
type: str
'''
@@ -174,10 +174,10 @@ PYSNMP_IMP_ERR = None
try:
from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp.proto.rfc1905 import EndOfMibView
has_pysnmp = True
HAS_PYSNMP = True
except Exception:
PYSNMP_IMP_ERR = traceback.format_exc()
has_pysnmp = False
HAS_PYSNMP = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_text
@@ -221,8 +221,7 @@ def decode_hex(hexstring):
return hexstring
if hexstring[:2] == "0x":
return to_text(binascii.unhexlify(hexstring[2:]))
else:
return hexstring
return hexstring
def decode_mac(hexstring):
@@ -231,8 +230,7 @@ def decode_mac(hexstring):
return hexstring
if hexstring[:2] == "0x":
return hexstring[2:]
else:
return hexstring
return hexstring
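The refactored helpers drop the C(else) after a C(return), leaving flat early-return style. A self-contained sketch of the decode pattern (the module's C(to_text) is replaced by C(bytes.decode) here so the example has no Ansible dependency):

```python
import binascii

def decode_hex(hexstring):
    # Early-return style matching the refactor above: strings prefixed
    # with "0x" are unhexlified, everything else passes through unchanged.
    if hexstring[:2] == "0x":
        return binascii.unhexlify(hexstring[2:]).decode()
    return hexstring
```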
def lookup_adminstatus(int_adminstatus):
@@ -243,8 +241,7 @@ def lookup_adminstatus(int_adminstatus):
}
if int_adminstatus in adminstatus_options:
return adminstatus_options[int_adminstatus]
else:
return ""
return ""
def lookup_operstatus(int_operstatus):
@@ -259,8 +256,7 @@ def lookup_operstatus(int_operstatus):
}
if int_operstatus in operstatus_options:
return operstatus_options[int_operstatus]
else:
return ""
return ""
def main():
@@ -285,13 +281,13 @@ def main():
m_args = module.params
if not has_pysnmp:
if not HAS_PYSNMP:
module.fail_json(msg=missing_required_lib('pysnmp'), exception=PYSNMP_IMP_ERR)
cmdGen = cmdgen.CommandGenerator()
# Verify that we receive a community when using snmp v2
if m_args['version'] == "v2" or m_args['version'] == "v2c":
if m_args['version'] in ("v2", "v2c"):
if m_args['community'] is None:
module.fail_json(msg='Community not set when using snmp version 2')
@@ -313,7 +309,7 @@ def main():
privacy_proto = cmdgen.usmDESPrivProtocol
# Use SNMP Version 2
if m_args['version'] == "v2" or m_args['version'] == "v2c":
if m_args['version'] in ("v2", "v2c"):
snmp_auth = cmdgen.CommunityData(m_args['community'])
# Use SNMP Version 3 with authNoPriv


@@ -204,18 +204,18 @@ def do_notify_rocketchat(module, domain, token, protocol, payload):
def main():
module = AnsibleModule(
argument_spec=dict(
domain=dict(type='str', required=True, default=None),
domain=dict(type='str', required=True),
token=dict(type='str', required=True, no_log=True),
protocol=dict(type='str', default='https', choices=['http', 'https']),
msg=dict(type='str', required=False, default=None),
channel=dict(type='str', default=None),
msg=dict(type='str', required=False),
channel=dict(type='str'),
username=dict(type='str', default='Ansible'),
icon_url=dict(type='str', default='https://www.ansible.com/favicon.ico'),
icon_emoji=dict(type='str', default=None),
icon_emoji=dict(type='str'),
link_names=dict(type='int', default=1, choices=[0, 1]),
validate_certs=dict(default=True, type='bool'),
color=dict(type='str', default='normal', choices=['normal', 'good', 'warning', 'danger']),
attachments=dict(type='list', required=False, default=None)
attachments=dict(type='list', required=False)
)
)


@@ -1,14 +1,14 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Matt Makai <matthew.makai@gmail.com>
# Copyright: (c) 2015, Matt Makai <matthew.makai@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: sendgrid
short_description: Sends an email with the SendGrid API
@@ -23,73 +23,73 @@ notes:
account."
- "In order to use api_key, cc, bcc, attachments, from_name, html_body, headers
you must pip install sendgrid"
- "since 2.2 username and password are not required if you supply an api_key"
- "since 2.2 I(username) and I(password) are not required if you supply an I(api_key)"
requirements:
- sendgrid python library
- sendgrid Python library 1.6.22 or lower (Sendgrid API V2 supported)
options:
username:
type: str
description:
- username for logging into the SendGrid account.
- Since 2.2 it is only required if api_key is not supplied.
- Username for logging into the SendGrid account.
- Since 2.2 it is only required if I(api_key) is not supplied.
password:
type: str
description:
- password that corresponds to the username
- Since 2.2 it is only required if api_key is not supplied.
- Password that corresponds to the username.
- Since 2.2 it is only required if I(api_key) is not supplied.
from_address:
type: str
description:
- the address in the "from" field for the email
- The address in the "from" field for the email.
required: true
to_addresses:
type: list
description:
- a list with one or more recipient email addresses
- A list with one or more recipient email addresses.
required: true
subject:
type: str
description:
- the desired subject for the email
- The desired subject for the email.
required: true
api_key:
type: str
description:
- sendgrid API key to use instead of username/password
- Sendgrid API key to use instead of username/password.
cc:
type: list
description:
- a list of email addresses to cc
- A list of email addresses to cc.
bcc:
type: list
description:
- a list of email addresses to bcc
- A list of email addresses to bcc.
attachments:
type: list
description:
- a list of relative or explicit paths of files you want to attach (7MB limit as per SendGrid docs)
- A list of relative or explicit paths of files you want to attach (7MB limit as per SendGrid docs).
from_name:
type: str
description:
- the name you want to appear in the from field, i.e 'John Doe'
- The name you want to appear in the from field, e.g. 'John Doe'.
html_body:
description:
- whether the body is html content that should be rendered
- Whether the body is html content that should be rendered.
type: bool
default: 'no'
headers:
type: dict
description:
- a dict to pass on as headers
- A dict to pass on as headers.
body:
type: str
description:
- the e-mail body content
- The e-mail body content.
required: yes
author: "Matt Makai (@makaimc)"
'''
EXAMPLES = '''
EXAMPLES = r'''
- name: Send an email to a single recipient that the deployment was successful
community.general.sendgrid:
username: "{{ sendgrid_username }}"
@@ -120,6 +120,8 @@ EXAMPLES = '''
import os
import traceback
from distutils.version import LooseVersion
SENDGRID_IMP_ERR = None
try:
import sendgrid
@@ -155,6 +157,9 @@ def post_sendgrid_api(module, username, password, from_address, to_addresses,
'Accept': 'application/json'}
return fetch_url(module, SENDGRID_URI, data=encoded_data, headers=headers, method='POST')
else:
+# Remove this check when adding Sendgrid API v3 support
+if LooseVersion(sendgrid.version.__version__) > LooseVersion("1.6.22"):
+module.fail_json(msg="Please install sendgrid==1.6.22 or lower since module uses Sendgrid V2 APIs.")
if api_key:
sg = sendgrid.SendGridClient(api_key)
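The gate added above refuses to run with a sendgrid library newer than 1.6.22, because the module only speaks SendGrid's V2 API. The same ordering check can be sketched without `distutils` (deprecated since Python 3.10) as a plain tuple comparison — `parse_version` and `sendgrid_v2_supported` below are hypothetical helpers for illustration, not part of the module, and the sketch assumes purely numeric version strings:

```python
def parse_version(s):
    # Split "1.6.22" into an integer tuple (1, 6, 22) so Python's tuple
    # ordering gives a version comparison.
    return tuple(int(part) for part in s.split('.') if part.isdigit())


def sendgrid_v2_supported(installed_version):
    # The module uses SendGrid's V2 API; sendgrid releases after 1.6.22 dropped it.
    return parse_version(installed_version) <= parse_version('1.6.22')


print(sendgrid_v2_supported('1.6.22'))  # True
print(sendgrid_v2_supported('2.0.0'))   # False
```

`LooseVersion` additionally tolerates non-numeric segments (e.g. `1.6.22b1`), which this minimal sketch does not attempt to handle.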


@@ -5,7 +5,7 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
-DOCUMENTATION = '''
+DOCUMENTATION = r'''
---
module: syslogger
short_description: Log messages in the syslog
@@ -33,7 +33,7 @@ options:
default: "daemon"
log_pid:
description:
-- Log the pid in brackets.
+- Log the PID in brackets.
type: bool
default: False
ident:
@@ -83,7 +83,7 @@ facility:
type: str
sample: "info"
log_pid:
-description: Log pid status
+description: Log PID status
returned: always
type: bool
sample: True
@@ -94,11 +94,14 @@ msg:
sample: "Hello from Ansible"
'''
-from ansible.module_utils.basic import AnsibleModule
+import syslog
+import traceback
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils._text import to_native
-def get_facility(x):
+def get_facility(facility):
return {
'kern': syslog.LOG_KERN,
'user': syslog.LOG_USER,
@@ -118,10 +121,10 @@ def get_facility(x):
'local5': syslog.LOG_LOCAL5,
'local6': syslog.LOG_LOCAL6,
'local7': syslog.LOG_LOCAL7
-}.get(x, syslog.LOG_DAEMON)
+}.get(facility, syslog.LOG_DAEMON)
-def get_priority(x):
+def get_priority(priority):
return {
'emerg': syslog.LOG_EMERG,
'alert': syslog.LOG_ALERT,
@@ -131,7 +134,7 @@ def get_priority(x):
'notice': syslog.LOG_NOTICE,
'info': syslog.LOG_INFO,
'debug': syslog.LOG_DEBUG
-}.get(x, syslog.LOG_INFO)
+}.get(priority, syslog.LOG_INFO)
def main():
@@ -168,20 +171,16 @@ def main():
# do the logging
try:
-if module.params['log_pid']:
-syslog.openlog(module.params['ident'],
-logoption=syslog.LOG_PID,
-facility=get_facility(module.params['facility']))
-else:
-syslog.openlog(module.params['ident'],
-facility=get_facility(module.params['facility']))
+syslog.openlog(module.params['ident'],
+syslog.LOG_PID if module.params['log_pid'] else 0,
+get_facility(module.params['facility']))
syslog.syslog(get_priority(module.params['priority']),
module.params['msg'])
syslog.closelog()
result['changed'] = True
-except Exception:
-module.fail_json(error='Failed to write to syslog', **result)
+except Exception as exc:
+module.fail_json(error='Failed to write to syslog %s' % to_native(exc), exception=traceback.format_exc(), **result)
module.exit_json(**result)
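The openlog refactor above collapses the duplicated `if`/`else` branches into a single call whose log-option argument is chosen inline. A minimal sketch of that equivalence (pure computation, no actual logging; `openlog_args` is a hypothetical helper for illustration):

```python
import syslog


def openlog_args(ident, log_pid, facility):
    # Mirror of the refactored call: pass LOG_PID when requested,
    # 0 (no option) otherwise, instead of branching on two openlog() calls.
    return (ident, syslog.LOG_PID if log_pid else 0, facility)


print(openlog_args('my_ident', True, syslog.LOG_DAEMON)[1] == syslog.LOG_PID)  # True
print(openlog_args('my_ident', False, syslog.LOG_DAEMON)[1] == 0)              # True
```

Note the `syslog` module is Unix-only; the sketch assumes a POSIX platform.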


@@ -139,6 +139,7 @@ EXAMPLES = '''
import os
import re
import tempfile
+from distutils import version
from ansible.module_utils._text import to_bytes
from ansible.module_utils.basic import AnsibleModule
@@ -356,6 +357,18 @@ class HomebrewCask(object):
else:
self._current_cask = cask
return cask
+@property
+def brew_version(self):
+try:
+return self._brew_version
+except AttributeError:
+return None
+@brew_version.setter
+def brew_version(self, brew_version):
+self._brew_version = brew_version
# /class properties -------------------------------------------- }}}
def __init__(self, module, path=path, casks=None, state=None,
@@ -434,15 +447,12 @@ class HomebrewCask(object):
if not self.valid_cask(self.current_cask):
return False
-cask_is_outdated_command = (
-[
-self.brew_path,
-'cask',
-'outdated',
-]
-+ (['--greedy'] if self.greedy else [])
-+ [self.current_cask]
-)
+if self._brew_cask_command_is_deprecated():
+base_opts = [self.brew_path, 'outdated', '--cask']
+else:
+base_opts = [self.brew_path, 'cask', 'outdated']
+cask_is_outdated_command = base_opts + (['--greedy'] if self.greedy else []) + [self.current_cask]
rc, out, err = self.module.run_command(cask_is_outdated_command)
@@ -454,18 +464,35 @@ class HomebrewCask(object):
self.message = 'Invalid cask: {0}.'.format(self.current_cask)
raise HomebrewCaskException(self.message)
-cmd = [
-"{brew_path}".format(brew_path=self.brew_path),
-"cask",
-"list",
-self.current_cask
-]
+if self._brew_cask_command_is_deprecated():
+base_opts = [self.brew_path, "list", "--cask"]
+else:
+base_opts = [self.brew_path, "cask", "list"]
+cmd = base_opts + [self.current_cask]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True
else:
return False
+def _get_brew_version(self):
+if self.brew_version:
+return self.brew_version
+cmd = [self.brew_path, '--version']
+rc, out, err = self.module.run_command(cmd, check_rc=True)
+# get version string from first line of "brew --version" output
+version = out.split('\n')[0].split(' ')[1]
+self.brew_version = version
+return self.brew_version
+def _brew_cask_command_is_deprecated(self):
+# The `brew cask` replacements were fully available in 2.6.0 (https://brew.sh/2020/12/01/homebrew-2.6.0/)
+return version.LooseVersion(self._get_brew_version()) >= version.LooseVersion('2.6.0')
# /checks ------------------------------------------------------ }}}
# commands ----------------------------------------------------- {{{
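The new deprecation check parses the first line of `brew --version` output and compares it against 2.6.0, the release in which the `brew cask` subcommand was replaced by `--cask` flags. A standalone sketch of that parsing and gate — the sample output string and helper names are assumptions for illustration, and the tuple comparison stands in for `LooseVersion`, assuming purely numeric versions:

```python
def brew_version_from_output(out):
    # "Homebrew 2.6.0\n..." -> "2.6.0": first line, second whitespace-separated field,
    # mirroring out.split('\n')[0].split(' ')[1] in the diff above.
    return out.split('\n')[0].split(' ')[1]


def cask_command_is_deprecated(version_string):
    # `brew cask <subcommand>` became `brew <subcommand> --cask` in Homebrew 2.6.0.
    return tuple(int(p) for p in version_string.split('.')) >= (2, 6, 0)


sample = "Homebrew 2.6.0\nHomebrew/homebrew-core (git revision abc123; last commit 2020-12-01)"
print(brew_version_from_output(sample))      # 2.6.0
print(cask_command_is_deprecated('2.5.12'))  # False
print(cask_command_is_deprecated('2.6.0'))   # True
```

Caching the parsed version on the instance (the `brew_version` property above) avoids re-running `brew --version` for every command the module builds.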
@@ -537,11 +564,10 @@ class HomebrewCask(object):
self.message = 'Casks would be upgraded.'
raise HomebrewCaskException(self.message)
-opts = (
-[self.brew_path, 'cask', 'upgrade']
-)
-cmd = [opt for opt in opts if opt]
+if self._brew_cask_command_is_deprecated():
+cmd = [self.brew_path, 'upgrade', '--cask']
+else:
+cmd = [self.brew_path, 'cask', 'upgrade']
rc, out, err = '', '', ''
@@ -586,10 +612,12 @@ class HomebrewCask(object):
)
raise HomebrewCaskException(self.message)
-opts = (
-[self.brew_path, 'cask', 'install', self.current_cask]
-+ self.install_options
-)
+if self._brew_cask_command_is_deprecated():
+base_opts = [self.brew_path, 'install', '--cask']
+else:
+base_opts = [self.brew_path, 'cask', 'install']
+opts = base_opts + [self.current_cask] + self.install_options
cmd = [opt for opt in opts if opt]
@@ -650,11 +678,13 @@ class HomebrewCask(object):
)
raise HomebrewCaskException(self.message)
-opts = (
-[self.brew_path, 'cask', command]
-+ self.install_options
-+ [self.current_cask]
-)
+if self._brew_cask_command_is_deprecated():
+base_opts = [self.brew_path, command, '--cask']
+else:
+base_opts = [self.brew_path, 'cask', command]
+opts = base_opts + self.install_options + [self.current_cask]
cmd = [opt for opt in opts if opt]
rc, out, err = '', '', ''
@@ -703,10 +733,12 @@ class HomebrewCask(object):
)
raise HomebrewCaskException(self.message)
-opts = (
-[self.brew_path, 'cask', 'uninstall', self.current_cask]
-+ self.install_options
-)
+if self._brew_cask_command_is_deprecated():
+base_opts = [self.brew_path, 'uninstall', '--cask']
+else:
+base_opts = [self.brew_path, 'cask', 'uninstall']
+opts = base_opts + [self.current_cask] + self.install_options
cmd = [opt for opt in opts if opt]
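Every rewritten call site in this diff builds its command the same way: choose `base_opts` by Homebrew version, then concatenate optional flags and the cask name as plain lists. A minimal sketch of that pattern with plain values — `build_cask_command` is a hypothetical helper, not a function from the module, and the position of the extra options relative to the cask name varies slightly between the real call sites:

```python
def build_cask_command(brew_path, subcommand, cask, deprecated, extra=None):
    # Homebrew >= 2.6.0 takes `brew <subcommand> --cask`;
    # older releases take `brew cask <subcommand>`.
    if deprecated:
        base_opts = [brew_path, subcommand, '--cask']
    else:
        base_opts = [brew_path, 'cask', subcommand]
    return base_opts + (extra or []) + [cask]


print(build_cask_command('/usr/local/bin/brew', 'install', 'firefox', True))
# ['/usr/local/bin/brew', 'install', '--cask', 'firefox']
print(build_cask_command('/usr/local/bin/brew', 'install', 'firefox', False, ['--force']))
# ['/usr/local/bin/brew', 'cask', 'install', '--force', 'firefox']
```

Passing the command as a list (rather than a shell string) to `module.run_command` keeps arguments safely quoted.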
