Compare commits

...

75 Commits

Author SHA1 Message Date
Felix Fontein
acea082a7c Add changelog for 1.0.0. 2020-07-31 13:54:10 +02:00
Felix Fontein
0cff1f116f Add release summary. 2020-07-31 13:52:28 +02:00
Felix Fontein
bac14c2f01 Wildfly didn't start because of some permission problems. (#718) 2020-07-31 13:47:38 +02:00
Abhijeet Kasurde
291ceffecb sanity: Fix sanity check for modules (#587)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-07-31 10:57:57 +02:00
chris93111
623817b0b7 add ini config to logstash callback (#610)
* Update logstash.py

* remove version with collection

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>

* rename the callback name with migration collection

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>

* Update plugins/callback/logstash.py

v1

Co-authored-by: Felix Fontein <felix@fontein.de>

* Create 610_logstash_callback_add_ini_config.yml

* Update changelogs/fragments/610_logstash_callback_add_ini_config.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update logstash.py

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-31 10:56:12 +02:00
Roman
6df7fd3026 module lxd_container - added target parameter for cluster deployments (#711)
* module lxd_container - added target support for LXD cluster deployments.
https://github.com/ansible-collections/community.general/issues/637

* lxd_container.py fixed PEP8 issues.

* Update plugins/modules/cloud/lxd/lxd_container.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/lxd/lxd_container.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cloud/lxd/lxd_container.py

Added type: str for target parameter description.

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* lxd_container.py - added example of using target parameter

* lxd_container.py - fixed PEP8 issue, trailing whitespace.

* Update plugins/modules/cloud/lxd/lxd_container.py

Cosmetic fix, adding newline between two blocks of examples.

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-30 12:30:25 +03:00
Felix Fontein
95531d24ea nmcli: fix package name (#712)
* Typo: Wrong package names

On Red Hat systems, Python packages are preceded by `python3-`

* Use Python 2 packages on CentOS 7 and Fedora <= 28.

Co-authored-by: Frank Brütting <fbruetting@users.noreply.github.com>
2020-07-30 06:35:12 +00:00
Sean Reifschneider
848d63fa38 haproxy: Enable/disable health and agent checks (#689)
* Enable/disable health and agent checks

Health and agent checks can cause a disabled service to re-enable
itself.  This adds "health" and "agent" options that will also
enable or disable those checks, matching if the service is to be
enabled/disabled.

* Update plugins/modules/net_tools/haproxy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/haproxy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/haproxy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/haproxy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Changes to documentation and changelog.

Changes for the haproxy documentation to resolve issues with
the CI/CD, and adding a changelog fragment.

* Update changelogs/fragments/689-haproxy_agent_and_health.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/689-haproxy_agent_and_health.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/haproxy.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add an example of health/agent disable.

* Update plugins/modules/net_tools/haproxy.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-29 23:39:48 +02:00
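The behaviour described in the commit message above can be sketched as follows — a hypothetical helper, not the module's actual code, showing how the haproxy stats socket's `enable`/`disable` commands extend to health and agent checks so a check cannot re-enable a disabled service:

```python
def build_socket_commands(state, backend, host, health=False, agent=False):
    """Build haproxy stats-socket commands for a server, optionally
    covering its health and agent checks (illustrative sketch)."""
    verb = "enable" if state == "enabled" else "disable"
    # The base command targets the server itself...
    commands = ["%s server %s/%s" % (verb, backend, host)]
    # ...and the new options mirror the same state onto the checks.
    if health:
        commands.append("%s health %s/%s" % (verb, backend, host))
    if agent:
        commands.append("%s agent %s/%s" % (verb, backend, host))
    return commands
```

Disabling a server with both options set would then emit three socket commands instead of one.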
Ken Dreyer
b29af922eb jira: cast error messages to strings (#707)
Sometimes Jira returns dicts as "errors" instead of simple strings.
For example, when a user specifies a field that cannot be set, Jira
returns a dict with the field name as a key and the error message as the
value.

In the rare case that we have both a "errorMessages" list and an
"errors" dict, when we combine those values later with join(), Python
raises a TypeError.

Transform each individual error message into a string, and then join()
the list of strings.
2020-07-29 23:38:51 +02:00
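The `TypeError` described above is easy to reproduce; a minimal sketch of the failure and the fix (field names are made up, not taken from a real Jira response):

```python
# Jira can report errors two ways: a list of plain strings under
# "errorMessages", and a dict of field name -> message under "errors".
response = {
    "errorMessages": ["issue summary is required"],
    "errors": {"customfield_123": "Field cannot be set"},
}

all_errors = response["errorMessages"] + [response["errors"]]

# ", ".join(all_errors) would raise TypeError, because the dict
# contributes a non-string element to the sequence.
# The fix from the commit: cast each element to a string before joining.
message = ", ".join(str(err) for err in all_errors)
```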
Lee Goolsbee
9822d8172b slack: support for blocks (#702)
* Slack: add support for blocks

* Slack: drop unused validate_certs option

* Slack: update docs to reflect thread_id can be sent with tokens other than WebAPI

* Slack: drop escaping of quotes and apostrophes

* Slack: typo

* Revert "Slack: drop escaping of quotes and apostrophes"

This reverts commit bc6120907e.

* Revert "Slack: drop unused validate_certs option"

This reverts commit a981ee6bca.

* Slack: other/minor PR feedback

* Slack: add changelog fragment

* Slack: clean-up/clarify use of recursive escaping function

* Slack: PR feedback

Co-authored-by: Lee Goolsbee <lgoolsbee@atlassian.com>
2020-07-29 22:11:32 +02:00
Felix Fontein
08f10d5758 Fix more become plugins (#708)
* Fix more become plugins.

* Don't re-use var.

* Other way around.
2020-07-29 20:27:16 +02:00
Jose Angel Munoz
bc5dde0e25 New docker_stack_module with tests (#576)
* First docker stack info approach without tests

Fixes results when no stack available

Fixes sanity test

Fixing links

Fixes tabs

Fixes long line

Improving Json Output

Changes arguments

Lint with autopep8

Moves imports

Adds extra line

Adds Tests

Adds pip and fixes return empty

Fixes silly missing else

* Adds Tests and Fixes comments

* Removes tasks option

* Removes arguments

* Changes error message

* Changes Tests

* Add proposals f

* Improve output

* Change test for output change

* Add debug
2020-07-29 14:56:49 +02:00
Felix Fontein
233617fdfa Add porting guide entries from Ansible-base porting guide (#703)
* Add porting guide entries from Ansible-base porting guide (https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/porting_guides/porting_guide_2.10.rst).

* This is already in the changelog.
2020-07-29 08:02:12 +02:00
Gonéri Le Bouder
15e9f04f86 doas: properly set the default values (#704)
* doas: properly set the default values

The module expects by default:

- `become_user` to be `None` or a string,
- `become_flags` to be an empty string.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-29 07:55:01 +02:00
Robert Osowiecki
2be739ef05 archive: exclude_path documentation (#599)
* archive: exclude_path does not actually exclude files from the archive, but paths generated by glob expansion of the path param.

* Update plugins/modules/files/archive.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-28 11:42:17 +03:00
Pierre Roudier
4a1d86c47e Added uint type to xfconf module (#696)
* Added uint type to xfconf module

* Update changelogs/fragments/xfconf_add_uint_type.yml

Updated PR number in changelog

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Pierre Roudier <pierre.roudier@ticksmith.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-28 10:40:40 +03:00
Ken Dreyer
041824b98e jira: log error messages in "errors" key (#311)
Jira's API can return an empty "errorMessages" list, and the real error
is in the "errors" object. This happens, for example, if a user tries to
file a ticket in a project that does not exist (tested with Jira
v7.13.8)

Check both "errorMessages" and "errors", and report both values with
fail_json().

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-28 08:59:33 +03:00
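The check the commit above describes can be sketched like this (a simplified illustration, not the module's actual error handling):

```python
def collect_error_text(response):
    """Gather error details from a Jira API response, checking both
    "errorMessages" (which may be an empty list) and "errors" (a dict
    that can hold the real error, e.g. for a nonexistent project)."""
    messages = list(response.get("errorMessages") or [])
    errors = response.get("errors") or {}
    if errors:
        # Stringify the dict so it can be reported alongside the list.
        messages.append(str(errors))
    return messages

# A response where the list is empty but "errors" carries the detail:
details = collect_error_text({"errorMessages": [],
                              "errors": {"project": "project is required"}})
```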
Felix Fontein
f5ed0689c1 Update dependencies (#694)
* Bump dependency versions.

* No longer need to use git checkouts.
2020-07-27 16:14:59 +02:00
Michael Williams
1beabef60e Shameless recommit of changes in jesstruck/ansible:jenkins_plugins_sha1 (#677)
* Shameless recommit of changes in jesstruck/ansible:jenkins_plugins_sha1

* Add changelog fragment.

* Change variable name to remove reference to sha1

Also, update changelog fragment typos/style.

* Update changelog fragment typos/style.
2020-07-27 11:33:08 +02:00
Alexei Znamensky
d40dece6c5 big revamp on xfconf, adding array values (#693)
* big revamp on xfconf, adding array values

* added changelog fragment

* Update changelogs/fragments/693-big-revamp-on-xfconf-adding-array-values.yml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-25 13:58:14 +03:00
Bill Dodd
0f6bf38573 remove jose-delarosa - new job (#691) 2020-07-24 23:36:55 +02:00
Bill Dodd
87423f4a33 Fix decoding of response payloads for python 3.5 (#687)
* fix decoding of response payloads for python 3.5

* add changelog fragment
2020-07-24 23:35:33 +02:00
Felix Fontein
748fb40541 Add missing symlink, sort meta/runtime.yml entries (#681)
* Add missing symlink.

* Sort meta/runtime.yml entries.
2020-07-23 22:40:33 +02:00
Jan Gaßner
eb2369a934 yarn: Fix handling of empty outdated & scoped packages (#474)
* Fixed index out of range in yarn module when no packages are outdated

* Fixed handling of yarn dependencies when scoped modules are installed

* Added changelog fragment for yarn module fixes

* Adhere changelogs/fragments/474-yarn_fix-outdated-fix-list.yml to current standards

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* Added scoped package to yarn integration test

Co-authored-by: Jan Gaßner <jan.gassner@plusserver.com>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-23 10:19:32 +03:00
Martin Migasiewicz
80cd8329e0 launchd: new module to control services on macOS hosts (#305) 2020-07-22 16:54:58 +03:00
Orion Poplawski
669b7bf090 Add cobbler inventory plugin (#627)
* Add cobbler inventory plugin

* Add elements, caps

* Use fail_json if we cannot import xmlrpc_client

* [cobbler] Raise AnsibleError for errors

* [plugins/inventory/cobbler] Add cache_fallback option

* [inventory/cobbler] Use != for comparison

* [inventory/cobbler] Add very basic unit tests

* Update plugins/inventory/cobbler.py

Use full name

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-21 23:37:01 +03:00
Andrew Klychkov
c207b7298c osx_defaults: fix handling negative integers (#676)
* osx_defaults: fix handling negative integers

* add changelog fragment
2020-07-21 16:12:21 +02:00
ernst-s
d7aabcceed Handle 'No more variables left in this MIB View ' in snmp_facts (#613)
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-21 12:40:26 +05:30
Felix Fontein
7ac467a359 Restore removed google modules (#675)
* Revert "Remove entries of modules that no longer exist." partially.

This reverts commit c1e1b37da4.

* Revert "The _info module is in google.cloud."

This reverts commit 26f5c84924.

* Revert "Remove modules that were moved to the google.cloud collection according to ansible/ansible's ansible_builtin_runtime.yml."

This reverts commit a1442ccc35.

* Fix FQCNs in examples and module references.

* Add changelog fragment.

* Update ignore.txt.

* Remove bad lines.
2020-07-21 07:45:32 +01:00
Minh Ha
52cce0b7af removes advertise_addr from required parameters when state is "join" (#646)
* removes advertise_addr from required parameters when state is "join"

addressing this issue: https://github.com/ansible-collections/community.general/issues/439

* adjusts test

* adds changelog
2020-07-20 18:14:16 +02:00
Andrew Klychkov
09d68678ee postgresql_query: add search_path parameter (#653)
* postgresql_query: add search_path parameter

* add CI tests

* add ref to seealso

* add changelog fragment

* fix test syntax

* fix test syntax

* fix

* fix

* fix CI syntax

* cosmetic change

* improve CI test

* move CI tests to the right place

* improve CI
2020-07-20 10:08:02 +03:00
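A rough sketch of what a `search_path` parameter adds to query execution — a hypothetical helper, simplified compared to the real module:

```python
def build_statements(query, search_path=None):
    """Return the SQL statements to run: when search_path is given,
    a SET search_path statement precedes the user's query so that
    unqualified object names resolve against the listed schemas."""
    statements = []
    if search_path:
        statements.append("SET search_path TO %s" % ", ".join(search_path))
    statements.append(query)
    return statements
```

For example, `build_statements("SELECT * FROM t", ["acme", "public"])` yields the SET statement followed by the query.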
Felix Fontein
ee6baa30cf Prevent CI failure with older pip versions. (#670) 2020-07-19 14:50:40 +02:00
Eric G
bc43694ca9 pacman: Treat .zst package names as files (#650)
* pacman: Treat .zst package names as files

* pacman: add changelog for .zst support

* pacman: refer to pull-request in changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: gileri <e+git8413@linuxw.info>
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-19 12:07:38 +02:00
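The change amounts to widening a filename-suffix check; a hedged sketch (the suffix list here is illustrative, not the module's exact logic):

```python
def looks_like_package_file(name):
    """Treat names ending in a package-archive suffix as local files
    rather than repository package names; the commit adds zstd-compressed
    packages (".zst") alongside the older xz form."""
    return name.endswith((".pkg.tar.xz", ".pkg.tar.zst"))
```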
John Westcott IV
a86195623b Adding ODBC module (#606)
* Adding ODBC module

* Adding symink and fixing docs and argspec

* Another sanity issue

* Hopefully last fix for elements

* Making changes suggested by felixfontein

* Making changes suggested by Andersson007

* Removing defaults and added info in description

* Fixing line too long

* More cleanup suggested by felixfontein

* Changing module call
2020-07-17 07:39:37 +03:00
Mr Bleu
9e76fdc668 Add check for rundeck_acl_policy name (#612)
* Add check for rundeck_acl_policy name

* Update changelogs/fragments/add_argument_check_for_rundeck.yaml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-16 23:17:18 +02:00
Adam Miller
64c4548b7f migrate firewalld to ansible.posix (#623)
* migrate firewalld to ansible.posix

Signed-off-by: Adam Miller <admiller@redhat.com>

* fix removal_version for runtime.yml

Signed-off-by: Adam Miller <admiller@redhat.com>

* add changelog fragment

Signed-off-by: Adam Miller <admiller@redhat.com>

* Update meta/runtime.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/firewalld_migration.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* add module_util routing entry

Signed-off-by: Adam Miller <admiller@redhat.com>

* Update meta/runtime.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-16 22:21:22 +02:00
VMax
a0c8a3034a modules: fix names with hyphens (#656) (#659)
* modules: fix names with hyphens (#656)

* modules: fix names with hyphens (#656)

* Fix param name for postgresql_schema

* Add double quotes for schema name

* Add delete created DB objects

* Fix module code

* Set correct test tasks order

Co-authored-by: Maxim Voskresenskiy <maxim.voskresenskiy@uptick.com>
2020-07-16 17:44:19 +03:00
Andrew Klychkov
4c4a6ab27c modules: fix examples to use FQCN for builtin plugins (#661) 2020-07-16 14:42:12 +03:00
Andrew Klychkov
2c3efea14b Test of using FQCN for some builtin plugins (#660) 2020-07-16 12:24:04 +03:00
Regis A. Despres
831a4962c1 gitlab_runners inventory plugin: runtime env var and list default behavior (#611)
* chore: runtime env var and list default behavior

Ability to pick option values from env vars in the gitlab_runners inventory plugin
Remove the default 20-item limit on the runners list in the gitlab_runners inventory plugin

* Changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

* Changelog fragment

* Badly placed fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

* changelog fragment for api token

* changelog fragment

* Update changelogs/fragments/611-gitlab-runners-env-vars-intput-and-default-item-limit.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/611-gitlab-runners-env-vars-intput-and-default-item-limit.yaml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

* fix: remove default filter all due to  #440

* fix: remove default filter all due to  #440

* chore: add os env var for filter input

* chore: add os env var for filter input

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-16 10:41:09 +03:00
Till!
8e45b96a33 Update: only "warn" when restart is required (#651)
* Fix: only "warn" when restart is required

Motivation: my logs are flooded with "warnings" when I run roles against instances.
So even though I understand the reason for this "warning", it's a false positive when nothing
needs to be done.

* Update changelogs/fragments/651-fix-postgresql_set-warning.yaml

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-07-15 16:22:06 +03:00
Andrew Klychkov
c055340ecb modules: fix examples to use FQCN for builtin modules (#648)
* modules: fix examples to use FQCN for builtin modules

* fix

* fix

* fix

* fix

* fix

* fix

* fix
2020-07-14 18:28:08 +03:00
Abhijeet Kasurde
c034e8c04f gitlab_project: add support for merge_method on projects (#636)
Migrated from ansible/ansible#66813

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-07-14 20:17:46 +05:30
Andrew Klychkov
41cfdda6a3 modules: fix examples to use FQCN (#644)
* modules: fix examples to use FQCN

* fix

* fix

* fix
2020-07-13 21:50:31 +02:00
Felix Fontein
8b92e0454d docker_container: make sure to_text() and to_native() are used instead of str() (#642)
* Make sure to_text() and to_native() are used instead of str().

* Add changelog.

* Quoting should stay.
2020-07-13 20:59:29 +02:00
Andrew Klychkov
f62b8027e0 postgresql modules: fix examples to use FQCN (#643) 2020-07-13 12:54:34 +03:00
Adam C. Migus
a424ee71e3 Add Thycotic DevOps Secrets Vault lookup plugin (#90)
* Add the Thycotic DevOps Secrets Vault lookup plugin.

* Update plugins/lookup/dsv.py

Co-Authored-By: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/dsv.py

Co-Authored-By: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/dsv.py

Co-Authored-By: Felix Fontein <felix@fontein.de>

* Fix import error check per code review.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add a unittest for plugins/lookup/dsv.py

* Add copyrights.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fixed formatting bug in test_dsv.py

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-13 05:45:07 +00:00
Adam C. Migus
4c6e2f2a40 Add the Thycotic Secret Server lookup plugin. (#91)
* Add the Thycotic Secret Server lookup plugin.

* Update plugins/lookup/tss.py

Co-Authored-By: Felix Fontein <felix@fontein.de>

* Fix import error check per code review.

* Apply suggestions from code review

Co-Authored-By: Felix Fontein <felix@fontein.de>

* Trivial changes based on suggestions from code review.

* Add a unittest for plugins/lookup/tss.py

* Add copyrights.

* Fixed formatting bug in test_tss.py

* Fix formatting bugs in tss.py and test_tss.py

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-13 07:37:20 +02:00
Abhijeet Kasurde
151551b04f selective: mark task failed correctly (#629)
Added additional condition to detect failed task in
selective callback plugin when ran with loop or with_items.

Fixes: ansible/ansible#63767

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2020-07-12 00:26:58 +02:00
Amin Vakil
b31de003bd Reduce ignored files sanity tests (#639)
* Remove all files ignores to see errors

* Fix

* Revert archive.py as its change should be discussed in another PR

* Ignore archive.py

* Revert xml

* Fix
2020-07-12 00:22:15 +02:00
ekaulberg
7f76d8aff4 Remove old infini modules to prepare for collection implementation (#607)
* Removed old infini modules, added redirects to new collection.

* Fix key names.

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-10 21:08:09 +02:00
Ben Mildren
f420e8f02e Migrating MySQL to community.mysql (#633)
* Migrating MySQL to community.mysql

* Added PR to changelog

* Removed missed tests

* Removed missed changelog fragments

* Update meta/runtime.yml

Co-authored-by: Ben Mildren <bmildren@digitalocean.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-10 18:54:04 +02:00
Davíð Steinn Geirsson
d0b07885f0 pkgng: Add support for upgrading all installed packages (#569)
* pkgng: Add support for upgrading all installed packages

Adds support for ``name: "*", state: latest`` to upgrade all installed
packages, similar to other package providers.

Co-authored-by: Felix Fontein <felix@fontein.de>

* pkgng: Improve wording for warning in example, fix formatting

* pkgng.py: Fix capitalization

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Davíð Steinn Geirsson <david@isnic.is>
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-07 23:04:35 +02:00
Felix Fontein
c1b5b51366 Remove DigitalOcean modules (moved to community.digitalocean) (#622)
* Remove DigitalOcean modules.

* Remove inventory scripts.
2020-07-07 16:41:16 +02:00
Ben Mildren
25aec0d712 migrating ProxySQL to community.proxysql (#624)
* migrating ProxySQL to community.proxysql

* Added PR to changelog

* Removed symlinks, fixed grammar in changelog fragment

* Fixed typo

* Added redirects to meta/runtime.yml

* Added docs_fragment redirect to meta/runtime.yml

Co-authored-by: Ben Mildren <bmildren@digitalocean.com>
2020-07-07 13:38:54 +01:00
Andrew Klychkov
17f905eb35 postgres modules: fix CI timeouts (#628) 2020-07-07 11:33:35 +03:00
John R Barker
c5f0c34190 codeql-analysis.yml cron 2020-07-06 20:24:45 +01:00
John R Barker
24c66fcc43 Create codeql-analysis.yml 2020-07-06 20:19:31 +01:00
Andrew Klychkov
5e23f01a76 mysql_user: fix overriding user password to the same (#609)
* mysql_user: fix overriding user password to the same

* add changelog
2020-07-06 15:16:48 +03:00
Andrew Klychkov
74ba307777 mysql_user: add missed privileges (#618)
* mysql_user: add missed privileges

* add changelog fragment
2020-07-06 09:58:15 +02:00
Felix Fontein
a1c03a3cfe master -> main 2020-07-06 08:53:01 +02:00
James J Porter
6b852d841f Fix bug in digital_ocean_tag_info module (#615)
Co-authored-by: Felix Fontein <felix@fontein.de>
2020-07-05 11:31:52 +05:30
Felix Fontein
c6ec384c24 Disable OSX 10.11 for 2.9 tests, since coverage fails for it. 2020-07-04 14:08:10 +02:00
Felix Fontein
65a8dbad8d Fix coverage parameters for shippable script. 2020-07-03 20:15:31 +02:00
Piotr Wojciechowski
171bc087cf Fix documentation mistakes for docker_swarm_info and docker_swarm_service (#608)
* Fix typo in documentation section

* Fix typo in documentation section

Co-authored-by: WojciechowskiPiotr <devel@it-playground.pl>
2020-07-02 23:17:07 +02:00
Felix Fontein
786f082976 Run tests with Ansible 2.9 as well (#296)
* Run some tests with Ansible 2.9. No need to run extra tests multiple times.

* Update ignore-2.9.txt.

* Adjust README.

* Add changelog fragment.
2020-07-02 13:40:16 +02:00
Andrew Klychkov
3cde447eb8 postgresql CI tests: fix timeouts (#598)
* postgresql CI tests: fix timeouts

* fix

ci_complete
2020-07-02 09:01:20 +02:00
Felix Fontein
e2bd4b34ed Adjust README to template's README. (#601) 2020-07-02 06:44:08 +02:00
Andrew Klychkov
a5f11b085b postgresql_idx: add CI tests (#603) 2020-06-30 17:11:43 +03:00
Abhijeet Kasurde
d0fb125586 docs: misc fixes for typos (#602)
Migrated from https://github.com/ansible/ansible/pull/68367

Signed-off-by: Radoslav Dimitrov dimitrovr@vmware.com
2020-06-30 19:00:41 +05:30
Robert Osowiecki
ba28da9b62 postgresql_db: document when pg_restore is used (#589)
* postgresql_db: document when pg_restore is used (#588)

* postgresql_db: more precise description for target_opts

* Update plugins/modules/database/postgresql/postgresql_db.py

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>
2020-06-30 09:54:31 +03:00
anshulbehl
cca84abeb5 conjur_variable: redirecting to correct collection (#570)
* - Redirecting to correct collection
- Removing the plugin and adding changelog and deprecation

* Making suggested changes

* Earlier version on leftovers

* Update changelogs/fragments/cyberarkconjur-removal.yml

Co-authored-by: Felix Fontein <felix@fontein.de>
2020-06-30 07:36:08 +02:00
Joey Zhang
d2ee51253d nmcli: add idempotent support for any kinds of connections (#562)
* nmcli: add idempotent support for any kinds of connections

Fixes #481: nmcli reports changed status even if nothing needs to change
- Implement show_connection() to retrieve connection profile from command line
- Parse integer enumeration values in show_connection()
- Convert 'bond.options' to alias shortcuts
- Modify connection only if changes are detected
- Support generic aliases during the property comparison

* nmcli: add idempotent support for any kinds of connections

Add mock object for modification cases when connection state changes

* nmcli: add idempotent support for any kinds of connections

- Add more test cases to check idempotency for each type of connection
- Verify 'changed' and 'failed' in the result of each test
- Append prefixlen for 'ip4' values in test data
- Fix the incorrect 'return_value' of execute_command() in previous mockers
- Ignore the empty string in _compare_conn_params()
- Fix the property key mapping of 'bridge-port.hairpin-mode' for bridge-slave
- Add 'override_options' in the result output for playbook debug

* nmcli: add idempotent support for any kinds of connections

Fix pep8 issues in test_nmcli.py: Comparison to False should be 'not expr'

* nmcli: add idempotent support for any kinds of connections

Support setting 'ipv4.method' or 'ipv6.method' via nmcli if the configuration method changes

* nmcli: add idempotent support for any kinds of connections

Simplify the if statements in show_connection() according to vlours's advice

* nmcli: add idempotent support for any kinds of connections

Fix the list argument comparison method with multiple values.

* nmcli: add idempotent support for any kinds of connections

Use ansible --diff option output to show detailed changes instead of a private return value.

* nmcli: add idempotent support for any kinds of connections

Add changelog fragment for bugfix.
2020-06-30 05:43:39 +02:00
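The comparison logic the commit above outlines — skip empty values, compare list-valued properties order-insensitively, and only modify the connection when something actually differs — can be sketched roughly (names are illustrative, not the module's helpers):

```python
def connection_needs_update(current, desired):
    """Compare desired nmcli properties against the profile reported by
    `nmcli connection show`; return True only when a real change is
    needed (illustrative sketch of the idempotency check)."""
    for key, want in desired.items():
        if want in (None, ""):  # empty values are treated as "unset"
            continue
        have = current.get(key, "")
        if isinstance(want, list):
            # List arguments (e.g. multiple IP addresses) compare as
            # unordered collections.
            have_list = have if isinstance(have, list) else [have]
            if sorted(map(str, want)) != sorted(map(str, have_list)):
                return True
        elif str(want) != str(have):
            return True
    return False
```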
Kirill Petrov
706195fb02 use Config MacAddress by default instead of Networks (#564)
* use Config MacAddress by default instead of Networks

* use Config MacAddress by default instead of Networks - fix typo

* #564 docker_container macaddress - add changelog fragment
2020-06-30 05:40:43 +02:00
Baptiste Mille-Mathias
a7a74a6eb7 [splunk] Add an option to not fail when the certificate is not valid (#596)
* [splunk] Add an option to not fail when the certificate is not valid

Add a boolean option validate_certs to not validate the certificate of
the HTTP Event Collector.

* Add changelog

* Fix using tabs indentation

* Fix post-review - fix changelog and version of the parameter

Co-authored-by: Baptiste Mille-Mathias <baptiste.millemathias@gmail.com>
2020-06-29 16:14:44 +02:00
939 changed files with 9631 additions and 23454 deletions

68
.github/BOTMETA.yml

@@ -64,18 +64,10 @@ files:
  $doc_fragments/hwc.py:
    maintainers: $team_huawei
    labels: hwc
  $doc_fragments/mysql.py:
    maintainers: $team_mysql
    labels: database mysql
    keywords: mariadb proxysql
  $doc_fragments/postgres.py:
    maintainers: $team_postgresql
    labels: postgres postgresql
    keywords: database postgres postgresql
  $doc_fragments/proxysql.py:
    maintainers: $team_mysql
    labels: database mysql proxysql
    keywords: mariadb proxysql
  $doc_fragments/xenserver.py:
    maintainers: bvitnik
    labels: xenserver
@@ -159,10 +151,6 @@ files:
  $module_utils/memset.py:
    maintainers: glitchcrab
    labels: cloud memset
  $module_utils/mysql.py:
    maintainers: $team_mysql
    labels: database mysql
    keywords: mariadb proxysql
  $module_utils/net_tools/nios/api.py:
    maintainers: $team_networking sganesh-infoblox
    labels: infoblox networking
@@ -200,33 +188,6 @@ files:
    authors: krsacme
  $modules/cloud/centurylink/:
    authors: clc-runner
  $modules/cloud/digital_ocean/digital_ocean.py:
    authors: zbal
  $modules/cloud/digital_ocean/:
    authors: Akasurde
    maintainers: $team_digital_ocean
    keywords:
      - digital ocean
      - droplet
  $modules/cloud/digital_ocean/digital_ocean_firewall_facts.py:
    authors: BondAnthony
    maintainers: mgregson
  $modules/cloud/digital_ocean/digital_ocean_floating_ip_facts.py:
    authors: pmarques
  $modules/cloud/digital_ocean/digital_ocean_sshkey_facts.py:
    authors: pmarques
  $modules/cloud/digital_ocean/digital_ocean_block_storage.py:
    authors: harneksidhu
  $modules/cloud/digital_ocean/digital_ocean_domain.py:
    authors: mgregson
    maintainers: BondAnthony
  $modules/cloud/digital_ocean/digital_ocean_droplet.py:
    authors: gurch101
  $modules/cloud/digital_ocean/digital_ocean_firewall_info.py:
    authors: BondAnthony
    maintainers: mgregson
  $modules/cloud/digital_ocean/digital_ocean_tag.py:
    authors: kontrafiktion
  $modules/cloud/dimensiondata/dimensiondata_network.py:
    authors: aimonb
    maintainers: tintoy
@@ -529,22 +490,6 @@ files:
    authors: vedit
    maintainers: Jmainguy kenichi-ogawa-1988
    labels: mssql_db
  $modules/database/mysql/mysql_db.py:
    authors: ansible
    maintainers: $team_mysql
  $modules/database/mysql/:
    authors: Andersson007
    maintainers: Alexander198961 Xyon bmalynovytch bmildren kurtdavis michaelcoburn oneiroi tolland
    keywords: mariadb proxysql
  $modules/database/mysql/mysql_replication.py:
    authors: Andersson007 banyek
  $modules/database/mysql/mysql_user.py:
    authors: Jmainguy bmalynovytch
    maintainers: Alexander198961 Andersson007 Xyon bmildren kurtdavis michaelcoburn oneiroi tolland
    ignore: tomaszkiewicz
  $modules/database/mysql/mysql_variables.py:
    authors: banyek
    maintainers: $team_mysql
  $modules/database/postgresql/postgresql_db.py:
    authors: ansible
    maintainers: $team_postgresql
@@ -583,11 +528,6 @@ files:
  $modules/database/postgresql/postgresql_user.py:
    authors: ansible
    maintainers: $team_postgresql
  $modules/database/proxysql/:
    authors: bmildren
    maintainers: Alexander198961 Andersson007 Xyon bmalynovytch kurtdavis michaelcoburn oneiroi tolland
    labels: mysql
    keywords: mariadb proxysql
  $modules/database/vertica/:
    authors: dareko
  $modules/files/archive.py:
@@ -1048,7 +988,7 @@ files:
    authors: fgbulsoni
  $modules/remote_management/redfish/:
    authors: jose-delarosa
    maintainers: billdodd mraineri tomasg2012
    maintainers: $team_redfish
  $modules/remote_management/stacki/stacki_host.py:
    authors: bbyhuy
    maintainers: bsanders
@@ -1175,8 +1115,6 @@ files:
    authors: abulimov
    maintainers: pilou-
    labels: filesystem
  $modules/system/firewalld.py:
    authors: maxamillion
  $modules/system/gconftool2.py:
    authors: kevensen
    maintainers: Akasurde
@@ -1364,7 +1302,6 @@ macros:
  team_aix: MorrisA bcoca d-little flynn1973 gforster kairoaraujo marvin-sinister mator molekuul ramooncamacho wtcross
  team_bsd: JoergFiedler MacLemon bcoca dch jasperla mekanix opoplawski overhacked tuxillo
  team_cyberark_conjur: jvanderhoof ryanprior
  team_digital_ocean: BondAnthony mgregson
  team_docker: DBendit WojciechowskiPiotr akshay196 danihodovic dariko felixfontein jwitko kassiansun tbouvet
  team_e_spirit: MatrixCrawler getjack
  team_extreme: LindsayHill bigmstone ujwalkomarla
@@ -1378,7 +1315,6 @@ macros:
  team_linode: InTheCloudDan decentral1se displague rmcintosh
  team_macos: akasurde kyleabenson martinm82
  team_manageiq: abellotti cben gtanzillo yaacov zgalor
  team_mysql: Alexander198961 Andersson007 Xyon bmalynovytch bmildren kurtdavis michaelcoburn oneiroi tolland
  team_netapp: amit0701 carchi8py hulquest lmprice lonico ndswartz schmots1
  team_netscaler: chiradeep giorgos-nikolopoulos
  team_netvisor: Qalthos amitsi csharpe-pn pdam preetiparasar
@@ -1387,7 +1323,7 @@ macros:
  team_postgresql: Andersson007 Dorn- amenonsen andytom jbscalia kostiantyn-nemchenko matburt nerzhul sebasmannem tcraxs
  team_purestorage: bannaych dnix101 genegr lionmax opslounge raekins sdodsley sile16
  team_rabbitmq: chrishoffman manuel-sousa
  team_redfish: billdodd jose-delarosa mraineri tomasg2012
  team_redfish: billdodd mraineri tomasg2012
  team_rhn: FlossWare alikins barnabycourt vritant
  team_scaleway: DenBeke QuentinBrosse abarbare jerome-quere kindermoumoute remyleone
  team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l

.github/workflows/codeql-analysis.yml (new file)

@@ -0,0 +1,49 @@
name: "Code scanning - action"
on:
schedule:
- cron: '26 19 * * 1'
jobs:
CodeQL-Build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
with:
# We must fetch at least the immediate parents so that if this is
# a pull request then we can checkout the head.
fetch-depth: 2
# If this run was triggered by a pull request event, then checkout
# the head of the pull request instead of the merge commit.
- run: git checkout HEAD^2
if: ${{ github.event_name == 'pull_request' }}
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
# Override language selection by uncommenting this and choosing your languages
# with:
# languages: go, javascript, csharp, python, cpp, java
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1


@@ -5,6 +5,133 @@ Community General Release Notes
.. contents:: Topics
v1.0.0
======
Release Summary
---------------
This is release 1.0.0 of ``community.general``, released on 2020-07-31.
Minor Changes
-------------
- Add the ``gcpubsub``, ``gcpubsub_info`` and ``gcpubsub_facts`` (to be removed in 3.0.0) modules. These were originally in community.general, but removed on the assumption that they have been moved to google.cloud. Since this turned out to be incorrect, we re-added them for 1.0.0.
- Add the deprecated ``gcp_backend_service``, ``gcp_forwarding_rule`` and ``gcp_healthcheck`` modules, which will be removed in 2.0.0. These were originally in community.general, but removed on the assumption that they have been moved to google.cloud. Since this turned out to be incorrect, we re-added them for 1.0.0.
- The collection is now actively tested in CI with the latest Ansible 2.9 release.
- airbrake_deployment - add ``version`` param; clarified docs on ``revision`` param (https://github.com/ansible-collections/community.general/pull/583).
- apk - added ``no_cache`` option (https://github.com/ansible-collections/community.general/pull/548).
- firewalld - the module has been moved to the ``ansible.posix`` collection. A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/623).
- gitlab_project - add support for merge_method on projects (https://github.com/ansible/ansible/pull/66813).
- gitlab_runners inventory plugin - permit environment variable input for ``server_url``, ``api_token`` and ``filter`` options (https://github.com/ansible-collections/community.general/pull/611).
- haproxy - add options to enable/disable health and agent checks. When health and agent checks are enabled for a service, a disabled service will re-enable itself automatically. These options also change the state of the agent checks to match the requested state for the backend (https://github.com/ansible-collections/community.general/issues/684).
- log_plays callback - use v2 methods (https://github.com/ansible-collections/community.general/pull/442).
- logstash callback - add ini config (https://github.com/ansible-collections/community.general/pull/610).
- lxd_container - added support of ``--target`` flag for cluster deployments (https://github.com/ansible-collections/community.general/issues/637).
- parted - accept negative numbers in ``part_start`` and ``part_end``.
- pkgng - added ``stdout`` and ``stderr`` attributes to the result (https://github.com/ansible-collections/community.general/pull/560).
- pkgng - added support for upgrading all packages using ``name: *, state: latest``, similar to other package providers (https://github.com/ansible-collections/community.general/pull/569).
- postgresql_query - add search_path parameter (https://github.com/ansible-collections/community.general/issues/625).
- rundeck_acl_policy - add check for rundeck_acl_policy name parameter (https://github.com/ansible-collections/community.general/pull/612).
- slack - add support for sending messages built with block kit (https://github.com/ansible-collections/community.general/issues/380).
- splunk callback - add an option to allow not to validate certificate from HEC (https://github.com/ansible-collections/community.general/pull/596).
- xfconf - add arrays support (https://github.com/ansible/ansible/issues/46308).
- xfconf - add support for ``uint`` type (https://github.com/ansible-collections/community.general/pull/696).
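As an illustration, the new ``lxd_container`` ``target`` option listed above can be used in a task like the following (a minimal sketch; the container name, image alias, and cluster member name are hypothetical):

```yaml
# Hypothetical sketch: create a container on a specific LXD cluster member.
- name: Create container on cluster member node01
  community.general.lxd_container:
    name: mycontainer
    state: started
    source:
      type: image
      alias: ubuntu/20.04
    target: node01  # corresponds to the LXD --target flag
```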
Breaking Changes / Porting Guide
--------------------------------
- log_plays callback - add missing information to the logs generated by the callback plugin. This changes the log message format (https://github.com/ansible-collections/community.general/pull/442).
- pkgng - passing ``name: *`` with ``state: absent`` will no longer remove every installed package from the system. It is now a noop. (https://github.com/ansible-collections/community.general/pull/569).
- pkgng - passing ``name: *`` with ``state: latest`` or ``state: present`` will no longer install every package from the configured package repositories. Instead, ``name: *, state: latest`` will upgrade all already-installed packages, and ``name: *, state: present`` is a noop (https://github.com/ansible-collections/community.general/pull/569).
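For reference, the new wildcard semantics translate into tasks like these (a sketch; note that ``*`` must be quoted in YAML):

```yaml
# Upgrades all already-installed packages (new behaviour).
- name: Upgrade all installed packages
  community.general.pkgng:
    name: "*"
    state: latest

# Now a noop; no longer removes every installed package.
- name: Wildcard removal is a noop
  community.general.pkgng:
    name: "*"
    state: absent
```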
Deprecated Features
-------------------
- The ldap_attr module has been deprecated and will be removed in a later release; use ldap_attrs instead.
- xbps - the ``force`` option never had any effect. It is now deprecated, and will be removed in 3.0.0 (https://github.com/ansible-collections/community.general/pull/568).
Removed Features (previously deprecated)
----------------------------------------
- conjur_variable lookup - has been moved to the ``cyberark.conjur`` collection. A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/570).
- digital_ocean_* - all DigitalOcean modules have been moved to the ``community.digitalocean`` collection. A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/622).
- infini_* - all infinidat modules have been moved to the ``infinidat.infinibox`` collection. A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/607).
- logicmonitor - the module has been removed in 1.0.0 since it is unmaintained and the API used by the module has been turned off in 2017 (https://github.com/ansible-collections/community.general/issues/539, https://github.com/ansible-collections/community.general/pull/541).
- logicmonitor_facts - the module has been removed in 1.0.0 since it is unmaintained and the API used by the module has been turned off in 2017 (https://github.com/ansible-collections/community.general/issues/539, https://github.com/ansible-collections/community.general/pull/541).
- mysql_* - all MySQL modules have been moved to the ``community.mysql`` collection. A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/633).
- proxysql_* - all ProxySQL modules have been moved to the ``community.proxysql`` collection. A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/624).
Bugfixes
--------
- aix_filesystem - fix issues with ismount module_util pathing for Ansible 2.9 (https://github.com/ansible-collections/community.general/pull/567).
- consul_kv lookup - fix ``ANSIBLE_CONSUL_URL`` environment variable handling (https://github.com/ansible/ansible/issues/51960).
- consul_kv lookup - fix arguments handling (https://github.com/ansible-collections/community.general/pull/303).
- digital_ocean_tag_info - fix crash when querying for an individual tag (https://github.com/ansible-collections/community.general/pull/615).
- doas become plugin - address a bug with the parameters handling that was breaking the plugin in community.general when ``become_flags`` and ``become_user`` were not explicitly specified (https://github.com/ansible-collections/community.general/pull/704).
- docker_compose - add a condition to prevent service startup if parameter ``stopped`` is true. Otherwise, the service will be started on each play and stopped again immediately due to the ``stopped`` parameter and breaks the idempotency of the module (https://github.com/ansible-collections/community.general/issues/532).
- docker_compose - disallow usage of the parameters ``stopped`` and ``restarted`` at the same time. This also breaks idempotency (https://github.com/ansible-collections/community.general/issues/532).
- docker_container - use Config MacAddress by default instead of Networks. Networks MacAddress is empty in some cases (https://github.com/ansible/ansible/issues/70206).
- docker_container - various error fixes in string handling for Python 2 to avoid crashes when non-ASCII characters are used in strings (https://github.com/ansible-collections/community.general/issues/640).
- docker_swarm - removes ``advertise_addr`` from list of required arguments when ``state`` is ``"join"`` (https://github.com/ansible-collections/community.general/issues/439).
- dzdo become plugin - address a bug with the parameters handling that was breaking the plugin in community.general when ``become_user`` was not explicitly specified (https://github.com/ansible-collections/community.general/pull/708).
- filesystem - resizefs of xfs filesystems is fixed. Filesystem needs to be mounted.
- jenkins_plugin - replace MD5 checksum verification with SHA1 due to MD5 being disabled on systems with FIPS-only algorithms enabled (https://github.com/ansible/ansible/issues/34304).
- jira - improve error message handling (https://github.com/ansible-collections/community.general/pull/311).
- jira - improve error message handling with multiple errors (https://github.com/ansible-collections/community.general/pull/707).
- kubevirt - Add alias 'interface_name' for network_name (https://github.com/ansible/ansible/issues/55641).
- nmcli - fix idempotency when modifying an existing connection (https://github.com/ansible-collections/community.general/issues/481).
- osx_defaults - fix handling negative integers (https://github.com/ansible-collections/community.general/issues/134).
- pacman - treat package names containing .zst as package files during installation (https://www.archlinux.org/news/now-using-zstandard-instead-of-xz-for-package-compression/, https://github.com/ansible-collections/community.general/pull/650).
- pbrun become plugin - address a bug with the parameters handling that was breaking the plugin in community.general when ``become_user`` was not explicitly specified (https://github.com/ansible-collections/community.general/pull/708).
- postgresql_privs - fix crash when setting privileges on a schema with a hyphen in the name (https://github.com/ansible-collections/community.general/issues/656).
- postgresql_set - only display a warning about restarts when restarting is needed (https://github.com/ansible-collections/community.general/pull/651).
- redfish_info, redfish_config, redfish_command - Fix Redfish response payload decode on Python 3.5 (https://github.com/ansible-collections/community.general/issues/686).
- selective - mark task failed correctly (https://github.com/ansible/ansible/issues/63767).
- snmp_facts - skip ``EndOfMibView`` values (https://github.com/ansible/ansible/issues/49044).
- yarn - fixed an index out of range error when no outdated packages were returned by the yarn executable (see https://github.com/ansible-collections/community.general/pull/474).
- yarn - fixed a 'too many values to unpack' error when scoped packages are installed (see https://github.com/ansible-collections/community.general/pull/474).
New Plugins
-----------
Inventory
~~~~~~~~~
- cobbler - Cobbler inventory source
Lookup
~~~~~~
- dsv - Get secrets from Thycotic DevOps Secrets Vault
- tss - Get secrets from Thycotic Secret Server
New Modules
-----------
Cloud
~~~~~
docker
^^^^^^
- docker_stack_info - Return information on a docker stack
Database
~~~~~~~~
misc
^^^^
- odbc - Execute SQL via ODBC
System
~~~~~~
- launchd - Manage macOS services
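As an illustration, the new ``launchd`` module is driven like other service modules (a sketch; the service label is an example):

```yaml
- name: Ensure the SSH daemon is running
  community.general.launchd:
    name: com.openssh.sshd
    state: started
```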
v0.2.0
======


@@ -1,15 +1,22 @@
[![Run Status](https://api.shippable.com/projects/5e664a167c32620006c9fa50/badge?branch=master)](https://app.shippable.com/github/ansible-collections/community.general/dashboard) [![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.general)](https://codecov.io/gh/ansible-collections/community.general)
# Community General Collection
# Ansible Collection: community.general
[![Run Status](https://api.shippable.com/projects/5e664a167c32620006c9fa50/badge?branch=main)](https://app.shippable.com/github/ansible-collections/community.general/dashboard) [![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.general)](https://codecov.io/gh/ansible-collections/community.general)
This repo contains the `community.general` Ansible Collection.
This repo contains the `community.general` Ansible Collection. The collection includes many modules and plugins supported by the Ansible community which are not part of more specialized community collections.
The collection includes the modules and plugins supported by Ansible community.
## Tested with Ansible
Tested with the current Ansible 2.9 and 2.10 releases and the current development version of Ansible. Ansible versions before 2.9.10 are not supported.
## Installation and Usage
## External requirements
### Installing the Collection from Ansible Galaxy
Some modules and plugins require external libraries. Please check the documentation for each plugin or module you use to find out which libraries are needed.
## Included content
Please check the included content on the [Ansible Galaxy page for this collection](https://galaxy.ansible.com/community/general).
## Using this collection
Before using the General community collection, you need to install the collection with the `ansible-galaxy` CLI:
@@ -22,34 +29,19 @@ collections:
- name: community.general
```
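For completeness, a minimal `requirements.yml` using this collection looks like the following (the version pin is illustrative); install it with `ansible-galaxy collection install -r requirements.yml`:

```yaml
# requirements.yml
collections:
  - name: community.general
    version: ">=1.0.0"  # optional, illustrative pin
```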
## Testing and Development
See [Ansible Using collections](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html) for more details.
If you want to develop new content for this collection or improve what is already here, the easiest way to work on the collection is to clone it into one of the configured [`COLLECTIONS_PATHS`](https://docs.ansible.com/ansible/latest/reference_appendices/config.html#collections-paths), and work on it there.
## Contributing to this collection
You can find more information in the [developer guide for collections](https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html#contributing-to-collections)
If you want to develop new content for this collection or improve what is already here, the easiest way to work on the collection is to clone it into one of the configured [`COLLECTIONS_PATH`](https://docs.ansible.com/ansible/latest/reference_appendices/config.html#collections-paths), and work on it there.
### Testing with `ansible-test`
You can find more information in the [developer guide for collections](https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html#contributing-to-collections), and in the [Ansible Community Guide](https://docs.ansible.com/ansible/latest/community/index.html).
### Running tests
See [here](https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html#testing-collections).
## Release notes
See [here](https://github.com/ansible-collections/community.general/tree/master/CHANGELOG.rst).
## Publishing New Version
Basic instructions without release branches:
1. Create `changelogs/fragments/<version>.yml` with `release_summary:` section (which must be a string, not a list).
2. Run `antsibull-changelog release --collection-flatmap yes`
3. Make sure `CHANGELOG.rst` and `changelogs/changelog.yaml` are added to git, and the deleted fragments have been removed.
4. Tag the commit with `<version>`. Push changes and tag to the main repository.
## More Information
TBD
## Communication
### Communication
We have a dedicated Working Group for Ansible development.
@@ -62,8 +54,34 @@ For more information about communities, meetings and agendas see [Community Wiki
For more information about [communication](https://docs.ansible.com/ansible/latest/community/communication.html)
## License
### Publishing New Version
GNU General Public License v3.0 or later
Basic instructions without release branches:
See [LICENSE](COPYING) to see the full text.
1. Create `changelogs/fragments/<version>.yml` with `release_summary:` section (which must be a string, not a list).
2. Run `antsibull-changelog release --collection-flatmap yes`
3. Make sure `CHANGELOG.rst` and `changelogs/changelog.yaml` are added to git, and the deleted fragments have been removed.
4. Tag the commit with `<version>`. Push changes and tag to the main repository.
## Release notes
See the [changelog](https://github.com/ansible-collections/community.general/blob/main/CHANGELOG.rst).
## Roadmap
See [this issue](https://github.com/ansible-collections/community.general/issues/582) for information on releasing, versioning and deprecation.
In general, we plan to release a major version every six months, and minor versions every two months. Major versions can contain breaking changes, while minor versions only contain new features and bugfixes.
## More information
- [Ansible Collection overview](https://github.com/ansible-collections/overview)
- [Ansible User guide](https://docs.ansible.com/ansible/latest/user_guide/index.html)
- [Ansible Developer guide](https://docs.ansible.com/ansible/latest/dev_guide/index.html)
- [Ansible Community code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html)
## Licensing
GNU General Public License v3.0 or later.
See [COPYING](https://www.gnu.org/licenses/gpl-3.0.txt) to see the full text.


@@ -905,3 +905,208 @@ releases:
name: lmdb_kv
namespace: null
release_date: '2020-06-20'
1.0.0:
changes:
breaking_changes:
- log_plays callback - add missing information to the logs generated by the
callback plugin. This changes the log message format (https://github.com/ansible-collections/community.general/pull/442).
- 'pkgng - passing ``name: *`` with ``state: absent`` will no longer remove
every installed package from the system. It is now a noop. (https://github.com/ansible-collections/community.general/pull/569).'
- 'pkgng - passing ``name: *`` with ``state: latest`` or ``state: present``
will no longer install every package from the configured package repositories.
Instead, ``name: *, state: latest`` will upgrade all already-installed packages,
and ``name: *, state: present`` is a noop. (https://github.com/ansible-collections/community.general/pull/569).'
bugfixes:
- aix_filesystem - fix issues with ismount module_util pathing for Ansible 2.9
(https://github.com/ansible-collections/community.general/pull/567).
- consul_kv lookup - fix ``ANSIBLE_CONSUL_URL`` environment variable handling
(https://github.com/ansible/ansible/issues/51960).
- consul_kv lookup - fix arguments handling (https://github.com/ansible-collections/community.general/pull/303).
- digital_ocean_tag_info - fix crash when querying for an individual tag (https://github.com/ansible-collections/community.general/pull/615).
- doas become plugin - address a bug with the parameters handling that was breaking
the plugin in community.general when ``become_flags`` and ``become_user``
were not explicitly specified (https://github.com/ansible-collections/community.general/pull/704).
- docker_compose - add a condition to prevent service startup if parameter ``stopped``
is true. Otherwise, the service will be started on each play and stopped again
immediately due to the ``stopped`` parameter and breaks the idempotency of
the module (https://github.com/ansible-collections/community.general/issues/532).
- docker_compose - disallow usage of the parameters ``stopped`` and ``restarted``
at the same time. This also breaks idempotency (https://github.com/ansible-collections/community.general/issues/532).
- docker_container - use Config MacAddress by default instead of Networks. Networks
MacAddress is empty in some cases (https://github.com/ansible/ansible/issues/70206).
- docker_container - various error fixes in string handling for Python 2 to
avoid crashes when non-ASCII characters are used in strings (https://github.com/ansible-collections/community.general/issues/640).
- docker_swarm - removes ``advertise_addr`` from list of required arguments
when ``state`` is ``"join"`` (https://github.com/ansible-collections/community.general/issues/439).
- dzdo become plugin - address a bug with the parameters handling that was breaking
the plugin in community.general when ``become_user`` was not explicitly specified
(https://github.com/ansible-collections/community.general/pull/708).
- filesystem - resizefs of xfs filesystems is fixed. Filesystem needs to be
mounted.
- jenkins_plugin - replace MD5 checksum verification with SHA1 due to MD5 being
disabled on systems with FIPS-only algorithms enabled (https://github.com/ansible/ansible/issues/34304).
- jira - improve error message handling (https://github.com/ansible-collections/community.general/pull/311).
- jira - improve error message handling with multiple errors (https://github.com/ansible-collections/community.general/pull/707).
- kubevirt - Add alias 'interface_name' for network_name (https://github.com/ansible/ansible/issues/55641).
- nmcli - fix idempotency when modifying an existing connection (https://github.com/ansible-collections/community.general/issues/481).
- osx_defaults - fix handling negative integers (https://github.com/ansible-collections/community.general/issues/134).
- pacman - treat package names containing .zst as package files during installation
(https://www.archlinux.org/news/now-using-zstandard-instead-of-xz-for-package-compression/,
https://github.com/ansible-collections/community.general/pull/650).
- pbrun become plugin - address a bug with the parameters handling that was
breaking the plugin in community.general when ``become_user`` was not explicitly
specified (https://github.com/ansible-collections/community.general/pull/708).
- postgresql_privs - fix crash when setting privileges on a schema with a hyphen in
the name (https://github.com/ansible-collections/community.general/issues/656).
- postgresql_set - only display a warning about restarts, when restarting is
needed (https://github.com/ansible-collections/community.general/pull/651).
- redfish_info, redfish_config, redfish_command - Fix Redfish response payload
decode on Python 3.5 (https://github.com/ansible-collections/community.general/issues/686)
- selective - mark task failed correctly (https://github.com/ansible/ansible/issues/63767).
- snmp_facts - skip ``EndOfMibView`` values (https://github.com/ansible/ansible/issues/49044).
- yarn - fixed an index out of range error when no outdated packages were returned
by the yarn executable (see https://github.com/ansible-collections/community.general/pull/474).
- yarn - fixed a 'too many values to unpack' error when scoped packages are installed
(see https://github.com/ansible-collections/community.general/pull/474).
deprecated_features:
- The ldap_attr module has been deprecated and will be removed in a later release;
use ldap_attrs instead.
- xbps - the ``force`` option never had any effect. It is now deprecated, and
will be removed in 3.0.0 (https://github.com/ansible-collections/community.general/pull/568).
minor_changes:
- Add the ``gcpubsub``, ``gcpubsub_info`` and ``gcpubsub_facts`` (to be removed
in 3.0.0) modules. These were originally in community.general, but removed
on the assumption that they have been moved to google.cloud. Since this turned
out to be incorrect, we re-added them for 1.0.0.
- Add the deprecated ``gcp_backend_service``, ``gcp_forwarding_rule`` and ``gcp_healthcheck``
modules, which will be removed in 2.0.0. These were originally in community.general,
but removed on the assumption that they have been moved to google.cloud. Since
this turned out to be incorrect, we re-added them for 1.0.0.
- The collection is now actively tested in CI with the latest Ansible 2.9 release.
- airbrake_deployment - add ``version`` param; clarified docs on ``revision``
param (https://github.com/ansible-collections/community.general/pull/583).
- apk - added ``no_cache`` option (https://github.com/ansible-collections/community.general/pull/548).
- firewalld - the module has been moved to the ``ansible.posix`` collection.
A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/623).
- gitlab_project - add support for merge_method on projects (https://github.com/ansible/ansible/pull/66813).
- gitlab_runners inventory plugin - permit environment variable input for ``server_url``,
``api_token`` and ``filter`` options (https://github.com/ansible-collections/community.general/pull/611).
- haproxy - add options to enable/disable health and agent checks. When health
and agent checks are enabled for a service, a disabled service will re-enable
itself automatically. These options also change the state of the agent checks
to match the requested state for the backend (https://github.com/ansible-collections/community.general/issues/684).
- log_plays callback - use v2 methods (https://github.com/ansible-collections/community.general/pull/442).
- logstash callback - add ini config (https://github.com/ansible-collections/community.general/pull/610).
- lxd_container - added support of ``--target`` flag for cluster deployments
(https://github.com/ansible-collections/community.general/issues/637).
- parted - accept negative numbers in ``part_start`` and ``part_end``
- pkgng - added ``stdout`` and ``stderr`` attributes to the result (https://github.com/ansible-collections/community.general/pull/560).
- 'pkgng - added support for upgrading all packages using ``name: *, state:
latest``, similar to other package providers (https://github.com/ansible-collections/community.general/pull/569).'
- postgresql_query - add search_path parameter (https://github.com/ansible-collections/community.general/issues/625).
- rundeck_acl_policy - add check for rundeck_acl_policy name parameter (https://github.com/ansible-collections/community.general/pull/612).
- slack - add support for sending messages built with block kit (https://github.com/ansible-collections/community.general/issues/380).
- splunk callback - add an option to allow not to validate certificate from
HEC (https://github.com/ansible-collections/community.general/pull/596).
- xfconf - add arrays support (https://github.com/ansible/ansible/issues/46308).
- xfconf - add support for ``uint`` type (https://github.com/ansible-collections/community.general/pull/696).
release_summary: 'This is release 1.0.0 of ``community.general``, released on
2020-07-31.
'
removed_features:
- conjur_variable lookup - has been moved to the ``cyberark.conjur`` collection.
A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/570).
- digital_ocean_* - all DigitalOcean modules have been moved to the ``community.digitalocean``
collection. A redirection is active, which will be removed in version 2.0.0
(https://github.com/ansible-collections/community.general/pull/622).
- infini_* - all infinidat modules have been moved to the ``infinidat.infinibox``
collection. A redirection is active, which will be removed in version 2.0.0
(https://github.com/ansible-collections/community.general/pull/607).
- logicmonitor - the module has been removed in 1.0.0 since it is unmaintained
and the API used by the module has been turned off in 2017 (https://github.com/ansible-collections/community.general/issues/539,
https://github.com/ansible-collections/community.general/pull/541).
- logicmonitor_facts - the module has been removed in 1.0.0 since it is unmaintained
and the API used by the module has been turned off in 2017 (https://github.com/ansible-collections/community.general/issues/539,
https://github.com/ansible-collections/community.general/pull/541).
- mysql_* - all MySQL modules have been moved to the ``community.mysql`` collection.
A redirection is active, which will be removed in version 2.0.0 (https://github.com/ansible-collections/community.general/pull/633).
- proxysql_* - all ProxySQL modules have been moved to the ``community.proxysql``
collection. A redirection is active, which will be removed in version 2.0.0
(https://github.com/ansible-collections/community.general/pull/624).
fragments:
- 1.0.0.yml
- 296-ansible-2.9.yml
- 303-consul_kv-fix-env-variables-handling.yaml
- 311-jira-error-handling.yaml
- 33979-xfs_growfs.yml
- 442-log_plays-add_playbook_task_name_and_action.yml
- 474-yarn_fix-outdated-fix-list.yml
- 547-start-service-condition.yaml
- 548_apk.yml
- 55903_kubevirt.yml
- 560-pkgng-add-stdout-and-stderr.yaml
- 562-nmcli-fix-idempotency.yaml
- 564-docker_container_use_config_macaddress_by_default.yaml
- 568_packaging.yml
- 569-pkgng-add-upgrade-action.yaml
- 596-splunk-add-option-to-not-validate-cert.yaml
- 610_logstash_callback_add_ini_config.yml
- 611-gitlab-runners-env-vars-intput-and-default-item-limit.yaml
- 613-snmp_facts-EndOfMibView.yml
- 615-digital-ocean-tag-info-bugfix.yml
- 63767_selective.yml
- 642-docker_container-python-2.yml
- 646-docker_swarm-remove-advertise_addr-from-join-requirement.yaml
- 650_pacman_support_zst_package_files.yaml
- 651-fix-postgresql_set-warning.yaml
- 653-postgresql_query_add_search_path_param.yml
- 656-name-with-hyphen.yml
- 66813_gitlab_project.yml
- 676-osx_defaults_fix_handling_negative_ints.yml
- 677-jenkins_plugins_sha1.yaml
- 687-fix-redfish-payload-decode-python35.yml
- 689-haproxy_agent_and_health.yml
- 693-big-revamp-on-xfconf-adding-array-values.yml
- 702-slack-support-for-blocks.yaml
- 704-doas-set-correct-default-values.yml
- 707-jira-error-handling.yaml
- 708-set-correct-default-values.yml
- 711-lxd-target.yml
- add_argument_check_for_rundeck.yaml
- airbrake_deployment_add_version.yml
- aix_filesystem-module_util-routing-issue.yml
- cyberarkconjur-removal.yml
- digital-ocean.yml
- firewalld_migration.yml
- google-modules.yml
- infinidat-removal.yml
- logicmonitor-removal.yml
- mysql.yml
- parted_negative_numbers.yml
- porting-guide-2.yml
- proxysql.yml
- xfconf_add_uint_type.yml
modules:
- description: Return information on a docker stack
name: docker_stack_info
namespace: cloud.docker
- description: Manage macOS services
name: launchd
namespace: system
- description: Execute SQL via ODBC
name: odbc
namespace: database.misc
plugins:
inventory:
- description: Cobbler inventory source
name: cobbler
namespace: null
lookup:
- description: Get secrets from Thycotic DevOps Secrets Vault
name: dsv
namespace: null
- description: Get secrets from Thycotic Secret Server
name: tss
namespace: null
release_date: '2020-07-31'


@@ -1,4 +0,0 @@
---
bugfixes:
- consul_kv lookup - fix ``ANSIBLE_CONSUL_URL`` environment variable handling (https://github.com/ansible/ansible/issues/51960).
- consul_kv lookup - fix arguments handling (https://github.com/ansible-collections/community.general/pull/303).


@@ -1,2 +0,0 @@
bugfixes:
- "filesystem - resizefs of xfs filesystems is fixed. Filesystem needs to be mounted."


@@ -1,4 +0,0 @@
minor_changes:
- log_plays callback - use v2 methods (https://github.com/ansible-collections/community.general/pull/442).
breaking_changes:
- log_plays callback - add missing information to the logs generated by the callback plugin. This changes the log message format (https://github.com/ansible-collections/community.general/pull/442).


@@ -1,10 +0,0 @@
---
bugfixes:
- docker_compose - add a condition to prevent service startup
when the parameter ``stopped`` is true. Otherwise the service would be
started on each play and stopped again immediately because of
the ``stopped`` parameter, which breaks the idempotency of the module
(https://github.com/ansible-collections/community.general/issues/532).
- docker_compose - disallow usage of the parameters ``stopped`` and ``restarted``
at the same time, as this also breaks idempotency
(https://github.com/ansible-collections/community.general/issues/532).
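
The second bugfix above is a mutual-exclusion check between two boolean parameters. A minimal illustrative sketch of such a check (generic Python, not the module's actual code, which enforces this through the module's argument handling):

```python
# Illustrative only: a generic mutual-exclusion check like the one the
# fragment above describes for docker_compose's 'stopped'/'restarted'.
def validate_params(params):
    """Reject ``stopped`` together with ``restarted``; return the params otherwise."""
    if params.get('stopped') and params.get('restarted'):
        raise ValueError("parameters 'stopped' and 'restarted' are mutually exclusive")
    return params
```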


@@ -1,2 +0,0 @@
minor_changes:
- apk - added ``no_cache`` option (https://github.com/ansible-collections/community.general/pull/548).


@@ -1,3 +0,0 @@
---
bugfixes:
- kubevirt - Add alias 'interface_name' for network_name (https://github.com/ansible/ansible/issues/55641).


@@ -1,2 +0,0 @@
minor_changes:
- pkgng - added ``stdout`` and ``stderr`` attributes to the result (https://github.com/ansible-collections/community.general/pull/560).


@@ -1,2 +0,0 @@
deprecated_features:
- xbps - the ``force`` option never had any effect. It is now deprecated, and will be removed in 3.0.0 (https://github.com/ansible-collections/community.general/pull/568).


@@ -1,3 +0,0 @@
---
minor_changes:
- "airbrake_deployment - add ``version`` param; clarified docs on ``revision`` param (https://github.com/ansible-collections/community.general/pull/583)."


@@ -1,3 +0,0 @@
---
bugfixes:
- aix_filesystem - fix issues with ismount module_util pathing for Ansible 2.9 (https://github.com/ansible-collections/community.general/pull/567).


@@ -1,3 +0,0 @@
removed_features:
- "logicmonitor - the module has been removed in 1.0.0 since it is unmaintained and the API used by the module has been turned off in 2017 (https://github.com/ansible-collections/community.general/issues/539, https://github.com/ansible-collections/community.general/pull/541)."
- "logicmonitor_facts - the module has been removed in 1.0.0 since it is unmaintained and the API used by the module has been turned off in 2017 (https://github.com/ansible-collections/community.general/issues/539, https://github.com/ansible-collections/community.general/pull/541)."


@@ -1,2 +0,0 @@
minor_changes:
- "parted - accept negative numbers in ``part_start`` and ``part_end``"


@@ -9,12 +9,12 @@ license_file: COPYING
tags: null
# NOTE: No more dependencies can be added to this list
dependencies:
ansible.netcommon: '>=0.0.2'
ansible.netcommon: '>=1.0.0'
ansible.posix: '>=1.0.0'
community.kubernetes: '>=0.1.0'
google.cloud: '>=0.0.9'
community.kubernetes: '>=0.11.1' # check https://galaxy.ansible.com/community/kubernetes
google.cloud: '>=0.10.1' # check https://galaxy.ansible.com/google/cloud
repository: https://github.com/ansible-collections/community.general
#documentation: https://github.com/ansible-collection-migration/community.general/tree/master/docs
#documentation: https://github.com/ansible-collection-migration/community.general/tree/main/docs
homepage: https://github.com/ansible-collections/community.general
issues: https://github.com/ansible-collections/community.general/issues
#type: flatmap


@@ -57,6 +57,12 @@ action_groups:
- ovirt_vm_facts
- ovirt_vmpool_facts
plugin_routing:
lookup:
conjur_variable:
redirect: cyberark.conjur.conjur_variable
deprecation:
removal_version: 2.0.0
warning_text: The conjur_variable lookup has been moved to the cyberark.conjur collection.
modules:
ali_instance_facts:
deprecation:
@@ -65,59 +71,173 @@ plugin_routing:
digital_ocean:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
warning_text: The digital_ocean module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean
digital_ocean_account_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_account_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_account_facts
digital_ocean_account_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_account_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_account_info
digital_ocean_block_storage:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_block_storage module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_block_storage
digital_ocean_certificate:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_certificate module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_certificate
digital_ocean_certificate_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_certificate_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_certificate_facts
digital_ocean_certificate_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_certificate_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_certificate_info
digital_ocean_domain:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_domain module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_domain
digital_ocean_domain_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_domain_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_domain_facts
digital_ocean_domain_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_domain_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_domain_info
digital_ocean_droplet:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_droplet module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_droplet
digital_ocean_firewall_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_firewall_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_firewall_facts
digital_ocean_firewall_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_firewall_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_firewall_info
digital_ocean_floating_ip:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_floating_ip module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_floating_ip
digital_ocean_floating_ip_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_floating_ip_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_floating_ip_facts
digital_ocean_floating_ip_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_floating_ip_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_floating_ip_info
digital_ocean_image_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_image_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_image_facts
digital_ocean_image_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_image_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_image_info
digital_ocean_load_balancer_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_load_balancer_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_load_balancer_facts
digital_ocean_load_balancer_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_load_balancer_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_load_balancer_info
digital_ocean_region_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_region_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_region_facts
digital_ocean_region_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_region_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_region_info
digital_ocean_size_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_size_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_size_facts
digital_ocean_size_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_size_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_size_info
digital_ocean_snapshot_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_snapshot_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_snapshot_facts
digital_ocean_snapshot_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_snapshot_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_snapshot_info
digital_ocean_sshkey:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_sshkey module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_sshkey
digital_ocean_sshkey_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_sshkey_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_sshkey_facts
digital_ocean_sshkey_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_sshkey_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_sshkey_info
digital_ocean_tag:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_tag module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_tag
digital_ocean_tag_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_tag_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_tag_facts
digital_ocean_tag_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_tag_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_tag_info
digital_ocean_volume_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
removal_version: 2.0.0
warning_text: The digital_ocean_volume_facts module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_volume_facts
digital_ocean_volume_info:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean_volume_info module has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean_volume_info
docker_image_facts:
deprecation:
removal_version: 2.0.0
@@ -126,6 +246,15 @@ plugin_routing:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
firewalld:
deprecation:
removal_version: 2.0.0
warning_text: The firewalld module has been moved to the ansible.posix collection.
redirect: ansible.posix.firewalld
foreman:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
gcdns_record:
deprecation:
removal_version: 2.0.0
@@ -138,6 +267,18 @@ plugin_routing:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
gcp_backend_service:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
gcp_forwarding_rule:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
gcp_healthcheck:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
gcp_target_proxy:
deprecation:
removal_version: 2.0.0
@@ -154,10 +295,72 @@ plugin_routing:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
github_hooks:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
helm:
deprecation:
removal_version: 3.0.0
warning_text: The helm module in community.general has been deprecated. Use community.kubernetes.helm instead.
hpilo_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
idrac_redfish_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
infini_export:
redirect: infinidat.infinibox.infini_export
deprecation:
removal_version: 2.0.0
warning_text: The infini_export module has been moved to the infinidat collection.
infini_export_client:
redirect: infinidat.infinibox.infini_export_client
deprecation:
removal_version: 2.0.0
warning_text: The infini_export_client module has been moved to the infinidat collection.
infini_fs:
redirect: infinidat.infinibox.infini_fs
deprecation:
removal_version: 2.0.0
warning_text: The infini_fs module has been moved to the infinidat collection.
infini_host:
redirect: infinidat.infinibox.infini_host
deprecation:
removal_version: 2.0.0
warning_text: The infini_host module has been moved to the infinidat collection.
infini_pool:
redirect: infinidat.infinibox.infini_pool
deprecation:
removal_version: 2.0.0
warning_text: The infini_pool module has been moved to the infinidat collection.
infini_vol:
redirect: infinidat.infinibox.infini_vol
deprecation:
removal_version: 2.0.0
warning_text: The infini_vol module has been moved to the infinidat collection.
jenkins_job_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
katello:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
ldap_attr:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
logicmonitor:
tombstone:
removal_version: 1.0.0
warning_text: The logicmonitor module is no longer maintained and the API used has been disabled in 2017.
logicmonitor_facts:
tombstone:
removal_version: 1.0.0
warning_text: The logicmonitor_facts module is no longer maintained and the API used has been disabled in 2017.
memset_memstore_facts:
deprecation:
removal_version: 3.0.0
@@ -166,7 +369,113 @@ plugin_routing:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
ovirt:
mysql_db:
deprecation:
removal_version: 2.0.0
warning_text: The mysql_db module has been moved to the community.mysql collection.
redirect: community.mysql.mysql_db
mysql_info:
deprecation:
removal_version: 2.0.0
warning_text: The mysql_info module has been moved to the community.mysql collection.
redirect: community.mysql.mysql_info
mysql_query:
deprecation:
removal_version: 2.0.0
warning_text: The mysql_query module has been moved to the community.mysql collection.
redirect: community.mysql.mysql_query
mysql_replication:
deprecation:
removal_version: 2.0.0
warning_text: The mysql_replication module has been moved to the community.mysql collection.
redirect: community.mysql.mysql_replication
mysql_user:
deprecation:
removal_version: 2.0.0
warning_text: The mysql_user module has been moved to the community.mysql collection.
redirect: community.mysql.mysql_user
mysql_variables:
deprecation:
removal_version: 2.0.0
warning_text: The mysql_variables module has been moved to the community.mysql collection.
redirect: community.mysql.mysql_variables
na_cdot_aggregate:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_license:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_lun:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_qtree:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_svm:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_user:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_user_role:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_volume:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_ontap_gather_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
nginx_status_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
one_image_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
onepassword_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_datacenter_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_enclosure_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_ethernet_network_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_fc_network_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_fcoe_network_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_logical_interconnect_group_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_network_set_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_san_manager_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
@@ -178,7 +487,7 @@ plugin_routing:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
one_image_facts:
ovirt:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
@@ -278,6 +587,57 @@ plugin_routing:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
proxysql_backend_servers:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_backend_servers module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_backend_servers
proxysql_global_variables:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_global_variables module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_global_variables
proxysql_manage_config:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_manage_config module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_manage_config
proxysql_mysql_users:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_mysql_users module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_mysql_users
proxysql_query_rules:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_query_rules module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_query_rules
proxysql_replication_hostgroups:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_replication_hostgroups module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_replication_hostgroups
proxysql_scheduler:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql_scheduler module has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql_scheduler
purefa_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
purefb_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
python_requirements_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
redfish_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
scaleway_image_facts:
deprecation:
removal_version: 3.0.0
@@ -306,118 +666,6 @@ plugin_routing:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
smartos_image_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
xenserver_guest_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
vertica_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
onepassword_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
ldap_attr:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
foreman:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
katello:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
hpilo_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_datacenter_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_enclosure_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_ethernet_network_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_fc_network_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_fcoe_network_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_logical_interconnect_group_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_network_set_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
oneview_san_manager_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
idrac_redfish_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
redfish_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
github_hooks:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_aggregate:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_license:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_lun:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_qtree:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_svm:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_user:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_user_role:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_cdot_volume:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
na_ontap_gather_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
sf_account_manager:
deprecation:
removal_version: 2.0.0
@@ -438,31 +686,57 @@ plugin_routing:
deprecation:
removal_version: 2.0.0
warning_text: see plugin documentation for details
purefa_facts:
smartos_image_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
purefb_facts:
vertica_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
python_requirements_facts:
xenserver_guest_facts:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
jenkins_job_facts:
doc_fragments:
digital_ocean:
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
nginx_status_facts:
removal_version: 2.0.0
warning_text: The digital_ocean docs_fragment has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean
infinibox:
redirect: infinidat.infinibox.infinibox
deprecation:
removal_version: 3.0.0
warning_text: see plugin documentation for details
logicmonitor_facts:
tombstone:
removal_version: 1.0.0
warning_text: The logicmonitor_facts module is no longer maintained and the API used has been disabled in 2017.
logicmonitor:
tombstone:
removal_version: 1.0.0
warning_text: The logicmonitor_facts module is no longer maintained and the API used has been disabled in 2017.
removal_version: 2.0.0
warning_text: The infinibox doc_fragments plugin has been moved to the infinidat.infinibox collection.
mysql:
deprecation:
removal_version: 2.0.0
warning_text: The mysql docs_fragment has been moved to the community.mysql collection.
redirect: community.mysql.mysql
proxysql:
deprecation:
removal_version: 2.0.0
warning_text: The proxysql docs_fragment has been moved to the community.proxysql collection.
redirect: community.proxysql.proxysql
module_utils:
digital_ocean:
deprecation:
removal_version: 2.0.0
warning_text: The digital_ocean module_utils has been moved to the community.digitalocean collection.
redirect: community.digitalocean.digital_ocean
firewalld:
deprecation:
removal_version: 2.0.0
warning_text: The firewalld module_utils has been moved to the ansible.posix collection.
redirect: ansible.posix.firewalld
infinibox:
redirect: infinidat.infinibox.infinibox
deprecation:
removal_version: 2.0.0
warning_text: The infinibox module_utils plugin has been moved to the infinidat.infinibox collection.
mysql:
deprecation:
removal_version: 2.0.0
warning_text: The mysql module_utils has been moved to the community.mysql collection.
redirect: community.mysql.mysql
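
The routing table above maps old short plugin names to `redirect` targets, `deprecation` notices, and `tombstone` entries. A toy sketch of how such a mapping can be consumed (illustrative only; the real resolution logic lives in ansible-core, and the entries below merely mirror items from the diff above):

```python
# Illustrative only: a toy resolver for a plugin_routing-style mapping.
ROUTING = {
    "firewalld": {
        "redirect": "ansible.posix.firewalld",
        "deprecation": {
            "removal_version": "2.0.0",
            "warning_text": "The firewalld module has been moved to the ansible.posix collection.",
        },
    },
    "logicmonitor": {
        "tombstone": {
            "removal_version": "1.0.0",
            "warning_text": "The logicmonitor module is no longer maintained.",
        },
    },
}


def resolve(name):
    """Return (target, warning) for a short plugin name; raise if tombstoned."""
    entry = ROUTING.get(name)
    if entry is None:
        return name, None  # not routed: use the name as-is
    if "tombstone" in entry:
        raise LookupError(entry["tombstone"]["warning_text"])
    warning = entry.get("deprecation", {}).get("warning_text")
    return entry.get("redirect", name), warning
```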


@@ -40,7 +40,7 @@ DOCUMENTATION = '''
- name: ANSIBLE_DOAS_EXE
become_flags:
description: Options to pass to doas
default:
default: ''
ini:
- section: privilege_escalation
key: become_flags
@@ -117,9 +117,8 @@ class BecomeModule(BecomeBase):
if not self.get_option('become_pass') and '-n' not in flags:
flags += ' -n'
user = self.get_option('become_user')
if user:
user = '-u %s' % (user)
become_user = self.get_option('become_user')
user = '-u %s' % (become_user) if become_user else ''
success_cmd = self._build_success_command(cmd, shell, noexe=True)
executable = getattr(shell, 'executable', shell.SHELL_FAMILY)


@@ -89,8 +89,7 @@ class BecomeModule(BecomeBase):
self.prompt = '[dzdo via ansible, key=%s] password:' % self._id
flags = '%s -p "%s"' % (flags.replace('-n', ''), self.prompt)
user = self.get_option('become_user')
if user:
user = '-u %s' % (user)
become_user = self.get_option('become_user')
user = '-u %s' % (become_user) if become_user else ''
return ' '.join([becomecmd, flags, user, self._build_success_command(cmd, shell)])


@@ -13,7 +13,6 @@ DOCUMENTATION = '''
options:
become_user:
description: User you 'become' to execute the task
default: ''
ini:
- section: privilege_escalation
key: become_user


@@ -97,9 +97,8 @@ class BecomeModule(BecomeBase):
become_exe = self.get_option('become_exe')
flags = self.get_option('become_flags')
user = self.get_option('become_user')
if user:
user = '-u %s' % (user)
become_user = self.get_option('become_user')
user = '-u %s' % (become_user) if become_user else ''
noexe = not self.get_option('wrap_exe')
return ' '.join([become_exe, flags, user, self._build_success_command(cmd, shell, noexe=noexe)])
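
The become-plugin hunks above all apply the same fix: read `become_user` into its own variable and build the `-u` flag with a conditional expression instead of overwriting the option value in place. A standalone sketch of the pattern, simplified outside the plugin API (filtering empty pieces is an addition here to avoid double spaces, which the plugins handle differently):

```python
def build_user_flag(become_user):
    """Return a doas/dzdo-style '-u USER' fragment, or '' when no user is set."""
    return '-u %s' % become_user if become_user else ''


def build_command(become_exe, flags, become_user, success_cmd):
    # Dropping empty pieces keeps the assembled command free of double spaces.
    parts = [become_exe, flags, build_user_flag(become_user), success_cmd]
    return ' '.join(p for p in parts if p)
```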


@@ -629,7 +629,7 @@ playbook.yml: >
gather_facts: no
tasks:
- name: Default plugin output
debug:
ansible.builtin.debug:
msg: default plugin output
- name: Override from play vars
@@ -687,11 +687,11 @@ playbook.yml: >
tasks:
- name: Custom banner with default plugin result output
debug:
ansible.builtin.debug:
msg: "default plugin output: result example"
- name: Override from task vars
debug:
ansible.builtin.debug:
msg: "example {{ two }}"
changed_when: true
vars:
@@ -703,14 +703,14 @@ playbook.yml: >
ansible_callback_diy_runner_on_ok_msg_color: "{{ 'yellow' if ansible_callback_diy.result.is_changed else 'bright green' }}"
- name: Suppress output
debug:
ansible.builtin.debug:
msg: i should not be displayed
vars:
ansible_callback_diy_playbook_on_task_start_msg: ""
ansible_callback_diy_runner_on_ok_msg: ""
- name: Using alias vars (see ansible.cfg)
debug:
ansible.builtin.debug:
msg:
when: False
vars:
@@ -719,13 +719,13 @@ playbook.yml: >
on_skipped_msg_color: white
- name: Just stdout
command: echo some stdout
ansible.builtin.command: echo some stdout
vars:
ansible_callback_diy_playbook_on_task_start_msg: "\n"
ansible_callback_diy_runner_on_ok_msg: "{{ ansible_callback_diy.result.output.stdout }}\n"
- name: Multiline output
debug:
ansible.builtin.debug:
msg: "{{ multiline }}"
vars:
ansible_callback_diy_playbook_on_task_start_msg: "\nDIY output(via task vars): task example: {{ ansible_callback_diy.task.name }}"
@@ -738,7 +738,7 @@ playbook.yml: >
ansible_callback_diy_playbook_on_task_start_msg_color: bright blue
- name: Indentation
debug:
ansible.builtin.debug:
msg: "{{ item.msg }}"
with_items:
- { indent: 1, msg: one., color: red }
@@ -751,14 +751,14 @@ playbook.yml: >
ansible_callback_diy_runner_on_ok_msg_color: bright green
- name: Using lookup and template as file
shell: "echo {% raw %}'output from {{ file_name }}'{% endraw %} > {{ file_name }}"
ansible.builtin.shell: "echo {% raw %}'output from {{ file_name }}'{% endraw %} > {{ file_name }}"
vars:
ansible_callback_diy_playbook_on_task_start_msg: "\nDIY output(via task vars): task example: {{ ansible_callback_diy.task.name }}"
file_name: diy_file_template_example
ansible_callback_diy_runner_on_ok_msg: "{{ lookup('template', file_name) }}"
- name: 'Look at top level vars available to the "runner_on_ok" callback'
debug:
ansible.builtin.debug:
msg: ''
vars:
ansible_callback_diy_playbook_on_task_start_msg: "\nDIY output(via task vars): task example: {{ ansible_callback_diy.task.name }}"
@@ -771,7 +771,7 @@ playbook.yml: >
ansible_callback_diy_runner_on_ok_msg_color: white
- name: 'Look at event data available to the "runner_on_ok" callback'
debug:
ansible.builtin.debug:
msg: ''
vars:
ansible_callback_diy_playbook_on_task_start_msg: "\nDIY output(via task vars): task example: {{ ansible_callback_diy.task.name }}"

View File

@@ -19,16 +19,28 @@ DOCUMENTATION = '''
description: Address of the Logstash server
env:
- name: LOGSTASH_SERVER
ini:
- section: callback_logstash
key: server
version_added: 1.0.0
default: localhost
port:
description: Port on which logstash is listening
env:
- name: LOGSTASH_PORT
ini:
- section: callback_logstash
key: port
version_added: 1.0.0
default: 5000
type:
description: Message type
env:
- name: LOGSTASH_TYPE
ini:
- section: callback_logstash
key: type
version_added: 1.0.0
default: ansible
'''
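With the ini keys added above, the callback can now be configured from ansible.cfg instead of (or alongside) the environment variables. A minimal sketch, with the server value invented for illustration:

```ini
# ansible.cfg
[defaults]
# the plugin sets CALLBACK_NEEDS_WHITELIST, so it must be enabled explicitly
callback_whitelist = community.general.logstash

[callback_logstash]
server = logstash.example.com
port = 5000
type = ansible
```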
@@ -68,7 +80,7 @@ class CallbackModule(CallbackBase):
Requires:
python-logstash
This plugin makes use of the following environment variables:
This plugin makes use of the following environment variables or ini config:
LOGSTASH_SERVER (optional): defaults to localhost
LOGSTASH_PORT (optional): defaults to 5000
LOGSTASH_TYPE (optional): defaults to ansible
@@ -79,30 +91,37 @@ class CallbackModule(CallbackBase):
CALLBACK_NAME = 'community.general.logstash'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super(CallbackModule, self).__init__()
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
if not HAS_LOGSTASH:
self.disabled = True
self._display.warning("The required python-logstash is not installed. "
"pip install python-logstash")
else:
self.logger = logging.getLogger('python-logstash-logger')
self.logger.setLevel(logging.DEBUG)
self.handler = logstash.TCPLogstashHandler(
os.getenv('LOGSTASH_SERVER', 'localhost'),
int(os.getenv('LOGSTASH_PORT', 5000)),
version=1,
message_type=os.getenv('LOGSTASH_TYPE', 'ansible')
)
self.logger.addHandler(self.handler)
self.hostname = socket.gethostname()
self.session = str(uuid.uuid1())
self.errors = 0
self.start_time = datetime.utcnow()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.logger = logging.getLogger('python-logstash-logger')
self.logger.setLevel(logging.DEBUG)
self.logstash_server = self.get_option('server')
self.logstash_port = self.get_option('port')
self.logstash_type = self.get_option('type')
self.handler = logstash.TCPLogstashHandler(
self.logstash_server,
int(self.logstash_port),
version=1,
message_type=self.logstash_type
)
self.logger.addHandler(self.handler)
self.hostname = socket.gethostname()
self.session = str(uuid.uuid1())
self.errors = 0
def v2_playbook_on_start(self, playbook):
self.playbook = playbook._file_name
data = {



@@ -30,8 +30,8 @@ DOCUMENTATION = '''
'''
EXAMPLES = """
- debug: msg="This will not be printed"
- debug: msg="But this will"
- ansible.builtin.debug: msg="This will not be printed"
- ansible.builtin.debug: msg="But this will"
tags: [print_action]
"""
@@ -201,7 +201,7 @@ class CallbackModule(CallbackBase):
)
if 'results' in result._result:
for r in result._result['results']:
failed = 'failed' in r
failed = 'failed' in r and r['failed']
stderr = [r.get('exception', None), r.get('module_stderr', None)]
stderr = "\n".join([e for e in stderr if e]).strip()
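The tightened check above matters because a per-item loop result can carry C(failed: False): key membership alone would flag a successful item as failed. A minimal illustration with invented result dictionaries:

```python
# Per-item loop results as a callback might see them (invented data).
results = [
    {'item': 1, 'failed': False},  # succeeded, but the key is present
    {'item': 2, 'failed': True},   # genuinely failed
    {'item': 3},                   # succeeded, key absent
]

# Old check: key membership only -- misreports item 1 as failed.
naive = ['failed' in r for r in results]

# New check from the diff: membership plus truthiness.
fixed = ['failed' in r and r['failed'] for r in results]

print(naive)  # [True, True, False]
print(fixed)  # [False, True, False]
```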


@@ -45,6 +45,18 @@ DOCUMENTATION = '''
ini:
- section: callback_splunk
key: authtoken
validate_certs:
description: Whether to validate certificates for connections to HEC. It is not recommended to set this to
C(false) except when you are sure that nobody can intercept the connection
between this plugin and HEC, as setting it to C(false) allows man-in-the-middle attacks!
env:
- name: SPLUNK_VALIDATE_CERTS
ini:
- section: callback_splunk
key: validate_certs
type: bool
default: true
version_added: '1.0.0'
'''
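Once merged, the new option can be driven from either source; an illustrative ansible.cfg fragment (the value shown is only the default, restated for illustration):

```ini
# ansible.cfg
[callback_splunk]
validate_certs = true
```

The same setting is available through the SPLUNK_VALIDATE_CERTS environment variable, per the option's env entry above.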
EXAMPLES = '''
@@ -84,7 +96,7 @@ class SplunkHTTPCollectorSource(object):
self.ip_address = socket.gethostbyname(socket.gethostname())
self.user = getpass.getuser()
def send_event(self, url, authtoken, state, result, runtime):
def send_event(self, url, authtoken, validate_certs, state, result, runtime):
if result._task_fields['args'].get('_ansible_check_mode') is True:
self.ansible_check_mode = True
@@ -129,7 +141,8 @@ class SplunkHTTPCollectorSource(object):
'Content-type': 'application/json',
'Authorization': 'Splunk ' + authtoken
},
method='POST'
method='POST',
validate_certs=validate_certs
)
@@ -144,6 +157,7 @@ class CallbackModule(CallbackBase):
self.start_datetimes = {} # Collect task start times
self.url = None
self.authtoken = None
self.validate_certs = None
self.splunk = SplunkHTTPCollectorSource()
def _runtime(self, result):
@@ -153,7 +167,9 @@ class CallbackModule(CallbackBase):
).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys,
var_options=var_options,
direct=direct)
self.url = self.get_option('url')
@@ -175,6 +191,8 @@ class CallbackModule(CallbackBase):
'`SPLUNK_AUTHTOKEN` environment variable or '
'in the ansible.cfg file.')
self.validate_certs = self.get_option('validate_certs')
def v2_playbook_on_start(self, playbook):
self.splunk.ansible_playbook = basename(playbook._file_name)
@@ -188,6 +206,7 @@ class CallbackModule(CallbackBase):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
'OK',
result,
self._runtime(result)
@@ -197,6 +216,7 @@ class CallbackModule(CallbackBase):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
'SKIPPED',
result,
self._runtime(result)
@@ -206,6 +226,7 @@ class CallbackModule(CallbackBase):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
'FAILED',
result,
self._runtime(result)
@@ -215,6 +236,7 @@ class CallbackModule(CallbackBase):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
'FAILED',
result,
self._runtime(result)
@@ -224,6 +246,7 @@ class CallbackModule(CallbackBase):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
'UNREACHABLE',
result,
self._runtime(result)


@@ -1,33 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde (akasurde@redhat.com)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Parameters for DigitalOcean modules
DOCUMENTATION = r'''
options:
oauth_token:
description:
- DigitalOcean OAuth token.
- "There are several other environment variables which can be used to provide this value."
- "i.e., - 'DO_API_TOKEN', 'DO_API_KEY', 'DO_OAUTH_TOKEN' and 'OAUTH_TOKEN'"
type: str
aliases: [ api_token ]
timeout:
description:
- The timeout in seconds used for polling DigitalOcean's API.
type: int
default: 30
validate_certs:
description:
- If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates.
type: bool
default: yes
'''


@@ -1,37 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Gregory Shulov <gregory.shulov@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard Infinibox documentation fragment
DOCUMENTATION = r'''
options:
system:
description:
- Infinibox Hostname or IPv4 Address.
type: str
required: true
user:
description:
- Infinibox user username with sufficient privileges (see notes).
required: false
password:
description:
- Infinibox User password.
type: str
notes:
- This module requires infinisdk python library
- You must set INFINIBOX_USER and INFINIBOX_PASSWORD environment variables
if user and password arguments are not passed to the module directly
- Ansible uses the infinisdk configuration file C(~/.infinidat/infinisdk.ini) if no credentials are provided.
See U(http://infinisdk.readthedocs.io/en/latest/getting_started.html)
requirements:
- "python >= 2.7"
- infinisdk
'''


@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Jonathan Mainguy <jon@soh.re>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard mysql documentation fragment
DOCUMENTATION = r'''
options:
login_user:
description:
- The username used to authenticate with.
type: str
login_password:
description:
- The password used to authenticate with.
type: str
login_host:
description:
- Host running the database.
- In some cases for local connections the I(login_unix_socket=/path/to/mysqld/socket),
that is usually C(/var/run/mysqld/mysqld.sock), needs to be used instead of I(login_host=localhost).
type: str
default: localhost
login_port:
description:
- Port of the MySQL server. Requires I(login_host) be defined as other than localhost if login_port is used.
type: int
default: 3306
login_unix_socket:
description:
- The path to a Unix domain socket for local connections.
type: str
connect_timeout:
description:
- The connection timeout when connecting to the MySQL server.
type: int
default: 30
config_file:
description:
- Specify a config file from which user and password are to be read.
type: path
default: '~/.my.cnf'
ca_cert:
description:
- The path to a Certificate Authority (CA) certificate. This option, if used, must specify the same certificate
as used by the server.
type: path
aliases: [ ssl_ca ]
client_cert:
description:
- The path to a client public key certificate.
type: path
aliases: [ ssl_cert ]
client_key:
description:
- The path to the client private key.
type: path
aliases: [ ssl_key ]
requirements:
- PyMySQL (Python 2.7 and Python 3.X), or
- MySQLdb (Python 2.x)
notes:
- Requires the PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) package on the remote host.
The Python package may be installed with apt-get install python-pymysql (Ubuntu; see M(ansible.builtin.apt)) or
yum install python2-PyMySQL (RHEL/CentOS/Fedora; see M(ansible.builtin.yum)). You can also use dnf install python2-PyMySQL
for newer versions of Fedora; see M(ansible.builtin.dnf).
- Both C(login_password) and C(login_user) are required when you are
passing credentials. If none are present, the module will attempt to read
the credentials from C(~/.my.cnf), and finally fall back to using the MySQL
default login of 'root' with no password.
- If there are problems with local connections, using I(login_unix_socket=/path/to/mysqld/socket)
instead of I(login_host=localhost) might help. As an example, the default MariaDB installation of version 10.4
and later uses the unix_socket authentication plugin by default that
without using I(login_unix_socket=/var/run/mysqld/mysqld.sock) (the default path)
causes the error ``Host '127.0.0.1' is not allowed to connect to this MariaDB server``.
'''


@@ -54,5 +54,5 @@ requirements:
notes:
- "In order to use this module you have to install oVirt Python SDK.
To ensure it's installed with correct version you can create the following task:
pip: name=ovirt-engine-sdk-python version=4.3.0"
ansible.builtin.pip: name=ovirt-engine-sdk-python version=4.3.0"
'''


@@ -1,57 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Documentation fragment for ProxySQL connectivity
CONNECTIVITY = r'''
options:
login_user:
description:
- The username used to authenticate to ProxySQL admin interface.
type: str
login_password:
description:
- The password used to authenticate to ProxySQL admin interface.
type: str
login_host:
description:
- The host used to connect to ProxySQL admin interface.
type: str
default: '127.0.0.1'
login_port:
description:
- The port used to connect to ProxySQL admin interface.
type: int
default: 6032
config_file:
description:
- Specify a config file from which I(login_user) and I(login_password)
are to be read.
type: path
default: ''
requirements:
- PyMySQL (Python 2.7 and Python 3.X), or
- MySQLdb (Python 2.x)
'''
# Documentation fragment for managing ProxySQL configuration
MANAGING_CONFIG = r'''
options:
save_to_disk:
description:
- Save config to sqlite db on disk to persist the configuration.
type: bool
default: 'yes'
load_to_runtime:
description:
- Dynamically load config to runtime memory.
type: bool
default: 'yes'
'''


@@ -29,7 +29,7 @@ except ImportError:
def json_query(data, expr):
'''Query data using jmespath query language ( http://jmespath.org ). Example:
- debug: msg="{{ instance | json_query(tagged_instances[*].block_device_mapping.*.volume_id') }}"
- ansible.builtin.debug: msg="{{ instance | json_query('tagged_instances[*].block_device_mapping.*.volume_id') }}"
'''
if not HAS_LIB:
raise AnsibleError('You need to install "jmespath" prior to running '


@@ -0,0 +1,278 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2020 Orion Poplawski <orion@nwra.com>
# Copyright (c) 2020 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: cobbler
plugin_type: inventory
short_description: Cobbler inventory source
version_added: 1.0.0
description:
- Get inventory hosts from the cobbler service.
- "Uses a configuration file as an inventory source, it must end in C(.cobbler.yml) or C(.cobbler.yaml) and has a C(plugin: cobbler) entry."
extends_documentation_fragment:
- inventory_cache
options:
plugin:
description: The name of this plugin, it should always be set to C(cobbler) for this plugin to recognize it as its own.
required: yes
choices: ['cobbler']
url:
description: URL to cobbler.
default: 'http://cobbler/cobbler_api'
env:
- name: COBBLER_SERVER
user:
description: Cobbler authentication user.
required: no
env:
- name: COBBLER_USER
password:
description: Cobbler authentication password
required: no
env:
- name: COBBLER_PASSWORD
cache_fallback:
description: Fallback to cached results if connection to cobbler fails
type: boolean
default: no
exclude_profiles:
description: Profiles to exclude from inventory
type: list
default: []
elements: str
group_by:
description: Keys to group hosts by
type: list
default: [ 'mgmt_classes', 'owners', 'status' ]
group:
description: Group to place all hosts into
default: cobbler
group_prefix:
description: Prefix to apply to cobbler groups
default: cobbler_
want_facts:
description: Toggle, if C(true) the plugin will retrieve host facts from the server
type: boolean
default: yes
'''
EXAMPLES = '''
# my.cobbler.yml
plugin: community.general.cobbler
url: http://cobbler/cobbler_api
user: ansible-tester
password: secure
'''
from distutils.version import LooseVersion
import socket
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.module_utils.six import iteritems
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable, to_safe_group_name
# xmlrpc
try:
import xmlrpclib as xmlrpc_client
HAS_XMLRPC_CLIENT = True
except ImportError:
try:
import xmlrpc.client as xmlrpc_client
HAS_XMLRPC_CLIENT = True
except ImportError:
HAS_XMLRPC_CLIENT = False
class InventoryModule(BaseInventoryPlugin, Cacheable):
''' Host inventory parser for ansible using cobbler as source. '''
NAME = 'cobbler'
def __init__(self):
super(InventoryModule, self).__init__()
# from config
self.cobbler_url = None
self.exclude_profiles = [] # A list of profiles to exclude
self.connection = None
self.token = None
self.cache_key = None
self.use_cache = None
def verify_file(self, path):
valid = False
if super(InventoryModule, self).verify_file(path):
if path.endswith(('cobbler.yaml', 'cobbler.yml')):
valid = True
else:
self.display.vvv('Skipping due to inventory source not ending in "cobbler.yaml" nor "cobbler.yml"')
return valid
def _get_connection(self):
if not HAS_XMLRPC_CLIENT:
raise AnsibleError('Could not import xmlrpc client library')
if self.connection is None:
self.display.vvvv('Connecting to %s\n' % self.cobbler_url)
self.connection = xmlrpc_client.Server(self.cobbler_url, allow_none=True)
self.token = None
if self.get_option('user') is not None:
self.token = self.connection.login(self.get_option('user'), self.get_option('password'))
return self.connection
def _init_cache(self):
if self.cache_key not in self._cache:
self._cache[self.cache_key] = {}
def _reload_cache(self):
if self.get_option('cache_fallback'):
self.display.vvv('Cannot connect to server, loading cache\n')
self._options['cache_timeout'] = 0
self.load_cache_plugin()
self._cache.get(self.cache_key, {})
def _get_profiles(self):
if not self.use_cache or 'profiles' not in self._cache.get(self.cache_key, {}):
c = self._get_connection()
try:
if self.token is not None:
data = c.get_profiles(self.token)
else:
data = c.get_profiles()
except (socket.gaierror, socket.error, xmlrpc_client.ProtocolError):
self._reload_cache()
else:
self._init_cache()
self._cache[self.cache_key]['profiles'] = data
return self._cache[self.cache_key]['profiles']
def _get_systems(self):
if not self.use_cache or 'systems' not in self._cache.get(self.cache_key, {}):
c = self._get_connection()
try:
if self.token is not None:
data = c.get_systems(self.token)
else:
data = c.get_systems()
except (socket.gaierror, socket.error, xmlrpc_client.ProtocolError):
self._reload_cache()
else:
self._init_cache()
self._cache[self.cache_key]['systems'] = data
return self._cache[self.cache_key]['systems']
def _add_safe_group_name(self, group, child=None):
group_name = self.inventory.add_group(to_safe_group_name('%s%s' % (self.get_option('group_prefix'), group.lower().replace(" ", ""))))
if child is not None:
self.inventory.add_child(group_name, child)
return group_name
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
# read config from file, this sets 'options'
self._read_config_data(path)
# get connection host
self.cobbler_url = self.get_option('url')
self.cache_key = self.get_cache_key(path)
self.use_cache = cache and self.get_option('cache')
self.exclude_profiles = self.get_option('exclude_profiles')
self.group_by = self.get_option('group_by')
for profile in self._get_profiles():
if profile['parent']:
self.display.vvvv('Processing profile %s with parent %s\n' % (profile['name'], profile['parent']))
if profile['parent'] not in self.exclude_profiles:
parent_group_name = self._add_safe_group_name(profile['parent'])
self.display.vvvv('Added profile parent group %s\n' % parent_group_name)
if profile['name'] not in self.exclude_profiles:
group_name = self._add_safe_group_name(profile['name'])
self.display.vvvv('Added profile group %s\n' % group_name)
self.inventory.add_child(parent_group_name, group_name)
else:
self.display.vvvv('Processing profile %s without parent\n' % profile['name'])
# Create a hierarchy of profile names
profile_elements = profile['name'].split('-')
i = 0
while i < len(profile_elements) - 1:
profile_group = '-'.join(profile_elements[0:i + 1])
profile_group_child = '-'.join(profile_elements[0:i + 2])
if profile_group in self.exclude_profiles:
self.display.vvvv('Excluding profile %s\n' % profile_group)
break
group_name = self._add_safe_group_name(profile_group)
self.display.vvvv('Added profile group %s\n' % group_name)
child_group_name = self._add_safe_group_name(profile_group_child)
self.display.vvvv('Added profile child group %s to %s\n' % (child_group_name, group_name))
self.inventory.add_child(group_name, child_group_name)
i = i + 1
# Add default group for this inventory if specified
self.group = to_safe_group_name(self.get_option('group'))
if self.group is not None and self.group != '':
self.inventory.add_group(self.group)
self.display.vvvv('Added site group %s\n' % self.group)
for host in self._get_systems():
# Get the FQDN for the host and add it to the right groups
hostname = host['hostname'] # None
interfaces = host['interfaces']
if host['profile'] in self.exclude_profiles:
self.display.vvvv('Excluding host %s in profile %s\n' % (host['name'], host['profile']))
continue
# hostname is often empty for non-static IP hosts
if hostname == '':
for (iname, ivalue) in iteritems(interfaces):
if ivalue['management'] or not ivalue['static']:
this_dns_name = ivalue.get('dns_name', None)
if this_dns_name is not None and this_dns_name != "":
hostname = this_dns_name
self.display.vvvv('Set hostname to %s from %s\n' % (hostname, iname))
if hostname == '':
self.display.vvvv('Cannot determine hostname for host %s, skipping\n' % host['name'])
continue
self.inventory.add_host(hostname)
self.display.vvvv('Added host %s hostname %s\n' % (host['name'], hostname))
# Add host to profile group
group_name = self._add_safe_group_name(host['profile'], child=hostname)
self.display.vvvv('Added host %s to profile group %s\n' % (hostname, group_name))
# Add host to groups specified by group_by fields
for group_by in self.group_by:
if host[group_by] == '<<inherit>>':
groups = []
else:
groups = [host[group_by]] if isinstance(host[group_by], str) else host[group_by]
for group in groups:
group_name = self._add_safe_group_name(group, child=hostname)
self.display.vvvv('Added host %s to group_by %s group %s\n' % (hostname, group_by, group_name))
# Add to group for this inventory
if self.group is not None:
self.inventory.add_child(self.group, hostname)
# Add host variables
if self.get_option('want_facts'):
try:
self.inventory.set_variable(hostname, 'cobbler', host)
except ValueError as e:
self.display.warning("Could not set host info for %s: %s" % (hostname, to_text(e)))
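The profile-name walk in C(parse()) above builds nested groups from dash-separated profile names. The core of that loop can be sketched in isolation (function name and sample profile invented; group-prefixing and safe-name munging from the real plugin are omitted):

```python
def profile_group_pairs(profile_name, exclude=()):
    """Return (parent, child) group pairs for a dash-separated profile name,
    stopping early when an excluded prefix is hit -- mirrors the loop in parse()."""
    elements = profile_name.split('-')
    pairs = []
    for i in range(len(elements) - 1):
        parent = '-'.join(elements[0:i + 1])
        child = '-'.join(elements[0:i + 2])
        if parent in exclude:
            break
        pairs.append((parent, child))
    return pairs

print(profile_group_pairs('rhel-8-web'))
# [('rhel', 'rhel-8'), ('rhel-8', 'rhel-8-web')]
```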


@@ -30,17 +30,26 @@ DOCUMENTATION = '''
- gitlab_runners
server_url:
description: The URL of the GitLab server, with protocol (i.e. http or https).
env:
- name: GITLAB_SERVER_URL
version_added: 1.0.0
type: str
required: true
default: https://gitlab.com
api_token:
description: GitLab token for logging in.
env:
- name: GITLAB_API_TOKEN
version_added: 1.0.0
type: str
aliases:
- private_token
- access_token
filter:
description: Filter runners from the GitLab API.
env:
- name: GITLAB_FILTER
version_added: 1.0.0
type: str
choices: ['active', 'paused', 'online', 'specific', 'shared']
verbose_output:


@@ -20,10 +20,10 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: Example of the change in the description
debug: msg="{{ lookup('cartesian', [1,2,3], [a, b])}}"
ansible.builtin.debug: msg="{{ lookup('cartesian', [1,2,3], [a, b])}}"
- name: loops over the cartesian product of the supplied lists
debug: msg="{{item}}"
ansible.builtin.debug: msg="{{item}}"
with_cartesian:
- "{{list1}}"
- "{{list2}}"


@@ -27,7 +27,7 @@ DOCUMENTATION = '''
'''
EXAMPLES = """
- debug:
- ansible.builtin.debug:
msg: "{{ lookup('chef_databag', 'name=data_bag_name item=data_bag_item') }}"
"""


@@ -1,159 +0,0 @@
# (c) 2018, Jason Vanderhoof <jason.vanderhoof@cyberark.com>, Oren Ben Meir <oren.benmeir@cyberark.com>
# (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
lookup: conjur_variable
short_description: Fetch credentials from CyberArk Conjur.
description:
- "Retrieves credentials from Conjur using the controlling host's Conjur identity. Conjur info: U(https://www.conjur.org/)."
requirements:
- 'The controlling host running Ansible has a Conjur identity.
(More: U(https://docs.conjur.org/Latest/en/Content/Get%20Started/key_concepts/machine_identity.html))'
options:
_term:
description: Variable path
required: True
identity_file:
description: Path to the Conjur identity file. The identity file follows the netrc file format convention.
type: path
default: /etc/conjur.identity
required: False
ini:
- section: conjur,
key: identity_file_path
env:
- name: CONJUR_IDENTITY_FILE
config_file:
description: Path to the Conjur configuration file. The configuration file is a YAML file.
type: path
default: /etc/conjur.conf
required: False
ini:
- section: conjur,
key: config_file_path
env:
- name: CONJUR_CONFIG_FILE
'''
EXAMPLES = """
- debug:
msg: "{{ lookup('conjur_variable', '/path/to/secret') }}"
"""
RETURN = """
_raw:
description:
- Value stored in Conjur.
"""
import os.path
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
from base64 import b64encode
from netrc import netrc
from os import environ
from time import time
from ansible.module_utils.six.moves.urllib.parse import quote_plus
import yaml
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
display = Display()
# Load configuration and return as dictionary if file is present on file system
def _load_conf_from_file(conf_path):
display.vvv('conf file: {0}'.format(conf_path))
if not os.path.exists(conf_path):
raise AnsibleError('Conjur configuration file `{0}` was not found on the controlling host'
.format(conf_path))
display.vvvv('Loading configuration from: {0}'.format(conf_path))
with open(conf_path) as f:
config = yaml.safe_load(f.read())
if 'account' not in config or 'appliance_url' not in config:
raise AnsibleError('{0} on the controlling host must contain an `account` and `appliance_url` entry'
.format(conf_path))
return config
# Load identity and return as dictionary if file is present on file system
def _load_identity_from_file(identity_path, appliance_url):
display.vvvv('identity file: {0}'.format(identity_path))
if not os.path.exists(identity_path):
raise AnsibleError('Conjur identity file `{0}` was not found on the controlling host'
.format(identity_path))
display.vvvv('Loading identity from: {0} for {1}'.format(identity_path, appliance_url))
conjur_authn_url = '{0}/authn'.format(appliance_url)
identity = netrc(identity_path)
if identity.authenticators(conjur_authn_url) is None:
raise AnsibleError('The netrc file on the controlling host does not contain an entry for: {0}'
.format(conjur_authn_url))
id, account, api_key = identity.authenticators(conjur_authn_url)
if not id or not api_key:
raise AnsibleError('{0} on the controlling host must contain a `login` and `password` entry for {1}'
.format(identity_path, appliance_url))
return {'id': id, 'api_key': api_key}
# Use credentials to retrieve temporary authorization token
def _fetch_conjur_token(conjur_url, account, username, api_key):
conjur_url = '{0}/authn/{1}/{2}/authenticate'.format(conjur_url, account, username)
display.vvvv('Authentication request to Conjur at: {0}, with user: {1}'.format(conjur_url, username))
response = open_url(conjur_url, data=api_key, method='POST')
code = response.getcode()
if code != 200:
raise AnsibleError('Failed to authenticate as \'{0}\' (got {1} response)'
.format(username, code))
return response.read()
# Retrieve Conjur variable using the temporary token
def _fetch_conjur_variable(conjur_variable, token, conjur_url, account):
token = b64encode(token)
headers = {'Authorization': 'Token token="{0}"'.format(token)}
display.vvvv('Header: {0}'.format(headers))
url = '{0}/secrets/{1}/variable/{2}'.format(conjur_url, account, quote_plus(conjur_variable))
display.vvvv('Conjur Variable URL: {0}'.format(url))
response = open_url(url, headers=headers, method='GET')
if response.getcode() == 200:
display.vvvv('Conjur variable {0} was successfully retrieved'.format(conjur_variable))
return [response.read()]
if response.getcode() == 401:
raise AnsibleError('Conjur request has invalid authorization credentials')
if response.getcode() == 403:
raise AnsibleError('The controlling host\'s Conjur identity does not have authorization to retrieve {0}'
.format(conjur_variable))
if response.getcode() == 404:
raise AnsibleError('The variable {0} does not exist'.format(conjur_variable))
return {}
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
conf_file = self.get_option('config_file')
conf = _load_conf_from_file(conf_file)
identity_file = self.get_option('identity_file')
identity = _load_identity_from_file(identity_file, conf['appliance_url'])
token = _fetch_conjur_token(conf['appliance_url'], conf['account'], identity['id'], identity['api_key'])
return _fetch_conjur_variable(terms[0], token, conf['appliance_url'], conf['account'])
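The retrieval path above authenticates, base64-encodes the short-lived token, and sends it in a C(Token) authorization header. The header construction can be sketched as follows (token value invented; an explicit decode is added here for Python 3, which the original Python-2-era code does not do):

```python
from base64 import b64encode

def conjur_auth_header(raw_token):
    """Build the Authorization header used when fetching a Conjur variable."""
    encoded = b64encode(raw_token).decode('ascii')
    return {'Authorization': 'Token token="{0}"'.format(encoded)}

header = conjur_auth_header(b'example-api-token')
```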


@@ -78,19 +78,19 @@ DOCUMENTATION = '''
'''
EXAMPLES = """
- debug:
- ansible.builtin.debug:
msg: 'key contains {{item}}'
with_consul_kv:
- 'key/to/retrieve'
- name: Parameters can be provided after the key to be more specific about what to retrieve
debug:
ansible.builtin.debug:
msg: 'key contains {{item}}'
with_consul_kv:
- 'key/to recurse=true token=E6C060A9-26FB-407A-B83E-12DDAFCB4D98'
- name: Retrieving a KV from a remote cluster on a non-default port
debug:
ansible.builtin.debug:
msg: "{{ lookup('consul_kv', 'my/key', host='10.10.10.10', port='2000') }}"
"""


@@ -44,16 +44,16 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: first use credstash to store your secrets
shell: credstash put my-github-password secure123
ansible.builtin.shell: credstash put my-github-password secure123
- name: "Test credstash lookup plugin -- get my github password"
debug: msg="Credstash lookup! {{ lookup('credstash', 'my-github-password') }}"
ansible.builtin.debug: msg="Credstash lookup! {{ lookup('credstash', 'my-github-password') }}"
- name: "Test credstash lookup plugin -- get my other password from us-west-1"
debug: msg="Credstash lookup! {{ lookup('credstash', 'my-other-password', region='us-west-1') }}"
ansible.builtin.debug: msg="Credstash lookup! {{ lookup('credstash', 'my-other-password', region='us-west-1') }}"
- name: "Test credstash lookup plugin -- get the company's github password"
debug: msg="Credstash lookup! {{ lookup('credstash', 'company-github-password', table='company-passwords') }}"
ansible.builtin.debug: msg="Credstash lookup! {{ lookup('credstash', 'company-github-password', table='company-passwords') }}"
- name: Example play using the 'context' feature
hosts: localhost
@@ -64,10 +64,10 @@ EXAMPLES = """
tasks:
- name: "Test credstash lookup plugin -- get the password with a context passed as a variable"
debug: msg="{{ lookup('credstash', 'some-password', context=context) }}"
ansible.builtin.debug: msg="{{ lookup('credstash', 'some-password', context=context) }}"
- name: "Test credstash lookup plugin -- get the password with a context defined here"
debug: msg="{{ lookup('credstash', 'some-password', context=dict(app='my_app', environment='production')) }}"
ansible.builtin.debug: msg="{{ lookup('credstash', 'some-password', context=dict(app='my_app', environment='production')) }}"
"""
RETURN = """


@@ -36,7 +36,7 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: passing options to the lookup
debug: msg={{ lookup("cyberarkpassword", cyquery)}}
ansible.builtin.debug: msg={{ lookup("cyberarkpassword", cyquery)}}
vars:
cyquery:
appid: "app_ansible"
@@ -45,7 +45,7 @@ EXAMPLES = """
- name: used in a loop
debug: msg={{item}}
ansible.builtin.debug: msg={{item}}
with_cyberarkpassword:
appid: 'app_ansible'
query: 'safe=CyberArk_Passwords;folder=root;object=AdminPass'


@@ -44,24 +44,24 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: Simple A record (IPV4 address) lookup for example.com
debug: msg="{{ lookup('dig', 'example.com.')}}"
ansible.builtin.debug: msg="{{ lookup('dig', 'example.com.')}}"
- name: "The TXT record for example.org."
debug: msg="{{ lookup('dig', 'example.org.', 'qtype=TXT') }}"
ansible.builtin.debug: msg="{{ lookup('dig', 'example.org.', 'qtype=TXT') }}"
- name: "The TXT record for example.org, alternative syntax."
debug: msg="{{ lookup('dig', 'example.org./TXT') }}"
ansible.builtin.debug: msg="{{ lookup('dig', 'example.org./TXT') }}"
- name: use in a loop
debug: msg="MX record for gmail.com {{ item }}"
ansible.builtin.debug: msg="MX record for gmail.com {{ item }}"
with_items: "{{ lookup('dig', 'gmail.com./MX', wantlist=True) }}"
- debug: msg="Reverse DNS for 192.0.2.5 is {{ lookup('dig', '192.0.2.5/PTR') }}"
- debug: msg="Reverse DNS for 192.0.2.5 is {{ lookup('dig', '5.2.0.192.in-addr.arpa./PTR') }}"
- debug: msg="Reverse DNS for 192.0.2.5 is {{ lookup('dig', '5.2.0.192.in-addr.arpa.', 'qtype=PTR') }}"
- debug: msg="Querying 198.51.100.23 for IPv4 address for example.com. produces {{ lookup('dig', 'example.com', '@198.51.100.23') }}"
- ansible.builtin.debug: msg="Reverse DNS for 192.0.2.5 is {{ lookup('dig', '192.0.2.5/PTR') }}"
- ansible.builtin.debug: msg="Reverse DNS for 192.0.2.5 is {{ lookup('dig', '5.2.0.192.in-addr.arpa./PTR') }}"
- ansible.builtin.debug: msg="Reverse DNS for 192.0.2.5 is {{ lookup('dig', '5.2.0.192.in-addr.arpa.', 'qtype=PTR') }}"
- ansible.builtin.debug: msg="Querying 198.51.100.23 for IPv4 address for example.com. produces {{ lookup('dig', 'example.com', '@198.51.100.23') }}"
- debug: msg="XMPP service for gmail.com. is available at {{ item.target }} on port {{ item.port }}"
- ansible.builtin.debug: msg="XMPP service for gmail.com. is available at {{ item.target }} on port {{ item.port }}"
with_items: "{{ lookup('dig', '_xmpp-server._tcp.gmail.com./SRV', 'flat=0', wantlist=True) }}"
"""


@@ -21,17 +21,17 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: show txt entry
debug: msg="{{lookup('dnstxt', ['test.example.com'])}}"
ansible.builtin.debug: msg="{{lookup('dnstxt', ['test.example.com'])}}"
- name: iterate over txt entries
debug: msg="{{item}}"
ansible.builtin.debug: msg="{{item}}"
with_dnstxt:
- 'test.example.com'
- 'other.example.com'
- 'last.example.com'
- name: iterate of a comma delimited DNS TXT entry
debug: msg="{{item}}"
ansible.builtin.debug: msg="{{item}}"
with_dnstxt: "{{lookup('dnstxt', ['test.example.com']).split(',')}}"
"""

plugins/lookup/dsv.py (new file, 138 lines)

@@ -0,0 +1,138 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2020, Adam Migus <adam@migus.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
lookup: dsv
author: Adam Migus (adam@migus.org)
short_description: Get secrets from Thycotic DevOps Secrets Vault
version_added: 1.0.0
description:
- Uses the Thycotic DevOps Secrets Vault Python SDK to get Secrets from a
DSV I(tenant) using a I(client_id) and I(client_secret).
requirements:
- python-dsv-sdk - https://pypi.org/project/python-dsv-sdk/
options:
_terms:
description: The path to the secret, e.g. C(/staging/servers/web1).
required: true
tenant:
description: The first format parameter in the default I(url_template).
env:
- name: DSV_TENANT
ini:
- section: dsv_lookup
key: tenant
required: true
tld:
default: com
description: The top-level domain of the tenant; the second format
parameter in the default I(url_template).
env:
- name: DSV_TLD
ini:
- section: dsv_lookup
key: tld
required: false
client_id:
description: The client_id with which to request the Access Grant.
env:
- name: DSV_CLIENT_ID
ini:
- section: dsv_lookup
key: client_id
required: true
client_secret:
description: The client secret associated with the specific I(client_id).
env:
- name: DSV_CLIENT_SECRET
ini:
- section: dsv_lookup
key: client_secret
required: true
url_template:
default: https://{}.secretsvaultcloud.{}/v1
description: The path to prepend to the base URL to form a valid REST
API request.
env:
- name: DSV_URL_TEMPLATE
ini:
- section: dsv_lookup
key: url_template
required: false
"""
RETURN = r"""
_list:
description:
- One or more JSON responses to C(GET /secrets/{path}).
- See U(https://dsv.thycotic.com/api/index.html#operation/getSecret).
"""
EXAMPLES = r"""
- hosts: localhost
vars:
secret: "{{ lookup('community.general.dsv', '/test/secret') }}"
tasks:
- ansible.builtin.debug:
msg: 'the password is {{ secret["data"]["password"] }}'
"""
from ansible.errors import AnsibleError, AnsibleOptionsError
sdk_is_missing = False
try:
from thycotic.secrets.vault import (
SecretsVault,
SecretsVaultError,
)
except ImportError:
sdk_is_missing = True
from ansible.utils.display import Display
from ansible.plugins.lookup import LookupBase
display = Display()
class LookupModule(LookupBase):
@staticmethod
def Client(vault_parameters):
return SecretsVault(**vault_parameters)
def run(self, terms, variables, **kwargs):
if sdk_is_missing:
raise AnsibleError("python-dsv-sdk must be installed to use this plugin")
self.set_options(var_options=variables, direct=kwargs)
vault = LookupModule.Client(
{
"tenant": self.get_option("tenant"),
"client_id": self.get_option("client_id"),
"client_secret": self.get_option("client_secret"),
"url_template": self.get_option("url_template"),
}
)
result = []
for term in terms:
display.debug("dsv_lookup term: %s" % term)
try:
path = term.lstrip("[/:]")
if path == "":
raise AnsibleOptionsError("Invalid secret path: %s" % term)
display.vvv(u"DevOps Secrets Vault GET /secrets/%s" % path)
result.append(vault.get_secret_json(path))
except SecretsVaultError as error:
raise AnsibleError(
"DevOps Secrets Vault lookup failure: %s" % error.message
)
return result
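As documented above, I(tenant) and I(tld) are the first and second format parameters of the default I(url_template); a minimal sketch of how the REST base URL is assembled (the tenant value is illustrative):

```python
# How the dsv plugin's default url_template yields the REST base URL.
# "example" is an illustrative tenant; "com" is the documented tld default.
url_template = "https://{}.secretsvaultcloud.{}/v1"
base_url = url_template.format("example", "com")
print(base_url)  # https://example.secretsvaultcloud.com/v1
```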


@@ -54,13 +54,13 @@ DOCUMENTATION = '''
EXAMPLES = '''
- name: "a value from a locally running etcd"
debug: msg={{ lookup('etcd', 'foo/bar') }}
ansible.builtin.debug: msg={{ lookup('etcd', 'foo/bar') }}
- name: "values from multiple folders on a locally running etcd"
debug: msg={{ lookup('etcd', 'foo', 'bar', 'baz') }}
ansible.builtin.debug: msg={{ lookup('etcd', 'foo', 'bar', 'baz') }}
- name: "since Ansible 2.5 you can set server options inline"
debug: msg="{{ lookup('etcd', 'foo', version='v2', url='http://192.168.0.27:4001') }}"
ansible.builtin.debug: msg="{{ lookup('etcd', 'foo', version='v2', url='http://192.168.0.27:4001') }}"
'''
RETURN = '''
@@ -82,7 +82,7 @@ from ansible.module_utils.urls import open_url
# If etcd v2 running on host 192.168.1.21 on port 2379
# we can use the following in a playbook to retrieve /tfm/network/config key
#
# - debug: msg={{lookup('etcd','/tfm/network/config', url='http://192.168.1.21:2379' , version='v2')}}
# - ansible.builtin.debug: msg={{lookup('etcd','/tfm/network/config', url='http://192.168.1.21:2379' , version='v2')}}
#
# Example Output:
#


@@ -101,19 +101,19 @@ DOCUMENTATION = '''
EXAMPLES = '''
- name: "a value from a locally running etcd"
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.etcd3', 'foo/bar') }}"
- name: "values from multiple folders on a locally running etcd"
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.etcd3', 'foo', 'bar', 'baz') }}"
- name: "look for a key prefix"
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.etcd3', '/foo/bar', prefix=True) }}"
- name: "connect to etcd3 with a client certificate"
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.etcd3', 'foo/bar', cert_cert='/etc/ssl/etcd/client.pem', cert_key='/etc/ssl/etcd/client.key') }}"
'''


@@ -21,7 +21,7 @@ options:
EXAMPLES = """
- name: Create directories
file:
ansible.builtin.file:
path: /web/{{ item.path }}
state: directory
mode: '{{ item.mode }}'
@@ -29,7 +29,7 @@ EXAMPLES = """
when: item.state == 'directory'
- name: Template files (explicitly skip directories in order to use the 'src' attribute)
template:
ansible.builtin.template:
src: '{{ item.src }}'
dest: /web/{{ item.path }}
mode: '{{ item.mode }}'
@@ -37,7 +37,7 @@ EXAMPLES = """
when: item.state == 'file'
- name: Recreate symlinks
file:
ansible.builtin.file:
src: '{{ item.src }}'
dest: /web/{{ item.path }}
state: link


@@ -21,7 +21,7 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: "'unnest' all elements into single list"
debug: msg="all in one list {{lookup('flattened', [1,2,3,[5,6]], [a,b,c], [[5,6,1,3], [34,a,b,c]])}}"
ansible.builtin.debug: msg="all in one list {{lookup('flattened', [1,2,3,[5,6]], [a,b,c], [[5,6,1,3], [34,a,b,c]])}}"
"""
RETURN = """


@@ -29,7 +29,7 @@ extends_documentation_fragment:
'''
EXAMPLES = '''
- debug: msg="the value of foo.txt is {{ lookup('gcp_storage_file',
- ansible.builtin.debug: msg="the value of foo.txt is {{ lookup('gcp_storage_file',
bucket='gcp-bucket', src='mydir/foo.txt', project='project-name',
auth_kind='serviceaccount', service_account_file='/tmp/myserviceaccountfile.json') }}"
'''


@@ -161,79 +161,79 @@ DOCUMENTATION = """
"""
EXAMPLES = """
- debug:
- ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/hello:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200') }}"
- name: Return all secrets from a path
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200') }}"
- name: Vault that requires authentication via LDAP
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret/hello:value auth_method=ldap mount_point=ldap username=myuser password=mypas') }}"
- name: Vault that requires authentication via username and password
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/hello:value auth_method=userpass username=myuser password=psw url=http://myvault:8200') }}"
- name: Using an ssl vault
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/hola:value token=c975b780-d1be-8016-866b-01d0f9b688a5 validate_certs=False') }}"
- name: using certificate auth
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret/hi:value token=xxxx url=https://myvault:8200 validate_certs=True cacert=/cacert/path/ca.pem') }}"
- name: authenticate with a Vault app role
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/hello:value auth_method=approle role_id=myroleid secret_id=mysecretid') }}"
- name: Return all secrets from a path in a namespace
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 namespace=teama/admins') }}"
# When using KV v2 the PATH should include "data" between the secret engine mount and path (e.g. "secret/data/:path")
# see: https://www.vaultproject.io/api/secret/kv/kv-v2.html#read-secret-version
- name: Return latest KV v2 secret from path
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret=secret/data/hello token=my_vault_token url=http://myvault_url:8200') }}"
# The following examples work in collection releases after community.general 0.2.0
- name: secret= is not required if secret is first
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret/data/hello token=<token> url=http://myvault_url:8200') }}"
- name: options can be specified as parameters rather than put in term string
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret/data/hello', token=my_token_var, url='http://myvault_url:8200') }}"
# return_format (or its alias 'as') can control how secrets are returned to you
- name: return secrets as a dict (default)
set_fact:
ansible.builtin.set_fact:
my_secrets: "{{ lookup('community.general.hashi_vault', 'secret/data/manysecrets', token=my_token_var, url='http://myvault_url:8200') }}"
- debug:
- ansible.builtin.debug:
msg: "{{ my_secrets['secret_key'] }}"
- debug:
- ansible.builtin.debug:
msg: "Secret '{{ item.key }}' has value '{{ item.value }}'"
loop: "{{ my_secrets | dict2items }}"
- name: return secrets as values only
debug:
ansible.builtin.debug:
msg: "A secret value: {{ item }}"
loop: "{{ query('community.general.hashi_vault', 'secret/data/manysecrets', token=my_token_var, url='http://myvault_url:8200', return_format='values') }}"
- name: return raw secret from API, including metadata
set_fact:
ansible.builtin.set_fact:
my_secret: "{{ lookup('community.general.hashi_vault', 'secret/data/hello:value', token=my_token_var, url='http://myvault_url:8200', as='raw') }}"
- debug:
- ansible.builtin.debug:
msg: "This is version {{ my_secret['metadata']['version'] }} of hello:value. The secret data is {{ my_secret['data']['data']['value'] }}"
# AWS IAM authentication method
# uses Ansible standard AWS options
- name: authenticate with aws_iam_login
debug:
ansible.builtin.debug:
msg: "{{ lookup('community.general.hashi_vault', 'secret/hello:value', auth_method='aws_iam_login', role_id='myroleid', profile=my_boto_profile) }}"
"""
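The KV v2 comment above says the path needs "data" between the secret engine mount and the secret path; a small sketch of that rewrite (it assumes a single-segment mount such as C(secret), which real deployments may not use):

```python
# Insert "data" after the secret engine mount to form a KV v2 read path.
# Assumption: the mount is a single path segment (e.g. "secret").
def kv2_path(path):
    mount, _, rest = path.partition("/")
    return "{0}/data/{1}".format(mount, rest)

print(kv2_path("secret/hello"))  # secret/data/hello
```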


@@ -39,13 +39,13 @@ EXAMPLES = """
# All these examples depend on a hiera.yml that describes the hierarchy
- name: "a value from Hiera 'DB'"
debug: msg={{ lookup('hiera', 'foo') }}
ansible.builtin.debug: msg={{ lookup('hiera', 'foo') }}
- name: "a value from a Hiera 'DB' on other environment"
debug: msg={{ lookup('hiera', 'foo environment=production') }}
ansible.builtin.debug: msg={{ lookup('hiera', 'foo environment=production') }}
- name: "a value from a Hiera 'DB' for a concrete node"
debug: msg={{ lookup('hiera', 'foo fqdn=puppet01.localdomain') }}
ansible.builtin.debug: msg={{ lookup('hiera', 'foo fqdn=puppet01.localdomain') }}
"""
RETURN = """


@@ -18,7 +18,7 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: output secrets to screen (BAD IDEA)
debug:
ansible.builtin.debug:
msg: "Password: {{item}}"
with_keyring:
- 'servicename username'


@@ -25,7 +25,7 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: get 'custom_field' from lastpass entry 'entry-name'
debug:
ansible.builtin.debug:
msg: "{{ lookup('lastpass', 'entry-name', field='custom_field') }}"
"""


@@ -24,11 +24,11 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: query LMDB for a list of country codes
debug:
ansible.builtin.debug:
msg: "{{ query('lmdb_kv', 'nl', 'be', 'lu', db='jp.mdb') }}"
- name: use list of values in a loop by key wildcard
debug:
ansible.builtin.debug:
msg: "Hello from {{ item.0 }} a.k.a. {{ item.1 }}"
vars:
- lmdb_kv_db: jp.mdb
@@ -36,7 +36,7 @@ EXAMPLES = """
- "n*"
- name: get an item by key
assert:
ansible.builtin.assert:
that:
- item == 'Belgium'
vars:


@@ -40,11 +40,11 @@ DOCUMENTATION = '''
EXAMPLES = '''
- name: all available resources
debug: msg="{{ lookup('manifold', api_token='SecretToken') }}"
ansible.builtin.debug: msg="{{ lookup('manifold', api_token='SecretToken') }}"
- name: all available resources for a specific project in specific team
debug: msg="{{ lookup('manifold', api_token='SecretToken', project='project-1', team='team-2') }}"
ansible.builtin.debug: msg="{{ lookup('manifold', api_token='SecretToken', project='project-1', team='team-2') }}"
- name: two specific resources
debug: msg="{{ lookup('manifold', 'resource-1', 'resource-2') }}"
ansible.builtin.debug: msg="{{ lookup('manifold', 'resource-1', 'resource-2') }}"
'''
RETURN = '''


@@ -47,11 +47,11 @@ options:
EXAMPLES = """
- name: fetch all networkview objects
set_fact:
ansible.builtin.set_fact:
networkviews: "{{ lookup('nios', 'networkview', provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
- name: fetch the default dns view
set_fact:
ansible.builtin.set_fact:
dns_views: "{{ lookup('nios', 'view', filter={'name': 'default'}, provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
# all of the examples below use credentials that are set using env variables
@@ -60,20 +60,20 @@ EXAMPLES = """
# export INFOBLOX_PASSWORD=admin
- name: fetch all host records and include extended attributes
set_fact:
ansible.builtin.set_fact:
host_records: "{{ lookup('nios', 'record:host', return_fields=['extattrs', 'name', 'view', 'comment']) }}"
- name: use env variables to pass credentials
set_fact:
ansible.builtin.set_fact:
networkviews: "{{ lookup('nios', 'networkview') }}"
- name: get a host record
set_fact:
ansible.builtin.set_fact:
host: "{{ lookup('nios', 'record:host', filter={'name': 'hostname.ansible.com'}) }}"
- name: get the authoritative zone from a non default dns view
set_fact:
ansible.builtin.set_fact:
host: "{{ lookup('nios', 'zone_auth', filter={'fqdn': 'ansible.com', 'view': 'ansible-dns'}) }}"
"""


@@ -47,15 +47,15 @@ options:
EXAMPLES = """
- name: return next available IP address for network 192.168.10.0/24
set_fact:
ansible.builtin.set_fact:
ipaddr: "{{ lookup('nios_next_ip', '192.168.10.0/24', provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
- name: return the next 3 available IP addresses for network 192.168.10.0/24
set_fact:
ansible.builtin.set_fact:
ipaddr: "{{ lookup('nios_next_ip', '192.168.10.0/24', num=3, provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
- name: return the next 3 available IP addresses for network 192.168.10.0/24 excluding ip addresses - ['192.168.10.1', '192.168.10.2']
set_fact:
ansible.builtin.set_fact:
ipaddr: "{{ lookup('nios_next_ip', '192.168.10.0/24', num=3, exclude=['192.168.10.1', '192.168.10.2'],
provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
"""


@@ -55,16 +55,16 @@ options:
EXAMPLES = """
- name: return next available network for network-container 192.168.10.0/24
set_fact:
ansible.builtin.set_fact:
networkaddr: "{{ lookup('nios_next_network', '192.168.10.0/24', cidr=25, provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
- name: return the next 2 available network addresses for network-container 192.168.10.0/24
set_fact:
ansible.builtin.set_fact:
networkaddr: "{{ lookup('nios_next_network', '192.168.10.0/24', cidr=25, num=2,
provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
- name: return the available network addresses for network-container 192.168.10.0/24 excluding network range '192.168.10.0/25'
set_fact:
ansible.builtin.set_fact:
networkaddr: "{{ lookup('nios_next_network', '192.168.10.0/24', cidr=25, exclude=['192.168.10.0/25'],
provider={'host': 'nios01', 'username': 'admin', 'password': 'password'}) }}"
"""


@@ -55,26 +55,26 @@ DOCUMENTATION = '''
EXAMPLES = """
# These examples only work when already signed in to 1Password
- name: Retrieve password for KITT when already signed in to 1Password
debug:
ansible.builtin.debug:
var: lookup('onepassword', 'KITT')
- name: Retrieve password for Wintermute when already signed in to 1Password
debug:
ansible.builtin.debug:
var: lookup('onepassword', 'Tessier-Ashpool', section='Wintermute')
- name: Retrieve username for HAL when already signed in to 1Password
debug:
ansible.builtin.debug:
var: lookup('onepassword', 'HAL 9000', field='username', vault='Discovery')
- name: Retrieve password for HAL when not signed in to 1Password
debug:
ansible.builtin.debug:
var: lookup('onepassword',
'HAL 9000',
subdomain='Discovery',
master_password=vault_master_password)
- name: Retrieve password for HAL when never signed in to 1Password
debug:
ansible.builtin.debug:
var: lookup('onepassword',
'HAL 9000',
subdomain='Discovery',


@@ -51,11 +51,11 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: Retrieve all data about Wintermute
debug:
ansible.builtin.debug:
var: lookup('onepassword_raw', 'Wintermute')
- name: Retrieve all data about Wintermute when not signed in to 1Password
debug:
ansible.builtin.debug:
var: lookup('onepassword_raw', 'Wintermute', subdomain='Turing', vault_password='DmbslfLvasjdl')
"""


@@ -57,32 +57,32 @@ DOCUMENTATION = '''
EXAMPLES = """
# Debug is used for examples, BAD IDEA to show passwords on screen
- name: Basic lookup. Fails if example/test doesn't exist
debug:
ansible.builtin.debug:
msg: "{{ lookup('passwordstore', 'example/test')}}"
- name: Create pass with random 16 character password. If password exists just give the password
debug:
ansible.builtin.debug:
var: mypassword
vars:
mypassword: "{{ lookup('passwordstore', 'example/test create=true')}}"
- name: Different size password
debug:
ansible.builtin.debug:
msg: "{{ lookup('passwordstore', 'example/test create=true length=42')}}"
- name: Create password and overwrite the password if it exists. As a bonus, this module includes the old password inside the pass file
debug:
ansible.builtin.debug:
msg: "{{ lookup('passwordstore', 'example/test create=true overwrite=true')}}"
- name: Create an alphanumeric password
debug: msg="{{ lookup('passwordstore', 'example/test create=true nosymbols=true') }}"
ansible.builtin.debug: msg="{{ lookup('passwordstore', 'example/test create=true nosymbols=true') }}"
- name: Return the value for user in the KV pair user, username
debug:
ansible.builtin.debug:
msg: "{{ lookup('passwordstore', 'example/test subkey=user')}}"
- name: Return the entire password file content
set_fact:
ansible.builtin.set_fact:
passfilecontent: "{{ lookup('passwordstore', 'example/test returnall=true')}}"
"""


@@ -46,17 +46,17 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: query redis for somekey (default or configured settings used)
debug: msg="{{ lookup('redis', 'somekey') }}"
ansible.builtin.debug: msg="{{ lookup('redis', 'somekey') }}"
- name: query redis for list of keys and non-default host and port
debug: msg="{{ lookup('redis', item, host='myredis.internal.com', port=2121) }}"
ansible.builtin.debug: msg="{{ lookup('redis', item, host='myredis.internal.com', port=2121) }}"
loop: '{{list_of_redis_keys}}'
- name: use list directly
debug: msg="{{ lookup('redis', 'key1', 'key2', 'key3') }}"
ansible.builtin.debug: msg="{{ lookup('redis', 'key1', 'key2', 'key3') }}"
- name: use list directly with a socket
debug: msg="{{ lookup('redis', 'key1', 'key2', socket='/var/tmp/redis.sock') }}"
ansible.builtin.debug: msg="{{ lookup('redis', 'key1', 'key2', socket='/var/tmp/redis.sock') }}"
"""


@@ -23,7 +23,7 @@ DOCUMENTATION = '''
EXAMPLES = """
- name: retrieve a string value corresponding to a key inside a Python shelve file
debug: msg="{{ lookup('shelvefile', 'file=path_to_some_shelve_file.db key=key_to_retrieve') }}"
ansible.builtin.debug: msg="{{ lookup('shelvefile', 'file=path_to_some_shelve_file.db key=key_to_retrieve') }}"
"""
RETURN = """

plugins/lookup/tss.py (new file, 131 lines)

@@ -0,0 +1,131 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2020, Adam Migus <adam@migus.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
lookup: tss
author: Adam Migus (adam@migus.org)
short_description: Get secrets from Thycotic Secret Server
version_added: 1.0.0
description:
- Uses the Thycotic Secret Server Python SDK to get Secrets from Secret
Server using token authentication with I(username) and I(password) on
the REST API at I(base_url).
requirements:
- python-tss-sdk - https://pypi.org/project/python-tss-sdk/
options:
_terms:
description: The integer ID of the secret.
required: true
type: int
base_url:
description: The base URL of the server, e.g. C(https://localhost/SecretServer).
env:
- name: TSS_BASE_URL
ini:
- section: tss_lookup
key: base_url
required: true
username:
description: The username with which to request the OAuth2 Access Grant.
env:
- name: TSS_USERNAME
ini:
- section: tss_lookup
key: username
required: true
password:
description: The password associated with the supplied username.
env:
- name: TSS_PASSWORD
ini:
- section: tss_lookup
key: password
required: true
api_path_uri:
default: /api/v1
description: The path to append to the base URL to form a valid REST
API request.
env:
- name: TSS_API_PATH_URI
required: false
token_path_uri:
default: /oauth2/token
description: The path to append to the base URL to form a valid OAuth2
Access Grant request.
env:
- name: TSS_TOKEN_PATH_URI
required: false
"""
RETURN = r"""
_list:
description:
- The JSON responses to C(GET /secrets/{id}).
- See U(https://updates.thycotic.net/secretserver/restapiguide/TokenAuth/#operation--secrets--id--get).
"""
EXAMPLES = r"""
- hosts: localhost
vars:
secret: "{{ lookup('community.general.tss', 1) }}"
tasks:
- ansible.builtin.debug: msg="the password is {{ (secret['items'] | items2dict(key_name='slug', value_name='itemValue'))['password'] }}"
"""
from ansible.errors import AnsibleError, AnsibleOptionsError
sdk_is_missing = False
try:
from thycotic.secrets.server import (
SecretServer,
SecretServerAccessError,
SecretServerError,
)
except ImportError:
sdk_is_missing = True
from ansible.utils.display import Display
from ansible.plugins.lookup import LookupBase
display = Display()
class LookupModule(LookupBase):
@staticmethod
def Client(server_parameters):
return SecretServer(**server_parameters)
def run(self, terms, variables, **kwargs):
if sdk_is_missing:
raise AnsibleError("python-tss-sdk must be installed to use this plugin")
self.set_options(var_options=variables, direct=kwargs)
secret_server = LookupModule.Client(
{
"base_url": self.get_option("base_url"),
"username": self.get_option("username"),
"password": self.get_option("password"),
"api_path_uri": self.get_option("api_path_uri"),
"token_path_uri": self.get_option("token_path_uri"),
}
)
result = []
for term in terms:
display.debug("tss_lookup term: %s" % term)
try:
id = int(term)
display.vvv(u"Secret Server lookup of Secret with ID %d" % id)
result.append(secret_server.get_secret_json(id))
except ValueError:
raise AnsibleOptionsError("Secret ID must be an integer")
except SecretServerError as error:
raise AnsibleError("Secret Server lookup failure: %s" % error.message)
return result
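The EXAMPLES block above pipes C(secret['items']) through C(items2dict(key_name='slug', value_name='itemValue')); in plain Python that filter amounts to the dict comprehension below (the response shape is illustrative, not a real Secret Server payload):

```python
# What the items2dict filter does to a Secret Server "items" list:
# key each item by its slug. The data below is illustrative.
secret_items = [
    {"slug": "username", "itemValue": "admin"},
    {"slug": "password", "itemValue": "s3cr3t"},
]
as_dict = {item["slug"]: item["itemValue"] for item in secret_items}
print(as_dict["password"])  # s3cr3t
```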


@@ -1,131 +0,0 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Ansible Project 2017
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
from ansible.module_utils.urls import fetch_url
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import env_fallback
class Response(object):
def __init__(self, resp, info):
self.body = None
if resp:
self.body = resp.read()
self.info = info
@property
def json(self):
if not self.body:
if "body" in self.info:
return json.loads(to_text(self.info["body"]))
return None
try:
return json.loads(to_text(self.body))
except ValueError:
return None
@property
def status_code(self):
return self.info["status"]
class DigitalOceanHelper:
def __init__(self, module):
self.module = module
self.baseurl = 'https://api.digitalocean.com/v2'
self.timeout = module.params.get('timeout', 30)
self.oauth_token = module.params.get('oauth_token')
self.headers = {'Authorization': 'Bearer {0}'.format(self.oauth_token),
'Content-type': 'application/json'}
# Check if api_token is valid or not
response = self.get('account')
if response.status_code == 401:
self.module.fail_json(msg='Failed to login using API token, please verify validity of API token.')
def _url_builder(self, path):
if path[0] == '/':
path = path[1:]
return '%s/%s' % (self.baseurl, path)
def send(self, method, path, data=None):
url = self._url_builder(path)
data = self.module.jsonify(data)
resp, info = fetch_url(self.module, url, data=data, headers=self.headers, method=method, timeout=self.timeout)
return Response(resp, info)
def get(self, path, data=None):
return self.send('GET', path, data)
def put(self, path, data=None):
return self.send('PUT', path, data)
def post(self, path, data=None):
return self.send('POST', path, data)
def delete(self, path, data=None):
return self.send('DELETE', path, data)
@staticmethod
def digital_ocean_argument_spec():
return dict(
validate_certs=dict(type='bool', required=False, default=True),
oauth_token=dict(
no_log=True,
# Support environment variable for DigitalOcean OAuth Token
fallback=(env_fallback, ['DO_API_TOKEN', 'DO_API_KEY', 'DO_OAUTH_TOKEN', 'OAUTH_TOKEN']),
required=False,
aliases=['api_token'],
),
timeout=dict(type='int', default=30),
)
def get_paginated_data(self, base_url=None, data_key_name=None, data_per_page=40, expected_status_code=200):
"""
Function to get all paginated data from given URL
Args:
base_url: Base URL to get data from
data_key_name: Name of data key value
data_per_page: Number results per page (Default: 40)
expected_status_code: Expected returned code from DigitalOcean (Default: 200)
Returns: List of data
"""
page = 1
has_next = True
ret_data = []
status_code = None
response = None
while has_next or status_code != expected_status_code:
required_url = "{0}page={1}&per_page={2}".format(base_url, page, data_per_page)
response = self.get(required_url)
status_code = response.status_code
# stop if any error during pagination
if status_code != expected_status_code:
break
page += 1
ret_data.extend(response.json[data_key_name])
has_next = "pages" in response.json["links"] and "next" in response.json["links"]["pages"]
if status_code != expected_status_code:
msg = "Failed to fetch %s from %s" % (data_key_name, base_url)
if response:
msg += " due to error : %s" % response.json['message']
self.module.fail_json(msg=msg)
return ret_data
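get_paginated_data keeps requesting pages until C(links.pages) no longer advertises a next page; a standalone sketch of that loop against a fake two-page response (the data and helper names are illustrative, not part of the module):

```python
# Fake two-page API in DigitalOcean's response shape (illustrative data).
def fetch_page(page):
    pages = {
        1: {"droplets": [{"id": 1}, {"id": 2}],
            "links": {"pages": {"next": "...?page=2"}}},
        2: {"droplets": [{"id": 3}], "links": {"pages": {}}},
    }
    return pages[page]

# Same stop condition as get_paginated_data: no "next" under links.pages.
def get_paginated(key):
    page, results = 1, []
    while True:
        body = fetch_page(page)
        results.extend(body[key])
        if "next" not in body.get("links", {}).get("pages", {}):
            break
        page += 1
    return results

print([d["id"] for d in get_paginated("droplets")])  # [1, 2, 3]
```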


@@ -1,319 +0,0 @@
# -*- coding: utf-8 -*-
#
# (c) 2013-2018, Adam Miller (maxamillion@fedoraproject.org)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# Imports and info for sanity checking
from distutils.version import LooseVersion
FW_VERSION = None
fw = None
fw_offline = False
import_failure = True
try:
import firewall.config
FW_VERSION = firewall.config.VERSION
from firewall.client import FirewallClient
from firewall.client import FirewallClientZoneSettings
from firewall.errors import FirewallError
import_failure = False
try:
fw = FirewallClient()
fw.getDefaultZone()
except (AttributeError, FirewallError):
# Firewalld is not currently running, permanent-only operations
fw_offline = True
# Import other required parts of the firewalld API
#
# NOTE:
# online and offline operations do not share a common firewalld API
try:
from firewall.core.fw_test import Firewall_test
fw = Firewall_test()
except ImportError:
# In firewalld version 0.7.0 this behavior changed
from firewall.core.fw import Firewall
fw = Firewall(offline=True)
fw.start()
except ImportError:
pass
class FirewallTransaction(object):
"""
FirewallTransaction
This is the base class for all firewalld transactions we might want to have
"""
def __init__(self, module, action_args=(), zone=None, desired_state=None,
permanent=False, immediate=False, enabled_values=None, disabled_values=None):
# type: (AnsibleModule, tuple, str, str, bool, bool, list, list) -> None
"""
initialize the transaction
:module: AnsibleModule, instance of AnsibleModule
:action_args: tuple, args to pass for the action to take place
:zone: str, firewall zone
:desired_state: str, the desired state (enabled, disabled, etc)
:permanent: bool, action should be permanent
:immediate: bool, action should take place immediately
:enabled_values: str[], acceptable values for enabling something (default: enabled)
:disabled_values: str[], acceptable values for disabling something (default: disabled)
"""
self.module = module
self.fw = fw
self.action_args = action_args
if zone:
self.zone = zone
else:
if fw_offline:
self.zone = fw.get_default_zone()
else:
self.zone = fw.getDefaultZone()
self.desired_state = desired_state
self.permanent = permanent
self.immediate = immediate
self.fw_offline = fw_offline
self.enabled_values = enabled_values or ["enabled"]
self.disabled_values = disabled_values or ["disabled"]
        # List of messages that we'll pass to module.fail_json or
        # module.exit_json.
self.msgs = []
# Allow for custom messages to be added for certain subclass transaction
# types
self.enabled_msg = None
self.disabled_msg = None
#####################
# exception handling
#
def action_handler(self, action_func, action_func_args):
"""
Function to wrap calls to make actions on firewalld in try/except
logic and emit (hopefully) useful error messages
"""
try:
return action_func(*action_func_args)
except Exception as e:
            # Provide extra context for commonly known errors to help users
            # diagnose what is wrong.
if "INVALID_SERVICE" in "%s" % e:
self.msgs.append("Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)")
if len(self.msgs) > 0:
self.module.fail_json(
msg='ERROR: Exception caught: %s %s' % (e, ', '.join(self.msgs))
)
else:
self.module.fail_json(msg='ERROR: Exception caught: %s' % e)
def get_fw_zone_settings(self):
if self.fw_offline:
fw_zone = self.fw.config.get_zone(self.zone)
fw_settings = FirewallClientZoneSettings(
list(self.fw.config.get_zone_config(fw_zone))
)
else:
fw_zone = self.fw.config().getZoneByName(self.zone)
fw_settings = fw_zone.getSettings()
return (fw_zone, fw_settings)
def update_fw_settings(self, fw_zone, fw_settings):
if self.fw_offline:
self.fw.config.set_zone_config(fw_zone, fw_settings.settings)
else:
fw_zone.update(fw_settings)
def get_enabled_immediate(self):
raise NotImplementedError
def get_enabled_permanent(self):
raise NotImplementedError
def set_enabled_immediate(self):
raise NotImplementedError
def set_enabled_permanent(self):
raise NotImplementedError
def set_disabled_immediate(self):
raise NotImplementedError
def set_disabled_permanent(self):
raise NotImplementedError
def run(self):
"""
run
This function contains the "transaction logic" where as all operations
follow a similar pattern in order to perform their action but simply
call different functions to carry that action out.
"""
self.changed = False
if self.immediate and self.permanent:
is_enabled_permanent = self.action_handler(
self.get_enabled_permanent,
self.action_args
)
is_enabled_immediate = self.action_handler(
self.get_enabled_immediate,
self.action_args
)
self.msgs.append('Permanent and Non-Permanent(immediate) operation')
if self.desired_state in self.enabled_values:
if not is_enabled_permanent or not is_enabled_immediate:
if self.module.check_mode:
self.module.exit_json(changed=True)
if not is_enabled_permanent:
self.action_handler(
self.set_enabled_permanent,
self.action_args
)
self.changed = True
if not is_enabled_immediate:
self.action_handler(
self.set_enabled_immediate,
self.action_args
)
self.changed = True
if self.changed and self.enabled_msg:
self.msgs.append(self.enabled_msg)
elif self.desired_state in self.disabled_values:
if is_enabled_permanent or is_enabled_immediate:
if self.module.check_mode:
self.module.exit_json(changed=True)
if is_enabled_permanent:
self.action_handler(
self.set_disabled_permanent,
self.action_args
)
self.changed = True
if is_enabled_immediate:
self.action_handler(
self.set_disabled_immediate,
self.action_args
)
self.changed = True
if self.changed and self.disabled_msg:
self.msgs.append(self.disabled_msg)
elif self.permanent and not self.immediate:
is_enabled = self.action_handler(
self.get_enabled_permanent,
self.action_args
)
self.msgs.append('Permanent operation')
if self.desired_state in self.enabled_values:
if not is_enabled:
if self.module.check_mode:
self.module.exit_json(changed=True)
self.action_handler(
self.set_enabled_permanent,
self.action_args
)
self.changed = True
if self.changed and self.enabled_msg:
self.msgs.append(self.enabled_msg)
elif self.desired_state in self.disabled_values:
if is_enabled:
if self.module.check_mode:
self.module.exit_json(changed=True)
self.action_handler(
self.set_disabled_permanent,
self.action_args
)
self.changed = True
if self.changed and self.disabled_msg:
self.msgs.append(self.disabled_msg)
elif self.immediate and not self.permanent:
is_enabled = self.action_handler(
self.get_enabled_immediate,
self.action_args
)
self.msgs.append('Non-permanent operation')
if self.desired_state in self.enabled_values:
if not is_enabled:
if self.module.check_mode:
self.module.exit_json(changed=True)
self.action_handler(
self.set_enabled_immediate,
self.action_args
)
self.changed = True
if self.changed and self.enabled_msg:
self.msgs.append(self.enabled_msg)
elif self.desired_state in self.disabled_values:
if is_enabled:
if self.module.check_mode:
self.module.exit_json(changed=True)
self.action_handler(
self.set_disabled_immediate,
self.action_args
)
self.changed = True
if self.changed and self.disabled_msg:
self.msgs.append(self.disabled_msg)
return (self.changed, self.msgs)
@staticmethod
def sanity_check(module):
"""
Perform sanity checking, version checks, etc
:module: AnsibleModule instance
"""
if FW_VERSION and fw_offline:
# Pre-run version checking
if LooseVersion(FW_VERSION) < LooseVersion("0.3.9"):
module.fail_json(msg='unsupported version of firewalld, offline operations require >= 0.3.9 - found: {0}'.format(FW_VERSION))
elif FW_VERSION and not fw_offline:
# Pre-run version checking
if LooseVersion(FW_VERSION) < LooseVersion("0.2.11"):
module.fail_json(msg='unsupported version of firewalld, requires >= 0.2.11 - found: {0}'.format(FW_VERSION))
# Check for firewalld running
try:
if fw.connected is False:
module.fail_json(msg='firewalld service must be running, or try with offline=true')
        except AttributeError:
            module.fail_json(msg="firewalld connection can't be established, "
                                 "installed version (%s) likely too old. Requires firewalld >= 0.2.11" % FW_VERSION)
        if import_failure:
            module.fail_json(
                msg='Python Module not found: firewalld and its python module are required for this module, '
                    'version 0.2.11 or newer required (0.3.9 or newer for offline operations)'
            )

@@ -1,78 +0,0 @@
# -*- coding: utf-8 -*-
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Gregory Shulov <gregory.shulov@gmail.com>,2016
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
HAS_INFINISDK = True
try:
from infinisdk import InfiniBox, core
except ImportError:
HAS_INFINISDK = False
from functools import wraps
from os import environ
from os import path
def api_wrapper(func):
""" Catch API Errors Decorator"""
@wraps(func)
def __wrapper(*args, **kwargs):
module = args[0]
try:
return func(*args, **kwargs)
except core.exceptions.APICommandException as e:
module.fail_json(msg=e.message)
except core.exceptions.SystemNotFoundException as e:
module.fail_json(msg=e.message)
except Exception:
raise
return __wrapper
@api_wrapper
def get_system(module):
"""Return System Object or Fail"""
box = module.params['system']
user = module.params.get('user', None)
password = module.params.get('password', None)
if user and password:
system = InfiniBox(box, auth=(user, password))
elif environ.get('INFINIBOX_USER') and environ.get('INFINIBOX_PASSWORD'):
system = InfiniBox(box, auth=(environ.get('INFINIBOX_USER'), environ.get('INFINIBOX_PASSWORD')))
elif path.isfile(path.expanduser('~') + '/.infinidat/infinisdk.ini'):
system = InfiniBox(box)
else:
module.fail_json(msg="You must set INFINIBOX_USER and INFINIBOX_PASSWORD environment variables or set username/password module arguments")
try:
system.login()
except Exception:
module.fail_json(msg="Infinibox authentication failed. Check your credentials")
return system
def infinibox_argument_spec():
"""Return standard base dictionary used for the argument_spec argument in AnsibleModule"""
return dict(
system=dict(required=True),
user=dict(),
password=dict(no_log=True),
)
def infinibox_required_together():
"""Return the default list used for the required_together argument to AnsibleModule"""
return [['user', 'password']]

@@ -1,110 +0,0 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Jonathan Mainguy <jon@soh.re>, 2015
# Most of this was originally added by Sven Schliesing @muffl0n in the mysql_user.py module
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible.module_utils.six.moves import configparser
try:
import pymysql as mysql_driver
_mysql_cursor_param = 'cursor'
except ImportError:
try:
import MySQLdb as mysql_driver
import MySQLdb.cursors
_mysql_cursor_param = 'cursorclass'
except ImportError:
mysql_driver = None
mysql_driver_fail_msg = 'The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required.'
def parse_from_mysql_config_file(cnf):
cp = configparser.ConfigParser()
cp.read(cnf)
return cp
def mysql_connect(module, login_user=None, login_password=None, config_file='', ssl_cert=None,
ssl_key=None, ssl_ca=None, db=None, cursor_class=None,
connect_timeout=30, autocommit=False, config_overrides_defaults=False):
config = {}
if config_file and os.path.exists(config_file):
config['read_default_file'] = config_file
cp = parse_from_mysql_config_file(config_file)
        # Override some common defaults with values from the config file if needed
if cp and cp.has_section('client') and config_overrides_defaults:
try:
module.params['login_host'] = cp.get('client', 'host', fallback=module.params['login_host'])
module.params['login_port'] = cp.getint('client', 'port', fallback=module.params['login_port'])
            except Exception as e:
                if "got an unexpected keyword argument 'fallback'" in str(e):
                    module.fail_json(msg='To use config_overrides_defaults, '
                                         'Python 3.5+ is needed as the default interpreter on a target host')
if ssl_ca is not None or ssl_key is not None or ssl_cert is not None:
config['ssl'] = {}
if module.params['login_unix_socket']:
config['unix_socket'] = module.params['login_unix_socket']
else:
config['host'] = module.params['login_host']
config['port'] = module.params['login_port']
# If login_user or login_password are given, they should override the
# config file
if login_user is not None:
config['user'] = login_user
if login_password is not None:
config['passwd'] = login_password
if ssl_cert is not None:
config['ssl']['cert'] = ssl_cert
if ssl_key is not None:
config['ssl']['key'] = ssl_key
if ssl_ca is not None:
config['ssl']['ca'] = ssl_ca
if db is not None:
config['db'] = db
if connect_timeout is not None:
config['connect_timeout'] = connect_timeout
if _mysql_cursor_param == 'cursor':
# In case of PyMySQL driver:
db_connection = mysql_driver.connect(autocommit=autocommit, **config)
else:
# In case of MySQLdb driver
db_connection = mysql_driver.connect(**config)
if autocommit:
db_connection.autocommit(True)
if cursor_class == 'DictCursor':
return db_connection.cursor(**{_mysql_cursor_param: mysql_driver.cursors.DictCursor}), db_connection
else:
return db_connection.cursor(), db_connection
def mysql_common_argument_spec():
return dict(
login_user=dict(type='str', default=None),
login_password=dict(type='str', no_log=True),
login_host=dict(type='str', default='localhost'),
login_port=dict(type='int', default=3306),
login_unix_socket=dict(type='str'),
config_file=dict(type='path', default='~/.my.cnf'),
connect_timeout=dict(type='int', default=30),
client_cert=dict(type='path', aliases=['ssl_cert']),
client_key=dict(type='path', aliases=['ssl_key']),
ca_cert=dict(type='path', aliases=['ssl_ca']),
)
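`parse_from_mysql_config_file` reads a `~/.my.cnf`-style file with configparser, and `mysql_connect` then pulls host/port from its `[client]` section using the `fallback` keyword (Python 3 only, hence the error message above). A minimal sketch — the file contents and hostname are made up for illustration:

```python
import configparser

# A made-up ~/.my.cnf-style config; the module reads its [client] section
# and lets it override the login_host/login_port defaults.
cnf_text = """
[client]
host = db.example.internal
port = 3307
"""

cp = configparser.ConfigParser()
cp.read_string(cnf_text)

# fallback= supplies the default when the key is absent (Python 3 configparser)
host = cp.get('client', 'host', fallback='localhost')
port = cp.getint('client', 'port', fallback=3306)
print(host, port)  # -> db.example.internal 3307
```

With an empty or missing `[client]` section the fallbacks apply, matching the module defaults of `localhost:3306`.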

@@ -6,6 +6,7 @@ __metaclass__ = type
import json
from ansible.module_utils.urls import open_url
from ansible.module_utils._text import to_native
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves import http_client
from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError
@@ -47,7 +48,7 @@ class RedfishUtils(object):
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
data = json.loads(resp.read())
data = json.loads(to_native(resp.read()))
headers = dict((k.lower(), v) for (k, v) in resp.info().items())
except HTTPError as e:
msg = self._get_extended_message(e)
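The hunk above wraps `resp.read()` in `to_native()` because `read()` returns bytes, and `json.loads` only accepts bytes on Python 3.6+. A standalone illustration of the same decode-then-parse step — the payload is made up:

```python
import json

# What resp.read() would return: a bytes payload (contents are made up)
raw = b'{"Status": "OK", "Members@odata.count": 2}'

# Decode bytes to text before parsing; json.loads(bytes) only works on
# Python >= 3.6, so decoding explicitly (what to_native does in Ansible)
# keeps the code portable across interpreter versions.
data = json.loads(raw.decode('utf-8'))
print(data["Status"])  # -> OK
```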

@@ -278,7 +278,7 @@ EXAMPLES = '''
tasks:
- name: Launch ECS instance in VPC network
ali_instance:
community.general.ali_instance:
alicloud_access_key: '{{ alicloud_access_key }}'
alicloud_secret_key: '{{ alicloud_secret_key }}'
alicloud_region: '{{ alicloud_region }}'
@@ -296,7 +296,7 @@ EXAMPLES = '''
password: '{{ password }}'
- name: With count and count_tag to create a number of instances
ali_instance:
community.general.ali_instance:
alicloud_access_key: '{{ alicloud_access_key }}'
alicloud_secret_key: '{{ alicloud_secret_key }}'
alicloud_region: '{{ alicloud_region }}'
@@ -318,7 +318,7 @@ EXAMPLES = '''
password: '{{ password }}'
- name: Start instance
ali_instance:
community.general.ali_instance:
alicloud_access_key: '{{ alicloud_access_key }}'
alicloud_secret_key: '{{ alicloud_secret_key }}'
alicloud_region: '{{ alicloud_region }}'

@@ -82,23 +82,23 @@ EXAMPLES = '''
# Fetch instances details according to setting different filters
- name: Find all instances in the specified region
ali_instance_info:
community.general.ali_instance_info:
register: all_instances
- name: Find all instances based on the specified ids
ali_instance_info:
community.general.ali_instance_info:
instance_ids:
- "i-35b333d9"
- "i-ddav43kd"
register: instances_by_ids
- name: Find all instances based on the specified name_prefix
ali_instance_info:
community.general.ali_instance_info:
name_prefix: "ecs_instance_"
register: instances_by_name_prefix
- name: Find instances based on tags
ali_instance_info:
community.general.ali_instance_info:
tags:
Test: "add"
'''

@@ -8,13 +8,13 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: atomic_container
short_description: Manage the containers on the atomic host platform
description:
- Manage the containers on the atomic host platform
- Allows to manage the lifecycle of a container on the atomic host platform
- Manage the containers on the atomic host platform.
- Allows to manage the lifecycle of a container on the atomic host platform.
author: "Giuseppe Scrivano (@giuseppe)"
notes:
- Host should support C(atomic) command
@@ -24,41 +24,50 @@ requirements:
options:
backend:
description:
- Define the backend to use for the container
- Define the backend to use for the container.
required: True
choices: ["docker", "ostree"]
type: str
name:
description:
- Name of the container
- Name of the container.
required: True
type: str
image:
description:
- The image to use to install the container
- The image to use to install the container.
required: True
type: str
rootfs:
description:
- Define the rootfs of the image
- Define the rootfs of the image.
type: str
state:
description:
- State of the container
- State of the container.
required: True
choices: ["latest", "present", "absent", "rollback"]
choices: ["absent", "latest", "present", "rollback"]
default: "latest"
type: str
mode:
description:
- Define if it is an user or a system container
- Define if it is an user or a system container.
required: True
choices: ["user", "system"]
type: str
values:
description:
- Values for the installation of the container. This option is permitted only with mode 'user' or 'system'.
The values specified here will be used at installation time as --set arguments for atomic install.
- Values for the installation of the container.
- This option is permitted only with mode 'user' or 'system'.
- The values specified here will be used at installation time as --set arguments for atomic install.
type: list
elements: str
'''
EXAMPLES = '''
EXAMPLES = r'''
- name: Install the etcd system container
atomic_container:
community.general.atomic_container:
name: etcd
image: rhel/etcd
backend: ostree
@@ -68,7 +77,7 @@ EXAMPLES = '''
- ETCD_NAME=etcd.server
- name: Uninstall the etcd system container
atomic_container:
community.general.atomic_container:
name: etcd
image: rhel/etcd
backend: ostree
@@ -76,7 +85,7 @@ EXAMPLES = '''
mode: system
'''
RETURN = '''
RETURN = r'''
msg:
description: The command standard output
returned: always
@@ -174,12 +183,12 @@ def main():
module = AnsibleModule(
argument_spec=dict(
mode=dict(default=None, choices=['user', 'system']),
name=dict(default=None, required=True),
image=dict(default=None, required=True),
name=dict(required=True),
image=dict(required=True),
rootfs=dict(default=None),
state=dict(default='latest', choices=['present', 'absent', 'latest', 'rollback']),
backend=dict(default=None, required=True, choices=['docker', 'ostree']),
values=dict(type='list', default=[]),
backend=dict(required=True, choices=['docker', 'ostree']),
values=dict(type='list', default=[], elements='str'),
),
)

@@ -7,7 +7,7 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: atomic_host
short_description: Manage the atomic host platform
@@ -24,22 +24,24 @@ requirements:
options:
revision:
description:
- The version number of the atomic host to be deployed. Providing C(latest) will upgrade to the latest available version.
default: latest
- The version number of the atomic host to be deployed.
- Providing C(latest) will upgrade to the latest available version.
default: 'latest'
aliases: [ version ]
type: str
'''
EXAMPLES = '''
EXAMPLES = r'''
- name: Upgrade the atomic host platform to the latest version (atomic host upgrade)
atomic_host:
community.general.atomic_host:
revision: latest
- name: Deploy a specific revision as the atomic host (atomic host deploy 23.130)
atomic_host:
community.general.atomic_host:
revision: 23.130
'''
RETURN = '''
RETURN = r'''
msg:
description: The command standard output
returned: always

@@ -7,7 +7,7 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: atomic_image
short_description: Manage the container images on the atomic host platform
@@ -25,17 +25,20 @@ options:
backend:
description:
- Define the backend where the image is pulled.
choices: [ docker, ostree ]
choices: [ 'docker', 'ostree' ]
type: str
name:
description:
- Name of the container image.
required: True
type: str
state:
description:
- The state of the container image.
- The state C(latest) will ensure container image is upgraded to the latest version and forcefully restart container, if running.
choices: [ absent, latest, present ]
default: latest
choices: [ 'absent', 'latest', 'present' ]
default: 'latest'
type: str
started:
description:
- Start or Stop the container.
@@ -43,20 +46,20 @@ options:
default: 'yes'
'''
EXAMPLES = '''
EXAMPLES = r'''
- name: Execute the run command on rsyslog container image (atomic run rhel7/rsyslog)
atomic_image:
community.general.atomic_image:
name: rhel7/rsyslog
state: latest
- name: Pull busybox to the OSTree backend
atomic_image:
community.general.atomic_image:
name: busybox
state: latest
backend: ostree
'''
RETURN = '''
RETURN = r'''
msg:
description: The command standard output
returned: always

@@ -58,14 +58,14 @@ EXAMPLES = '''
connection: local
tasks:
- name: Create an Anti Affinity Policy
clc_aa_policy:
community.general.clc_aa_policy:
name: Hammer Time
location: UK3
state: present
register: policy
- name: debug
debug:
- name: Debug
ansible.builtin.debug:
var: policy
---
@@ -75,14 +75,14 @@ EXAMPLES = '''
connection: local
tasks:
- name: Delete an Anti Affinity Policy
clc_aa_policy:
community.general.clc_aa_policy:
name: Hammer Time
location: UK3
state: absent
register: policy
- name: Debug
debug:
ansible.builtin.debug:
var: policy
'''

@@ -74,7 +74,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Create an Alert Policy for disk above 80% for 5 minutes
clc_alert_policy:
community.general.clc_alert_policy:
alias: wfad
name: 'alert for disk > 80%'
alert_recipients:
@@ -87,7 +87,7 @@ EXAMPLES = '''
register: policy
- name: Debug
debug: var=policy
ansible.builtin.debug: var=policy
---
- name: Delete Alert Policy Example
@@ -96,14 +96,14 @@ EXAMPLES = '''
connection: local
tasks:
- name: Delete an Alert Policy
clc_alert_policy:
community.general.clc_alert_policy:
alias: wfad
name: 'alert for disk > 80%'
state: absent
register: policy
- name: Debug
debug: var=policy
ansible.builtin.debug: var=policy
'''
RETURN = '''

@@ -59,7 +59,7 @@ EXAMPLES = '''
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Deploy package
clc_blueprint_package:
community.general.clc_blueprint_package:
server_ids:
- UC1TEST-SERVER1
- UC1TEST-SERVER2

@@ -70,14 +70,14 @@ EXAMPLES = '''
connection: local
tasks:
- name: Create / Verify a Server Group at CenturyLink Cloud
clc_group:
community.general.clc_group:
name: My Cool Server Group
parent: Default Group
state: present
register: clc
- name: Debug
debug:
ansible.builtin.debug:
var: clc
# Delete a Server Group
@@ -89,14 +89,14 @@ EXAMPLES = '''
connection: local
tasks:
- name: Delete / Verify Absent a Server Group at CenturyLink Cloud
clc_group:
community.general.clc_group:
name: My Cool Server Group
parent: Default Group
state: absent
register: clc
- name: Debug
debug:
ansible.builtin.debug:
var: clc
'''

View File

@@ -79,7 +79,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Actually Create things
clc_loadbalancer:
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
@@ -95,7 +95,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Actually Create things
clc_loadbalancer:
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
@@ -111,7 +111,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Actually Create things
clc_loadbalancer:
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
@@ -127,7 +127,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Actually Delete things
clc_loadbalancer:
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
@@ -143,7 +143,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Actually Delete things
clc_loadbalancer:
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST

@@ -70,7 +70,7 @@ EXAMPLES = '''
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Set the cpu count to 4 on a server
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
@@ -78,7 +78,7 @@ EXAMPLES = '''
state: present
- name: Set the memory to 8GB on a server
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
@@ -86,7 +86,7 @@ EXAMPLES = '''
state: present
- name: Set the anti affinity policy on a server
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
@@ -94,7 +94,7 @@ EXAMPLES = '''
state: present
- name: Remove the anti affinity policy on a server
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
@@ -102,7 +102,7 @@ EXAMPLES = '''
state: absent
- name: Add the alert policy on a server
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
@@ -110,7 +110,7 @@ EXAMPLES = '''
state: present
- name: Remove the alert policy on a server
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
@@ -118,7 +118,7 @@ EXAMPLES = '''
state: absent
- name: Set the memory to 16GB and cpu to 8 cores on a list of servers
clc_modify_server:
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02

@@ -62,7 +62,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Create Public IP For Servers
clc_publicip:
community.general.clc_publicip:
protocol: TCP
ports:
- 80
@@ -73,7 +73,7 @@ EXAMPLES = '''
register: clc
- name: Debug
debug:
ansible.builtin.debug:
var: clc
- name: Delete Public IP from Server
@@ -82,7 +82,7 @@ EXAMPLES = '''
connection: local
tasks:
- name: Create Public IP For Servers
clc_publicip:
community.general.clc_publicip:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
@@ -90,7 +90,7 @@ EXAMPLES = '''
register: clc
- name: Debug
debug:
ansible.builtin.debug:
var: clc
'''

@@ -175,7 +175,7 @@ EXAMPLES = '''
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Provision a single Ubuntu Server
clc_server:
community.general.clc_server:
name: test
template: ubuntu-14-64
count: 1
@@ -183,7 +183,7 @@ EXAMPLES = '''
state: present
- name: Ensure 'Default Group' has exactly 5 servers
clc_server:
community.general.clc_server:
name: test
template: ubuntu-14-64
exact_count: 5
@@ -191,19 +191,19 @@ EXAMPLES = '''
group: Default Group
- name: Stop a Server
clc_server:
community.general.clc_server:
server_ids:
- UC1ACCT-TEST01
state: stopped
- name: Start a Server
clc_server:
community.general.clc_server:
server_ids:
- UC1ACCT-TEST01
state: started
- name: Delete a Server
clc_server:
community.general.clc_server:
server_ids:
- UC1ACCT-TEST01
state: absent

@@ -55,7 +55,7 @@ EXAMPLES = '''
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Create server snapshot
clc_server_snapshot:
community.general.clc_server_snapshot:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
@@ -64,7 +64,7 @@ EXAMPLES = '''
state: present
- name: Restore server snapshot
clc_server_snapshot:
community.general.clc_server_snapshot:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
@@ -72,7 +72,7 @@ EXAMPLES = '''
state: restore
- name: Delete server snapshot
clc_server_snapshot:
community.general.clc_server_snapshot:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02

@@ -1,475 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean
short_description: Create/delete a droplet/SSH_key in DigitalOcean
deprecated:
removed_in: 2.0.0 # was Ansible 2.12
why: Updated module to remove external dependency with increased functionality.
alternative: Use M(community.general.digital_ocean_droplet) instead.
description:
- Create/delete a droplet in DigitalOcean and optionally wait for it to be 'running', or deploy an SSH key.
author: "Vincent Viallet (@zbal)"
options:
command:
description:
- Which target you want to operate on.
default: droplet
choices: ['droplet', 'ssh']
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'active', 'absent', 'deleted']
api_token:
description:
- DigitalOcean api token.
id:
description:
- Numeric, the droplet id you want to operate on.
aliases: ['droplet_id']
name:
description:
- String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key.
unique_name:
description:
- Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host
per name. Useful for idempotence.
type: bool
default: 'no'
size_id:
description:
- This is the slug of the size you would like the droplet created with.
image_id:
description:
- This is the slug of the image you would like the droplet created with.
region_id:
description:
- This is the slug of the region you would like your server to be created in.
ssh_key_ids:
description:
- Optional, array of SSH key (numeric) ID that you would like to be added to the server.
virtio:
description:
- "Bool, turn on virtio driver in droplet for improved network and storage I/O."
type: bool
default: 'yes'
private_networking:
description:
- "Bool, add an additional, private network interface to droplet for inter-droplet communication."
type: bool
default: 'no'
backups_enabled:
description:
- Optional, Boolean, enables backups for your droplet.
type: bool
default: 'no'
user_data:
description:
- opaque blob of data which is made available to the droplet
ipv6:
description:
- Optional, Boolean, enable IPv6 for your droplet.
type: bool
default: 'no'
wait:
description:
- Wait for the droplet to be in state 'running' before returning. If wait is "no" an ip_address may not be returned.
type: bool
default: 'yes'
wait_timeout:
description:
- How long before wait gives up, in seconds.
default: 300
ssh_pub_key:
description:
- The public SSH key you want to add to your account.
notes:
- Two environment variables can be used, DO_API_KEY and DO_API_TOKEN. They both refer to the v2 token.
- As of Ansible 1.9.5 and 2.0, Version 2 of the DigitalOcean API is used, this removes C(client_id) and C(api_key) options in favor of C(api_token).
- If you are running Ansible 1.9.4 or earlier you might not be able to use the included version of this module as the API version used has been retired.
Upgrade Ansible or, if unable to, try downloading the latest version of this module from github and putting it into a 'library' directory.
requirements:
- "python >= 2.6"
- dopy
'''
EXAMPLES = '''
# Ensure a SSH key is present
# If a key matches this name, will return the ssh key id and changed = False
# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True
- name: Ensure a SSH key is present
digital_ocean:
state: present
command: ssh
name: my_ssh_key
ssh_pub_key: 'ssh-rsa AAAA...'
api_token: XXX
# Will return the droplet details including the droplet id (used for idempotence)
- name: Create a new Droplet
digital_ocean:
state: present
command: droplet
name: mydroplet
api_token: XXX
size_id: 2gb
region_id: ams2
image_id: fedora-19-x64
wait_timeout: 500
register: my_droplet
- debug:
msg: "ID is {{ my_droplet.droplet.id }}"
- debug:
msg: "IP is {{ my_droplet.droplet.ip_address }}"
# Ensure a droplet is present
# If droplet id already exist, will return the droplet details and changed = False
# If no droplet matches the id, a new droplet will be created and the droplet details (including the new id) are returned, changed = True.
- name: Ensure a droplet is present
digital_ocean:
state: present
command: droplet
id: 123
name: mydroplet
api_token: XXX
size_id: 2gb
region_id: ams2
image_id: fedora-19-x64
wait_timeout: 500
# Create a droplet with ssh key
# The ssh key id can be passed as argument at the creation of a droplet (see ssh_key_ids).
# Several keys can be added to ssh_key_ids as id1,id2,id3
# The keys are used to connect as root to the droplet.
- name: Create a droplet with ssh key
digital_ocean:
state: present
ssh_key_ids: 123,456
name: mydroplet
api_token: XXX
size_id: 2gb
region_id: ams2
image_id: fedora-19-x64
'''
import os
import time
import traceback
from distutils.version import LooseVersion
try:
# Imported as a dependency for dopy
import ansible.module_utils.six
HAS_SIX = True
except ImportError:
HAS_SIX = False
HAS_DOPY = False
try:
import dopy
from dopy.manager import DoError, DoManager
if LooseVersion(dopy.__version__) >= LooseVersion('0.3.2'):
HAS_DOPY = True
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule, env_fallback
class TimeoutError(Exception):
def __init__(self, msg, id_):
super(TimeoutError, self).__init__(msg)
self.id = id_
class JsonfyMixIn(object):
def to_json(self):
return self.__dict__
class Droplet(JsonfyMixIn):
manager = None
def __init__(self, droplet_json):
self.status = 'new'
self.__dict__.update(droplet_json)
def is_powered_on(self):
return self.status == 'active'
def update_attr(self, attrs=None):
if attrs:
for k, v in attrs.items():
setattr(self, k, v)
networks = attrs.get('networks', {})
for network in networks.get('v6', []):
if network['type'] == 'public':
setattr(self, 'public_ipv6_address', network['ip_address'])
else:
setattr(self, 'private_ipv6_address', network['ip_address'])
else:
json = self.manager.show_droplet(self.id)
if json['ip_address']:
self.update_attr(json)
def power_on(self):
if self.status != 'off':
            raise AssertionError('Can only power on a droplet that is powered off.')
json = self.manager.power_on_droplet(self.id)
self.update_attr(json)
def ensure_powered_on(self, wait=True, wait_timeout=300):
if self.is_powered_on():
return
if self.status == 'off': # powered off
self.power_on()
if wait:
end_time = time.time() + wait_timeout
while time.time() < end_time:
time.sleep(min(20, end_time - time.time()))
self.update_attr()
if self.is_powered_on():
                    if not self.ip_address:
                        raise TimeoutError('No IP address found.', self.id)
                    return
            raise TimeoutError('Timed out waiting for the droplet to become active.', self.id)
def destroy(self):
return self.manager.destroy_droplet(self.id, scrub_data=True)
@classmethod
def setup(cls, api_token):
cls.manager = DoManager(None, api_token, api_version=2)
@classmethod
def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False, backups_enabled=False, user_data=None,
ipv6=False):
private_networking_lower = str(private_networking).lower()
backups_enabled_lower = str(backups_enabled).lower()
ipv6_lower = str(ipv6).lower()
json = cls.manager.new_droplet(name, size_id, image_id, region_id,
ssh_key_ids=ssh_key_ids, virtio=virtio, private_networking=private_networking_lower,
backups_enabled=backups_enabled_lower, user_data=user_data, ipv6=ipv6_lower)
droplet = cls(json)
return droplet
@classmethod
def find(cls, id=None, name=None):
if not id and not name:
return False
droplets = cls.list_all()
        # Check first by id. DigitalOcean guarantees that droplet ids are unique.
for droplet in droplets:
if droplet.id == id:
return droplet
# Failing that, check by hostname.
for droplet in droplets:
if droplet.name == name:
return droplet
return False
@classmethod
def list_all(cls):
json = cls.manager.all_active_droplets()
return list(map(cls, json))
class SSH(JsonfyMixIn):
manager = None
def __init__(self, ssh_key_json):
self.__dict__.update(ssh_key_json)
update_attr = __init__
def destroy(self):
self.manager.destroy_ssh_key(self.id)
return True
@classmethod
def setup(cls, api_token):
cls.manager = DoManager(None, api_token, api_version=2)
@classmethod
def find(cls, name):
if not name:
return False
keys = cls.list_all()
for key in keys:
if key.name == name:
return key
return False
@classmethod
def list_all(cls):
json = cls.manager.all_ssh_keys()
return list(map(cls, json))
@classmethod
def add(cls, name, key_pub):
json = cls.manager.new_ssh_key(name, key_pub)
return cls(json)
def core(module):
def getkeyordie(k):
v = module.params[k]
if v is None:
module.fail_json(msg='Unable to load %s' % k)
return v
api_token = module.params['api_token']
changed = True
command = module.params['command']
state = module.params['state']
if command == 'droplet':
Droplet.setup(api_token)
if state in ('active', 'present'):
# First, try to find a droplet by id.
droplet = Droplet.find(id=module.params['id'])
# If we couldn't find the droplet and the user is allowing unique
# hostnames, then check to see if a droplet with the specified
# hostname already exists.
if not droplet and module.params['unique_name']:
droplet = Droplet.find(name=getkeyordie('name'))
# If both of those attempts failed, then create a new droplet.
if not droplet:
droplet = Droplet.add(
name=getkeyordie('name'),
size_id=getkeyordie('size_id'),
image_id=getkeyordie('image_id'),
region_id=getkeyordie('region_id'),
ssh_key_ids=module.params['ssh_key_ids'],
virtio=module.params['virtio'],
private_networking=module.params['private_networking'],
backups_enabled=module.params['backups_enabled'],
user_data=module.params.get('user_data'),
ipv6=module.params['ipv6'],
)
if droplet.is_powered_on():
changed = False
droplet.ensure_powered_on(
wait=getkeyordie('wait'),
wait_timeout=getkeyordie('wait_timeout')
)
module.exit_json(changed=changed, droplet=droplet.to_json())
elif state in ('absent', 'deleted'):
# First, try to find a droplet by id.
droplet = Droplet.find(module.params['id'])
# If we couldn't find the droplet and the user is allowing unique
# hostnames, then check to see if a droplet with the specified
# hostname already exists.
if not droplet and module.params['unique_name']:
droplet = Droplet.find(name=getkeyordie('name'))
if not droplet:
module.exit_json(changed=False, msg='The droplet is not found.')
droplet.destroy()
module.exit_json(changed=True)
elif command == 'ssh':
SSH.setup(api_token)
name = getkeyordie('name')
if state in ('active', 'present'):
key = SSH.find(name)
if key:
module.exit_json(changed=False, ssh_key=key.to_json())
key = SSH.add(name, getkeyordie('ssh_pub_key'))
module.exit_json(changed=True, ssh_key=key.to_json())
elif state in ('absent', 'deleted'):
key = SSH.find(name)
if not key:
module.exit_json(changed=False, msg='SSH key with the name of %s is not found.' % name)
key.destroy()
module.exit_json(changed=True)
def main():
module = AnsibleModule(
argument_spec=dict(
command=dict(choices=['droplet', 'ssh'], default='droplet'),
state=dict(choices=['active', 'present', 'absent', 'deleted'], default='present'),
api_token=dict(
aliases=['API_TOKEN'],
no_log=True,
fallback=(env_fallback, ['DO_API_TOKEN', 'DO_API_KEY'])
),
name=dict(type='str'),
size_id=dict(),
image_id=dict(),
region_id=dict(),
ssh_key_ids=dict(type='list'),
virtio=dict(type='bool', default=True),
private_networking=dict(type='bool', default=False),
backups_enabled=dict(type='bool', default=False),
id=dict(aliases=['droplet_id'], type='int'),
unique_name=dict(type='bool', default=False),
user_data=dict(default=None),
ipv6=dict(type='bool', default=False),
wait=dict(type='bool', default=True),
wait_timeout=dict(default=300, type='int'),
ssh_pub_key=dict(type='str'),
),
required_together=(
['size_id', 'image_id', 'region_id'],
),
mutually_exclusive=(
['size_id', 'ssh_pub_key'],
['image_id', 'ssh_pub_key'],
['region_id', 'ssh_pub_key'],
),
required_one_of=(
['id', 'name'],
),
)
if not HAS_DOPY and not HAS_SIX:
module.fail_json(msg='dopy >= 0.3.2 is required for this module. dopy requires six but six is not installed. '
'Make sure both dopy and six are installed.')
if not HAS_DOPY:
module.fail_json(msg='dopy >= 0.3.2 required for this module')
try:
core(module)
except TimeoutError as e:
module.fail_json(msg=str(e), id=e.id)
except (DoError, Exception) as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
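The find-by-id-then-name lookup in `Droplet.find` above is what makes the droplet commands idempotent: an existing droplet is matched by its unique id first, and only then by hostname. A minimal standalone sketch of that lookup order (the dict-based droplet records and the `find_droplet` helper are hypothetical stand-ins, not part of the module):

```python
# Sketch of the find-by-id-then-name lookup used by Droplet.find above.
# The droplet records here are hypothetical stand-ins for API results.

def find_droplet(droplets, id=None, name=None):
    if not id and not name:
        return None
    # DigitalOcean guarantees ids are unique, so match on id first.
    for droplet in droplets:
        if droplet['id'] == id:
            return droplet
    # Failing that, fall back to the (not necessarily unique) hostname.
    for droplet in droplets:
        if droplet['name'] == name:
            return droplet
    return None

droplets = [
    {'id': 123, 'name': 'web-1'},
    {'id': 456, 'name': 'web-2'},
]
```

Matching on id before name mirrors the module's assumption that ids are unique while hostnames may collide, so an id match always wins even when both parameters are given.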


@@ -1 +0,0 @@
digital_ocean_account_info.py


@@ -1,81 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_account_info
short_description: Gather information about DigitalOcean User account
description:
- This module can be used to gather information about User account.
- This module was called C(digital_ocean_account_facts) before Ansible 2.9. The usage did not change.
author: "Abhijeet Kasurde (@Akasurde)"
requirements:
- "python >= 2.6"
extends_documentation_fragment:
- community.general.digital_ocean.documentation
'''
EXAMPLES = '''
- name: Gather information about user account
digital_ocean_account_info:
oauth_token: "{{ oauth_token }}"
'''
RETURN = '''
data:
description: DigitalOcean account information
returned: success
type: dict
sample: {
"droplet_limit": 10,
"email": "testuser1@gmail.com",
"email_verified": true,
"floating_ip_limit": 3,
"status": "active",
"status_message": "",
"uuid": "aaaaaaaaaaaaaa"
}
'''
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
rest = DigitalOceanHelper(module)
response = rest.get("account")
if response.status_code != 200:
module.fail_json(msg="Failed to fetch 'account' information due to error : %s" % response.json['message'])
module.exit_json(changed=False, data=response.json["account"])
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
module = AnsibleModule(argument_spec=argument_spec)
if module._name in ('digital_ocean_account_facts', 'community.general.digital_ocean_account_facts'):
module.deprecate("The 'digital_ocean_account_facts' module has been renamed to 'digital_ocean_account_info'",
version='3.0.0', collection_name='community.general') # was Ansible 2.13
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e), exception=format_exc())
if __name__ == '__main__':
main()


@@ -1,283 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_block_storage
short_description: Create/destroy or attach/detach Block Storage volumes in DigitalOcean
description:
- Create/destroy Block Storage volume in DigitalOcean, or attach/detach Block Storage volume to a droplet.
options:
command:
description:
- Which operation do you want to perform.
choices: ['create', 'attach']
required: true
state:
description:
- Indicate desired state of the target.
choices: ['present', 'absent']
required: true
block_size:
description:
- The size of the Block Storage volume in gigabytes. Required when command=create and state=present. If snapshot_id is included, this will be ignored.
volume_name:
description:
- The name of the Block Storage volume.
required: true
description:
description:
- Description of the Block Storage volume.
region:
description:
- The slug of the region where your Block Storage volume should be located in. If snapshot_id is included, this will be ignored.
required: true
snapshot_id:
description:
- The snapshot id you would like the Block Storage volume created with. If included, region and block_size will be ignored and changed to null.
droplet_id:
description:
- The droplet id you want to operate on. Required when command=attach.
extends_documentation_fragment:
- community.general.digital_ocean.documentation
notes:
- Two environment variables can be used, DO_API_KEY and DO_API_TOKEN.
They both refer to the v2 token.
- If snapshot_id is used, region and block_size will be ignored and changed to null.
author:
- "Harnek Sidhu (@harneksidhu)"
'''
EXAMPLES = '''
- name: Create new Block Storage
digital_ocean_block_storage:
state: present
command: create
api_token: <TOKEN>
region: nyc1
block_size: 10
volume_name: nyc1-block-storage
- name: Delete Block Storage
digital_ocean_block_storage:
state: absent
command: create
api_token: <TOKEN>
region: nyc1
volume_name: nyc1-block-storage
- name: Attach Block Storage to a Droplet
digital_ocean_block_storage:
state: present
command: attach
api_token: <TOKEN>
volume_name: nyc1-block-storage
region: nyc1
droplet_id: <ID>
- name: Detach Block Storage from a Droplet
digital_ocean_block_storage:
state: absent
command: attach
api_token: <TOKEN>
volume_name: nyc1-block-storage
region: nyc1
droplet_id: <ID>
'''
RETURN = '''
id:
description: Unique identifier of a Block Storage volume returned during creation.
returned: changed
type: str
sample: "69b25d9a-494c-12e6-a5af-001f53126b44"
'''
import time
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
class DOBlockStorageException(Exception):
pass
class DOBlockStorage(object):
def __init__(self, module):
self.module = module
self.rest = DigitalOceanHelper(module)
def get_key_or_fail(self, k):
v = self.module.params[k]
if v is None:
self.module.fail_json(msg='Unable to load %s' % k)
return v
def poll_action_for_complete_status(self, action_id):
url = 'actions/{0}'.format(action_id)
end_time = time.time() + self.module.params['timeout']
while time.time() < end_time:
time.sleep(2)
response = self.rest.get(url)
status = response.status_code
json = response.json
if status == 200:
if json['action']['status'] == 'completed':
return True
elif json['action']['status'] == 'errored':
raise DOBlockStorageException(json['message'])
raise DOBlockStorageException('Unable to reach api.digitalocean.com')
def get_attached_droplet_ID(self, volume_name, region):
url = 'volumes?name={0}&region={1}'.format(volume_name, region)
response = self.rest.get(url)
status = response.status_code
json = response.json
if status == 200:
volumes = json['volumes']
if len(volumes) > 0:
droplet_ids = volumes[0]['droplet_ids']
if len(droplet_ids) > 0:
return droplet_ids[0]
return None
else:
raise DOBlockStorageException(json['message'])
def attach_detach_block_storage(self, method, volume_name, region, droplet_id):
data = {
'type': method,
'volume_name': volume_name,
'region': region,
'droplet_id': droplet_id
}
response = self.rest.post('volumes/actions', data=data)
status = response.status_code
json = response.json
if status == 202:
return self.poll_action_for_complete_status(json['action']['id'])
elif status == 200:
return True
elif status == 422:
return False
else:
raise DOBlockStorageException(json['message'])
def create_block_storage(self):
volume_name = self.get_key_or_fail('volume_name')
snapshot_id = self.module.params['snapshot_id']
if snapshot_id:
self.module.params['block_size'] = None
self.module.params['region'] = None
block_size = None
region = None
else:
block_size = self.get_key_or_fail('block_size')
region = self.get_key_or_fail('region')
description = self.module.params['description']
data = {
'size_gigabytes': block_size,
'name': volume_name,
'description': description,
'region': region,
'snapshot_id': snapshot_id,
}
response = self.rest.post("volumes", data=data)
status = response.status_code
json = response.json
if status == 201:
self.module.exit_json(changed=True, id=json['volume']['id'])
elif status == 409 and json['id'] == 'conflict':
self.module.exit_json(changed=False)
else:
raise DOBlockStorageException(json['message'])
def delete_block_storage(self):
volume_name = self.get_key_or_fail('volume_name')
region = self.get_key_or_fail('region')
url = 'volumes?name={0}&region={1}'.format(volume_name, region)
attached_droplet_id = self.get_attached_droplet_ID(volume_name, region)
if attached_droplet_id is not None:
self.attach_detach_block_storage('detach', volume_name, region, attached_droplet_id)
response = self.rest.delete(url)
status = response.status_code
json = response.json
if status == 204:
self.module.exit_json(changed=True)
elif status == 404:
self.module.exit_json(changed=False)
else:
raise DOBlockStorageException(json['message'])
def attach_block_storage(self):
volume_name = self.get_key_or_fail('volume_name')
region = self.get_key_or_fail('region')
droplet_id = self.get_key_or_fail('droplet_id')
attached_droplet_id = self.get_attached_droplet_ID(volume_name, region)
if attached_droplet_id is not None:
if attached_droplet_id == droplet_id:
self.module.exit_json(changed=False)
else:
self.attach_detach_block_storage('detach', volume_name, region, attached_droplet_id)
changed_status = self.attach_detach_block_storage('attach', volume_name, region, droplet_id)
self.module.exit_json(changed=changed_status)
def detach_block_storage(self):
volume_name = self.get_key_or_fail('volume_name')
region = self.get_key_or_fail('region')
droplet_id = self.get_key_or_fail('droplet_id')
changed_status = self.attach_detach_block_storage('detach', volume_name, region, droplet_id)
self.module.exit_json(changed=changed_status)
def handle_request(module):
block_storage = DOBlockStorage(module)
command = module.params['command']
state = module.params['state']
if command == 'create':
if state == 'present':
block_storage.create_block_storage()
elif state == 'absent':
block_storage.delete_block_storage()
elif command == 'attach':
if state == 'present':
block_storage.attach_block_storage()
elif state == 'absent':
block_storage.detach_block_storage()
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
argument_spec.update(
state=dict(choices=['present', 'absent'], required=True),
command=dict(choices=['create', 'attach'], required=True),
block_size=dict(type='int', required=False),
volume_name=dict(type='str', required=True),
description=dict(type='str'),
region=dict(type='str', required=False),
snapshot_id=dict(type='str', required=False),
droplet_id=dict(type='int')
)
module = AnsibleModule(argument_spec=argument_spec)
try:
handle_request(module)
except DOBlockStorageException as e:
        module.fail_json(msg=str(e), exception=traceback.format_exc())
    except KeyError as e:
        module.fail_json(msg='Unable to load %s' % str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
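The `poll_action_for_complete_status` method above drives attach/detach to completion by polling the action endpoint until the action completes, errors, or the deadline passes. A sketch of that deadline loop, with a hypothetical `fetch_status` callable standing in for the `actions/<id>` GET and a much shorter sleep interval than the module's two seconds:

```python
import time

# Sketch of the deadline-based polling loop used by
# poll_action_for_complete_status above. `fetch_status` is a hypothetical
# callable standing in for the REST GET on actions/<id>; the real module
# sleeps 2 seconds between polls, shortened here for the sketch.

def poll_until_complete(fetch_status, timeout, interval=0.01):
    end_time = time.time() + timeout
    while time.time() < end_time:
        time.sleep(interval)
        status = fetch_status()
        if status == 'completed':
            return True
        if status == 'errored':
            raise RuntimeError('action errored')
    # Deadline passed without reaching a terminal state.
    raise RuntimeError('timed out waiting for action')

# Simulate an action that completes on the third poll.
statuses = iter(['in-progress', 'in-progress', 'completed'])
result = poll_until_complete(lambda: next(statuses), timeout=1)
```

Raising on both `errored` and timeout matches the module's behavior of converting either outcome into a `DOBlockStorageException` that `main()` turns into `fail_json`.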


@@ -1,169 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2017, Abhijeet Kasurde <akasurde@redhat.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_certificate
short_description: Manage certificates in DigitalOcean
description:
  - Create, retrieve and remove certificates in DigitalOcean.
author: "Abhijeet Kasurde (@Akasurde)"
options:
name:
description:
- The name of the certificate.
required: true
private_key:
description:
- A PEM-formatted private key content of SSL Certificate.
leaf_certificate:
description:
- A PEM-formatted public SSL Certificate.
certificate_chain:
description:
- The full PEM-formatted trust chain between the certificate authority's certificate and your domain's SSL certificate.
state:
description:
- Whether the certificate should be present or absent.
default: present
choices: ['present', 'absent']
extends_documentation_fragment:
- community.general.digital_ocean.documentation
notes:
  - Three environment variables can be used, DO_API_KEY, DO_OAUTH_TOKEN and DO_API_TOKEN.
    They all refer to the v2 token.
'''
EXAMPLES = '''
- name: Create a certificate
digital_ocean_certificate:
name: production
state: present
private_key: "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkM8OI7pRpgyj1I\n-----END PRIVATE KEY-----"
leaf_certificate: "-----BEGIN CERTIFICATE-----\nMIIFDmg2Iaw==\n-----END CERTIFICATE-----"
oauth_token: b7d03a6947b217efb6f3ec3bd365652
- name: Create a certificate using file lookup plugin
digital_ocean_certificate:
name: production
state: present
private_key: "{{ lookup('file', 'test.key') }}"
leaf_certificate: "{{ lookup('file', 'test.cert') }}"
oauth_token: "{{ oauth_token }}"
- name: Create a certificate with trust chain
digital_ocean_certificate:
name: production
state: present
private_key: "{{ lookup('file', 'test.key') }}"
leaf_certificate: "{{ lookup('file', 'test.cert') }}"
certificate_chain: "{{ lookup('file', 'chain.cert') }}"
oauth_token: "{{ oauth_token }}"
- name: Remove a certificate
digital_ocean_certificate:
name: production
state: absent
oauth_token: "{{ oauth_token }}"
'''
RETURN = ''' # '''
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
state = module.params['state']
name = module.params['name']
rest = DigitalOceanHelper(module)
results = dict(changed=False)
response = rest.get('certificates')
status_code = response.status_code
resp_json = response.json
if status_code != 200:
module.fail_json(msg="Failed to retrieve certificates for DigitalOcean")
if state == 'present':
for cert in resp_json['certificates']:
if cert['name'] == name:
module.fail_json(msg="Certificate name %s already exists" % name)
# Certificate does not exist, let us create it
cert_data = dict(name=name,
private_key=module.params['private_key'],
leaf_certificate=module.params['leaf_certificate'])
if module.params['certificate_chain'] is not None:
cert_data.update(certificate_chain=module.params['certificate_chain'])
response = rest.post("certificates", data=cert_data)
status_code = response.status_code
if status_code == 500:
module.fail_json(msg="Failed to upload certificates as the certificates are malformed.")
resp_json = response.json
if status_code == 201:
results.update(changed=True, response=resp_json)
elif status_code == 422:
results.update(changed=False, response=resp_json)
elif state == 'absent':
cert_id_del = None
for cert in resp_json['certificates']:
if cert['name'] == name:
cert_id_del = cert['id']
if cert_id_del is not None:
url = "certificates/{0}".format(cert_id_del)
response = rest.delete(url)
if response.status_code == 204:
results.update(changed=True)
else:
results.update(changed=False)
else:
module.fail_json(msg="Failed to find certificate %s" % name)
module.exit_json(**results)
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
argument_spec.update(
name=dict(type='str'),
leaf_certificate=dict(type='str'),
private_key=dict(type='str', no_log=True),
state=dict(choices=['present', 'absent'], default='present'),
certificate_chain=dict(type='str')
)
module = AnsibleModule(
argument_spec=argument_spec,
required_if=[('state', 'present', ['name', 'leaf_certificate', 'private_key']),
('state', 'absent', ['name'])
],
)
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e))
if __name__ == '__main__':
main()
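In `core()` above, `certificate_chain` is only added to the request body when the user supplied one, so the API never receives an explicit null chain. A small sketch of that conditional payload construction (`build_cert_payload` is a hypothetical helper, not a function from the module):

```python
# Sketch of the conditional payload built by core() above: the
# certificate_chain key is only included when the user supplied one.
# build_cert_payload is a hypothetical helper for illustration.

def build_cert_payload(name, private_key, leaf_certificate, certificate_chain=None):
    cert_data = dict(name=name,
                     private_key=private_key,
                     leaf_certificate=leaf_certificate)
    if certificate_chain is not None:
        cert_data.update(certificate_chain=certificate_chain)
    return cert_data

payload = build_cert_payload('production', 'KEY-PEM', 'CERT-PEM')
chained = build_cert_payload('production', 'KEY-PEM', 'CERT-PEM',
                             certificate_chain='CHAIN-PEM')
```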


@@ -1 +0,0 @@
digital_ocean_certificate_info.py


@@ -1,113 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_certificate_info
short_description: Gather information about DigitalOcean certificates
description:
- This module can be used to gather information about DigitalOcean provided certificates.
- This module was called C(digital_ocean_certificate_facts) before Ansible 2.9. The usage did not change.
author: "Abhijeet Kasurde (@Akasurde)"
options:
certificate_id:
description:
- Certificate ID that can be used to identify and reference a certificate.
required: false
requirements:
- "python >= 2.6"
extends_documentation_fragment:
- community.general.digital_ocean.documentation
'''
EXAMPLES = '''
- name: Gather information about all certificates
digital_ocean_certificate_info:
oauth_token: "{{ oauth_token }}"
- name: Gather information about certificate with given id
digital_ocean_certificate_info:
oauth_token: "{{ oauth_token }}"
certificate_id: "892071a0-bb95-49bc-8021-3afd67a210bf"
- name: Get the not_after date of a certificate
digital_ocean_certificate_info:
register: resp_out
- set_fact:
not_after_date: "{{ item.not_after }}"
loop: "{{ resp_out.data|json_query(name) }}"
vars:
name: "[?name=='web-cert-01']"
- debug: var=not_after_date
'''
RETURN = '''
data:
description: DigitalOcean certificate information
returned: success
type: list
sample: [
{
"id": "892071a0-bb95-49bc-8021-3afd67a210bf",
"name": "web-cert-01",
"not_after": "2017-02-22T00:23:00Z",
"sha1_fingerprint": "dfcc9f57d86bf58e321c2c6c31c7a971be244ac7",
"created_at": "2017-02-08T16:02:37Z"
},
]
'''
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
certificate_id = module.params.get('certificate_id', None)
rest = DigitalOceanHelper(module)
base_url = 'certificates?'
if certificate_id is not None:
response = rest.get("%s/%s" % (base_url, certificate_id))
status_code = response.status_code
if status_code != 200:
module.fail_json(msg="Failed to retrieve certificates for DigitalOcean")
resp_json = response.json
certificate = resp_json['certificate']
else:
certificate = rest.get_paginated_data(base_url=base_url, data_key_name='certificates')
module.exit_json(changed=False, data=certificate)
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
argument_spec.update(
certificate_id=dict(type='str', required=False),
)
module = AnsibleModule(argument_spec=argument_spec)
if module._name in ('digital_ocean_certificate_facts', 'community.general.digital_ocean_certificate_facts'):
module.deprecate("The 'digital_ocean_certificate_facts' module has been renamed to 'digital_ocean_certificate_info'",
version='3.0.0', collection_name='community.general') # was Ansible 2.13
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e), exception=format_exc())
if __name__ == '__main__':
main()
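When no `certificate_id` is given, the module above delegates to `get_paginated_data`, which walks the paginated listing and concatenates each page's `certificates` array. A hedged sketch of that accumulation loop, with a hypothetical `fetch_page` callable in place of the REST helper:

```python
# Hedged sketch of a paginated fetch in the spirit of get_paginated_data:
# request successive pages and accumulate the data key until a short page
# signals the end. `fetch_page` is a hypothetical stand-in for the API.

def collect_paginated(fetch_page, data_key, per_page=2):
    items, page = [], 1
    while True:
        chunk = fetch_page(page=page, per_page=per_page)[data_key]
        items.extend(chunk)
        if len(chunk) < per_page:
            # A short (or empty) page means there is nothing further.
            return items
        page += 1

certs = [{'id': i} for i in range(5)]

def fake_fetch(page, per_page):
    start = (page - 1) * per_page
    return {'certificates': certs[start:start + per_page]}

result = collect_paginated(fake_fetch, 'certificates')
```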


@@ -1,214 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_domain
short_description: Create/delete a DNS domain in DigitalOcean
description:
- Create/delete a DNS domain in DigitalOcean.
author: "Michael Gregson (@mgregson)"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'absent']
id:
description:
- Numeric, the droplet id you want to operate on.
aliases: ['droplet_id']
name:
description:
      - String, this is the name of the domain. It must be formatted according to hostname rules.
ip:
description:
- An 'A' record for '@' ($ORIGIN) will be created with the value 'ip'. 'ip' is an IP version 4 address.
extends_documentation_fragment:
- community.general.digital_ocean.documentation
notes:
  - The environment variable DO_OAUTH_TOKEN can be used for the oauth_token.
- As of Ansible 1.9.5 and 2.0, Version 2 of the DigitalOcean API is used, this removes C(client_id) and C(api_key) options in favor of C(oauth_token).
- If you are running Ansible 1.9.4 or earlier you might not be able to use the included version of this module as the API version used has been retired.
requirements:
- "python >= 2.6"
'''
EXAMPLES = '''
- name: Create a domain
digital_ocean_domain:
state: present
name: my.digitalocean.domain
ip: 127.0.0.1
# Create a droplet and corresponding domain
- name: Create a droplet
digital_ocean:
state: present
name: test_droplet
size_id: 1gb
region_id: sgp1
image_id: ubuntu-14-04-x64
register: test_droplet
- name: Create a corresponding domain
digital_ocean_domain:
state: present
name: "{{ test_droplet.droplet.name }}.my.domain"
ip: "{{ test_droplet.droplet.ip_address }}"
'''
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
class DoManager(DigitalOceanHelper, object):
def __init__(self, module):
super(DoManager, self).__init__(module)
self.domain_name = module.params.get('name', None)
self.domain_ip = module.params.get('ip', None)
self.domain_id = module.params.get('id', None)
@staticmethod
def jsonify(response):
return response.status_code, response.json
def all_domains(self):
resp = self.get('domains/')
return resp
def find(self):
if self.domain_name is None and self.domain_id is None:
return False
domains = self.all_domains()
status, json = self.jsonify(domains)
for domain in json['domains']:
if domain['name'] == self.domain_name:
return True
return False
def add(self):
params = {'name': self.domain_name, 'ip_address': self.domain_ip}
resp = self.post('domains/', data=params)
status = resp.status_code
json = resp.json
if status == 201:
return json['domain']
else:
return json
def all_domain_records(self):
resp = self.get('domains/%s/records/' % self.domain_name)
return resp.json
def domain_record(self):
resp = self.get('domains/%s' % self.domain_name)
status, json = self.jsonify(resp)
return json
def destroy_domain(self):
resp = self.delete('domains/%s' % self.domain_name)
status, json = self.jsonify(resp)
if status == 204:
return True
else:
return json
def edit_domain_record(self, record):
params = {'name': '@',
'data': self.module.params.get('ip')}
resp = self.put('domains/%s/records/%s' % (self.domain_name, record['id']), data=params)
status, json = self.jsonify(resp)
return json['domain_record']
def create_domain_record(self):
params = {'name': '@',
'type': 'A',
'data': self.module.params.get('ip')}
resp = self.post('domains/%s/records' % (self.domain_name), data=params)
status, json = self.jsonify(resp)
return json['domain_record']
def core(module):
do_manager = DoManager(module)
state = module.params.get('state')
domain = do_manager.find()
if state == 'present':
if not domain:
domain = do_manager.add()
if 'message' in domain:
module.fail_json(changed=False, msg=domain['message'])
else:
module.exit_json(changed=True, domain=domain)
else:
records = do_manager.all_domain_records()
at_record = None
for record in records['domain_records']:
if record['name'] == "@" and record['type'] == 'A':
at_record = record
if not at_record:
do_manager.create_domain_record()
module.exit_json(changed=True, domain=do_manager.find())
elif not at_record['data'] == module.params.get('ip'):
do_manager.edit_domain_record(at_record)
module.exit_json(changed=True, domain=do_manager.find())
else:
module.exit_json(changed=False, domain=do_manager.domain_record())
elif state == 'absent':
if not domain:
module.exit_json(changed=False, msg="Domain not found")
else:
            delete_event = do_manager.destroy_domain()
            if delete_event is not True:
                module.fail_json(changed=False, msg=delete_event['message'])
            else:
                module.exit_json(changed=True, event=None)
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
argument_spec.update(
state=dict(choices=['present', 'absent'], default='present'),
name=dict(type='str'),
id=dict(aliases=['droplet_id'], type='int'),
ip=dict(type='str')
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=(
['id', 'name'],
),
)
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
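The `present` branch of `core()` above reconciles the apex `@` A record three ways: create it if missing, edit it if its value differs from `ip`, otherwise report no change. That decision table can be sketched on its own (the `reconcile_at_record` helper and its `(action, changed)` return shape are hypothetical, for illustration only):

```python
# Sketch of the "@"/A record reconcile decision in core() above.
# reconcile_at_record is a hypothetical helper returning (action, changed).

def reconcile_at_record(records, desired_ip):
    at_record = None
    for record in records:
        if record['name'] == '@' and record['type'] == 'A':
            at_record = record
    if at_record is None:
        return ('create', True)   # no apex A record yet
    if at_record['data'] != desired_ip:
        return ('edit', True)     # record exists but points elsewhere
    return ('noop', False)        # already matches the desired ip

records = [{'name': '@', 'type': 'A', 'data': '127.0.0.1'}]
```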


@@ -1 +0,0 @@
digital_ocean_domain_info.py


@@ -1,138 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_domain_info
short_description: Gather information about DigitalOcean Domains
description:
- This module can be used to gather information about DigitalOcean provided Domains.
- This module was called C(digital_ocean_domain_facts) before Ansible 2.9. The usage did not change.
author: "Abhijeet Kasurde (@Akasurde)"
options:
domain_name:
description:
- Name of the domain to gather information for.
required: false
requirements:
- "python >= 2.6"
extends_documentation_fragment:
- community.general.digital_ocean.documentation
'''
EXAMPLES = '''
- name: Gather information about all domains
digital_ocean_domain_info:
oauth_token: "{{ oauth_token }}"
- name: Gather information about domain with given name
digital_ocean_domain_info:
oauth_token: "{{ oauth_token }}"
domain_name: "example.com"
- name: Get ttl from domain
digital_ocean_domain_info:
register: resp_out
- set_fact:
domain_ttl: "{{ item.ttl }}"
loop: "{{ resp_out.data|json_query(name) }}"
vars:
name: "[?name=='example.com']"
- debug: var=domain_ttl
'''
RETURN = '''
data:
description: DigitalOcean Domain information
returned: success
type: list
sample: [
{
"domain_records": [
{
"data": "ns1.digitalocean.com",
"flags": null,
"id": 37826823,
"name": "@",
"port": null,
"priority": null,
"tag": null,
"ttl": 1800,
"type": "NS",
"weight": null
},
],
"name": "myexample123.com",
"ttl": 1800,
"zone_file": "myexample123.com. IN SOA ns1.digitalocean.com. hostmaster.myexample123.com. 1520702984 10800 3600 604800 1800\n",
},
]
'''
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
domain_name = module.params.get('domain_name', None)
rest = DigitalOceanHelper(module)
domain_results = []
if domain_name is not None:
response = rest.get("domains/%s" % domain_name)
status_code = response.status_code
if status_code != 200:
module.fail_json(msg="Failed to retrieve domain for DigitalOcean")
resp_json = response.json
domains = [resp_json['domain']]
else:
domains = rest.get_paginated_data(base_url="domains?", data_key_name='domains')
for temp_domain in domains:
temp_domain_dict = {
"name": temp_domain['name'],
"ttl": temp_domain['ttl'],
"zone_file": temp_domain['zone_file'],
"domain_records": list(),
}
base_url = "domains/%s/records?" % temp_domain['name']
temp_domain_dict["domain_records"] = rest.get_paginated_data(base_url=base_url, data_key_name='domain_records')
domain_results.append(temp_domain_dict)
module.exit_json(changed=False, data=domain_results)
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
argument_spec.update(
domain_name=dict(type='str', required=False),
)
module = AnsibleModule(argument_spec=argument_spec)
if module._name in ('digital_ocean_domain_facts', 'community.general.digital_ocean_domain_facts'):
module.deprecate("The 'digital_ocean_domain_facts' module has been renamed to 'digital_ocean_domain_info'",
version='3.0.0', collection_name='community.general') # was Ansible 2.13
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e), exception=format_exc())
if __name__ == '__main__':
main()


@@ -1,351 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_droplet
short_description: Create and delete a DigitalOcean droplet
description:
- Create and delete a droplet in DigitalOcean and optionally wait for it to be active.
author: "Gurchet Rai (@gurch101)"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'absent']
id:
description:
- Numeric, the droplet id you want to operate on.
aliases: ['droplet_id']
name:
description:
- String, this is the name of the droplet - must be formatted by hostname rules.
unique_name:
description:
- require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host
per name. Useful for idempotence.
default: False
type: bool
size:
description:
- This is the slug of the size you would like the droplet created with.
aliases: ['size_id']
image:
description:
- This is the slug of the image you would like the droplet created with.
aliases: ['image_id']
region:
description:
- This is the slug of the region you would like your server to be created in.
aliases: ['region_id']
ssh_keys:
description:
- Array of SSH key fingerprints that you would like to be added to the server.
required: False
private_networking:
description:
- add an additional, private network interface to droplet for inter-droplet communication.
default: False
type: bool
vpc_uuid:
description:
- A string specifying the UUID of the VPC to which the Droplet will be assigned. If excluded, Droplet will be
assigned to the account's default VPC for the region.
type: str
version_added: 0.2.0
user_data:
description:
- opaque blob of data which is made available to the droplet
required: False
ipv6:
description:
- enable IPv6 for your droplet.
required: False
default: False
type: bool
wait:
description:
- Wait for the droplet to be active before returning. If wait is "no" an ip_address may not be returned.
required: False
default: True
type: bool
wait_timeout:
description:
- How long before wait gives up, in seconds, when creating a droplet.
default: 120
backups:
description:
- indicates whether automated backups should be enabled.
required: False
default: False
type: bool
monitoring:
description:
- indicates whether to install the DigitalOcean agent for monitoring.
required: False
default: False
type: bool
tags:
description:
- A list of tag names (strings) to apply to the Droplet after it is created. Tag names can be either existing or new tags.
required: False
volumes:
description:
- A list of the unique string identifiers of the Block Storage volumes to be attached to the Droplet.
required: False
oauth_token:
description:
- DigitalOcean OAuth token. Can be specified in C(DO_API_KEY), C(DO_API_TOKEN), or C(DO_OAUTH_TOKEN) environment variables
aliases: ['API_TOKEN']
required: True
requirements:
- "python >= 2.6"
'''
EXAMPLES = '''
- name: Create a new droplet
digital_ocean_droplet:
state: present
name: mydroplet
oauth_token: XXX
size: 2gb
region: sfo1
image: ubuntu-16-04-x64
wait_timeout: 500
ssh_keys: [ .... ]
register: my_droplet
- debug:
msg: "ID is {{ my_droplet.data.droplet.id }}, IP is {{ my_droplet.data.ip_address }}"
- name: Ensure a droplet is present
digital_ocean_droplet:
state: present
id: 123
name: mydroplet
oauth_token: XXX
size: 2gb
region: sfo1
image: ubuntu-16-04-x64
wait_timeout: 500
- name: Ensure a droplet is present with SSH keys installed
digital_ocean_droplet:
state: present
id: 123
name: mydroplet
oauth_token: XXX
size: 2gb
region: sfo1
ssh_keys: ['1534404', '1784768']
image: ubuntu-16-04-x64
wait_timeout: 500
'''
RETURN = '''
# Digital Ocean API info https://developers.digitalocean.com/documentation/v2/#droplets
data:
description: a DigitalOcean Droplet
returned: changed
type: dict
sample: {
"ip_address": "104.248.118.172",
"ipv6_address": "2604:a880:400:d1::90a:6001",
"private_ipv4_address": "10.136.122.141",
"droplet": {
"id": 3164494,
"name": "example.com",
"memory": 512,
"vcpus": 1,
"disk": 20,
"locked": true,
"status": "new",
"kernel": {
"id": 2233,
"name": "Ubuntu 14.04 x64 vmlinuz-3.13.0-37-generic",
"version": "3.13.0-37-generic"
},
"created_at": "2014-11-14T16:36:31Z",
"features": ["virtio"],
"backup_ids": [],
"snapshot_ids": [],
"image": {},
"volume_ids": [],
"size": {},
"size_slug": "512mb",
"networks": {},
"region": {},
"tags": ["web"]
}
}
'''
import time
import json
from ansible.module_utils.basic import AnsibleModule, env_fallback
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
class DODroplet(object):
def __init__(self, module):
self.rest = DigitalOceanHelper(module)
self.module = module
self.wait = self.module.params.pop('wait', True)
self.wait_timeout = self.module.params.pop('wait_timeout', 120)
self.unique_name = self.module.params.pop('unique_name', False)
# pop the oauth token so we don't include it in the POST data
self.module.params.pop('oauth_token')
def get_by_id(self, droplet_id):
if not droplet_id:
return None
response = self.rest.get('droplets/{0}'.format(droplet_id))
json_data = response.json
if response.status_code == 200:
return json_data
return None
def get_by_name(self, droplet_name):
if not droplet_name:
return None
page = 1
while page is not None:
response = self.rest.get('droplets?page={0}'.format(page))
json_data = response.json
if response.status_code == 200:
for droplet in json_data['droplets']:
if droplet['name'] == droplet_name:
return {'droplet': droplet}
if 'links' in json_data and 'pages' in json_data['links'] and 'next' in json_data['links']['pages']:
page += 1
else:
page = None
return None
def get_addresses(self, data):
"""
Expose IP addresses as their own property allowing users extend to additional tasks
"""
_data = data
for k, v in data.items():
setattr(self, k, v)
networks = _data['droplet']['networks']
for network in networks.get('v4', []):
if network['type'] == 'public':
_data['ip_address'] = network['ip_address']
else:
_data['private_ipv4_address'] = network['ip_address']
for network in networks.get('v6', []):
if network['type'] == 'public':
_data['ipv6_address'] = network['ip_address']
else:
_data['private_ipv6_address'] = network['ip_address']
return _data
def get_droplet(self):
json_data = self.get_by_id(self.module.params['id'])
if not json_data and self.unique_name:
json_data = self.get_by_name(self.module.params['name'])
return json_data
def create(self):
json_data = self.get_droplet()
droplet_data = None
if json_data:
droplet_data = self.get_addresses(json_data)
self.module.exit_json(changed=False, data=droplet_data)
if self.module.check_mode:
self.module.exit_json(changed=True)
request_params = dict(self.module.params)
del request_params['id']
response = self.rest.post('droplets', data=request_params)
json_data = response.json
if response.status_code >= 400:
self.module.fail_json(changed=False, msg=json_data['message'])
if self.wait:
json_data = self.ensure_power_on(json_data['droplet']['id'])
droplet_data = self.get_addresses(json_data)
self.module.exit_json(changed=True, data=droplet_data)
def delete(self):
json_data = self.get_droplet()
if json_data:
if self.module.check_mode:
self.module.exit_json(changed=True)
response = self.rest.delete('droplets/{0}'.format(json_data['droplet']['id']))
json_data = response.json
if response.status_code == 204:
self.module.exit_json(changed=True, msg='Droplet deleted')
self.module.fail_json(changed=False, msg='Failed to delete droplet')
else:
self.module.exit_json(changed=False, msg='Droplet not found')
def ensure_power_on(self, droplet_id):
end_time = time.time() + self.wait_timeout
while time.time() < end_time:
response = self.rest.get('droplets/{0}'.format(droplet_id))
json_data = response.json
if json_data['droplet']['status'] == 'active':
return json_data
time.sleep(min(2, max(0, end_time - time.time())))
self.module.fail_json(msg='Wait for droplet powering on timeout')
def core(module):
state = module.params.pop('state')
droplet = DODroplet(module)
if state == 'present':
droplet.create()
elif state == 'absent':
droplet.delete()
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(choices=['present', 'absent'], default='present'),
oauth_token=dict(
aliases=['API_TOKEN'],
no_log=True,
fallback=(env_fallback, ['DO_API_TOKEN', 'DO_API_KEY', 'DO_OAUTH_TOKEN'])
),
name=dict(type='str'),
size=dict(aliases=['size_id']),
image=dict(aliases=['image_id']),
region=dict(aliases=['region_id']),
ssh_keys=dict(type='list'),
private_networking=dict(type='bool', default=False),
vpc_uuid=dict(type='str'),
backups=dict(type='bool', default=False),
monitoring=dict(type='bool', default=False),
id=dict(aliases=['droplet_id'], type='int'),
user_data=dict(default=None),
ipv6=dict(type='bool', default=False),
volumes=dict(type='list'),
tags=dict(type='list'),
wait=dict(type='bool', default=True),
wait_timeout=dict(default=120, type='int'),
unique_name=dict(type='bool', default=False),
),
required_one_of=(
['id', 'name'],
),
required_if=([
('state', 'present', ['name', 'size', 'image', 'region']),
]),
supports_check_mode=True,
)
core(module)
if __name__ == '__main__':
main()
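The `ensure_power_on` method above is an instance of a generic poll-until-true-with-deadline pattern. A minimal standalone sketch of that pattern (helper name and injectable `clock`/`sleep` parameters are illustrative additions, not part of the module):

```python
import time

def wait_until(predicate, timeout, interval=2, clock=time.monotonic, sleep=time.sleep):
    """Poll predicate() until it returns a truthy value or `timeout` seconds elapse.

    Returns the truthy result on success, or None on timeout.
    """
    end_time = clock() + timeout
    while clock() < end_time:
        result = predicate()
        if result:
            return result
        # Never sleep past the deadline, and never pass a negative value.
        sleep(min(interval, max(0, end_time - clock())))
    return None
```

Injecting `clock` and `sleep` keeps the helper testable without real delays; the module instead calls `time.time()` and `time.sleep()` directly inside its loop.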


@@ -1 +0,0 @@
digital_ocean_firewall_info.py


@@ -1,131 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Anthony Bond <ajbond2005@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_firewall_info
short_description: Gather information about DigitalOcean firewalls
description:
- This module can be used to gather information about DigitalOcean firewalls.
- This module was called C(digital_ocean_firewall_facts) before Ansible 2.9. The usage did not change.
author: "Anthony Bond (@BondAnthony)"
options:
name:
description:
- Firewall rule name that can be used to identify and reference a specific firewall rule.
required: false
requirements:
- "python >= 2.6"
extends_documentation_fragment:
- community.general.digital_ocean.documentation
'''
EXAMPLES = '''
- name: Gather information about all firewalls
digital_ocean_firewall_info:
oauth_token: "{{ oauth_token }}"
- name: Gather information about a specific firewall by name
digital_ocean_firewall_info:
oauth_token: "{{ oauth_token }}"
name: "firewall_name"
- name: Gather information from a firewall rule
digital_ocean_firewall_info:
name: SSH
register: resp_out
- set_fact:
firewall_id: "{{ resp_out.data.id }}"
- debug:
msg: "{{ firewall_id }}"
'''
RETURN = '''
data:
description: DigitalOcean firewall information
returned: success
type: list
sample: [
{
"id": "435tbg678-1db53-32b6-t543-28322569t252",
"name": "metrics",
"status": "succeeded",
"inbound_rules": [
{
"protocol": "tcp",
"ports": "9100",
"sources": {
"addresses": [
"1.1.1.1"
]
}
}
],
"outbound_rules": [],
"created_at": "2018-01-15T07:04:25Z",
"droplet_ids": [
87426985
],
"tags": [],
"pending_changes": []
},
]
'''
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
firewall_name = module.params.get('name', None)
rest = DigitalOceanHelper(module)
base_url = 'firewalls?'
response = rest.get("%s" % base_url)
status_code = response.status_code
if status_code != 200:
module.fail_json(msg="Failed to retrieve firewalls from Digital Ocean")
firewalls = rest.get_paginated_data(base_url=base_url, data_key_name='firewalls')
if firewall_name is not None:
rule = {}
for firewall in firewalls:
if firewall['name'] == firewall_name:
rule.update(firewall)
module.exit_json(changed=False, data=rule)
else:
module.exit_json(changed=False, data=firewalls)
def main():
argument_spec = DigitalOceanHelper.digital_ocean_argument_spec()
argument_spec.update(
name=dict(type='str', required=False),
)
module = AnsibleModule(argument_spec=argument_spec)
if module._name in ('digital_ocean_firewall_facts', 'community.general.digital_ocean_firewall_facts'):
module.deprecate("The 'digital_ocean_firewall_facts' module has been renamed to 'digital_ocean_firewall_info'",
version='3.0.0', collection_name='community.general') # was Ansible 2.13
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e), exception=format_exc())
if __name__ == '__main__':
main()


@@ -1,311 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, Patrick F. Marques <patrickfmarques@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_floating_ip
short_description: Manage DigitalOcean Floating IPs
description:
- Create/delete/assign a floating IP.
author: "Patrick Marques (@pmarques)"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'absent']
ip:
description:
- Public IP address of the Floating IP. Used to remove an IP.
region:
description:
- The region that the Floating IP is reserved to.
droplet_id:
description:
- The Droplet that the Floating IP has been assigned to.
oauth_token:
description:
- DigitalOcean OAuth token.
required: true
notes:
- Version 2 of DigitalOcean API is used.
requirements:
- "python >= 2.6"
'''
EXAMPLES = '''
- name: "Create a Floating IP in region lon1"
digital_ocean_floating_ip:
state: present
region: lon1
- name: "Create a Floating IP assigned to Droplet ID 123456"
digital_ocean_floating_ip:
state: present
droplet_id: 123456
- name: "Delete a Floating IP with ip 1.2.3.4"
digital_ocean_floating_ip:
state: absent
ip: "1.2.3.4"
'''
RETURN = '''
# Digital Ocean API info https://developers.digitalocean.com/documentation/v2/#floating-ips
data:
description: a DigitalOcean Floating IP resource
returned: success and no resource constraint
type: dict
sample: {
"action": {
"id": 68212728,
"status": "in-progress",
"type": "assign_ip",
"started_at": "2015-10-15T17:45:44Z",
"completed_at": null,
"resource_id": 758603823,
"resource_type": "floating_ip",
"region": {
"name": "New York 3",
"slug": "nyc3",
"sizes": [
"512mb",
"1gb",
"2gb",
"4gb",
"8gb",
"16gb",
"32gb",
"48gb",
"64gb"
],
"features": [
"private_networking",
"backups",
"ipv6",
"metadata"
],
"available": true
},
"region_slug": "nyc3"
}
}
'''
import json
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import env_fallback
from ansible.module_utils.urls import fetch_url
class Response(object):
def __init__(self, resp, info):
self.body = None
if resp:
self.body = resp.read()
self.info = info
@property
def json(self):
if not self.body:
if "body" in self.info:
return json.loads(self.info["body"])
return None
try:
return json.loads(self.body)
except ValueError:
return None
@property
def status_code(self):
return self.info["status"]
class Rest(object):
def __init__(self, module, headers):
self.module = module
self.headers = headers
self.baseurl = 'https://api.digitalocean.com/v2'
def _url_builder(self, path):
if path[0] == '/':
path = path[1:]
return '%s/%s' % (self.baseurl, path)
def send(self, method, path, data=None, headers=None):
url = self._url_builder(path)
data = self.module.jsonify(data)
timeout = self.module.params['timeout']
resp, info = fetch_url(self.module, url, data=data, headers=self.headers, method=method, timeout=timeout)
# Exceptions in fetch_url may result in a status of -1; this check ensures a clear error message is surfaced
if info['status'] == -1:
self.module.fail_json(msg=info['msg'])
return Response(resp, info)
def get(self, path, data=None, headers=None):
return self.send('GET', path, data, headers)
def put(self, path, data=None, headers=None):
return self.send('PUT', path, data, headers)
def post(self, path, data=None, headers=None):
return self.send('POST', path, data, headers)
def delete(self, path, data=None, headers=None):
return self.send('DELETE', path, data, headers)
def wait_action(module, rest, ip, action_id, timeout=10):
end_time = time.time() + timeout
while time.time() < end_time:
response = rest.get('floating_ips/{0}/actions/{1}'.format(ip, action_id))
status_code = response.status_code
status = response.json['action']['status']
# TODO: check status_code == 200?
if status == 'completed':
return True
elif status == 'errored':
module.fail_json(msg='Floating ip action error [ip: {0}: action: {1}]'.format(
ip, action_id), data=response.json)
module.fail_json(msg='Floating ip action timeout [ip: {0}: action: {1}]'.format(
ip, action_id), data=response.json)
def core(module):
api_token = module.params['oauth_token']
state = module.params['state']
ip = module.params['ip']
droplet_id = module.params['droplet_id']
rest = Rest(module, {'Authorization': 'Bearer {0}'.format(api_token),
'Content-type': 'application/json'})
if state == 'present':
if droplet_id is not None and module.params['ip'] is not None:
# Lets try to associate the ip to the specified droplet
associate_floating_ips(module, rest)
else:
create_floating_ips(module, rest)
elif state == 'absent':
response = rest.delete("floating_ips/{0}".format(ip))
status_code = response.status_code
json_data = response.json
if status_code == 204:
module.exit_json(changed=True)
elif status_code == 404:
module.exit_json(changed=False)
else:
module.exit_json(changed=False, data=json_data)
def get_floating_ip_details(module, rest):
ip = module.params['ip']
response = rest.get("floating_ips/{0}".format(ip))
status_code = response.status_code
json_data = response.json
if status_code == 200:
return json_data['floating_ip']
else:
module.fail_json(msg="Error assigning floating ip [{0}: {1}]".format(
status_code, json_data["message"]), region=module.params['region'])
def assign_floating_id_to_droplet(module, rest):
ip = module.params['ip']
payload = {
"type": "assign",
"droplet_id": module.params['droplet_id'],
}
response = rest.post("floating_ips/{0}/actions".format(ip), data=payload)
status_code = response.status_code
json_data = response.json
if status_code == 201:
wait_action(module, rest, ip, json_data['action']['id'])
module.exit_json(changed=True, data=json_data)
else:
module.fail_json(msg="Error creating floating ip [{0}: {1}]".format(
status_code, json_data["message"]), region=module.params['region'])
def associate_floating_ips(module, rest):
floating_ip = get_floating_ip_details(module, rest)
droplet = floating_ip['droplet']
# TODO: If already assigned to a droplet verify if is one of the specified as valid
if droplet is not None and str(droplet['id']) in [module.params['droplet_id']]:
module.exit_json(changed=False)
else:
assign_floating_id_to_droplet(module, rest)
def create_floating_ips(module, rest):
payload = {
}
if module.params['region'] is not None:
payload["region"] = module.params['region']
if module.params['droplet_id'] is not None:
payload["droplet_id"] = module.params['droplet_id']
response = rest.post("floating_ips", data=payload)
status_code = response.status_code
json_data = response.json
if status_code == 202:
module.exit_json(changed=True, data=json_data)
else:
module.fail_json(msg="Error creating floating ip [{0}: {1}]".format(
status_code, json_data["message"]), region=module.params['region'])
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(choices=['present', 'absent'], default='present'),
ip=dict(aliases=['id'], required=False),
region=dict(required=False),
droplet_id=dict(required=False),
oauth_token=dict(
no_log=True,
# Support environment variable for DigitalOcean OAuth Token
fallback=(env_fallback, ['DO_API_TOKEN', 'DO_API_KEY', 'DO_OAUTH_TOKEN']),
required=True,
),
validate_certs=dict(type='bool', default=True),
timeout=dict(type='int', default=30),
),
required_if=[
('state', 'absent', ['ip'])
],
mutually_exclusive=[
['region', 'droplet_id']
],
)
core(module)
if __name__ == '__main__':
main()
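The `Response.json` property above has a subtle fallback worth noting: on HTTP errors, `fetch_url` returns no response object, and the error body instead arrives in `info['body']`. A standalone sketch of that parsing order (the free-function form is illustrative; the module implements it as a property):

```python
import json

def parse_body(body, info):
    """Prefer the response body; fall back to info['body'] (set by fetch_url
    on HTTP errors); return None if neither parses as JSON."""
    if not body:
        if 'body' in info:
            return json.loads(info['body'])
        return None
    try:
        return json.loads(body)
    except ValueError:
        return None
```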


@@ -1 +0,0 @@
digital_ocean_floating_ip_info.py


@@ -1,119 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (C) 2017-18, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: digital_ocean_floating_ip_info
short_description: DigitalOcean Floating IPs information
description:
- This module can be used to fetch DigitalOcean Floating IPs information.
- This module was called C(digital_ocean_floating_ip_facts) before Ansible 2.9. The usage did not change.
author: "Patrick Marques (@pmarques)"
extends_documentation_fragment:
- community.general.digital_ocean.documentation
notes:
- Version 2 of DigitalOcean API is used.
requirements:
- "python >= 2.6"
'''
EXAMPLES = '''
- name: "Gather information about all Floating IPs"
digital_ocean_floating_ip_info:
register: result
- name: "List of current floating ips"
debug: var=result.floating_ips
'''
RETURN = '''
# Digital Ocean API info https://developers.digitalocean.com/documentation/v2/#floating-ips
floating_ips:
description: a DigitalOcean Floating IP resource
returned: success and no resource constraint
type: list
sample: [
{
"ip": "45.55.96.47",
"droplet": null,
"region": {
"name": "New York 3",
"slug": "nyc3",
"sizes": [
"512mb",
"1gb",
"2gb",
"4gb",
"8gb",
"16gb",
"32gb",
"48gb",
"64gb"
],
"features": [
"private_networking",
"backups",
"ipv6",
"metadata"
],
"available": true
},
"locked": false
}
]
'''
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.digital_ocean import DigitalOceanHelper
from ansible.module_utils._text import to_native
def core(module):
rest = DigitalOceanHelper(module)
page = 1
has_next = True
floating_ips = []
status_code = None
while has_next or status_code != 200:
response = rest.get("floating_ips?page={0}&per_page=20".format(page))
status_code = response.status_code
# stop if any error during pagination
if status_code != 200:
break
page += 1
floating_ips.extend(response.json["floating_ips"])
has_next = "pages" in response.json["links"] and "next" in response.json["links"]["pages"]
if status_code == 200:
module.exit_json(changed=False, floating_ips=floating_ips)
else:
module.fail_json(msg="Error fetching information [{0}: {1}]".format(
status_code, response.json["message"]))
def main():
module = AnsibleModule(
argument_spec=DigitalOceanHelper.digital_ocean_argument_spec()
)
if module._name in ('digital_ocean_floating_ip_facts', 'community.general.digital_ocean_floating_ip_facts'):
module.deprecate("The 'digital_ocean_floating_ip_facts' module has been renamed to 'digital_ocean_floating_ip_info'",
version='3.0.0', collection_name='community.general') # was Ansible 2.13
try:
core(module)
except Exception as e:
module.fail_json(msg=to_native(e))
if __name__ == '__main__':
main()
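The pagination loop in `core` above follows the DigitalOcean v2 convention: keep requesting pages while the response's `links.pages.next` field is present, and stop on any non-200 status. A minimal sketch of that loop with the HTTP call abstracted away (`fetch_all_pages` and `get_page` are hypothetical names for illustration):

```python
def fetch_all_pages(get_page):
    """Collect items across pages. get_page(page) returns (status_code, payload).

    Returns (status_code, items); items holds everything gathered before
    an error or the last page.
    """
    page = 1
    items = []
    while True:
        status, payload = get_page(page)
        if status != 200:
            # Stop on any error during pagination, as the module does.
            return status, items
        items.extend(payload['floating_ips'])
        links = payload.get('links', {})
        # A 'next' link under links.pages signals more pages remain.
        if 'pages' in links and 'next' in links['pages']:
            page += 1
        else:
            return 200, items
```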


@@ -1 +0,0 @@
digital_ocean_image_info.py
