Compare commits


32 Commits
2.2.0 ... 2.3.0

Author SHA1 Message Date
Felix Fontein
b44f6b8114 Release 2.3.0. 2021-03-23 12:21:35 +01:00
patchback[bot]
53a145ecb0 Install collections in CI directly with git to work around the Galaxy CloudFlare PITA. (#2082) (#2086)
(cherry picked from commit 7fe9dd7a60)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-03-23 07:42:21 +01:00
patchback[bot]
b22b44088f Temporarily disable copr integration tests due to failures with remote repository. (#2083) (#2085)
(cherry picked from commit 09351d9010)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-03-23 07:17:20 +01:00
patchback[bot]
e0a1aa2f46 Fixed documentation (#2062) (#2081)
(cherry picked from commit 88994ef2b7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-22 20:55:53 +01:00
patchback[bot]
53e7e48834 improve force_archive parameter documentation of archive module (#2052) (#2079)
* improve documentation for force_archive parameter

* add link to unarchive module

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit af441aecfc)

Co-authored-by: Triantafyllos <ttsak@hotmail.com>
2021-03-22 20:55:37 +01:00
Bill Dodd
62e3a2ed2f Add support for Redfish session create, delete, and authenticate (#2027) (#2053)
* Add support for Redfish session create, delete, and authenticate (#2027)

* Add support for Redfish session create and delete

* add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit efd441407f)

* fix inadvertent spaces around equals
2021-03-22 18:27:25 +01:00
Felix Fontein
ecede6ca99 Prepare 2.3.0 release. 2021-03-22 07:58:24 +01:00
patchback[bot]
e1ac1fa6db stacki_host - configured params to use fallback instead of default (#2072) (#2076)
* configured params to use fallback instead of default

* added changelog fragment

(cherry picked from commit 5fc56676c2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-21 15:43:12 +01:00
patchback[bot]
81cef0bd05 New Filter plugin from_csv (#2037) (#2074)
* Added from_csv filter and integration tests

* Cleaning up whitespace

* Adding changelog fragment

* Updated changelog fragment name

* Removed temp fragment

* Refactoring csv functions Part 1

* Syncing refactored csv modules/filters

* Adding unit tests for csv Module_Util

* Updating changelog fragment

* Correcting whitespace in unit test

* Improving changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/2037-add-from-csv-filter.yml

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 6529390901)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-03-21 13:56:32 +01:00
patchback[bot]
a2bb118e95 Add gandi_livedns module (#328) (#2070)
* Add gandi_livedns module

This module uses REST API to register, update and delete domain name
entries in Gandi DNS service (https://www.gandi.net/en/domain).

* Apply suggestions from code review

* Update plugins/module_utils/gandi_livedns_api.py

Co-authored-by: Gregory Thiemonge <greg@thiemonge.org>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 81f3ad45c9)

Co-authored-by: Gregory Thiemonge <44313235+gthiemonge@users.noreply.github.com>
2021-03-21 13:22:14 +01:00
patchback[bot]
bf9bcd9bb4 snmp_facts - added timeout and retries params to module (#2065) (#2073)
* added timeout and retries params to module

* added changelog fragment

* Update plugins/modules/net_tools/snmp_facts.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/net_tools/snmp_facts.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* removed default for retries per suggestion in PR

* Update plugins/modules/net_tools/snmp_facts.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit c147d2fb98)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-21 11:52:40 +01:00
patchback[bot]
9bfd61e117 New module: Add Pritunl VPN user module (net_tools/pritunl/) (#803) (#2071)
(cherry picked from commit 68fc48cd1f)

Co-authored-by: Florian Dambrine <Lowess@users.noreply.github.com>
2021-03-21 11:46:33 +01:00
patchback[bot]
ca81a5cf2f ipa_sudorule add support for setting runasextusers (#2031) (#2068)
* Add support for setting runasextusers

* fix formatting

* add changelog fragment

* Update plugins/modules/identity/ipa/ipa_sudorule.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/2031-ipa_sudorule_add_runasextusers.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: quasd <qquasd@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit ff9f98795e)

Co-authored-by: quasd <quasd@users.noreply.github.com>
2021-03-21 11:24:07 +01:00
patchback[bot]
853dd21eab archive - a first refactoring (#2061) (#2069)
* a first refactoring on archive

* added changelog fragment

* suggestion from PR

(cherry picked from commit 606eb0df15)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-21 11:23:55 +01:00
patchback[bot]
6f267d8f35 archive - created an integration test that archives broken links (#2063) (#2066)
* created an integration test that archives broken links

* sanity fix

(cherry picked from commit f5a9584ae6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-21 10:37:43 +01:00
patchback[bot]
1f975eff56 Fix nios modules to work with ansible-core 2.11 (#2057) (#2059)
* Fix nios modules to work with ansible-core 2.11.

* Adjust tests.

(cherry picked from commit 24f8be834a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-03-20 14:10:24 +01:00
patchback[bot]
0ca922248f Adding xmadsen and renxulei as Redfish maintainers (#2047) (#2056)
(cherry picked from commit a23fc67f1f)

Co-authored-by: Mike Raineri <mraineri@gmail.com>
2021-03-20 10:43:29 +01:00
patchback[bot]
ef7ade6a56 Adding purge parameter to proxmox for use with lxc delete requests (#2013) (#2050)
* added purge as optional module parameter

* Adding changelog fragment

* Adding version to documentation for purge

Co-authored-by: Felix Fontein <felix@fontein.de>

* Updating changelog

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 79fb3e9852)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-03-19 19:58:44 +01:00
patchback[bot]
d721283846 Fix IndexError in SetManagerNic (#2040) (#2049)
* fix IndexError in SetManagerNic

* add changelog fragment

(cherry picked from commit 0b2ebabd29)

Co-authored-by: Bill Dodd <billdodd@gmail.com>
2021-03-19 19:58:27 +01:00
patchback[bot]
af410f5572 update linode team (#2039) (#2043)
(cherry picked from commit 8225b745f3)

Co-authored-by: Charlie Kenney <Charlesc.kenney@gmail.com>
2021-03-19 07:43:52 +01:00
patchback[bot]
442dabbcc6 fix: scaleway inventory pagination (#2036) (#2042)
* fix: scaleway inventory pagination

* add changelog

* Update changelogs/fragments/2036-scaleway-inventory.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Antoine Barbare <abarbare@online.net>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit fe61be3e11)

Co-authored-by: abarbare <antoinebarbare@gmail.com>
2021-03-18 23:32:01 +01:00
patchback[bot]
bbb155409e Improvements and fixes to ModuleHelper, with (some) tests. (#2024) (#2034)
* Improvements and fixes to ModuleHelper, with (some) tests.

* added changelog fragment

* adjusted changelog frag - get_bin_path() handling is actually a bugfix

(cherry picked from commit 4fbef900e1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-17 14:15:48 +01:00
patchback[bot]
a83556af80 allow passing the --allow-root flag to kibana_plugin module (#2014) (#2022)
* kibana_plugin module parameter force is a boolean

* allow passing the --allow-root flag to kibana_plugin module

* add changelog fragment for kibana_plugin --allow-root

Co-authored-by: Amin Vakil <info@aminvakil.com>
Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Amin Vakil <info@aminvakil.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 3162ed6795)

Co-authored-by: dacodas <dacoda.strack@gmail.com>
2021-03-15 14:05:10 +01:00
patchback[bot]
13a5e5a1ba Adding tags as module parameter to proxmox_kvm (#2000) (#2023)
* Adding tags as module parameter

* Added changelog fragment

* Correcting typo in changelog fragment

* Correcting punctuation in docs

* Including version to tags parameter description

Co-authored-by: Felix Fontein <felix@fontein.de>

* Correct tag validation and parsing logic condition

Original test was for key and not value

Co-authored-by: Felix Fontein <felix@fontein.de>

* Improving usability with default null behavior

* Removing default case and related unnecessary complexity

* Display regex in tags description as code

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 0f61ae4841)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-03-15 14:05:02 +01:00
patchback[bot]
466bd89bd4 Tidy up sanity checks ignore lines modules (batch 8) (#2006) (#2019)
* fixed validation-modules for plugins/modules/cloud/smartos/smartos_image_info.py

* fixed validation-modules for plugins/modules/cloud/rackspace/rax_scaling_group.py

* fixed validation-modules for plugins/modules/cloud/rackspace/rax_cdb_user.py

* fixed validation-modules for plugins/modules/cloud/rackspace/rax.py

* Tidy up sanity checks ignore lines modules (batch 8)

* added changelog fragment

* rolled back removal of parameter from rax.py

(cherry picked from commit f8859af377)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-14 11:55:07 +01:00
patchback[bot]
bd4d5fe9db More false-positives (not flagged by sanity tests yet). (#2010) (#2016)
(cherry picked from commit 49d9a257ef)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-03-13 13:48:18 +01:00
patchback[bot]
cf889faf42 Remove password requirement when creating lxc containers (#1999) (#2011)
* Removed requirement for password

* Updated documentation for password

* Adding changelog fragment

* Update changelogs/fragments/1999-proxmox-fix-issue-1955.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 4676ca584b)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-03-12 20:35:11 +01:00
patchback[bot]
ea313503dd Mark non-secret leaking module options with no_log=False (#2001) (#2005)
* Mark non-secret leaking module options with no_log=False.

* Add changelog fragment.

(cherry picked from commit 1ea080762b)

Co-authored-by: Felix Fontein <felix@fontein.de>
2021-03-12 09:17:05 +01:00
patchback[bot]
57fa6526c4 Excluded qemu templates in pools (#1991) (#2003)
* Excluded qemu templates in pools

* Added changelog fragment

* Made check more robust

(cherry picked from commit 178209be27)

Co-authored-by: Jeffrey van Pelt <jeff@vanpelt.one>
2021-03-12 08:24:24 +01:00
patchback[bot]
ae4bee2627 jenkins_job - added validate_certs parameter, setting the PYTHONHTTPSVERIFY env var (#1977) (#1996)
* added validate_certs parameter, setting the PYTHONHTTPSVERIFY env var

* added changelog fragment

* Update plugins/modules/web_infrastructure/jenkins_job.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/web_infrastructure/jenkins_job.py

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 7452a53647)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2021-03-12 07:25:35 +01:00
patchback[bot]
87000ae491 Allow tags strings containing commas in proxmox inventory plug-in (#1949) (#1998)
* Included explicit parsing for proxmox guest tags and updated corresponding unit test with tags key

* Including changelog fragment for PR 1949

* Removed ellipsis from test

Proxmox only permits periods when surrounded by alphanumeric characters

* Corrected punctuation for changelog entry

Co-authored-by: Felix Fontein <felix@fontein.de>

* Allowing tags string to contain commas

* Incorporated new parsed tags fact with bugfix

* Correcting whitespace issues

* Update changelogs/fragments/1949-proxmox-inventory-tags.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/inventory/proxmox.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/1949-proxmox-inventory-tags.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit d0bb74a03b)

Co-authored-by: Ajpantuso <ajpantuso@gmail.com>
2021-03-12 07:25:18 +01:00
Felix Fontein
46e221cbc6 Next expected release is 2.3.0. 2021-03-08 13:23:12 +01:00
102 changed files with 3565 additions and 287 deletions

.github/BOTMETA.yml

@@ -1014,7 +1014,7 @@ macros:
  team_ipa: Akasurde Nosmoht fxfitz
  team_jboss: Wolfant jairojunior wbrefvem
  team_keycloak: eikef ndclt
- team_linode: InTheCloudDan decentral1se displague rmcintosh
+ team_linode: InTheCloudDan decentral1se displague rmcintosh Charliekenney23 LBGarber
  team_macos: Akasurde kyleabenson martinm82 danieljaouen indrajitr
  team_manageiq: abellotti cben gtanzillo yaacov zgalor dkorn evertmulder
  team_netapp: amit0701 carchi8py hulquest lmprice lonico ndswartz schmots1
@@ -1022,7 +1022,7 @@ macros:
  team_opennebula: ilicmilan meerkampdvv rsmontero xorel
  team_oracle: manojmeda mross22 nalsaber
  team_purestorage: bannaych dnix101 genegr lionmax opslounge raekins sdodsley sile16
- team_redfish: billdodd mraineri tomasg2012
+ team_redfish: billdodd mraineri tomasg2012 xmadsen renxulei
  team_rhn: FlossWare alikins barnabycourt vritant
  team_scaleway: QuentinBrosse abarbare jerome-quere kindermoumoute remyleone sieben
  team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l


@@ -6,6 +6,69 @@ Community General Release Notes
This changelog describes changes after version 1.0.0.

v2.3.0
======

Release Summary
---------------

Fixes compatibility issues with the latest ansible-core 2.11 beta and some more bugs, and contains several new features, modules and plugins.

Minor Changes
-------------

- archive - refactored some reused code out into a couple of functions (https://github.com/ansible-collections/community.general/pull/2061).
- csv module utils - new module_utils for shared functions between ``from_csv`` filter and ``read_csv`` module (https://github.com/ansible-collections/community.general/pull/2037).
- ipa_sudorule - add support for setting sudo runasuser (https://github.com/ansible-collections/community.general/pull/2031).
- jenkins_job - add a ``validate_certs`` parameter that allows disabling TLS/SSL certificate validation (https://github.com/ansible-collections/community.general/issues/255).
- kibana_plugin - add parameter for passing ``--allow-root`` flag to kibana and kibana-plugin commands (https://github.com/ansible-collections/community.general/pull/2014).
- proxmox - added ``purge`` module parameter for use when deleting LXC containers with HA options (https://github.com/ansible-collections/community.general/pull/2013).
- proxmox inventory plugin - added ``tags_parsed`` fact containing tags parsed as a list (https://github.com/ansible-collections/community.general/pull/1949).
- proxmox_kvm - added new module parameter ``tags`` for use with PVE 6+ (https://github.com/ansible-collections/community.general/pull/2000).
- rax - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2006).
- rax_cdb_user - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2006).
- rax_scaling_group - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2006).
- read_csv - refactored read_csv module to use shared csv functions from csv module_utils (https://github.com/ansible-collections/community.general/pull/2037).
- redfish_* modules, redfish_utils module utils - add support for Redfish session create, delete, and authenticate (https://github.com/ansible-collections/community.general/issues/1975).
- snmp_facts - added parameters ``timeout`` and ``retries`` to module (https://github.com/ansible-collections/community.general/issues/980).

Bugfixes
--------

- Mark various module options with ``no_log=False`` which have a name that potentially could leak secrets, but which do not (https://github.com/ansible-collections/community.general/pull/2001).
- module_helper module utils - actually ignore formatting of parameters with value ``None`` (https://github.com/ansible-collections/community.general/pull/2024).
- module_helper module utils - handling ``ModuleHelperException`` now properly calls ``fail_json()`` (https://github.com/ansible-collections/community.general/pull/2024).
- module_helper module utils - use the command name as-is in ``CmdMixin`` if it fails ``get_bin_path()``, allowing full path names to be passed (https://github.com/ansible-collections/community.general/pull/2024).
- nios* modules - fix modules to work with ansible-core 2.11 (https://github.com/ansible-collections/community.general/pull/2057).
- proxmox - removed the requirement that the root password is provided when the container state is ``present`` (https://github.com/ansible-collections/community.general/pull/1999).
- proxmox inventory - exclude qemu templates from inclusion in the inventory via pools (https://github.com/ansible-collections/community.general/issues/1986, https://github.com/ansible-collections/community.general/pull/1991).
- proxmox inventory plugin - allow the Proxmox tag string to contain commas when returned as a fact (https://github.com/ansible-collections/community.general/pull/1949).
- redfish_config module, redfish_utils module utils - fix IndexError in the ``SetManagerNic`` command (https://github.com/ansible-collections/community.general/issues/1692).
- scaleway inventory plugin - fix pagination in the scaleway inventory plugin (https://github.com/ansible-collections/community.general/pull/2036).
- stacki_host - replaced ``default`` to environment variables with ``fallback`` to them (https://github.com/ansible-collections/community.general/pull/2072).

New Plugins
-----------

Filter
~~~~~~

- from_csv - Converts CSV text input into a list of dicts

New Modules
-----------

Net Tools
~~~~~~~~~

- gandi_livedns - Manage Gandi LiveDNS records

pritunl
^^^^^^^

- pritunl_user - Manage Pritunl Users using the Pritunl API
- pritunl_user_info - List Pritunl Users using the Pritunl API

v2.2.0
======


@@ -1544,3 +1544,88 @@ releases:
        name: version_sort
        namespace: null
    release_date: '2021-03-08'
  2.3.0:
    changes:
      bugfixes:
      - Mark various module options with ``no_log=False`` which have a name that potentially
        could leak secrets, but which do not (https://github.com/ansible-collections/community.general/pull/2001).
      - module_helper module utils - actually ignoring formatting of parameters with
        value ``None`` (https://github.com/ansible-collections/community.general/pull/2024).
      - module_helper module utils - handling ``ModuleHelperException`` now properly
        calls ``fail_json()`` (https://github.com/ansible-collections/community.general/pull/2024).
      - module_helper module utils - use the command name as-is in ``CmdMixin`` if
        it fails ``get_bin_path()`` - allowing full path names to be passed (https://github.com/ansible-collections/community.general/pull/2024).
      - nios* modules - fix modules to work with ansible-core 2.11 (https://github.com/ansible-collections/community.general/pull/2057).
      - proxmox - removed requirement that root password is provided when container
        state is ``present`` (https://github.com/ansible-collections/community.general/pull/1999).
      - proxmox inventory - exclude qemu templates from inclusion to the inventory
        via pools (https://github.com/ansible-collections/community.general/issues/1986,
        https://github.com/ansible-collections/community.general/pull/1991).
      - proxmox inventory plugin - allowed Proxmox tag string to contain commas when
        returned as fact (https://github.com/ansible-collections/community.general/pull/1949).
      - redfish_config module, redfish_utils module utils - fix IndexError in ``SetManagerNic``
        command (https://github.com/ansible-collections/community.general/issues/1692).
      - scaleway inventory plugin - fix pagination on scaleway inventory plugin (https://github.com/ansible-collections/community.general/pull/2036).
      - stacki_host - replaced ``default`` to environment variables with ``fallback``
        to them (https://github.com/ansible-collections/community.general/pull/2072).
      minor_changes:
      - archive - refactored some reused code out into a couple of functions (https://github.com/ansible-collections/community.general/pull/2061).
      - csv module utils - new module_utils for shared functions between ``from_csv``
        filter and ``read_csv`` module (https://github.com/ansible-collections/community.general/pull/2037).
      - ipa_sudorule - add support for setting sudo runasuser (https://github.com/ansible-collections/community.general/pull/2031).
      - jenkins_job - add a ``validate_certs`` parameter that allows disabling TLS/SSL
        certificate validation (https://github.com/ansible-collections/community.general/issues/255).
      - kibana_plugin - add parameter for passing ``--allow-root`` flag to kibana
        and kibana-plugin commands (https://github.com/ansible-collections/community.general/pull/2014).
      - proxmox - added ``purge`` module parameter for use when deleting LXC containers
        with HA options (https://github.com/ansible-collections/community.general/pull/2013).
      - proxmox inventory plugin - added ``tags_parsed`` fact containing tags parsed
        as a list (https://github.com/ansible-collections/community.general/pull/1949).
      - proxmox_kvm - added new module parameter ``tags`` for use with PVE 6+ (https://github.com/ansible-collections/community.general/pull/2000).
      - rax - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2006).
      - rax_cdb_user - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2006).
      - rax_scaling_group - elements of list parameters are now validated (https://github.com/ansible-collections/community.general/pull/2006).
      - read_csv - refactored read_csv module to use shared csv functions from csv
        module_utils (https://github.com/ansible-collections/community.general/pull/2037).
      - redfish_* modules, redfish_utils module utils - add support for Redfish session
        create, delete, and authenticate (https://github.com/ansible-collections/community.general/issues/1975).
      - snmp_facts - added parameters ``timeout`` and ``retries`` to module (https://github.com/ansible-collections/community.general/issues/980).
      release_summary: Fixes compatibility issues with the latest ansible-core 2.11
        beta and some more bugs, and contains several new features, modules and plugins.
    fragments:
    - 1949-proxmox-inventory-tags.yml
    - 1977-jenkinsjob-validate-certs.yml
    - 1991-proxmox-inventory-fix-template-in-pool.yml
    - 1999-proxmox-fix-issue-1955.yml
    - 2.3.0.yml
    - 2000-proxmox_kvm-tag-support.yml
    - 2001-no_log-false.yml
    - 2006-valmod-batch8.yml
    - 2013-proxmox-purge-parameter.yml
    - 2014-allow-root-for-kibana-plugin.yaml
    - 2024-module-helper-fixes.yml
    - 2027-add-redfish-session-create-delete-authenticate.yml
    - 2031-ipa_sudorule_add_runasextusers.yml
    - 2036-scaleway-inventory.yml
    - 2037-add-from-csv-filter.yml
    - 2040-fix-index-error-in-redfish-set-manager-nic.yml
    - 2057-nios-devel.yml
    - 2061-archive-refactor1.yml
    - 2065-snmp-facts-timeout.yml
    - 2072-stacki-host-params-fallback.yml
    modules:
    - description: Manage Gandi LiveDNS records
      name: gandi_livedns
      namespace: net_tools
    - description: Manage Pritunl Users using the Pritunl API
      name: pritunl_user
      namespace: net_tools.pritunl
    - description: List Pritunl Users using the Pritunl API
      name: pritunl_user_info
      namespace: net_tools.pritunl
    plugins:
      filter:
      - description: Converts CSV text input into list of dicts
        name: from_csv
        namespace: null
    release_date: '2021-03-23'


@@ -1,6 +1,6 @@
 namespace: community
 name: general
-version: 2.2.0
+version: 2.3.0
 readme: README.md
 authors:
 - Ansible (https://github.com/ansible)


@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Florian Dambrine <android.florian@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type


class ModuleDocFragment(object):
    DOCUMENTATION = r"""
options:
    pritunl_url:
        type: str
        required: true
        description:
            - URL and port of the Pritunl server on which the API is enabled.

    pritunl_api_token:
        type: str
        required: true
        description:
            - API Token of a Pritunl admin user.
            - It needs to be enabled in Administrators > USERNAME > Enable Token Authentication.

    pritunl_api_secret:
        type: str
        required: true
        description:
            - API Secret found in Administrators > USERNAME > API Secret.

    validate_certs:
        type: bool
        required: false
        default: true
        description:
            - If certificates should be validated or not.
            - This should never be set to C(false), except if you are very sure that
              your connection to the server can not be subject to a Man In The Middle
              attack.
"""


@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Andrew Pantuso (@ajpantuso) <ajpantuso@gmail.com>
# Copyright: (c) 2018, Dag Wieers (@dagwieers) <dag@wieers.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

from ansible.errors import AnsibleFilterError
from ansible.module_utils._text import to_native

from ansible_collections.community.general.plugins.module_utils.csv import (initialize_dialect, read_csv, CSVError,
                                                                            DialectNotAvailableError,
                                                                            CustomDialectFailureError)


def from_csv(data, dialect='excel', fieldnames=None, delimiter=None, skipinitialspace=None, strict=None):

    dialect_params = {
        "delimiter": delimiter,
        "skipinitialspace": skipinitialspace,
        "strict": strict,
    }

    try:
        dialect = initialize_dialect(dialect, **dialect_params)
    except (CustomDialectFailureError, DialectNotAvailableError) as e:
        raise AnsibleFilterError(to_native(e))

    reader = read_csv(data, dialect, fieldnames)

    data_list = []

    try:
        for row in reader:
            data_list.append(row)
    except CSVError as e:
        raise AnsibleFilterError("Unable to process file: %s" % to_native(e))

    return data_list


class FilterModule(object):

    def filters(self):
        return {
            'from_csv': from_csv
        }
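The filter's core behavior, turning CSV text into a list of dicts, can be sketched standalone with only the standard library. This is an illustrative sketch, not the collection's actual helper API; `from_csv_sketch` is a hypothetical name, and real usage goes through the collection's `initialize_dialect`/`read_csv` module_utils shown above:

```python
import csv
from io import StringIO


def from_csv_sketch(data, fieldnames=None, delimiter=","):
    """Parse CSV text into a list of dicts, one per data row."""
    reader = csv.DictReader(StringIO(data), fieldnames=fieldnames, delimiter=delimiter)
    return [dict(row) for row in reader]


# First line is treated as the header when fieldnames is not given.
print(from_csv_sketch("name,env\nweb1,prod\nweb2,dev\n"))
```

In a playbook the filter is applied to a string variable, e.g. `{{ csv_text | community.general.from_csv }}`.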


@@ -217,6 +217,10 @@ class InventoryModule(BaseInventoryPlugin, Cacheable):
            vmtype_key = self.to_safe('%s%s' % (self.get_option('facts_prefix'), vmtype_key.lower()))
            self.inventory.set_variable(name, vmtype_key, vmtype)

+           plaintext_configs = [
+               'tags',
+           ]
+
            for config in ret:
                key = config
                key = self.to_safe('%s%s' % (self.get_option('facts_prefix'), key.lower()))
@@ -226,6 +230,12 @@ class InventoryModule(BaseInventoryPlugin, Cacheable):
                if config == 'rootfs' or config.startswith(('virtio', 'sata', 'ide', 'scsi')):
                    value = ('disk_image=' + value)

+               # Additional field containing parsed tags as list
+               if config == 'tags':
+                   parsed_key = self.to_safe('%s%s' % (key, "_parsed"))
+                   parsed_value = [tag.strip() for tag in value.split(",")]
+                   self.inventory.set_variable(name, parsed_key, parsed_value)
+
                if not (isinstance(value, int) or ',' not in value):
                    # split off strings with commas to a dict
                    # skip over any keys that cannot be processed
@@ -339,7 +349,8 @@ class InventoryModule(BaseInventoryPlugin, Cacheable):
            for member in self._get_members_per_pool(pool['poolid']):
                if member.get('name'):
-                   self.inventory.add_child(pool_group, member['name'])
+                   if not member.get('template'):
+                       self.inventory.add_child(pool_group, member['name'])

    def parse(self, inventory, loader, path, cache=True):
        if not HAS_REQUESTS:

@@ -0,0 +1,67 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Andrew Pantuso (@ajpantuso) <ajpantuso@gmail.com>
# Copyright: (c) 2018, Dag Wieers (@dagwieers) <dag@wieers.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

import csv
from io import BytesIO, StringIO

from ansible.module_utils._text import to_native
from ansible.module_utils.six import PY3


class CustomDialectFailureError(Exception):
    pass


class DialectNotAvailableError(Exception):
    pass


CSVError = csv.Error


def initialize_dialect(dialect, **kwargs):
    # Add Unix dialect from Python 3
    class unix_dialect(csv.Dialect):
        """Describe the usual properties of Unix-generated CSV files."""
        delimiter = ','
        quotechar = '"'
        doublequote = True
        skipinitialspace = False
        lineterminator = '\n'
        quoting = csv.QUOTE_ALL

    csv.register_dialect("unix", unix_dialect)

    if dialect not in csv.list_dialects():
        raise DialectNotAvailableError("Dialect '%s' is not supported by your version of python." % dialect)

    # Create a dictionary from only set options
    dialect_params = dict((k, v) for k, v in kwargs.items() if v is not None)
    if dialect_params:
        try:
            csv.register_dialect('custom', dialect, **dialect_params)
        except TypeError as e:
            raise CustomDialectFailureError("Unable to create custom dialect: %s" % to_native(e))
        dialect = 'custom'

    return dialect


def read_csv(data, dialect, fieldnames=None):
    data = to_native(data, errors='surrogate_or_strict')

    if PY3:
        fake_fh = StringIO(data)
    else:
        fake_fh = BytesIO(data)

    reader = csv.DictReader(fake_fh, fieldnames=fieldnames, dialect=dialect)

    return reader
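The custom-dialect mechanism above can be sketched with the stdlib alone: only caller-supplied options are layered on top of a base dialect, and the combined dialect is registered under a throwaway name for `DictReader` to use. `make_dialect` is a simplified stand-in for `initialize_dialect`, without its error wrapping:

```python
import csv
from io import StringIO


def make_dialect(base="excel", **overrides):
    """Register a 'custom' dialect only if any override is actually set."""
    params = {k: v for k, v in overrides.items() if v is not None}
    if params:
        csv.register_dialect("custom", base, **params)
        return "custom"
    return base


dialect = make_dialect(delimiter=";", skipinitialspace=True)
rows = list(csv.DictReader(StringIO("a; b\n1; 2\n"), dialect=dialect))
print(rows)
```

The "filter out `None`" step is what lets the filter expose every dialect option as an optional parameter while still deferring to the base dialect's defaults.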


@@ -0,0 +1,234 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2019 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

import json

from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url


class GandiLiveDNSAPI(object):

    api_endpoint = 'https://api.gandi.net/v5/livedns'
    changed = False

    error_strings = {
        400: 'Bad request',
        401: 'Permission denied',
        404: 'Resource not found',
    }

    attribute_map = {
        'record': 'rrset_name',
        'type': 'rrset_type',
        'ttl': 'rrset_ttl',
        'values': 'rrset_values'
    }

    def __init__(self, module):
        self.module = module
        self.api_key = module.params['api_key']

    def _build_error_message(self, module, info):
        s = ''
        body = info.get('body')
        if body:
            errors = module.from_json(body).get('errors')
            if errors:
                error = errors[0]
                name = error.get('name')
                if name:
                    s += '{0} :'.format(name)
                description = error.get('description')
                if description:
                    s += description
        return s

    def _gandi_api_call(self, api_call, method='GET', payload=None, error_on_404=True):
        headers = {'Authorization': 'Apikey {0}'.format(self.api_key),
                   'Content-Type': 'application/json'}
        data = None
        if payload:
            try:
                data = json.dumps(payload)
            except Exception as e:
                self.module.fail_json(msg="Failed to encode payload as JSON: %s " % to_native(e))

        resp, info = fetch_url(self.module,
                               self.api_endpoint + api_call,
                               headers=headers,
                               data=data,
                               method=method)

        error_msg = ''
        if info['status'] >= 400 and (info['status'] != 404 or error_on_404):
            err_s = self.error_strings.get(info['status'], '')
            error_msg = "API Error {0}: {1}".format(err_s, self._build_error_message(self.module, info))

        result = None
        try:
            content = resp.read()
        except AttributeError:
            content = None

        if content:
            try:
                result = json.loads(to_text(content, errors='surrogate_or_strict'))
            except (getattr(json, 'JSONDecodeError', ValueError)) as e:
                error_msg += "; Failed to parse API response with error {0}: {1}".format(to_native(e), content)

        if error_msg:
            self.module.fail_json(msg=error_msg)

        return result, info['status']

    def build_result(self, result, domain):
        if result is None:
            return None

        res = {}
        for k in self.attribute_map:
            v = result.get(self.attribute_map[k], None)
            if v is not None:
                if k == 'record' and v == '@':
                    v = ''
                res[k] = v
        res['domain'] = domain

        return res

    def build_results(self, results, domain):
        if results is None:
            return []
        return [self.build_result(r, domain) for r in results]

    def get_records(self, record, type, domain):
        url = '/domains/%s/records' % (domain)
        if record:
            url += '/%s' % (record)
            if type:
                url += '/%s' % (type)

        records, status = self._gandi_api_call(url, error_on_404=False)

        if status == 404:
            return []

        if not isinstance(records, list):
            records = [records]

        # filter by type if record is not set
        if not record and type:
            records = [r
                       for r in records
                       if r['rrset_type'] == type]

        return records

    def create_record(self, record, type, values, ttl, domain):
        url = '/domains/%s/records' % (domain)
        new_record = {
            'rrset_name': record,
            'rrset_type': type,
            'rrset_values': values,
            'rrset_ttl': ttl,
        }
        record, status = self._gandi_api_call(url, method='POST', payload=new_record)

        if status in (200, 201,):
            return new_record

        return None

    def update_record(self, record, type, values, ttl, domain):
        url = '/domains/%s/records/%s/%s' % (domain, record, type)
        new_record = {
            'rrset_values': values,
            'rrset_ttl': ttl,
        }
        record = self._gandi_api_call(url, method='PUT', payload=new_record)[0]
        return record

    def delete_record(self, record, type, domain):
        url = '/domains/%s/records/%s/%s' % (domain, record, type)

        self._gandi_api_call(url, method='DELETE')

    def delete_dns_record(self, record, type, values, domain):
        if record == '':
            record = '@'

        records = self.get_records(record, type, domain)

        if records:
            cur_record = records[0]

            self.changed = True

            if values is not None and set(cur_record['rrset_values']) != set(values):
                new_values = set(cur_record['rrset_values']) - set(values)
                if new_values:
                    # Removing one or more values from a record, we update the record with the remaining values
self.update_record(record, type, list(new_values), cur_record['rrset_ttl'], domain)
records = self.get_records(record, type, domain)
return records[0], self.changed
if not self.module.check_mode:
self.delete_record(record, type, domain)
else:
cur_record = None
return None, self.changed
def ensure_dns_record(self, record, type, ttl, values, domain):
if record == '':
record = '@'
records = self.get_records(record, type, domain)
if records:
cur_record = records[0]
do_update = False
if ttl is not None and cur_record['rrset_ttl'] != ttl:
do_update = True
if values is not None and set(cur_record['rrset_values']) != set(values):
do_update = True
if do_update:
if self.module.check_mode:
result = dict(
rrset_type=type,
rrset_name=record,
rrset_values=values,
rrset_ttl=ttl
)
else:
self.update_record(record, type, values, ttl, domain)
records = self.get_records(record, type, domain)
result = records[0]
self.changed = True
return result, self.changed
else:
return cur_record, self.changed
if self.module.check_mode:
new_record = dict(
rrset_type=type,
rrset_name=record,
rrset_values=values,
rrset_ttl=ttl
)
result = new_record
else:
result = self.create_record(record, type, values, ttl, domain)
self.changed = True
return result, self.changed
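The value-removal branch of `delete_dns_record` above relies on set arithmetic: when only some of a record's values are slated for deletion, the record is updated with the remainder, and only when nothing is left is the whole record deleted. A standalone sketch of that decision (`plan_value_deletion` is a hypothetical helper name, not part of the module):

```python
def plan_value_deletion(current_values, values_to_delete):
    """Mirror delete_dns_record's branching: return ('update', remaining)
    when some values survive the deletion, ('delete', None) otherwise."""
    remaining = set(current_values) - set(values_to_delete)
    if remaining:
        # Only some values were removed: shrink the record in place.
        return 'update', sorted(remaining)
    # All values were removed: the record itself should be deleted.
    return 'delete', None


print(plan_value_deletion(['192.0.2.1', '192.0.2.2'], ['192.0.2.2']))
# → ('update', ['192.0.2.1'])
```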

View File

@@ -55,7 +55,7 @@ def keycloak_argument_spec():
:return: argument_spec dict
"""
return dict(
-auth_keycloak_url=dict(type='str', aliases=['url'], required=True),
+auth_keycloak_url=dict(type='str', aliases=['url'], required=True, no_log=False),
auth_client_id=dict(type='str', default='admin-cli'),
auth_realm=dict(type='str', required=True),
auth_client_secret=dict(type='str', default=None, no_log=True),
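This hunk, like many below (`ssh_keys`, `keyset`, `key`, `migrate_tx_key`, ...), adds an explicit `no_log=False`. Recent ansible-core releases warn when a parameter whose name looks secret-like carries no `no_log` setting, and `no_log=False` records the deliberate decision that the value is safe to log. A rough sketch of that name-based heuristic (an illustration, not ansible-core's actual implementation):

```python
# Substrings that commonly mark a parameter as secret-bearing.
SECRET_HINTS = ('key', 'token', 'password', 'passwd', 'secret')


def needs_no_log_decision(param_name, spec):
    """True when the name looks secret-like but the argument spec makes
    no explicit no_log choice, which is the case ansible-core warns about."""
    looks_secret = any(hint in param_name.lower() for hint in SECRET_HINTS)
    return looks_secret and 'no_log' not in spec


print(needs_no_log_decision('ssh_keys', {'type': 'list'}))    # → True (would warn)
print(needs_no_log_decision('ssh_keys', {'no_log': False}))   # → False (explicit choice)
print(needs_no_log_decision('hostname', {'type': 'str'}))     # → False (not secret-like)
```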

View File

@@ -93,6 +93,8 @@ class ArgFormat(object):
self.arg_format = (self.stars_deco(stars))(self.arg_format)
def to_text(self, value):
if value is None:
return []
func = self.arg_format
return [str(p) for p in func(value)]
@@ -121,6 +123,7 @@ def module_fails_on_exception(func):
except ModuleHelperException as e:
if e.update_output:
self.update_output(e.update_output)
self.module.fail_json(changed=False, msg=e.msg, exception=traceback.format_exc(), output=self.output, vars=self.vars)
except Exception as e:
self.vars.msg = "Module failed with exception: {0}".format(str(e).strip())
self.vars.exception = traceback.format_exc()
@@ -292,7 +295,10 @@ class CmdMixin(object):
extra_params = extra_params or dict()
cmd_args = list([self.command]) if isinstance(self.command, str) else list(self.command)
-cmd_args[0] = self.module.get_bin_path(cmd_args[0])
+try:
+cmd_args[0] = self.module.get_bin_path(cmd_args[0], required=True)
+except ValueError:
+pass
param_list = params if params else self.module.params.keys()
for param in param_list:
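The `CmdMixin` change above wraps `get_bin_path(..., required=True)` in `try/except ValueError` so an unresolvable binary name is kept as-is instead of aborting. The same keep-the-name-on-failure pattern, sketched with the standard library's `shutil.which` standing in for Ansible's `get_bin_path`:

```python
import shutil


def resolve_command(cmd_args):
    """Replace cmd_args[0] with its absolute path when the executable can
    be found; otherwise leave the bare name untouched."""
    resolved = shutil.which(cmd_args[0])
    if resolved is not None:
        cmd_args[0] = resolved
    return cmd_args


print(resolve_command(['no-such-binary-xyz', '--version']))
# → ['no-such-binary-xyz', '--version']
```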

View File

@@ -18,6 +18,7 @@ from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import env_fallback
from ansible.module_utils.common.validation import check_type_dict
try:
from infoblox_client.connector import Connector
@@ -399,11 +400,11 @@ class WapiModule(WapiBase):
if 'ipv4addrs' in proposed_object:
if 'nios_next_ip' in proposed_object['ipv4addrs'][0]['ipv4addr']:
-ip_range = self.module._check_type_dict(proposed_object['ipv4addrs'][0]['ipv4addr'])['nios_next_ip']
+ip_range = check_type_dict(proposed_object['ipv4addrs'][0]['ipv4addr'])['nios_next_ip']
proposed_object['ipv4addrs'][0]['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
elif 'ipv4addr' in proposed_object:
if 'nios_next_ip' in proposed_object['ipv4addr']:
-ip_range = self.module._check_type_dict(proposed_object['ipv4addr'])['nios_next_ip']
+ip_range = check_type_dict(proposed_object['ipv4addr'])['nios_next_ip']
proposed_object['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
return proposed_object
@@ -485,7 +486,7 @@ class WapiModule(WapiBase):
if ('name' in obj_filter):
# gets and returns the current object based on name/old_name passed
try:
-name_obj = self.module._check_type_dict(obj_filter['name'])
+name_obj = check_type_dict(obj_filter['name'])
old_name = name_obj['old_name']
new_name = name_obj['new_name']
except TypeError:
@@ -521,7 +522,7 @@ class WapiModule(WapiBase):
test_obj_filter['name'] = test_obj_filter['name'].lower()
# resolves issue where multiple a_records with same name and different IP address
try:
-ipaddr_obj = self.module._check_type_dict(obj_filter['ipv4addr'])
+ipaddr_obj = check_type_dict(obj_filter['ipv4addr'])
ipaddr = ipaddr_obj['old_ipv4addr']
except TypeError:
ipaddr = obj_filter['ipv4addr']
@@ -530,7 +531,7 @@ class WapiModule(WapiBase):
# resolves issue where multiple txt_records with same name and different text
test_obj_filter = obj_filter
try:
-text_obj = self.module._check_type_dict(obj_filter['text'])
+text_obj = check_type_dict(obj_filter['text'])
txt = text_obj['old_text']
except TypeError:
txt = obj_filter['text']
@@ -543,7 +544,7 @@ class WapiModule(WapiBase):
# resolves issue where multiple a_records with same name and different IP address
test_obj_filter = obj_filter
try:
-ipaddr_obj = self.module._check_type_dict(obj_filter['ipv4addr'])
+ipaddr_obj = check_type_dict(obj_filter['ipv4addr'])
ipaddr = ipaddr_obj['old_ipv4addr']
except TypeError:
ipaddr = obj_filter['ipv4addr']
@@ -553,7 +554,7 @@ class WapiModule(WapiBase):
# resolves issue where multiple txt_records with same name and different text
test_obj_filter = obj_filter
try:
-text_obj = self.module._check_type_dict(obj_filter['text'])
+text_obj = check_type_dict(obj_filter['text'])
txt = text_obj['old_text']
except TypeError:
txt = obj_filter['text']
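These hunks replace the private `self.module._check_type_dict` with the public `check_type_dict` from `ansible.module_utils.common.validation`. Either way, the point is to turn a `key=value, key=value` string such as `'old_name=a.example.com, new_name=b.example.com'` into a dict, and to raise `TypeError` otherwise, which the surrounding `try/except TypeError` blocks depend on. A loose stand-in for that behavior (a sketch; the real function handles more input forms):

```python
def check_type_dict_sketch(value):
    """Accept a dict unchanged, or parse 'k1=v1, k2=v2' into a dict.
    Anything else raises TypeError, as the callers above expect."""
    if isinstance(value, dict):
        return value
    if isinstance(value, str) and '=' in value:
        return dict(
            (k.strip(), v.strip())
            for k, v in (pair.split('=', 1) for pair in value.split(','))
        )
    raise TypeError('cannot convert %r to a dict' % (value,))


print(check_type_dict_sketch('old_name=a.example.com, new_name=b.example.com'))
# → {'old_name': 'a.example.com', 'new_name': 'b.example.com'}
```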

View File

@@ -0,0 +1,300 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Florian Dambrine <android.florian@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
Pritunl API that offers CRUD operations on Pritunl Organizations and Users
"""
from __future__ import absolute_import, division, print_function
import base64
import hashlib
import hmac
import json
import time
import uuid
from ansible.module_utils.six import iteritems
from ansible.module_utils.urls import open_url
__metaclass__ = type
class PritunlException(Exception):
pass
def pritunl_argument_spec():
return dict(
pritunl_url=dict(required=True, type="str"),
pritunl_api_token=dict(required=True, type="str", no_log=False),
pritunl_api_secret=dict(required=True, type="str", no_log=True),
validate_certs=dict(required=False, type="bool", default=True),
)
def get_pritunl_settings(module):
"""
Helper function to set required Pritunl request params from module arguments.
"""
return {
"api_token": module.params.get("pritunl_api_token"),
"api_secret": module.params.get("pritunl_api_secret"),
"base_url": module.params.get("pritunl_url"),
"validate_certs": module.params.get("validate_certs"),
}
def _get_pritunl_organizations(api_token, api_secret, base_url, validate_certs=True):
return pritunl_auth_request(
base_url=base_url,
api_token=api_token,
api_secret=api_secret,
method="GET",
path="/organization",
validate_certs=validate_certs,
)
def _get_pritunl_users(
api_token, api_secret, base_url, organization_id, validate_certs=True
):
return pritunl_auth_request(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
method="GET",
path="/user/%s" % organization_id,
validate_certs=validate_certs,
)
def _delete_pritunl_user(
api_token, api_secret, base_url, organization_id, user_id, validate_certs=True
):
return pritunl_auth_request(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
method="DELETE",
path="/user/%s/%s" % (organization_id, user_id),
validate_certs=validate_certs,
)
def _post_pritunl_user(
api_token, api_secret, base_url, organization_id, user_data, validate_certs=True
):
return pritunl_auth_request(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
method="POST",
path="/user/%s" % organization_id,
headers={"Content-Type": "application/json"},
data=json.dumps(user_data),
validate_certs=validate_certs,
)
def _put_pritunl_user(
api_token,
api_secret,
base_url,
organization_id,
user_id,
user_data,
validate_certs=True,
):
return pritunl_auth_request(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
method="PUT",
path="/user/%s/%s" % (organization_id, user_id),
headers={"Content-Type": "application/json"},
data=json.dumps(user_data),
validate_certs=validate_certs,
)
def list_pritunl_organizations(
api_token, api_secret, base_url, validate_certs=True, filters=None
):
orgs = []
response = _get_pritunl_organizations(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
validate_certs=validate_certs,
)
if response.getcode() != 200:
raise PritunlException("Could not retrieve organizations from Pritunl")
else:
for org in json.loads(response.read()):
# No filtering
if filters is None:
orgs.append(org)
else:
if not any(
filter_val != org[filter_key]
for filter_key, filter_val in iteritems(filters)
):
orgs.append(org)
return orgs
def list_pritunl_users(
api_token, api_secret, base_url, organization_id, validate_certs=True, filters=None
):
users = []
response = _get_pritunl_users(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
validate_certs=validate_certs,
organization_id=organization_id,
)
if response.getcode() != 200:
raise PritunlException("Could not retrieve users from Pritunl")
else:
for user in json.loads(response.read()):
# No filtering
if filters is None:
users.append(user)
else:
if not any(
filter_val != user[filter_key]
for filter_key, filter_val in iteritems(filters)
):
users.append(user)
return users
def post_pritunl_user(
api_token,
api_secret,
base_url,
organization_id,
user_data,
user_id=None,
validate_certs=True,
):
# If user_id is provided, update the existing user with PUT; otherwise create one with POST
if user_id is None:
response = _post_pritunl_user(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
organization_id=organization_id,
user_data=user_data,
validate_certs=validate_certs,
)
if response.getcode() != 200:
raise PritunlException(
"Could not add user to organization %s in Pritunl"
% organization_id
)
# user POST request returns an array of a single item,
# so return this item instead of the list
return json.loads(response.read())[0]
else:
response = _put_pritunl_user(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
organization_id=organization_id,
user_data=user_data,
user_id=user_id,
validate_certs=validate_certs,
)
if response.getcode() != 200:
raise PritunlException(
"Could not update user %s in organization %s in Pritunl"
% (user_id, organization_id)
)
# The user PUT request returns the updated user object
return json.loads(response.read())
def delete_pritunl_user(
api_token, api_secret, base_url, organization_id, user_id, validate_certs=True
):
response = _delete_pritunl_user(
api_token=api_token,
api_secret=api_secret,
base_url=base_url,
organization_id=organization_id,
user_id=user_id,
validate_certs=validate_certs,
)
if response.getcode() != 200:
raise PritunlException(
"Could not remove user %s from organization %s in Pritunl"
% (user_id, organization_id)
)
return json.loads(response.read())
def pritunl_auth_request(
api_token,
api_secret,
base_url,
method,
path,
validate_certs=True,
headers=None,
data=None,
):
"""
Send an API call to a Pritunl server.
Taken from https://pritunl.com/api and adapted to work with Ansible's open_url
"""
auth_timestamp = str(int(time.time()))
auth_nonce = uuid.uuid4().hex
auth_string = "&".join(
[api_token, auth_timestamp, auth_nonce, method.upper(), path]
+ ([data] if data else [])
)
auth_signature = base64.b64encode(
hmac.new(
api_secret.encode("utf-8"), auth_string.encode("utf-8"), hashlib.sha256
).digest()
)
auth_headers = {
"Auth-Token": api_token,
"Auth-Timestamp": auth_timestamp,
"Auth-Nonce": auth_nonce,
"Auth-Signature": auth_signature,
}
if headers:
auth_headers.update(headers)
try:
uri = "%s%s" % (base_url, path)
return open_url(
uri,
method=method.upper(),
headers=auth_headers,
data=data,
validate_certs=validate_certs,
)
except Exception as e:
raise PritunlException(e)
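`pritunl_auth_request` above signs every call by `&`-joining the token, timestamp, nonce, upper-cased method, path, and body (if any), then base64-encoding an HMAC-SHA256 of that string keyed with the API secret. The same computation in isolation (the token/secret values here are made up):

```python
import base64
import hashlib
import hmac


def sign_pritunl_request(api_token, api_secret, timestamp, nonce, method, path, data=None):
    # '&'-join the auth components exactly as pritunl_auth_request does.
    parts = [api_token, timestamp, nonce, method.upper(), path]
    if data:
        parts.append(data)
    auth_string = '&'.join(parts)
    digest = hmac.new(api_secret.encode('utf-8'),
                      auth_string.encode('utf-8'),
                      hashlib.sha256).digest()
    return base64.b64encode(digest)


sig = sign_pritunl_request('tok', 'secret', '1616400000', 'abc123', 'get', '/organization')
print(len(sig))  # → 44: a 32-byte SHA-256 digest base64-encodes to 44 characters
```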

View File

@@ -104,7 +104,7 @@ def get_common_arg_spec(supports_create=False, supports_wait=False):
if supports_create:
common_args.update(
-key_by=dict(type="list", elements="str"),
+key_by=dict(type="list", elements="str", no_log=False),
force_create=dict(type="bool", default=False),
)

View File

@@ -39,13 +39,34 @@ class RedfishUtils(object):
self.data_modification = data_modification
self._init_session()
def _auth_params(self, headers):
"""
Return tuple of required authentication params based on the presence
of a token in the self.creds dict. If using a token, set the
X-Auth-Token header in the `headers` param.
:param headers: dict containing headers to send in request
:return: tuple of username, password and force_basic_auth
"""
if self.creds.get('token'):
username = None
password = None
force_basic_auth = False
headers['X-Auth-Token'] = self.creds['token']
else:
username = self.creds['user']
password = self.creds['pswd']
force_basic_auth = True
return username, password, force_basic_auth
# The following functions are to send GET/POST/PATCH/DELETE requests
def get_request(self, uri):
req_headers = dict(GET_HEADERS)
username, password, basic_auth = self._auth_params(req_headers)
try:
-resp = open_url(uri, method="GET", headers=GET_HEADERS,
-url_username=self.creds['user'],
-url_password=self.creds['pswd'],
-force_basic_auth=True, validate_certs=False,
+resp = open_url(uri, method="GET", headers=req_headers,
+url_username=username, url_password=password,
+force_basic_auth=basic_auth, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
data = json.loads(to_native(resp.read()))
@@ -66,14 +87,16 @@ class RedfishUtils(object):
return {'ret': True, 'data': data, 'headers': headers}
def post_request(self, uri, pyld):
req_headers = dict(POST_HEADERS)
username, password, basic_auth = self._auth_params(req_headers)
try:
resp = open_url(uri, data=json.dumps(pyld),
-headers=POST_HEADERS, method="POST",
-url_username=self.creds['user'],
-url_password=self.creds['pswd'],
-force_basic_auth=True, validate_certs=False,
+headers=req_headers, method="POST",
+url_username=username, url_password=password,
+force_basic_auth=basic_auth, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
headers = dict((k.lower(), v) for (k, v) in resp.info().items())
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
@@ -87,10 +110,10 @@ class RedfishUtils(object):
except Exception as e:
return {'ret': False,
'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))}
-return {'ret': True, 'resp': resp}
+return {'ret': True, 'headers': headers, 'resp': resp}
def patch_request(self, uri, pyld):
-headers = PATCH_HEADERS
+req_headers = dict(PATCH_HEADERS)
r = self.get_request(uri)
if r['ret']:
# Get etag from etag header or @odata.etag property
@@ -98,15 +121,13 @@ class RedfishUtils(object):
if not etag:
etag = r['data'].get('@odata.etag')
if etag:
-# Make copy of headers and add If-Match header
-headers = dict(headers)
-headers['If-Match'] = etag
+req_headers['If-Match'] = etag
username, password, basic_auth = self._auth_params(req_headers)
try:
resp = open_url(uri, data=json.dumps(pyld),
-headers=headers, method="PATCH",
-url_username=self.creds['user'],
-url_password=self.creds['pswd'],
-force_basic_auth=True, validate_certs=False,
+headers=req_headers, method="PATCH",
+url_username=username, url_password=password,
+force_basic_auth=basic_auth, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
@@ -125,13 +146,14 @@ class RedfishUtils(object):
return {'ret': True, 'resp': resp}
def delete_request(self, uri, pyld=None):
req_headers = dict(DELETE_HEADERS)
username, password, basic_auth = self._auth_params(req_headers)
try:
data = json.dumps(pyld) if pyld else None
resp = open_url(uri, data=data,
-headers=DELETE_HEADERS, method="DELETE",
-url_username=self.creds['user'],
-url_password=self.creds['pswd'],
-force_basic_auth=True, validate_certs=False,
+headers=req_headers, method="DELETE",
+url_username=username, url_password=password,
+force_basic_auth=basic_auth, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
@@ -1196,6 +1218,54 @@ class RedfishUtils(object):
return {'ret': True, 'changed': True, 'msg': "Cleared all sessions successfully"}
def create_session(self):
if not self.creds.get('user') or not self.creds.get('pswd'):
return {'ret': False, 'msg':
'Must provide the username and password parameters for '
'the CreateSession command'}
payload = {
'UserName': self.creds['user'],
'Password': self.creds['pswd']
}
response = self.post_request(self.root_uri + self.sessions_uri, payload)
if response['ret'] is False:
return response
headers = response['headers']
if 'x-auth-token' not in headers:
return {'ret': False, 'msg':
'The service did not return the X-Auth-Token header in '
'the response from the Sessions collection POST'}
if 'location' not in headers:
self.module.warn(
'The service did not return the Location header for the '
'session URL in the response from the Sessions collection '
'POST')
session_uri = None
else:
session_uri = urlparse(headers.get('location')).path
session = dict()
session['token'] = headers.get('x-auth-token')
session['uri'] = session_uri
return {'ret': True, 'changed': True, 'session': session,
'msg': 'Session created successfully'}
def delete_session(self, session_uri):
if not session_uri:
return {'ret': False, 'msg':
'Must provide the session_uri parameter for the '
'DeleteSession command'}
response = self.delete_request(self.root_uri + session_uri)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': 'Session deleted successfully'}
def get_firmware_update_capabilities(self):
result = {}
response = self.get_request(self.root_uri + self.update_uri)
@@ -2676,6 +2746,10 @@ class RedfishUtils(object):
need_change = True
# type is list
if isinstance(set_value, list):
if len(set_value) != len(cur_value):
# if arrays are not the same len, no need to check each element
need_change = True
continue
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in target_ethernet_current_setting[property][i]:
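The new `_auth_params` helper above concentrates the choice between session-token and HTTP basic authentication that every request method now shares: with a token in `creds`, it blanks the username/password pair and injects `X-Auth-Token`; otherwise it forces basic auth. The selection logic on its own (a plain function, no class context):

```python
def auth_params(creds, headers):
    """Return (username, password, force_basic_auth); when a session token
    is present, also add it to the request headers (mirrors _auth_params)."""
    if creds.get('token'):
        headers['X-Auth-Token'] = creds['token']
        return None, None, False
    return creds['user'], creds['pswd'], True


hdrs = {}
print(auth_params({'token': 'abc'}, hdrs), hdrs)
# → (None, None, False) {'X-Auth-Token': 'abc'}
print(auth_params({'user': 'admin', 'pswd': 's3cret'}, {}))
# → ('admin', 's3cret', True)
```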

View File

@@ -39,7 +39,7 @@ class ScalewayException(Exception):
R_LINK_HEADER = r'''<[^>]+>;\srel="(first|previous|next|last)"
(,<[^>]+>;\srel="(first|previous|next|last)")*'''
# Specify a single relation, for iteration and string extraction purposes
-R_RELATION = r'<(?P<target_IRI>[^>]+)>; rel="(?P<relation>first|previous|next|last)"'
+R_RELATION = r'</?(?P<target_IRI>[^>]+)>; rel="(?P<relation>first|previous|next|last)"'
def parse_pagination_link(header):
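The updated `R_RELATION` pattern extracts one `<IRI>; rel="..."` pair from an RFC 8288 style `Link` header; `parse_pagination_link` iterates it over the whole header. A quick demonstration against a made-up header (`api.example.com` is a placeholder):

```python
import re

R_RELATION = r'</?(?P<target_IRI>[^>]+)>; rel="(?P<relation>first|previous|next|last)"'

header = ('<https://api.example.com/servers?page=1>; rel="first",'
          '<https://api.example.com/servers?page=3>; rel="next"')

# Collect every relation/target pair found in the header.
links = {m.group('relation'): m.group('target_IRI')
         for m in re.finditer(R_RELATION, header)}
print(links['next'])  # → https://api.example.com/servers?page=3
```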

View File

@@ -242,7 +242,7 @@ def initialise_module():
no_log=True,
fallback=(env_fallback, ['LINODE_ACCESS_TOKEN']),
),
-authorized_keys=dict(type='list', elements='str', required=False),
+authorized_keys=dict(type='list', elements='str', required=False, no_log=False),
group=dict(type='str', required=False),
image=dict(type='str', required=False),
region=dict(type='str', required=False),

View File

@@ -17,7 +17,6 @@ options:
password:
description:
- the instance root password
-- required only for C(state=present)
type: str
hostname:
description:
@@ -124,6 +123,15 @@ options:
- With states C(stopped) and C(restarted), allow to force stop the instance.
type: bool
default: 'no'
purge:
description:
- Remove container from all related configurations.
- For example backup jobs, replication jobs, or HA.
- Related ACLs and Firewall entries will always be removed.
- Used with state C(absent).
type: bool
default: false
version_added: 2.3.0
state:
description:
- Indicate desired state of the instance
@@ -507,6 +515,7 @@ def main():
searchdomain=dict(),
timeout=dict(type='int', default=30),
force=dict(type='bool', default=False),
purge=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted']),
pubkey=dict(type='str', default=None),
unprivileged=dict(type='bool', default=False),
@@ -514,7 +523,7 @@ def main():
hookscript=dict(type='str'),
proxmox_default_behavior=dict(type='str', choices=['compatibility', 'no_defaults']),
),
-required_if=[('state', 'present', ['node', 'hostname', 'password', 'ostemplate'])],
+required_if=[('state', 'present', ['node', 'hostname', 'ostemplate'])],
required_together=[('api_token_id', 'api_token_secret')],
required_one_of=[('api_password', 'api_token_id')],
)
@@ -687,7 +696,13 @@ def main():
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'mounted':
module.exit_json(changed=False, msg="VM %s is mounted. Stop it with force option before deletion." % vmid)
-taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE).delete(vmid)
+delete_params = {}
+if module.params['purge']:
+delete_params['purge'] = 1
+taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE).delete(vmid, **delete_params)
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
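The purge handling above only forwards the flag when the user set it, so the `delete` call stays identical to the old behavior (and compatible with older Proxmox API versions) in the default case. The keyword-argument construction in isolation:

```python
def build_delete_params(purge):
    """Only pass purge=1 to the container delete call when requested,
    keeping the default API call unchanged."""
    delete_params = {}
    if purge:
        delete_params['purge'] = 1
    return delete_params


print(build_delete_params(True), build_delete_params(False))  # → {'purge': 1} {}
```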

View File

@@ -425,6 +425,14 @@ options:
option has a default of C(no). Note that the default value of I(proxmox_default_behavior)
changes in community.general 4.0.0.
type: bool
tags:
description:
- List of tags to apply to the VM instance.
- Tags must start with C([a-z0-9_]) followed by zero or more of the following characters C([a-z0-9_-+.]).
- Tags are only available in Proxmox 6+.
type: list
elements: str
version_added: 2.3.0
target:
description:
- Target node. Only allowed if the original VM is on shared storage.
@@ -858,7 +866,7 @@ def wait_for_task(module, proxmox, node, taskid):
def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, update, **kwargs):
# Available only in PVE 4
only_v4 = ['force', 'protection', 'skiplock']
-only_v6 = ['ciuser', 'cipassword', 'sshkeys', 'ipconfig']
+only_v6 = ['ciuser', 'cipassword', 'sshkeys', 'ipconfig', 'tags']
# valid clone parameters
valid_clone_params = ['format', 'full', 'pool', 'snapname', 'storage', 'target']
@@ -928,6 +936,13 @@ def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sock
if searchdomains:
kwargs['searchdomain'] = ' '.join(searchdomains)
# VM tags are expected to be valid and presented as a comma/semi-colon delimited string
if 'tags' in kwargs:
for tag in kwargs['tags']:
if not re.match(r'^[a-z0-9_][a-z0-9_\-\+\.]*$', tag):
module.fail_json(msg='%s is not a valid tag' % tag)
kwargs['tags'] = ",".join(kwargs['tags'])
# -args and skiplock require root@pam user - but can not use api tokens
if module.params['api_user'] == "root@pam" and module.params['args'] is None:
if not update and module.params['proxmox_default_behavior'] == 'compatibility':
@@ -1057,12 +1072,13 @@ def main():
smbios=dict(type='str'),
snapname=dict(type='str'),
sockets=dict(type='int'),
-sshkeys=dict(type='str'),
+sshkeys=dict(type='str', no_log=False),
startdate=dict(type='str'),
startup=dict(),
state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted', 'current']),
storage=dict(type='str'),
tablet=dict(type='bool'),
tags=dict(type='list', elements='str'),
target=dict(type='str'),
tdf=dict(type='bool'),
template=dict(type='bool'),
@@ -1267,6 +1283,7 @@ def main():
startdate=module.params['startdate'],
startup=module.params['startup'],
tablet=module.params['tablet'],
tags=module.params['tags'],
target=module.params['target'],
tdf=module.params['tdf'],
template=module.params['template'],
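The tag support added to `create_vm` validates each tag against Proxmox's grammar (first character `[a-z0-9_]`, then any of `[a-z0-9_-+.]`) before joining them into the comma-delimited string the API expects. That check, exercised standalone:

```python
import re

TAG_RE = re.compile(r'^[a-z0-9_][a-z0-9_\-\+\.]*$')


def join_tags(tags):
    """Validate every tag against the Proxmox tag grammar, then join
    them into the comma-delimited string passed to the API."""
    for tag in tags:
        if not TAG_RE.match(tag):
            raise ValueError('%s is not a valid tag' % tag)
    return ','.join(tags)


print(join_tags(['web', 'db_1', 'x-y.z']))  # → web,db_1,x-y.z
```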

View File

@@ -630,7 +630,7 @@ def main():
ram=dict(type='float'),
hdds=dict(type='list', elements='dict'),
count=dict(type='int', default=1),
-ssh_key=dict(type='raw'),
+ssh_key=dict(type='raw', no_log=False),
auto_increment=dict(type='bool', default=True),
server=dict(type='str'),
datacenter=dict(

View File

@@ -583,7 +583,7 @@ def main():
volume_size=dict(type='int', default=10),
disk_type=dict(choices=['HDD', 'SSD'], default='HDD'),
image_password=dict(default=None, no_log=True),
-ssh_keys=dict(type='list', elements='str', default=[]),
+ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
bus=dict(choices=['VIRTIO', 'IDE'], default='VIRTIO'),
lan=dict(type='int', default=1),
count=dict(type='int', default=1),

View File

@@ -376,7 +376,7 @@ def main():
bus=dict(choices=['VIRTIO', 'IDE'], default='VIRTIO'),
image=dict(),
image_password=dict(no_log=True),
-ssh_keys=dict(type='list', elements='str', default=[]),
+ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
disk_type=dict(choices=['HDD', 'SSD'], default='HDD'),
licence_type=dict(default='UNKNOWN'),
count=dict(type='int', default=1),

View File

@@ -549,7 +549,7 @@ def main():
password=dict(default='', required=False, type='str', no_log=True),
account=dict(default='', required=False, type='str'),
application=dict(required=True, type='str'),
-keyset=dict(required=True, type='str'),
+keyset=dict(required=True, type='str', no_log=False),
state=dict(default='present', type='str',
choices=['started', 'stopped', 'present', 'absent']),
name=dict(required=True, type='str'), description=dict(type='str'),

View File

@@ -110,6 +110,7 @@ options:
with this image
instance_ids:
type: list
elements: str
description:
- list of instance ids, currently only used when state='absent' to
remove instances
@@ -129,6 +130,7 @@ options:
- Name to give the instance
networks:
type: list
elements: str
description:
- The network to attach to the instances. If specified, you must include
ALL networks including the public and private interfaces. Can be C(id)
@@ -810,11 +812,11 @@ def main():
flavor=dict(),
group=dict(),
image=dict(),
-instance_ids=dict(type='list'),
+instance_ids=dict(type='list', elements='str'),
key_name=dict(aliases=['keypair']),
meta=dict(type='dict', default={}),
name=dict(),
-networks=dict(type='list', default=['public', 'private']),
+networks=dict(type='list', elements='str', default=['public', 'private']),
service=dict(),
state=dict(default='present', choices=['present', 'absent']),
user_data=dict(no_log=True),

View File

@@ -30,6 +30,7 @@ options:
required: yes
databases:
type: list
elements: str
description:
- Name of the databases that the user can access
default: []
@@ -189,7 +190,7 @@ def main():
cdb_id=dict(type='str', required=True),
db_username=dict(type='str', required=True),
db_password=dict(type='str', required=True, no_log=True),
-databases=dict(type='list', default=[]),
+databases=dict(type='list', elements='str', default=[]),
host=dict(type='str', default='%'),
state=dict(default='present', choices=['present', 'absent'])
)

View File

@@ -53,6 +53,7 @@ options:
- key pair to use on the instance
loadbalancers:
type: list
elements: dict
description:
- List of load balancer C(id) and C(port) hashes
max_entities:
@@ -78,6 +79,7 @@ options:
required: true
networks:
type: list
elements: str
description:
- The network to attach to the instances. If specified, you must include
ALL networks including the public and private interfaces. Can be C(id)
@@ -376,12 +378,12 @@ def main():
flavor=dict(required=True),
image=dict(required=True),
key_name=dict(),
-loadbalancers=dict(type='list'),
+loadbalancers=dict(type='list', elements='dict'),
meta=dict(type='dict', default={}),
min_entities=dict(type='int', required=True),
max_entities=dict(type='int', required=True),
name=dict(required=True),
-networks=dict(type='list', default=['public', 'private']),
+networks=dict(type='list', elements='str', default=['public', 'private']),
server_name=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
user_data=dict(no_log=True),

View File

@@ -24,6 +24,7 @@ options:
manifest and 'published_date', 'published', 'source', 'clones',
and 'size'. More information can be found at U(https://smartos.org/man/1m/imgadm)
under 'imgadm list'.
type: str
'''
EXAMPLES = '''

View File

@@ -404,7 +404,7 @@ def main():
nic_speed=dict(type='int', choices=NIC_SPEEDS),
public_vlan=dict(type='str'),
private_vlan=dict(type='str'),
-ssh_keys=dict(type='list', elements='str', default=[]),
+ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
post_uri=dict(type='str'),
state=dict(type='str', default='present', choices=STATES),
wait=dict(type='bool', default=True),

View File

@@ -1448,7 +1448,7 @@ def main():
iam_role_arn=dict(type='str'),
iam_role_name=dict(type='str'),
image_id=dict(type='str', required=True),
-key_pair=dict(type='str'),
+key_pair=dict(type='str', no_log=False),
kubernetes=dict(type='dict'),
lifetime_period=dict(type='int'),
load_balancers=dict(type='list'),

View File

@@ -1839,7 +1839,7 @@ def main():
type='list',
elements='dict',
options=dict(
-key=dict(type='str', required=True),
+key=dict(type='str', required=True, no_log=False),
value=dict(type='raw', required=True),
),
),

View File

@@ -229,7 +229,7 @@ _ARGUMENT_SPEC = {
PORT_PARAMETER_NAME: dict(default=8500, type='int'),
RULES_PARAMETER_NAME: dict(type='list', elements='dict'),
STATE_PARAMETER_NAME: dict(default=PRESENT_STATE_VALUE, choices=[PRESENT_STATE_VALUE, ABSENT_STATE_VALUE]),
-TOKEN_PARAMETER_NAME: dict(),
+TOKEN_PARAMETER_NAME: dict(no_log=False),
TOKEN_TYPE_PARAMETER_NAME: dict(choices=[CLIENT_TOKEN_TYPE_VALUE, MANAGEMENT_TOKEN_TYPE_VALUE],
default=CLIENT_TOKEN_TYPE_VALUE)
}

View File

@@ -297,7 +297,7 @@ def main():
argument_spec=dict(
cas=dict(type='str'),
flags=dict(type='str'),
-key=dict(type='str', required=True),
+key=dict(type='str', required=True, no_log=False),
host=dict(type='str', default='localhost'),
scheme=dict(type='str', default='http'),
validate_certs=dict(type='bool', default=True),

View File

@@ -134,7 +134,7 @@ def run_module():
# define the available arguments/parameters that a user can pass to
# the module
module_args = dict(
-key=dict(type='str', required=True),
+key=dict(type='str', required=True, no_log=False),
value=dict(type='str', required=True),
host=dict(type='str', default='localhost'),
port=dict(type='int', default=2379),


@@ -190,9 +190,9 @@ def run_module():
min_cluster_size=dict(type='int', required=False, default=1),
target_cluster_size=dict(type='int', required=False, default=None),
fail_on_cluster_change=dict(type='bool', required=False, default=True),
migrate_tx_key=dict(type='str', required=False,
migrate_tx_key=dict(type='str', required=False, no_log=False,
default="migrate_tx_partitions_remaining"),
migrate_rx_key=dict(type='str', required=False,
migrate_rx_key=dict(type='str', required=False, no_log=False,
default="migrate_rx_partitions_remaining")
)


@@ -58,7 +58,13 @@ options:
description:
- Delete and re-install the plugin. Can be useful for plugin updates.
type: bool
default: 'no'
default: false
allow_root:
description:
- Whether to allow C(kibana) and C(kibana-plugin) to be run as root. Passes the C(--allow-root) flag to these commands.
type: bool
default: false
version_added: 2.3.0
'''
EXAMPLES = '''
@@ -152,7 +158,7 @@ def parse_error(string):
return string
def install_plugin(module, plugin_bin, plugin_name, url, timeout, kibana_version='4.6'):
def install_plugin(module, plugin_bin, plugin_name, url, timeout, allow_root, kibana_version='4.6'):
if LooseVersion(kibana_version) > LooseVersion('4.6'):
kibana_plugin_bin = os.path.join(os.path.dirname(plugin_bin), 'kibana-plugin')
cmd_args = [kibana_plugin_bin, "install"]
@@ -169,6 +175,9 @@ def install_plugin(module, plugin_bin, plugin_name, url, timeout, kibana_version
if timeout:
cmd_args.append("--timeout %s" % timeout)
if allow_root:
cmd_args.append('--allow-root')
cmd = " ".join(cmd_args)
if module.check_mode:
@@ -182,13 +191,16 @@ def install_plugin(module, plugin_bin, plugin_name, url, timeout, kibana_version
return True, cmd, out, err
def remove_plugin(module, plugin_bin, plugin_name, kibana_version='4.6'):
def remove_plugin(module, plugin_bin, plugin_name, allow_root, kibana_version='4.6'):
if LooseVersion(kibana_version) > LooseVersion('4.6'):
kibana_plugin_bin = os.path.join(os.path.dirname(plugin_bin), 'kibana-plugin')
cmd_args = [kibana_plugin_bin, "remove", plugin_name]
else:
cmd_args = [plugin_bin, "plugin", PACKAGE_STATE_MAP["absent"], plugin_name]
if allow_root:
cmd_args.append('--allow-root')
cmd = " ".join(cmd_args)
if module.check_mode:
@@ -202,8 +214,12 @@ def remove_plugin(module, plugin_bin, plugin_name, kibana_version='4.6'):
return True, cmd, out, err
def get_kibana_version(module, plugin_bin):
def get_kibana_version(module, plugin_bin, allow_root):
cmd_args = [plugin_bin, '--version']
if allow_root:
cmd_args.append('--allow-root')
cmd = " ".join(cmd_args)
rc, out, err = module.run_command(cmd)
if rc != 0:
@@ -222,7 +238,8 @@ def main():
plugin_bin=dict(default="/opt/kibana/bin/kibana", type="path"),
plugin_dir=dict(default="/opt/kibana/installedPlugins/", type="path"),
version=dict(default=None),
force=dict(default="no", type="bool")
force=dict(default=False, type="bool"),
allow_root=dict(default=False, type="bool"),
),
supports_check_mode=True,
)
@@ -235,10 +252,11 @@ def main():
plugin_dir = module.params["plugin_dir"]
version = module.params["version"]
force = module.params["force"]
allow_root = module.params["allow_root"]
changed, cmd, out, err = False, '', '', ''
kibana_version = get_kibana_version(module, plugin_bin)
kibana_version = get_kibana_version(module, plugin_bin, allow_root)
present = is_plugin_present(parse_plugin_repo(name), plugin_dir)
@@ -252,10 +270,10 @@ def main():
if state == "present":
if force:
remove_plugin(module, plugin_bin, name)
changed, cmd, out, err = install_plugin(module, plugin_bin, name, url, timeout, kibana_version)
changed, cmd, out, err = install_plugin(module, plugin_bin, name, url, timeout, allow_root, kibana_version)
elif state == "absent":
changed, cmd, out, err = remove_plugin(module, plugin_bin, name, kibana_version)
changed, cmd, out, err = remove_plugin(module, plugin_bin, name, allow_root, kibana_version)
module.exit_json(changed=changed, cmd=cmd, name=name, state=state, url=url, timeout=timeout, stdout=out, stderr=err)

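The kibana_plugin change above threads the new `allow_root` option through every command builder so that `--allow-root` is appended when requested. A reduced sketch of that flag plumbing (function name and defaults simplified from the diff):

```python
def build_install_cmd(plugin_bin, plugin_name, url=None, timeout=None, allow_root=False):
    """Assemble the kibana-plugin install command line, mirroring how the
    module conditionally appends optional flags (simplified sketch)."""
    cmd_args = [plugin_bin, "install", url if url else plugin_name]
    if timeout:
        cmd_args.append("--timeout %s" % timeout)
    if allow_root:
        cmd_args.append("--allow-root")
    return " ".join(cmd_args)

print(build_install_cmd("/opt/kibana/bin/kibana-plugin", "x-pack", allow_root=True))
# /opt/kibana/bin/kibana-plugin install x-pack --allow-root
```

The remove and version-query paths in the diff follow the same pattern: build the argument list, conditionally append `--allow-root`, then join and run.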

@@ -36,6 +36,7 @@ options:
description:
- The file name of the destination archive. The parent directory must exist on the remote host.
- This is required when C(path) refers to multiple files by either specifying a glob, a directory or multiple paths in a list.
- If the destination archive already exists, it will be truncated and overwritten.
type: path
exclude_path:
description:
@@ -44,8 +45,9 @@ options:
elements: path
force_archive:
description:
- Allow you to force the module to treat this as an archive even if only a single file is specified.
- By default behaviour is maintained. i.e A when a single file is specified it is compressed only (not archived).
- Allows you to force the module to treat this as an archive even if only a single file is specified.
- By default when a single file is specified it is compressed only (not archived).
- Enable this if you want to use M(ansible.builtin.unarchive) on an archive of a single file created with this module.
type: bool
default: false
remove:
@@ -153,7 +155,6 @@ expanded_exclude_paths:
'''
import bz2
import filecmp
import glob
import gzip
import io
@@ -186,6 +187,33 @@ else:
HAS_LZMA = False
def to_b(s):
return to_bytes(s, errors='surrogate_or_strict')
def to_n(s):
return to_native(s, errors='surrogate_or_strict')
def to_na(s):
return to_native(s, errors='surrogate_or_strict', encoding='ascii')
def expand_paths(paths):
expanded_path = []
is_globby = False
for path in paths:
b_path = to_b(path)
if b'*' in b_path or b'?' in b_path:
e_paths = glob.glob(b_path)
is_globby = True
else:
e_paths = [b_path]
expanded_path.extend(e_paths)
return expanded_path, is_globby
def main():
module = AnsibleModule(
argument_spec=dict(
@@ -204,21 +232,17 @@ def main():
check_mode = module.check_mode
paths = params['path']
dest = params['dest']
b_dest = None if not dest else to_bytes(dest, errors='surrogate_or_strict')
b_dest = None if not dest else to_b(dest)
exclude_paths = params['exclude_path']
remove = params['remove']
b_expanded_paths = []
b_expanded_exclude_paths = []
fmt = params['format']
b_fmt = to_bytes(fmt, errors='surrogate_or_strict')
b_fmt = to_b(fmt)
force_archive = params['force_archive']
globby = False
changed = False
state = 'absent'
# Simple or archive file compression (inapplicable with 'zip' since it's always an archive)
archive = False
b_successes = []
# Fail early
@@ -227,35 +251,7 @@ def main():
exception=LZMA_IMP_ERR)
module.fail_json(msg="lzma or backports.lzma is required when using xz format.")
for path in paths:
b_path = to_bytes(path, errors='surrogate_or_strict')
# Expand any glob characters. If found, add the expanded glob to the
# list of expanded_paths, which might be empty.
if (b'*' in b_path or b'?' in b_path):
b_expanded_paths.extend(glob.glob(b_path))
globby = True
# If there are no glob characters the path is added to the expanded paths
# whether the path exists or not
else:
b_expanded_paths.append(b_path)
# Only attempt to expand the exclude paths if it exists
if exclude_paths:
for exclude_path in exclude_paths:
b_exclude_path = to_bytes(exclude_path, errors='surrogate_or_strict')
# Expand any glob characters. If found, add the expanded glob to the
# list of expanded_paths, which might be empty.
if (b'*' in b_exclude_path or b'?' in b_exclude_path):
b_expanded_exclude_paths.extend(glob.glob(b_exclude_path))
# If there are no glob character the exclude path is added to the expanded
# exclude paths whether the path exists or not.
else:
b_expanded_exclude_paths.append(b_exclude_path)
b_expanded_paths, globby = expand_paths(paths)
if not b_expanded_paths:
return module.fail_json(
path=', '.join(paths),
@@ -263,6 +259,9 @@ def main():
msg='Error, no source paths were found'
)
# Only attempt to expand the exclude paths if it exists
b_expanded_exclude_paths = expand_paths(exclude_paths)[0] if exclude_paths else []
# Only try to determine if we are working with an archive or not if we haven't set archive to true
if not force_archive:
# If we actually matched multiple files or TRIED to, then
@@ -280,7 +279,7 @@ def main():
if archive and not b_dest:
module.fail_json(dest=dest, path=', '.join(paths), msg='Error, must specify "dest" when archiving multiple files or trees')
b_sep = to_bytes(os.sep, errors='surrogate_or_strict')
b_sep = to_b(os.sep)
b_archive_paths = []
b_missing = []
@@ -321,7 +320,7 @@ def main():
# No source files were found but the named archive exists: are we 'compress' or 'archive' now?
if len(b_missing) == len(b_expanded_paths) and b_dest and os.path.exists(b_dest):
# Just check the filename to know if it's an archive or simple compressed file
if re.search(br'(\.tar|\.tar\.gz|\.tgz|\.tbz2|\.tar\.bz2|\.tar\.xz|\.zip)$', os.path.basename(b_dest), re.IGNORECASE):
if re.search(br'\.(tar|tar\.(gz|bz2|xz)|tgz|tbz2|zip)$', os.path.basename(b_dest), re.IGNORECASE):
state = 'archive'
else:
state = 'compress'
@@ -352,7 +351,7 @@ def main():
# Slightly more difficult (and less efficient!) compression using zipfile module
if fmt == 'zip':
arcfile = zipfile.ZipFile(
to_native(b_dest, errors='surrogate_or_strict', encoding='ascii'),
to_na(b_dest),
'w',
zipfile.ZIP_DEFLATED,
True
@@ -360,7 +359,7 @@ def main():
# Easier compression using tarfile module
elif fmt == 'gz' or fmt == 'bz2':
arcfile = tarfile.open(to_native(b_dest, errors='surrogate_or_strict', encoding='ascii'), 'w|' + fmt)
arcfile = tarfile.open(to_na(b_dest), 'w|' + fmt)
# python3 tarfile module allows xz format but for python2 we have to create the tarfile
# in memory and then compress it with lzma.
@@ -370,7 +369,7 @@ def main():
# Or plain tar archiving
elif fmt == 'tar':
arcfile = tarfile.open(to_native(b_dest, errors='surrogate_or_strict', encoding='ascii'), 'w')
arcfile = tarfile.open(to_na(b_dest), 'w')
b_match_root = re.compile(br'^%s' % re.escape(b_arcroot))
for b_path in b_archive_paths:
@@ -382,7 +381,7 @@ def main():
for b_dirname in b_dirnames:
b_fullpath = b_dirpath + b_dirname
n_fullpath = to_native(b_fullpath, errors='surrogate_or_strict', encoding='ascii')
n_fullpath = to_na(b_fullpath)
n_arcname = to_native(b_match_root.sub(b'', b_fullpath), errors='surrogate_or_strict')
try:
@@ -396,8 +395,8 @@ def main():
for b_filename in b_filenames:
b_fullpath = b_dirpath + b_filename
n_fullpath = to_native(b_fullpath, errors='surrogate_or_strict', encoding='ascii')
n_arcname = to_native(b_match_root.sub(b'', b_fullpath), errors='surrogate_or_strict')
n_fullpath = to_na(b_fullpath)
n_arcname = to_n(b_match_root.sub(b'', b_fullpath))
try:
if fmt == 'zip':
@@ -409,8 +408,8 @@ def main():
except Exception as e:
errors.append('Adding %s: %s' % (to_native(b_path), to_native(e)))
else:
path = to_native(b_path, errors='surrogate_or_strict', encoding='ascii')
arcname = to_native(b_match_root.sub(b'', b_path), errors='surrogate_or_strict')
path = to_na(b_path)
arcname = to_n(b_match_root.sub(b'', b_path))
if fmt == 'zip':
arcfile.write(path, arcname)
else:
@@ -444,14 +443,14 @@ def main():
shutil.rmtree(b_path)
elif not check_mode:
os.remove(b_path)
except OSError as e:
except OSError:
errors.append(to_native(b_path))
for b_path in b_expanded_paths:
try:
if os.path.isdir(b_path):
shutil.rmtree(b_path)
except OSError as e:
except OSError:
errors.append(to_native(b_path))
if errors:
@@ -490,25 +489,25 @@ def main():
try:
if fmt == 'zip':
arcfile = zipfile.ZipFile(
to_native(b_dest, errors='surrogate_or_strict', encoding='ascii'),
to_na(b_dest),
'w',
zipfile.ZIP_DEFLATED,
True
)
arcfile.write(
to_native(b_path, errors='surrogate_or_strict', encoding='ascii'),
to_native(b_path[len(b_arcroot):], errors='surrogate_or_strict')
to_na(b_path),
to_n(b_path[len(b_arcroot):])
)
arcfile.close()
state = 'archive' # because all zip files are archives
elif fmt == 'tar':
arcfile = tarfile.open(to_native(b_dest, errors='surrogate_or_strict', encoding='ascii'), 'w')
arcfile.add(to_native(b_path, errors='surrogate_or_strict', encoding='ascii'))
arcfile = tarfile.open(to_na(b_dest), 'w')
arcfile.add(to_na(b_path))
arcfile.close()
else:
f_in = open(b_path, 'rb')
n_dest = to_native(b_dest, errors='surrogate_or_strict', encoding='ascii')
n_dest = to_na(b_dest)
if fmt == 'gz':
f_out = gzip.open(n_dest, 'wb')
elif fmt == 'bz2':
@@ -564,14 +563,14 @@ def main():
changed = module.set_fs_attributes_if_different(file_args, changed)
module.exit_json(
archived=[to_native(p, errors='surrogate_or_strict') for p in b_successes],
archived=[to_n(p) for p in b_successes],
dest=dest,
changed=changed,
state=state,
arcroot=to_native(b_arcroot, errors='surrogate_or_strict'),
missing=[to_native(p, errors='surrogate_or_strict') for p in b_missing],
expanded_paths=[to_native(p, errors='surrogate_or_strict') for p in b_expanded_paths],
expanded_exclude_paths=[to_native(p, errors='surrogate_or_strict') for p in b_expanded_exclude_paths],
arcroot=to_n(b_arcroot),
missing=[to_n(p) for p in b_missing],
expanded_paths=[to_n(p) for p in b_expanded_paths],
expanded_exclude_paths=[to_n(p) for p in b_expanded_exclude_paths],
)

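Besides extracting the glob logic into `expand_paths()` and shortening the repeated `to_bytes`/`to_native` calls, the archive diff rewrites the destination-extension regex into an equivalent, more compact alternation. A standalone check of how that pattern classifies destination names (simplified from the module's state logic):

```python
import re

# Destination-name check from the archive module: recognised archive
# extensions yield state 'archive', anything else is plain 'compress'.
ARCHIVE_EXT = re.compile(br'\.(tar|tar\.(gz|bz2|xz)|tgz|tbz2|zip)$', re.IGNORECASE)

def dest_state(b_dest):
    return 'archive' if ARCHIVE_EXT.search(b_dest) else 'compress'

print(dest_state(b'backup.tar.gz'))  # archive
print(dest_state(b'notes.txt.gz'))   # compress
```

Both the old and the new pattern accept the same set of extensions; the rewrite only factors out the leading `\.`.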

@@ -137,26 +137,12 @@ list:
gid: 500
'''
import csv
from io import BytesIO, StringIO
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six import PY3
from ansible.module_utils._text import to_native
# Add Unix dialect from Python 3
class unix_dialect(csv.Dialect):
"""Describe the usual properties of Unix-generated CSV files."""
delimiter = ','
quotechar = '"'
doublequote = True
skipinitialspace = False
lineterminator = '\n'
quoting = csv.QUOTE_ALL
csv.register_dialect("unix", unix_dialect)
from ansible_collections.community.general.plugins.module_utils.csv import (initialize_dialect, read_csv, CSVError,
DialectNotAvailableError,
CustomDialectFailureError)
def main():
@@ -164,7 +150,7 @@ def main():
argument_spec=dict(
path=dict(type='path', required=True, aliases=['filename']),
dialect=dict(type='str', default='excel'),
key=dict(type='str'),
key=dict(type='str', no_log=False),
fieldnames=dict(type='list', elements='str'),
unique=dict(type='bool', default=True),
delimiter=dict(type='str'),
@@ -180,38 +166,24 @@ def main():
fieldnames = module.params['fieldnames']
unique = module.params['unique']
if dialect not in csv.list_dialects():
module.fail_json(msg="Dialect '%s' is not supported by your version of python." % dialect)
dialect_params = {
"delimiter": module.params['delimiter'],
"skipinitialspace": module.params['skipinitialspace'],
"strict": module.params['strict'],
}
dialect_options = dict(
delimiter=module.params['delimiter'],
skipinitialspace=module.params['skipinitialspace'],
strict=module.params['strict'],
)
# Create a dictionary from only set options
dialect_params = dict((k, v) for k, v in dialect_options.items() if v is not None)
if dialect_params:
try:
csv.register_dialect('custom', dialect, **dialect_params)
except TypeError as e:
module.fail_json(msg="Unable to create custom dialect: %s" % to_text(e))
dialect = 'custom'
try:
dialect = initialize_dialect(dialect, **dialect_params)
except (CustomDialectFailureError, DialectNotAvailableError) as e:
module.fail_json(msg=to_native(e))
try:
with open(path, 'rb') as f:
data = f.read()
except (IOError, OSError) as e:
module.fail_json(msg="Unable to open file: %s" % to_text(e))
module.fail_json(msg="Unable to open file: %s" % to_native(e))
if PY3:
# Manually decode on Python3 so that we can use the surrogateescape error handler
data = to_text(data, errors='surrogate_or_strict')
fake_fh = StringIO(data)
else:
fake_fh = BytesIO(data)
reader = csv.DictReader(fake_fh, fieldnames=fieldnames, dialect=dialect)
reader = read_csv(data, dialect, fieldnames)
if key and key not in reader.fieldnames:
module.fail_json(msg="Key '%s' was not found in the CSV header fields: %s" % (key, ', '.join(reader.fieldnames)))
@@ -223,16 +195,16 @@ def main():
try:
for row in reader:
data_list.append(row)
except csv.Error as e:
module.fail_json(msg="Unable to process file: %s" % to_text(e))
except CSVError as e:
module.fail_json(msg="Unable to process file: %s" % to_native(e))
else:
try:
for row in reader:
if unique and row[key] in data_dict:
module.fail_json(msg="Key '%s' is not unique for value '%s'" % (key, row[key]))
data_dict[row[key]] = row
except csv.Error as e:
module.fail_json(msg="Unable to process file: %s" % to_text(e))
except CSVError as e:
module.fail_json(msg="Unable to process file: %s" % to_native(e))
module.exit_json(dict=data_dict, list=data_list)

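The read_csv refactor moves the dialect plumbing into a shared `module_utils/csv.py` helper. Judging from the removed in-module code, the core of `initialize_dialect()` is: keep only the dialect options the user actually set, and register a 'custom' dialect when at least one remains. A sketch under that assumption (the helper's real signature and error handling may differ):

```python
import csv
from io import StringIO

def initialize_dialect(dialect, **kwargs):
    """Assumed core of the shared helper: drop unset (None) options and
    register a 'custom' dialect only when something was overridden."""
    dialect_params = dict((k, v) for k, v in kwargs.items() if v is not None)
    if dialect_params:
        csv.register_dialect('custom', dialect, **dialect_params)
        dialect = 'custom'
    return dialect

dialect = initialize_dialect('excel', delimiter=';', skipinitialspace=None, strict=None)
rows = list(csv.DictReader(StringIO('name;uid\nsmith;1000\n'), dialect=dialect))
print(rows)  # [{'name': 'smith', 'uid': '1000'}]
```

The real helper additionally wraps failures in `CustomDialectFailureError` and `DialectNotAvailableError`, which the module maps onto `fail_json`.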

@@ -172,7 +172,7 @@ def main():
argument_spec=dict(
path=dict(type='path', required=True, aliases=['name']),
namespace=dict(type='str', default='user'),
key=dict(type='str'),
key=dict(type='str', no_log=False),
value=dict(type='str'),
state=dict(type='str', default='read', choices=['absent', 'all', 'keys', 'present', 'read']),
follow=dict(type='bool', default=True),


@@ -0,0 +1 @@
net_tools/gandi_livedns.py


@@ -68,6 +68,12 @@ options:
- Option C(hostcategory) must be omitted to assign host groups.
type: list
elements: str
runasextusers:
description:
- List of external RunAs users.
type: list
elements: str
version_added: 2.3.0
runasusercategory:
description:
- RunAs User category the rule applies to.
@@ -143,13 +149,15 @@ EXAMPLES = r'''
ipa_user: admin
ipa_pass: topsecret
- name: Ensure user group operations can run any commands that is part of operations-cmdgroup on any host.
- name: Ensure user group operations can run any command that is part of operations-cmdgroup on any host as user root.
community.general.ipa_sudorule:
name: sudo_operations_all
description: Allow operators to run any commands that is part of operations-cmdgroup on any host.
description: Allow operators to run any command that is part of operations-cmdgroup on any host as user root.
cmdgroup:
- operations-cmdgroup
hostcategory: all
runasextusers:
- root
sudoopt:
- '!authenticate'
usergroup:
@@ -183,6 +191,12 @@ class SudoRuleIPAClient(IPAClient):
def sudorule_add(self, name, item):
return self._post_json(method='sudorule_add', name=name, item=item)
def sudorule_add_runasuser(self, name, item):
return self._post_json(method='sudorule_add_runasuser', name=name, item={'user': item})
def sudorule_remove_runasuser(self, name, item):
return self._post_json(method='sudorule_remove_runasuser', name=name, item={'user': item})
def sudorule_mod(self, name, item):
return self._post_json(method='sudorule_mod', name=name, item=item)
@@ -287,6 +301,7 @@ def ensure(module, client):
hostgroup = module.params['hostgroup']
runasusercategory = module.params['runasusercategory']
runasgroupcategory = module.params['runasgroupcategory']
runasextusers = module.params['runasextusers']
if state in ['present', 'enabled']:
ipaenabledflag = 'TRUE'
@@ -371,6 +386,21 @@ def ensure(module, client):
for item in diff:
client.sudorule_add_option_ipasudoopt(name, item)
if runasextusers is not None:
ipa_sudorule_run_as_user = ipa_sudorule.get('ipasudorunasextuser', [])
diff = list(set(ipa_sudorule_run_as_user) - set(runasextusers))
if len(diff) > 0:
changed = True
if not module.check_mode:
for item in diff:
client.sudorule_remove_runasuser(name=name, item=item)
diff = list(set(runasextusers) - set(ipa_sudorule_run_as_user))
if len(diff) > 0:
changed = True
if not module.check_mode:
for item in diff:
client.sudorule_add_runasuser(name=name, item=item)
if user is not None:
changed = category_changed(module, client, 'usercategory', ipa_sudorule) or changed
changed = client.modify_if_diff(name, ipa_sudorule.get('memberuser_user', []), user,
@@ -406,8 +436,8 @@ def main():
state=dict(type='str', default='present', choices=['present', 'absent', 'enabled', 'disabled']),
user=dict(type='list', elements='str'),
usercategory=dict(type='str', choices=['all']),
usergroup=dict(type='list', elements='str'))
usergroup=dict(type='list', elements='str'),
runasextusers=dict(type='list', elements='str'))
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=[['cmdcategory', 'cmd'],
['cmdcategory', 'cmdgroup'],

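The `runasextusers` handling above uses the collection's usual reconcile pattern: one set difference finds the server-side entries to remove, the mirror difference finds the requested entries to add. Isolated as a helper (sorting added here only to make the result deterministic):

```python
def reconcile(current, desired):
    """Return (to_remove, to_add) so that applying both makes the
    server-side list equal to the desired list, mirroring the
    runasextusers logic in the diff."""
    to_remove = sorted(set(current) - set(desired))
    to_add = sorted(set(desired) - set(current))
    return to_remove, to_add

print(reconcile(['root', 'admin'], ['root', 'deploy']))
# (['admin'], ['deploy'])
```

In the module, each entry of the first list triggers `sudorule_remove_runasuser` and each entry of the second `sudorule_add_runasuser`, both skipped in check mode.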

@@ -120,7 +120,7 @@ def main():
host=dict(),
tags=dict(type='list', elements='str'),
alert_type=dict(default='info', choices=['error', 'warning', 'info', 'success']),
aggregation_key=dict(),
aggregation_key=dict(no_log=False),
validate_certs=dict(default=True, type='bool'),
)
)


@@ -205,7 +205,7 @@ def main():
client=dict(required=False, default=None),
client_url=dict(required=False, default=None),
desc=dict(required=False, default='Created via Ansible'),
incident_key=dict(required=False, default=None)
incident_key=dict(required=False, default=None, no_log=False)
),
supports_check_mode=True
)


@@ -800,7 +800,7 @@ def main():
algorithm=dict(type='int'),
cert_usage=dict(type='int', choices=[0, 1, 2, 3]),
hash_type=dict(type='int', choices=[1, 2]),
key_tag=dict(type='int'),
key_tag=dict(type='int', no_log=False),
port=dict(type='int'),
priority=dict(type='int', default=1),
proto=dict(type='str'),


@@ -0,0 +1,187 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: gandi_livedns
author:
- Gregory Thiemonge (@gthiemonge)
version_added: "2.3.0"
short_description: Manage Gandi LiveDNS records
description:
- "Manages DNS records via the Gandi LiveDNS API, see the docs: U(https://doc.livedns.gandi.net/)."
options:
api_key:
description:
- Account API token.
type: str
required: true
record:
description:
- Record to add.
type: str
required: true
state:
description:
- Whether the record(s) should exist or not.
type: str
choices: [ absent, present ]
default: present
ttl:
description:
- The TTL to give the new record.
- Required when I(state=present).
type: int
type:
description:
- The type of DNS record to create.
type: str
required: true
values:
description:
- The record values.
- Required when I(state=present).
type: list
elements: str
domain:
description:
- The name of the Domain to work with (for example, "example.com").
required: true
type: str
notes:
- Supports C(check_mode).
'''
EXAMPLES = r'''
- name: Create a test A record to point to 127.0.0.1 in the my.com domain
community.general.gandi_livedns:
domain: my.com
record: test
type: A
values:
- 127.0.0.1
ttl: 7200
api_key: dummyapitoken
register: record
- name: Create a mail CNAME record to www.my.com domain
community.general.gandi_livedns:
domain: my.com
type: CNAME
record: mail
values:
- www
ttl: 7200
api_key: dummyapitoken
state: present
- name: Change its TTL
community.general.gandi_livedns:
domain: my.com
type: CNAME
record: mail
values:
- www
ttl: 10800
api_key: dummyapitoken
state: present
- name: Delete the record
community.general.gandi_livedns:
domain: my.com
type: CNAME
record: mail
api_key: dummyapitoken
state: absent
'''
RETURN = r'''
record:
description: A dictionary containing the record data.
returned: success, except on record deletion
type: dict
contains:
values:
description: The record content (details depend on record type).
returned: success
type: list
elements: str
sample:
- 192.0.2.91
- 192.0.2.92
record:
description: The record name.
returned: success
type: str
sample: www
ttl:
description: The time-to-live for the record.
returned: success
type: int
sample: 300
type:
description: The record type.
returned: success
type: str
sample: A
domain:
description: The domain associated with the record.
returned: success
type: str
sample: my.com
'''
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.gandi_livedns_api import GandiLiveDNSAPI
def main():
module = AnsibleModule(
argument_spec=dict(
api_key=dict(type='str', required=True, no_log=True),
record=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
ttl=dict(type='int'),
type=dict(type='str', required=True),
values=dict(type='list', elements='str'),
domain=dict(type='str', required=True),
),
supports_check_mode=True,
required_if=[
('state', 'present', ['values', 'ttl']),
],
)
gandi_api = GandiLiveDNSAPI(module)
if module.params['state'] == 'present':
ret, changed = gandi_api.ensure_dns_record(module.params['record'],
module.params['type'],
module.params['ttl'],
module.params['values'],
module.params['domain'])
else:
ret, changed = gandi_api.delete_dns_record(module.params['record'],
module.params['type'],
module.params['values'],
module.params['domain'])
result = dict(
changed=changed,
)
if ret:
result['record'] = gandi_api.build_result(ret,
module.params['domain'])
module.exit_json(**result)
if __name__ == '__main__':
main()

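gandi_livedns relies on `required_if=[('state', 'present', ['values', 'ttl'])]` so that AnsibleModule itself fails a `state=present` call that omits `values` or `ttl`. A tiny stand-in with the same semantics (simplified; ansible-core's validator also supports an optional fourth tuple element):

```python
def check_required_if(required_if, params):
    """Names of parameters missing under a required_if rule: when
    params[key] == value, every name in requirements must be non-None."""
    missing = []
    for key, value, requirements in required_if:
        if params.get(key) == value:
            missing.extend(r for r in requirements if params.get(r) is None)
    return missing

rule = [('state', 'present', ['values', 'ttl'])]
print(check_required_if(rule, {'state': 'present', 'values': ['127.0.0.1']}))  # ['ttl']
print(check_required_if(rule, {'state': 'absent'}))                            # []
```

This is why the delete path in `main()` passes no `ttl`: the rule only fires for `state=present`.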

@@ -0,0 +1,343 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Florian Dambrine <android.florian@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: pritunl_user
author: "Florian Dambrine (@Lowess)"
version_added: 2.3.0
short_description: Manage Pritunl Users using the Pritunl API
description:
- A module to manage Pritunl users using the Pritunl API.
extends_documentation_fragment:
- community.general.pritunl
options:
organization:
type: str
required: true
aliases:
- org
description:
- The name of the organization the user is part of.
state:
type: str
default: 'present'
choices:
- present
- absent
description:
- If C(present), the module adds user I(user_name) to
the Pritunl I(organization). If C(absent), removes the user
I(user_name) from the Pritunl I(organization).
user_name:
type: str
required: true
default: null
description:
- Name of the user to create or delete from Pritunl.
user_email:
type: str
required: false
default: null
description:
- Email address associated with the user I(user_name).
user_type:
type: str
required: false
default: client
choices:
- client
- server
description:
- Type of the user I(user_name).
user_groups:
type: list
elements: str
required: false
default: null
description:
- List of groups associated with the user I(user_name).
user_disabled:
type: bool
required: false
default: null
description:
- Enable/Disable the user I(user_name).
user_gravatar:
type: bool
required: false
default: null
description:
- Enable/Disable Gravatar usage for the user I(user_name).
"""
EXAMPLES = """
- name: Create the user Foo with email address foo@bar.com in MyOrg
community.general.pritunl_user:
state: present
organization: MyOrg
user_name: Foo
user_email: foo@bar.com
- name: Disable the user Foo but keep it in Pritunl
community.general.pritunl_user:
state: present
organization: MyOrg
user_name: Foo
user_email: foo@bar.com
user_disabled: yes
- name: Make sure the user Foo is not part of MyOrg anymore
community.general.pritunl_user:
state: absent
organization: MyOrg
user_name: Foo
"""
RETURN = """
response:
description: JSON representation of Pritunl Users.
returned: success
type: dict
sample:
{
"audit": false,
"auth_type": "google",
"bypass_secondary": false,
"client_to_client": false,
"disabled": false,
"dns_mapping": null,
"dns_servers": null,
"dns_suffix": null,
"email": "foo@bar.com",
"gravatar": true,
"groups": [
"foo", "bar"
],
"id": "5d070dafe63q3b2e6s472c3b",
"name": "foo@acme.com",
"network_links": [],
"organization": "58070daee6sf342e6e4s2c36",
"organization_name": "Acme",
"otp_auth": true,
"otp_secret": "35H5EJA3XB2$4CWG",
"pin": false,
"port_forwarding": [],
"servers": [],
}
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.common.dict_transformations import dict_merge
from ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api import (
PritunlException,
delete_pritunl_user,
get_pritunl_settings,
list_pritunl_organizations,
list_pritunl_users,
post_pritunl_user,
pritunl_argument_spec,
)
def add_or_update_pritunl_user(module):
result = {}
org_name = module.params.get("organization")
user_name = module.params.get("user_name")
user_params = {
"name": user_name,
"email": module.params.get("user_email"),
"groups": module.params.get("user_groups"),
"disabled": module.params.get("user_disabled"),
"gravatar": module.params.get("user_gravatar"),
"type": module.params.get("user_type"),
}
org_obj_list = list_pritunl_organizations(
**dict_merge(
get_pritunl_settings(module),
{"filters": {"name": org_name}},
)
)
if len(org_obj_list) == 0:
module.fail_json(
msg="Can not add user to organization '%s' which does not exist" % org_name
)
org_id = org_obj_list[0]["id"]
# Grab existing users from this org
users = list_pritunl_users(
**dict_merge(
get_pritunl_settings(module),
{
"organization_id": org_id,
"filters": {"name": user_name},
},
)
)
# Check if the pritunl user already exists
if len(users) > 0:
# Compare remote user params with local user_params and trigger update if needed
user_params_changed = False
for key in user_params.keys():
# When a param is not specified grab existing ones to prevent from changing it with the PUT request
if user_params[key] is None:
user_params[key] = users[0][key]
# 'groups' is a list comparison
if key == "groups":
if set(users[0][key]) != set(user_params[key]):
user_params_changed = True
# otherwise it is either a boolean or a string
else:
if users[0][key] != user_params[key]:
user_params_changed = True
# Trigger a PUT on the API to update the current user if settings have changed
if user_params_changed:
response = post_pritunl_user(
**dict_merge(
get_pritunl_settings(module),
{
"organization_id": org_id,
"user_id": users[0]["id"],
"user_data": user_params,
},
)
)
result["changed"] = True
result["response"] = response
else:
result["changed"] = False
result["response"] = users
else:
response = post_pritunl_user(
**dict_merge(
get_pritunl_settings(module),
{
"organization_id": org_id,
"user_data": user_params,
},
)
)
result["changed"] = True
result["response"] = response
module.exit_json(**result)
def remove_pritunl_user(module):
result = {}
org_name = module.params.get("organization")
user_name = module.params.get("user_name")
org_obj_list = []
org_obj_list = list_pritunl_organizations(
**dict_merge(
get_pritunl_settings(module),
{
"filters": {"name": org_name},
},
)
)
if len(org_obj_list) == 0:
module.fail_json(
msg="Can not remove user '%s' from a non existing organization '%s'"
% (user_name, org_name)
)
org_id = org_obj_list[0]["id"]
# Grab existing users from this org
users = list_pritunl_users(
**dict_merge(
get_pritunl_settings(module),
{
"organization_id": org_id,
"filters": {"name": user_name},
},
)
)
# Check if the pritunl user exists, if not, do nothing
if len(users) == 0:
result["changed"] = False
result["response"] = {}
# Otherwise remove the org from Pritunl
else:
response = delete_pritunl_user(
**dict_merge(
get_pritunl_settings(module),
{
"organization_id": org_id,
"user_id": users[0]["id"],
},
)
)
result["changed"] = True
result["response"] = response
module.exit_json(**result)
def main():
argument_spec = pritunl_argument_spec()
argument_spec.update(
dict(
organization=dict(required=True, type="str", aliases=["org"]),
state=dict(
required=False, choices=["present", "absent"], default="present"
),
user_name=dict(required=True, type="str"),
user_type=dict(
required=False, choices=["client", "server"], default="client"
),
user_email=dict(required=False, type="str", default=None),
user_groups=dict(required=False, type="list", elements="str", default=None),
user_disabled=dict(required=False, type="bool", default=None),
user_gravatar=dict(required=False, type="bool", default=None),
)
)
module = AnsibleModule(argument_spec=argument_spec)
state = module.params.get("state")
try:
if state == "present":
add_or_update_pritunl_user(module)
elif state == "absent":
remove_pritunl_user(module)
except PritunlException as e:
module.fail_json(msg=to_native(e))
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,171 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Florian Dambrine <android.florian@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: pritunl_user_info
author: "Florian Dambrine (@Lowess)"
version_added: 2.3.0
short_description: List Pritunl Users using the Pritunl API
description:
- A module to list Pritunl users using the Pritunl API.
extends_documentation_fragment:
- community.general.pritunl
options:
organization:
type: str
required: true
aliases:
- org
description:
- The name of the organization the user is part of.
user_name:
type: str
required: false
description:
- Name of the user to filter on Pritunl.
user_type:
type: str
required: false
default: client
choices:
- client
- server
description:
- Type of the user I(user_name).
"""
EXAMPLES = """
- name: List all existing users part of the organization MyOrg
community.general.pritunl_user_info:
organization: MyOrg
- name: Search for the user named Florian part of the organization MyOrg
community.general.pritunl_user_info:
organization: MyOrg
user_name: Florian
"""
RETURN = """
users:
description: List of Pritunl users.
returned: success
type: list
elements: dict
sample:
[
{
"audit": false,
"auth_type": "google",
"bypass_secondary": false,
"client_to_client": false,
"disabled": false,
"dns_mapping": null,
"dns_servers": null,
"dns_suffix": null,
"email": "foo@bar.com",
"gravatar": true,
"groups": [
"foo", "bar"
],
"id": "5d070dafe63q3b2e6s472c3b",
"name": "foo@acme.com",
"network_links": [],
"organization": "58070daee6sf342e6e4s2c36",
"organization_name": "Acme",
"otp_auth": true,
"otp_secret": "35H5EJA3XB2$4CWG",
"pin": false,
"port_forwarding": [],
"servers": [],
}
]
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.common.dict_transformations import dict_merge
from ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api import (
PritunlException,
get_pritunl_settings,
list_pritunl_organizations,
list_pritunl_users,
pritunl_argument_spec,
)
def get_pritunl_user(module):
user_name = module.params.get("user_name")
user_type = module.params.get("user_type")
org_name = module.params.get("organization")
org_obj_list = []
org_obj_list = list_pritunl_organizations(
**dict_merge(get_pritunl_settings(module), {"filters": {"name": org_name}})
)
if len(org_obj_list) == 0:
module.fail_json(
msg="Cannot list users from organization '%s', which does not exist"
% org_name
)
org_id = org_obj_list[0]["id"]
users = list_pritunl_users(
**dict_merge(
get_pritunl_settings(module),
{
"organization_id": org_id,
"filters": (
{"type": user_type}
if user_name is None
else {"name": user_name, "type": user_type}
),
},
)
)
result = {}
result["changed"] = False
result["users"] = users
module.exit_json(**result)
def main():
argument_spec = pritunl_argument_spec()
argument_spec.update(
dict(
organization=dict(required=True, type="str", aliases=["org"]),
user_name=dict(required=False, type="str", default=None),
user_type=dict(
required=False,
choices=["client", "server"],
default="client",
),
)
)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
try:
get_pritunl_user(module)
except PritunlException as e:
module.fail_json(msg=to_native(e))
if __name__ == "__main__":
main()
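Both pritunl modules above build their API calls by layering per-call arguments over shared connection settings with `dict_merge`. A minimal re-implementation of that recursive merge semantics (a sketch only — the real helper lives in `ansible.module_utils.common.dict_transformations`):

```python
def dict_merge(a, b):
    """Recursively merge dict b into dict a, returning a new dict.

    Values in b win; nested dicts are merged instead of replaced.
    """
    result = dict(a)
    for key, value in b.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = dict_merge(result[key], value)
        else:
            result[key] = value
    return result


# Shapes mirror the calls in the modules above; the values are illustrative.
settings = {"api_token": "token", "base_url": "https://vpn.example.com"}
call = {"organization_id": "58070d...", "filters": {"name": "Florian"}}
merged = dict_merge(settings, call)
assert merged["filters"] == {"name": "Florian"}
assert merged["base_url"] == "https://vpn.example.com"
```

Because the merge is non-destructive, the shared settings dict can be reused across `list_pritunl_organizations`, `list_pritunl_users`, and `delete_pritunl_user` calls without mutation.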

View File

@@ -67,6 +67,16 @@ options:
- Encryption key.
- Required if I(level) is C(authPriv).
type: str
timeout:
description:
- Response timeout in seconds.
type: int
version_added: 2.3.0
retries:
description:
- Maximum number of request retries. C(0) retries means only a single request is sent.
type: int
version_added: 2.3.0
'''
EXAMPLES = r'''
@@ -271,6 +281,8 @@ def main():
privacy=dict(type='str', choices=['aes', 'des']),
authkey=dict(type='str', no_log=True),
privkey=dict(type='str', no_log=True),
timeout=dict(type='int'),
retries=dict(type='int'),
),
required_together=(
['username', 'level', 'integrity', 'authkey'],
@@ -285,6 +297,7 @@ def main():
module.fail_json(msg=missing_required_lib('pysnmp'), exception=PYSNMP_IMP_ERR)
cmdGen = cmdgen.CommandGenerator()
transport_opts = dict((k, m_args[k]) for k in ('timeout', 'retries') if m_args[k] is not None)
# Verify that we receive a community when using snmp v2
if m_args['version'] in ("v2", "v2c"):
@@ -333,7 +346,7 @@ def main():
errorIndication, errorStatus, errorIndex, varBinds = cmdGen.getCmd(
snmp_auth,
cmdgen.UdpTransportTarget((m_args['host'], 161)),
cmdgen.UdpTransportTarget((m_args['host'], 161), **transport_opts),
cmdgen.MibVariable(p.sysDescr,),
cmdgen.MibVariable(p.sysObjectId,),
cmdgen.MibVariable(p.sysUpTime,),
@@ -364,7 +377,7 @@ def main():
errorIndication, errorStatus, errorIndex, varTable = cmdGen.nextCmd(
snmp_auth,
cmdgen.UdpTransportTarget((m_args['host'], 161)),
cmdgen.UdpTransportTarget((m_args['host'], 161), **transport_opts),
cmdgen.MibVariable(p.ifIndex,),
cmdgen.MibVariable(p.ifDescr,),
cmdgen.MibVariable(p.ifMtu,),
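The `transport_opts` line in the snmp_facts hunk above uses a small but useful pattern: forward only the options the user actually set, so the library's own defaults (here pysnmp's timeout/retries) still apply. A standalone sketch of that filtering:

```python
def optional_kwargs(m_args, names):
    """Pick only the named parameters that were actually provided (not None)."""
    return dict((k, m_args[k]) for k in names if m_args.get(k) is not None)


# With retries unset, only timeout is forwarded, e.g. to
# cmdgen.UdpTransportTarget((host, 161), **opts) as in the diff.
m_args = {"host": "198.51.100.1", "timeout": 5, "retries": None}
opts = optional_kwargs(m_args, ("timeout", "retries"))
assert opts == {"timeout": 5}
```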

View File

@@ -0,0 +1 @@
./net_tools/pritunl/pritunl_user.py

View File

@@ -0,0 +1 @@
net_tools/pritunl/pritunl_user_info.py

View File

@@ -569,7 +569,7 @@ def endpoint_list_spec():
provider=dict(type='dict', options=endpoint_argument_spec()),
metrics=dict(type='dict', options=endpoint_argument_spec()),
alerts=dict(type='dict', options=endpoint_argument_spec()),
ssh_keypair=dict(type='dict', options=endpoint_argument_spec()),
ssh_keypair=dict(type='dict', options=endpoint_argument_spec(), no_log=False),
)

View File

@@ -33,15 +33,18 @@ options:
- Base URI of OOB controller
type: str
username:
required: true
description:
- User for authentication with OOB controller
type: str
password:
required: true
description:
- Password for authentication with OOB controller
type: str
auth_token:
description:
- Security token for authentication with OOB controller
type: str
version_added: 2.3.0
timeout:
description:
- Timeout in seconds for URL requests to OOB controller
@@ -137,11 +140,21 @@ def main():
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
timeout=dict(type='int', default=10),
resource_id=dict()
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
@@ -150,7 +163,8 @@ def main():
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']

View File

@@ -36,15 +36,18 @@ options:
- Base URI of iDRAC
type: str
username:
required: true
description:
- User for authentication with iDRAC
type: str
password:
required: true
description:
- Password for authentication with iDRAC
type: str
auth_token:
description:
- Security token for authentication with OOB controller
type: str
version_added: 2.3.0
manager_attribute_name:
required: false
description:
@@ -248,14 +251,24 @@ def main():
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
manager_attribute_name=dict(default=None),
manager_attribute_value=dict(default=None),
manager_attributes=dict(type='dict', default={}),
timeout=dict(type='int', default=10),
resource_id=dict()
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
@@ -264,7 +277,8 @@ def main():
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']

View File

@@ -37,15 +37,18 @@ options:
- Base URI of iDRAC controller
type: str
username:
required: true
description:
- User for authentication with iDRAC controller
type: str
password:
required: true
description:
- Password for authentication with iDRAC controller
type: str
auth_token:
description:
- Security token for authentication with OOB controller
type: str
version_added: 2.3.0
timeout:
description:
- Timeout in seconds for URL requests to OOB controller
@@ -174,10 +177,20 @@ def main():
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
timeout=dict(type='int', default=10)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
is_old_facts = module._name in ('idrac_redfish_facts', 'community.general.idrac_redfish_facts')
@@ -191,7 +204,8 @@ def main():
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']

View File

@@ -35,15 +35,23 @@ options:
- Base URI of OOB controller
type: str
username:
required: true
description:
- Username for authentication with OOB controller
type: str
password:
required: true
description:
- Password for authentication with OOB controller
type: str
auth_token:
description:
- Security token for authentication with OOB controller
type: str
version_added: 2.3.0
session_uri:
description:
- URI of the session resource
type: str
version_added: 2.3.0
id:
required: false
aliases: [ account_id ]
@@ -284,15 +292,6 @@ EXAMPLES = '''
category: Systems
command: DisableBootOverride
- name: Set chassis indicator LED to blink
community.general.redfish_command:
category: Chassis
command: IndicatorLedBlink
resource_id: 1U
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Add user
community.general.redfish_command:
category: Accounts
@@ -414,6 +413,31 @@ EXAMPLES = '''
password: "{{ password }}"
timeout: 20
- name: Create session
community.general.redfish_command:
category: Sessions
command: CreateSession
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
register: result
- name: Set chassis indicator LED to blink using security token for auth
community.general.redfish_command:
category: Chassis
command: IndicatorLedBlink
resource_id: 1U
baseuri: "{{ baseuri }}"
auth_token: "{{ result.session.token }}"
- name: Delete session using security token created by CreateSession above
community.general.redfish_command:
category: Sessions
command: DeleteSession
baseuri: "{{ baseuri }}"
auth_token: "{{ result.session.token }}"
session_uri: "{{ result.session.uri }}"
- name: Clear Sessions
community.general.redfish_command:
category: Sessions
@@ -538,7 +562,7 @@ CATEGORY_COMMANDS_ALL = {
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",
"UpdateUserRole", "UpdateUserPassword", "UpdateUserName",
"UpdateAccountServiceProperties"],
"Sessions": ["ClearSessions"],
"Sessions": ["ClearSessions", "CreateSession", "DeleteSession"],
"Manager": ["GracefulRestart", "ClearLogs", "VirtualMediaInsert",
"VirtualMediaEject", "PowerOn", "PowerForceOff", "PowerForceRestart",
"PowerGracefulRestart", "PowerGracefulShutdown", "PowerReboot"],
@@ -553,8 +577,10 @@ def main():
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
session_uri=dict(),
id=dict(aliases=["account_id"]),
new_username=dict(aliases=["account_username"]),
new_password=dict(aliases=["account_password"], no_log=True),
@@ -590,6 +616,15 @@ def main():
)
)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
@@ -598,7 +633,8 @@ def main():
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
'pswd': module.params['password'],
'token': module.params['auth_token']}
# user to add/modify/delete
user = {'account_id': module.params['id'],
@@ -712,6 +748,10 @@ def main():
for command in command_list:
if command == "ClearSessions":
result = rf_utils.clear_sessions()
elif command == "CreateSession":
result = rf_utils.create_session()
elif command == "DeleteSession":
result = rf_utils.delete_session(module.params['session_uri'])
elif category == "Manager":
# execute only if we find a Manager service resource
@@ -748,7 +788,9 @@ def main():
if result['ret'] is True:
del result['ret']
changed = result.get('changed', True)
module.exit_json(changed=changed, msg='Action was successful')
session = result.get('session', dict())
module.exit_json(changed=changed, session=session,
msg='Action was successful')
else:
module.fail_json(msg=to_native(result['msg']))

View File

@@ -34,15 +34,18 @@ options:
- Base URI of OOB controller
type: str
username:
required: true
description:
- User for authentication with OOB controller
type: str
password:
required: true
description:
- Password for authentication with OOB controller
type: str
auth_token:
description:
- Security token for authentication with OOB controller
type: str
version_added: 2.3.0
bios_attribute_name:
required: false
description:
@@ -231,8 +234,9 @@ def main():
category=dict(required=True),
command=dict(required=True, type='list', elements='str'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
bios_attribute_name=dict(default='null'),
bios_attribute_value=dict(default='null', type='raw'),
bios_attributes=dict(type='dict', default={}),
@@ -249,6 +253,15 @@ def main():
default={}
)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
@@ -257,7 +270,8 @@ def main():
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']

View File

@@ -37,15 +37,18 @@ options:
- Base URI of OOB controller
type: str
username:
required: true
description:
- User for authentication with OOB controller
type: str
password:
required: true
description:
- Password for authentication with OOB controller
type: str
auth_token:
description:
- Security token for authentication with OOB controller
type: str
version_added: 2.3.0
timeout:
description:
- Timeout in seconds for URL requests to OOB controller
@@ -301,10 +304,20 @@ def main():
category=dict(type='list', elements='str', default=['Systems']),
command=dict(type='list', elements='str'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
username=dict(),
password=dict(no_log=True),
auth_token=dict(no_log=True),
timeout=dict(type='int', default=10)
),
required_together=[
('username', 'password'),
],
required_one_of=[
('username', 'auth_token'),
],
mutually_exclusive=[
('username', 'auth_token'),
],
supports_check_mode=False
)
is_old_facts = module._name in ('redfish_facts', 'community.general.redfish_facts')
@@ -315,7 +328,8 @@ def main():
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
'pswd': module.params['password'],
'token': module.params['auth_token']}
# timeout
timeout = module.params['timeout']

View File

@@ -53,6 +53,7 @@ options:
description:
- Set value to True to force node into install state if it already exists in stacki.
type: bool
default: no
state:
description:
- Set value to the desired state for the specified host.
@@ -103,9 +104,8 @@ stdout_lines:
'''
import json
import os
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import AnsibleModule, env_fallback
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.urls import fetch_url
@@ -235,9 +235,9 @@ def main():
prim_intf_ip=dict(type='str'),
network=dict(type='str', default='private'),
prim_intf_mac=dict(type='str'),
stacki_user=dict(type='str', required=True, default=os.environ.get('stacki_user')),
stacki_password=dict(type='str', required=True, default=os.environ.get('stacki_password'), no_log=True),
stacki_endpoint=dict(type='str', required=True, default=os.environ.get('stacki_endpoint')),
stacki_user=dict(type='str', required=True, fallback=(env_fallback, ['stacki_user'])),
stacki_password=dict(type='str', required=True, fallback=(env_fallback, ['stacki_password']), no_log=True),
stacki_endpoint=dict(type='str', required=True, fallback=(env_fallback, ['stacki_endpoint'])),
force_install=dict(type='bool', default=False),
),
supports_check_mode=False,
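The stacki_host change swaps `default=os.environ.get(...)` for `fallback=(env_fallback, [...])`. The difference matters: a `default` is computed once, when the argument spec is built, while a fallback is consulted at parse time and also satisfies `required=True`. A hypothetical sketch of the fallback lookup order (not the real `env_fallback` implementation):

```python
import os


def resolve(params, name, env_vars, required=False):
    """Resolve a parameter: explicit value first, then environment fallback."""
    if params.get(name) is not None:
        return params[name]
    for var in env_vars:  # checked in order, at call time
        if var in os.environ:
            return os.environ[var]
    if required:
        raise ValueError("missing required argument: %s" % name)
    return None


os.environ["stacki_user"] = "admin"
# env var satisfies the requirement; an explicit value still wins
assert resolve({}, "stacki_user", ["stacki_user"], required=True) == "admin"
assert resolve({"stacki_user": "bob"}, "stacki_user", ["stacki_user"]) == "bob"
```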

View File

@@ -224,7 +224,7 @@ def main():
argument_spec.update(
repository=dict(type='str', required=True),
username=dict(type='str', required=True),
key=dict(type='str'),
key=dict(type='str', no_log=False),
label=dict(type='str', required=True),
state=dict(type='str', choices=['present', 'absent'], required=True),
)

View File

@@ -263,7 +263,7 @@ def main():
repository=dict(type='str', required=True),
username=dict(type='str', required=True),
name=dict(type='str', required=True),
key=dict(type='str'),
key=dict(type='str', no_log=False),
state=dict(type='str', choices=['present', 'absent'], required=True),
)
module = AnsibleModule(

View File

@@ -292,7 +292,7 @@ def main():
owner=dict(required=True, type='str', aliases=['account', 'organization']),
repo=dict(required=True, type='str', aliases=['repository']),
name=dict(required=True, type='str', aliases=['title', 'label']),
key=dict(required=True, type='str'),
key=dict(required=True, type='str', no_log=False),
read_only=dict(required=False, type='bool', default=True),
state=dict(default='present', choices=['present', 'absent']),
force=dict(required=False, type='bool', default=False),
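The recurring `no_log=False` additions across these hunks are explicit opt-outs: ansible-test's validate-modules sanity check flags any parameter whose name looks secret-like (such as `key` or `sshkey_file`) and requires the author to state a `no_log` decision. A toy version of that heuristic (the hint list is illustrative, not the actual check):

```python
# Hypothetical name-based heuristic; deploy keys and dconf keys are
# names/paths, not secrets, hence no_log=False in the diffs above.
SECRET_HINTS = ("key", "password", "passphrase", "secret", "token")


def looks_secret(param_name):
    """Return True if the parameter name suggests it may hold a secret."""
    return any(hint in param_name.lower() for hint in SECRET_HINTS)


assert looks_secret("key")         # flagged, but here it is a key *name*
assert looks_secret("api_token")   # genuinely secret: keep no_log=True
assert not looks_secret("username")
```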

View File

@@ -234,7 +234,7 @@ def main():
api_token=dict(type='str', no_log=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
key=dict(type='str', required=True, no_log=False),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))

View File

@@ -470,7 +470,7 @@ def main():
password=dict(type='str', no_log=True),
email=dict(type='str'),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
sshkey_file=dict(type='str', no_log=False),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),

View File

@@ -352,7 +352,7 @@ def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(default='present', choices=['present', 'absent', 'read']),
key=dict(required=True, type='str'),
key=dict(required=True, type='str', no_log=False),
value=dict(required=False, default=None, type='str'),
),
supports_check_mode=True

View File

@@ -151,7 +151,7 @@ def main():
# Setup the Ansible module
module = AnsibleModule(
argument_spec=dict(
key=dict(type='str', required=True),
key=dict(type='str', required=True, no_log=False),
value_type=dict(type='str', choices=['bool', 'float', 'int', 'string']),
value=dict(type='str'),
state=dict(type='str', required=True, choices=['absent', 'get', 'present']),

View File

@@ -369,7 +369,7 @@ def main():
argument_spec=dict(
domain=dict(type='str', default='NSGlobalDomain'),
host=dict(type='str'),
key=dict(type='str'),
key=dict(type='str', no_log=False),
type=dict(type='str', default='string', choices=['array', 'bool', 'boolean', 'date', 'float', 'int', 'integer', 'string']),
array_add=dict(type='bool', default=False),
value=dict(type='raw'),

View File

@@ -6,7 +6,6 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: jenkins_job
@@ -65,6 +64,15 @@ options:
description:
- User to authenticate with the Jenkins server.
required: false
validate_certs:
type: bool
default: yes
description:
- If set to C(no), the SSL certificates will not be validated.
This should only be set to C(no) on personally controlled sites
using self-signed certificates, as it avoids verifying the source site.
- The C(python-jenkins) library only handles this by using the environment variable C(PYTHONHTTPSVERIFY).
version_added: 2.3.0
'''
EXAMPLES = '''
@@ -146,6 +154,7 @@ url:
sample: https://jenkins.mydomain.com
'''
import os
import traceback
import xml.etree.ElementTree as ET
@@ -161,7 +170,7 @@ from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
class JenkinsJob:
class JenkinsJob(object):
def __init__(self, module):
self.module = module
@@ -189,14 +198,16 @@ class JenkinsJob:
}
self.EXCL_STATE = "excluded state"
if not module.params['validate_certs']:
os.environ['PYTHONHTTPSVERIFY'] = '0'
def get_jenkins_connection(self):
try:
if (self.user and self.password):
if self.user and self.password:
return jenkins.Jenkins(self.jenkins_url, self.user, self.password)
elif (self.user and self.token):
elif self.user and self.token:
return jenkins.Jenkins(self.jenkins_url, self.user, self.token)
elif (self.user and not (self.password or self.token)):
elif self.user and not (self.password or self.token):
return jenkins.Jenkins(self.jenkins_url, self.user)
else:
return jenkins.Jenkins(self.jenkins_url)
@@ -256,9 +267,7 @@ class JenkinsJob:
if self.enabled is None:
return False
if ((self.enabled is False and status != "disabled") or (self.enabled is True and status == "disabled")):
return True
return False
return (self.enabled is False and status != "disabled") or (self.enabled is True and status == "disabled")
def switch_state(self):
if self.enabled is False:
@@ -277,7 +286,7 @@ class JenkinsJob:
self.server.reconfig_job(self.name, self.get_config())
# Handle job disable/enable
elif (status != self.EXCL_STATE and self.has_state_changed(status)):
elif status != self.EXCL_STATE and self.has_state_changed(status):
self.result['changed'] = True
if not self.module.check_mode:
self.switch_state()
@@ -342,7 +351,8 @@ def main():
enabled=dict(required=False, type='bool'),
token=dict(type='str', required=False, no_log=True),
url=dict(type='str', required=False, default="http://localhost:8080"),
user=dict(type='str', required=False)
user=dict(type='str', required=False),
validate_certs=dict(type='bool', default=True),
),
mutually_exclusive=[
['password', 'token'],
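The new `validate_certs` option works around the fact that `python-jenkins` exposes no per-connection verify flag: as the hunk above shows, the module disables certificate checking process-wide via the `PYTHONHTTPSVERIFY` environment variable before connecting. A sketch of that toggle:

```python
import os


def apply_validate_certs(validate_certs):
    """Disable process-wide HTTPS verification when validate_certs is False.

    Note this affects every HTTPS request in the interpreter, not just the
    Jenkins connection -- the blunt instrument the documentation warns about.
    """
    if not validate_certs:
        os.environ["PYTHONHTTPSVERIFY"] = "0"


apply_validate_certs(False)
assert os.environ["PYTHONHTTPSVERIFY"] == "0"
```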

View File

@@ -0,0 +1,22 @@
---
- name: Create broken link
file:
src: /nowhere
dest: "{{ output_dir }}/nowhere.txt"
state: link
force: yes
- name: Archive broken link (tar.gz)
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_broken_link.tar.gz"
- name: Archive broken link (tar.bz2)
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_broken_link.tar.bz2"
- name: Archive broken link (zip)
archive:
path: "{{ output_dir }}/*.txt"
dest: "{{ output_dir }}/archive_broken_link.zip"

View File

@@ -369,3 +369,6 @@
- name: import remove tests
import_tasks: remove.yml
- name: import broken-link tests
import_tasks: broken-link.yml

View File

@@ -3,3 +3,4 @@ needs/root
skip/macos
skip/osx
skip/freebsd
disabled # FIXME

View File

@@ -0,0 +1,2 @@
shippable/posix/group2
skip/python2.6 # filters are controller only, and we no longer support Python 2.6 on the controller

View File

@@ -0,0 +1,49 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
- name: Parse valid csv input
assert:
that:
- "valid_comma_separated | community.general.from_csv == expected_result"
- name: Parse valid csv input containing spaces with/without skipinitialspace=True
assert:
that:
- "valid_comma_separated_spaces | community.general.from_csv(skipinitialspace=True) == expected_result"
- "valid_comma_separated_spaces | community.general.from_csv != expected_result"
- name: Parse valid csv input with no headers with/without specifying fieldnames
assert:
that:
- "valid_comma_separated_no_headers | community.general.from_csv(fieldnames=['id','name','role']) == expected_result"
- "valid_comma_separated_no_headers | community.general.from_csv != expected_result"
- name: Parse valid pipe-delimited csv input with/without delimiter=|
assert:
that:
- "valid_pipe_separated | community.general.from_csv(delimiter='|') == expected_result"
- "valid_pipe_separated | community.general.from_csv != expected_result"
- name: Register result of invalid csv input when strict=False
debug:
var: "invalid_comma_separated | community.general.from_csv"
register: _invalid_csv_strict_false
- name: Test invalid csv input when strict=False is successful
assert:
that:
- _invalid_csv_strict_false is success
- name: Register result of invalid csv input when strict=True
debug:
var: "invalid_comma_separated | community.general.from_csv(strict=True)"
register: _invalid_csv_strict_true
ignore_errors: True
- name: Test invalid csv input when strict=True is failed
assert:
that:
- _invalid_csv_strict_true is failed
- _invalid_csv_strict_true.msg is match('Unable to process file:.*')

View File

@@ -0,0 +1,26 @@
valid_comma_separated: |
id,name,role
1,foo,bar
2,bar,baz
valid_comma_separated_spaces: |
id,name,role
1, foo, bar
2, bar, baz
valid_comma_separated_no_headers: |
1,foo,bar
2,bar,baz
valid_pipe_separated: |
id|name|role
1|foo|bar
2|bar|baz
invalid_comma_separated: |
id,name,role
1,foo,bar
2,"b"ar",baz
expected_result:
- id: '1'
name: foo
role: bar
- id: '2'
name: bar
role: baz
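The `from_csv` behaviors exercised by these tests map directly onto options of Python's stdlib `csv.DictReader`. The following sketch (a guess at the filter's core, not its actual source) reproduces the delimiter, skipinitialspace, and no-headers cases from the fixtures above:

```python
import csv
import io


def from_csv(data, fieldnames=None, delimiter=",",
             skipinitialspace=False, strict=False):
    """Parse CSV text into a list of dicts, DictReader-style."""
    reader = csv.DictReader(
        io.StringIO(data), fieldnames=fieldnames, delimiter=delimiter,
        skipinitialspace=skipinitialspace, strict=strict)
    return [dict(row) for row in reader]


comma = "id,name,role\n1,foo,bar\n2,bar,baz\n"
piped = "id|name|role\n1|foo|bar\n2|bar|baz\n"
expected = [{"id": "1", "name": "foo", "role": "bar"},
            {"id": "2", "name": "bar", "role": "baz"}]

assert from_csv(comma) == expected
assert from_csv(piped, delimiter="|") == expected
# Without delimiter='|', each pipe-separated line parses as one field:
assert from_csv(piped) != expected
# No headers: supply fieldnames explicitly
assert from_csv("1,foo,bar\n", fieldnames=["id", "name", "role"]) == expected[:1]
```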

View File

@@ -0,0 +1,2 @@
cloud/gandi
unsupported

View File

@@ -0,0 +1,34 @@
# Copyright: (c) 2020 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
gandi_livedns_domain_name: "ansible-tests.org"
gandi_livedns_record_items:
# Single A record
- record: test-www
type: A
values:
- 10.10.10.10
ttl: 400
update_values:
- 10.10.10.11
update_ttl: 800
# Multiple A records
- record: test-www-multiple
type: A
ttl: 3600
values:
- 10.10.11.10
- 10.10.11.10
update_values:
- 10.10.11.11
- 10.10.11.13
# CNAME
- record: test-cname
type: CNAME
ttl: 10800
values:
- test-www2
update_values:
- test-www

View File

@@ -0,0 +1,67 @@
# Copyright: (c) 2020 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- name: test absent dns record
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
type: "{{ item.type }}"
ttl: "{{ item.ttl }}"
state: absent
register: result
- name: verify test absent dns record
assert:
that:
- result is successful
- name: test create a dns record in check mode
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item['values'] }}"
ttl: "{{ item.ttl }}"
type: "{{ item.type }}"
check_mode: yes
register: result
- name: verify test create a dns record in check mode
assert:
that:
- result is changed
- name: test create a dns record
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item['values'] }}"
ttl: "{{ item.ttl }}"
type: "{{ item.type }}"
register: result
- name: verify test create a dns record
assert:
that:
- result is changed
- result.record['values'] == {{ item['values'] }}
- result.record.record == "{{ item.record }}"
- result.record.type == "{{ item.type }}"
- result.record.ttl == {{ item.ttl }}
- name: test create a dns record idempotence
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item['values'] }}"
ttl: "{{ item.ttl }}"
type: "{{ item.type }}"
register: result
- name: verify test create a dns record idempotence
assert:
that:
- result is not changed
- result.record['values'] == {{ item['values'] }}
- result.record.record == "{{ item.record }}"
- result.record.type == "{{ item.type }}"
- result.record.ttl == {{ item.ttl }}

View File

@@ -0,0 +1,5 @@
# Copyright: (c) 2020 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- include_tasks: record.yml
with_items: "{{ gandi_livedns_record_items }}"

View File

@@ -0,0 +1,6 @@
# Copyright: (c) 2020 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- include_tasks: create_record.yml
- include_tasks: update_record.yml
- include_tasks: remove_record.yml

View File

@@ -0,0 +1,59 @@
# Copyright: (c) 2020 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- name: test remove a dns record in check mode
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item.update_values | default(item['values']) }}"
type: "{{ item.type }}"
state: absent
check_mode: yes
register: result
- name: verify test remove a dns record in check mode
assert:
that:
- result is changed
- name: test remove a dns record
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item.update_values | default(item['values']) }}"
type: "{{ item.type }}"
state: absent
register: result
- name: verify test remove a dns record
assert:
that:
- result is changed
- name: test remove a dns record idempotence
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item.update_values | default(item['values']) }}"
type: "{{ item.type }}"
state: absent
register: result
- name: verify test remove a dns record idempotence
assert:
that:
- result is not changed
- name: test remove second dns record idempotence
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item['values'] }}"
type: "{{ item.type }}"
state: absent
register: result
- name: verify test remove second dns record idempotence
assert:
that:
- result is not changed

View File

@@ -0,0 +1,57 @@
# Copyright: (c) 2020 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- name: test update or add another dns record in check mode
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item.update_values | default(item['values']) }}"
ttl: "{{ item.update_ttl | default(item.ttl) }}"
type: "{{ item.type }}"
check_mode: yes
register: result
- name: verify test update in check mode
assert:
that:
- result is changed
- result.record['values'] == {{ item.update_values | default(item['values']) }}
- result.record.record == "{{ item.record }}"
- result.record.type == "{{ item.type }}"
- result.record.ttl == {{ item.update_ttl | default(item.ttl) }}
- name: test update or add another dns record
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item.update_values | default(item['values']) }}"
ttl: "{{ item.update_ttl | default(item.ttl) }}"
type: "{{ item.type }}"
register: result
- name: verify test update a dns record
assert:
that:
- result is changed
- result.record['values'] == {{ item.update_values | default(item['values']) }}
- result.record.record == "{{ item.record }}"
- result.record.ttl == {{ item.update_ttl | default(item.ttl) }}
- result.record.type == "{{ item.type }}"
- name: test update or add another dns record idempotence
community.general.gandi_livedns:
api_key: "{{ gandi_api_key }}"
record: "{{ item.record }}"
domain: "{{ gandi_livedns_domain_name }}"
values: "{{ item.update_values | default(item['values']) }}"
ttl: "{{ item.update_ttl | default(item.ttl) }}"
type: "{{ item.type }}"
register: result
- name: verify test update a dns record idempotence
assert:
that:
- result is not changed
- result.record['values'] == {{ item.update_values | default(item['values']) }}
- result.record.record == "{{ item.record }}"
- result.record.ttl == {{ item.update_ttl | default(item.ttl) }}
- result.record.type == "{{ item.type }}"


@@ -65,14 +65,11 @@ plugins/modules/cloud/ovirt/ovirt_vmpool_facts.py validate-modules:doc-missing-t
plugins/modules/cloud/ovirt/ovirt_vmpool_facts.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax.py validate-modules:doc-missing-type
plugins/modules/cloud/rackspace/rax.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax.py validate-modules:undocumented-parameter
plugins/modules/cloud/rackspace/rax_cdb_user.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_scaling_group.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_image_facts.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_image_info.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_ip_facts.py validate-modules:return-syntax-error
@@ -87,7 +84,6 @@ plugins/modules/cloud/scaleway/scaleway_snapshot_facts.py validate-modules:retur
plugins/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_volume_facts.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/smartos_image_info.py validate-modules:doc-missing-type
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:undocumented-parameter
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
@@ -171,7 +167,6 @@ plugins/modules/remote_management/oneview/oneview_san_manager.py validate-module
plugins/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:parameter-type-not-in-doc
plugins/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:undocumented-parameter
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:doc-default-does-not-match-spec
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:no-default-for-required-parameter
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:parameter-type-not-in-doc
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:undocumented-parameter
plugins/modules/source_control/github/github_deploy_key.py validate-modules:parameter-invalid


@@ -64,14 +64,11 @@ plugins/modules/cloud/ovirt/ovirt_vmpool_facts.py validate-modules:doc-missing-t
plugins/modules/cloud/ovirt/ovirt_vmpool_facts.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax.py validate-modules:doc-missing-type
plugins/modules/cloud/rackspace/rax.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax.py validate-modules:undocumented-parameter
plugins/modules/cloud/rackspace/rax_cdb_user.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed
plugins/modules/cloud/rackspace/rax_scaling_group.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_image_facts.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_image_info.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_ip_facts.py validate-modules:return-syntax-error
@@ -86,7 +83,6 @@ plugins/modules/cloud/scaleway/scaleway_snapshot_facts.py validate-modules:retur
plugins/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_volume_facts.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/smartos_image_info.py validate-modules:doc-missing-type
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:undocumented-parameter
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
@@ -170,7 +166,6 @@ plugins/modules/remote_management/oneview/oneview_san_manager.py validate-module
plugins/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:parameter-type-not-in-doc
plugins/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:undocumented-parameter
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:doc-default-does-not-match-spec
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:no-default-for-required-parameter
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:parameter-type-not-in-doc
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:undocumented-parameter
plugins/modules/source_control/github/github_deploy_key.py validate-modules:parameter-invalid


@@ -97,7 +97,7 @@ plugins/modules/cloud/rackspace/rax.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax.py validate-modules:doc-missing-type
plugins/modules/cloud/rackspace/rax.py validate-modules:undocumented-parameter
plugins/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path
plugins/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed, expanduser() applied to dict values
plugins/modules/cloud/scaleway/scaleway_image_facts.py validate-modules:deprecation-mismatch
plugins/modules/cloud/scaleway/scaleway_image_facts.py validate-modules:invalid-documentation
plugins/modules/cloud/scaleway/scaleway_image_facts.py validate-modules:return-syntax-error
@@ -126,7 +126,6 @@ plugins/modules/cloud/scaleway/scaleway_volume_facts.py validate-modules:depreca
plugins/modules/cloud/scaleway/scaleway_volume_facts.py validate-modules:invalid-documentation
plugins/modules/cloud/scaleway/scaleway_volume_facts.py validate-modules:return-syntax-error
plugins/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:return-syntax-error
plugins/modules/cloud/smartos/smartos_image_info.py validate-modules:doc-missing-type
plugins/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/smartos/vmadm.py validate-modules:undocumented-parameter
plugins/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
@@ -199,7 +198,6 @@ plugins/modules/remote_management/oneview/oneview_san_manager.py validate-module
plugins/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:parameter-type-not-in-doc
plugins/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:undocumented-parameter
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:doc-default-does-not-match-spec
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:no-default-for-required-parameter
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:parameter-type-not-in-doc
plugins/modules/remote_management/stacki/stacki_host.py validate-modules:undocumented-parameter
plugins/modules/source_control/github/github_deploy_key.py validate-modules:parameter-invalid


@@ -71,7 +71,8 @@ def get_json(url):
"status": "running",
"vmid": "100",
"disk": "1000",
"uptime": 1000}]
"uptime": 1000,
"tags": "test, tags, here"}]
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu":
# _get_qemu_per_node
return [{"name": "test-qemu",
@@ -105,7 +106,8 @@ def get_json(url):
"vmid": "9001",
"uptime": 0,
"disk": 0,
"status": "stopped"}]
"status": "stopped",
"tags": "test, tags, here"}]
elif url == "https://localhost:8006/api2/json/pools/test":
# _get_members_per_pool
return {"members": [{"uptime": 1000,
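The mock responses above add a `tags` field that Proxmox reports as a single comma-separated string ("test, tags, here"). A minimal sketch of normalizing such a string into a list, the kind of cleanup the inventory side needs before grouping by tag (the helper name here is hypothetical, not from the plugin):

```python
def parse_proxmox_tags(raw_tags):
    """Split a Proxmox comma-separated tag string into a clean list."""
    if not raw_tags:
        return []
    return [tag.strip() for tag in raw_tags.split(",") if tag.strip()]

# The mock above reports tags as "test, tags, here"
print(parse_proxmox_tags("test, tags, here"))  # ['test', 'tags', 'here']
```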


@@ -22,11 +22,14 @@ class TestNiosApi(unittest.TestCase):
self.mock_connector = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.get_connector')
self.mock_connector.start()
self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosApi, self).tearDown()
self.mock_connector.stop()
self.mock_check_type_dict.stop()
def test_get_provider_spec(self):
provider_options = ['host', 'username', 'password', 'validate_certs', 'silent_ssl_warnings',
@@ -55,7 +58,7 @@ class TestNiosApi(unittest.TestCase):
{
"comment": "test comment",
"_ref": "networkview/ZG5zLm5ldHdvcmtfdmlldyQw:default/true",
"name": self.module._check_type_dict().__getitem__(),
"name": self.mock_check_type_dict_obj().__getitem__(),
"extattrs": {}
}
]
@@ -143,7 +146,7 @@ class TestNiosApi(unittest.TestCase):
kwargs = copy.deepcopy(test_object[0])
kwargs['extattrs']['Site']['value'] = 'update'
kwargs['name'] = self.module._check_type_dict().__getitem__()
kwargs['name'] = self.mock_check_type_dict_obj().__getitem__()
del kwargs['_ref']
wapi = self._get_wapi(test_object)
@@ -159,7 +162,7 @@ class TestNiosApi(unittest.TestCase):
test_object = [{
"comment": "test comment",
"_ref": "networkview/ZG5zLm5ldHdvcmtfdmlldyQw:default/true",
"name": self.module._check_type_dict().__getitem__(),
"name": self.mock_check_type_dict_obj().__getitem__(),
"extattrs": {'Site': {'value': 'test'}}
}]
@@ -190,7 +193,7 @@ class TestNiosApi(unittest.TestCase):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__()})
wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__()})
def test_wapi_delete(self):
self.module.params = {'provider': None, 'state': 'absent', 'name': 'ansible',
@@ -240,7 +243,7 @@ class TestNiosApi(unittest.TestCase):
kwargs = test_object[0].copy()
ref = kwargs.pop('_ref')
kwargs['comment'] = 'updated comment'
kwargs['name'] = self.module._check_type_dict().__getitem__()
kwargs['name'] = self.mock_check_type_dict_obj().__getitem__()
del kwargs['network_view']
del kwargs['extattrs']
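The diff above swaps `self.module._check_type_dict()` for `self.mock_check_type_dict_obj`, a handle saved when the patcher was started. A self-contained sketch of that `unittest.mock` pattern, with a hypothetical stand-in class rather than the real nios api module:

```python
from unittest.mock import patch


class Api:
    """Stand-in for the patched module (hypothetical)."""
    @staticmethod
    def check_type_dict(value):
        raise RuntimeError("should be patched out in tests")


# Start the patcher and keep the mock it returns.  Asserting through this
# handle -- instead of reaching back through a private attribute on the
# module under test -- guarantees the test and the patched code share the
# same MagicMock instance.
patcher = patch.object(Api, "check_type_dict")
mock_check_type_dict = patcher.start()
mock_check_type_dict.return_value = {"name": "ansible"}
try:
    assert Api.check_type_dict("raw")["name"] == "ansible"
    mock_check_type_dict.assert_called_once_with("raw")
finally:
    patcher.stop()  # mirror of self.mock_check_type_dict.stop() in tearDown
```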


@@ -0,0 +1,541 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2021, Florian Dambrine <android.florian@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
import json
import pytest
from ansible.module_utils.common.dict_transformations import dict_merge
from ansible.module_utils.six import iteritems
from ansible_collections.community.general.plugins.module_utils.net_tools.pritunl import api
from mock import MagicMock
__metaclass__ = type
# Pritunl Mocks
class PritunlListOrganizationMock(MagicMock):
"""Pritunl API Mock for organization GET API calls."""
def getcode(self):
return 200
def read(self):
return json.dumps(
[
{
"auth_api": False,
"name": "Foo",
"auth_token": None,
"user_count": 0,
"auth_secret": None,
"id": "csftwlu6uhralzi2dpmhekz3",
},
{
"auth_api": False,
"name": "GumGum",
"auth_token": None,
"user_count": 3,
"auth_secret": None,
"id": "58070daee63f3b2e6e472c36",
},
{
"auth_api": False,
"name": "Bar",
"auth_token": None,
"user_count": 0,
"auth_secret": None,
"id": "v1sncsxxybnsylc8gpqg85pg",
},
]
)
class PritunlListUserMock(MagicMock):
"""Pritunl API Mock for user GET API calls."""
def getcode(self):
return 200
def read(self):
return json.dumps(
[
{
"auth_type": "google",
"dns_servers": None,
"pin": True,
"dns_suffix": None,
"servers": [
{
"status": False,
"platform": None,
"server_id": "580711322bb66c1d59b9568f",
"virt_address6": "fd00:c0a8:9700:0:192:168:101:27",
"virt_address": "192.168.101.27",
"name": "vpn-A",
"real_address": None,
"connected_since": None,
"id": "580711322bb66c1d59b9568f",
"device_name": None,
},
{
"status": False,
"platform": None,
"server_id": "5dad2cc6e63f3b3f4a6dfea5",
"virt_address6": "fd00:c0a8:f200:0:192:168:201:37",
"virt_address": "192.168.201.37",
"name": "vpn-B",
"real_address": None,
"connected_since": None,
"id": "5dad2cc6e63f3b3f4a6dfea5",
"device_name": None,
},
],
"disabled": False,
"network_links": [],
"port_forwarding": [],
"id": "58070dafe63f3b2e6e472c3b",
"organization_name": "GumGum",
"type": "server",
"email": "bot@company.com",
"status": True,
"dns_mapping": None,
"otp_secret": "123456789ABCDEFG",
"client_to_client": False,
"sso": "google",
"bypass_secondary": False,
"groups": ["admin", "multiregion"],
"audit": False,
"name": "bot",
"gravatar": True,
"otp_auth": True,
"organization": "58070daee63f3b2e6e472c36",
},
{
"auth_type": "google",
"dns_servers": None,
"pin": True,
"dns_suffix": None,
"servers": [
{
"status": False,
"platform": None,
"server_id": "580711322bb66c1d59b9568f",
"virt_address6": "fd00:c0a8:9700:0:192:168:101:27",
"virt_address": "192.168.101.27",
"name": "vpn-A",
"real_address": None,
"connected_since": None,
"id": "580711322bb66c1d59b9568f",
"device_name": None,
},
{
"status": False,
"platform": None,
"server_id": "5dad2cc6e63f3b3f4a6dfea5",
"virt_address6": "fd00:c0a8:f200:0:192:168:201:37",
"virt_address": "192.168.201.37",
"name": "vpn-B",
"real_address": None,
"connected_since": None,
"id": "5dad2cc6e63f3b3f4a6dfea5",
"device_name": None,
},
],
"disabled": False,
"network_links": [],
"port_forwarding": [],
"id": "58070dafe63f3b2e6e472c3b",
"organization_name": "GumGum",
"type": "client",
"email": "florian@company.com",
"status": True,
"dns_mapping": None,
"otp_secret": "123456789ABCDEFG",
"client_to_client": False,
"sso": "google",
"bypass_secondary": False,
"groups": ["web", "database"],
"audit": False,
"name": "florian",
"gravatar": True,
"otp_auth": True,
"organization": "58070daee63f3b2e6e472c36",
},
{
"auth_type": "google",
"dns_servers": None,
"pin": True,
"dns_suffix": None,
"servers": [
{
"status": False,
"platform": None,
"server_id": "580711322bb66c1d59b9568f",
"virt_address6": "fd00:c0a8:9700:0:192:168:101:27",
"virt_address": "192.168.101.27",
"name": "vpn-A",
"real_address": None,
"connected_since": None,
"id": "580711322bb66c1d59b9568f",
"device_name": None,
},
{
"status": False,
"platform": None,
"server_id": "5dad2cc6e63f3b3f4a6dfea5",
"virt_address6": "fd00:c0a8:f200:0:192:168:201:37",
"virt_address": "192.168.201.37",
"name": "vpn-B",
"real_address": None,
"connected_since": None,
"id": "5dad2cc6e63f3b3f4a6dfea5",
"device_name": None,
},
],
"disabled": False,
"network_links": [],
"port_forwarding": [],
"id": "58070dafe63f3b2e6e472c3b",
"organization_name": "GumGum",
"type": "server",
"email": "ops@company.com",
"status": True,
"dns_mapping": None,
"otp_secret": "123456789ABCDEFG",
"client_to_client": False,
"sso": "google",
"bypass_secondary": False,
"groups": ["web", "database"],
"audit": False,
"name": "ops",
"gravatar": True,
"otp_auth": True,
"organization": "58070daee63f3b2e6e472c36",
},
]
)
class PritunlErrorMock(MagicMock):
"""Pritunl API Mock for API call failures."""
def getcode(self):
return 500
def read(self):
return "{}"
class PritunlPostUserMock(MagicMock):
"""Pritunl API Mock for POST API calls."""
def getcode(self):
return 200
def read(self):
return json.dumps(
[
{
"auth_type": "local",
"disabled": False,
"dns_servers": None,
"otp_secret": "6M4UWP2BCJBSYZAT",
"name": "alice",
"pin": False,
"dns_suffix": None,
"client_to_client": False,
"email": "alice@company.com",
"organization_name": "GumGum",
"bypass_secondary": False,
"groups": ["a", "b"],
"organization": "58070daee63f3b2e6e472c36",
"port_forwarding": [],
"type": "client",
"id": "590add71e63f3b72d8bb951a",
}
]
)
class PritunlPutUserMock(MagicMock):
"""Pritunl API Mock for PUT API calls."""
def getcode(self):
return 200
def read(self):
return json.dumps(
{
"auth_type": "local",
"disabled": True,
"dns_servers": None,
"otp_secret": "WEJANJYMF3Q2QSLG",
"name": "bob",
"pin": False,
"dns_suffix": False,
"client_to_client": False,
"email": "bob@company.com",
"organization_name": "GumGum",
"bypass_secondary": False,
"groups": ["c", "d"],
"organization": "58070daee63f3b2e6e472c36",
"port_forwarding": [],
"type": "client",
"id": "590add71e63f3b72d8bb951a",
}
)
class PritunlDeleteUserMock(MagicMock):
"""Pritunl API Mock for DELETE API calls."""
def getcode(self):
return 200
def read(self):
return "{}"
# Ansible Module Mock and Pytest mock fixtures
class ModuleFailException(Exception):
def __init__(self, msg, **kwargs):
super(ModuleFailException, self).__init__(msg)
self.fail_msg = msg
self.fail_kwargs = kwargs
@pytest.fixture
def pritunl_settings():
return {
"api_token": "token",
"api_secret": "secret",
"base_url": "https://pritunl.domain.com",
"validate_certs": True,
}
@pytest.fixture
def pritunl_user_data():
return {
"name": "alice",
"email": "alice@company.com",
"groups": ["a", "b"],
"disabled": False,
"type": "client",
}
@pytest.fixture
def get_pritunl_organization_mock():
return PritunlListOrganizationMock()
@pytest.fixture
def get_pritunl_user_mock():
return PritunlListUserMock()
@pytest.fixture
def get_pritunl_error_mock():
return PritunlErrorMock()
@pytest.fixture
def post_pritunl_user_mock():
return PritunlPostUserMock()
@pytest.fixture
def put_pritunl_user_mock():
return PritunlPutUserMock()
@pytest.fixture
def delete_pritunl_user_mock():
return PritunlDeleteUserMock()
class TestPritunlApi:
"""
Test class to validate CRUD operations on Pritunl.
"""
# Test for GET / list operation on Pritunl API
@pytest.mark.parametrize(
"org_id,org_user_count",
[
("58070daee63f3b2e6e472c36", 3),
("v1sncsxxybnsylc8gpqg85pg", 0),
],
)
def test_list_all_pritunl_organization(
self,
pritunl_settings,
get_pritunl_organization_mock,
org_id,
org_user_count,
):
api._get_pritunl_organizations = get_pritunl_organization_mock()
response = api.list_pritunl_organizations(**pritunl_settings)
assert len(response) == 3
for org in response:
if org["id"] == org_id:
assert org["user_count"] == org_user_count
@pytest.mark.parametrize(
"org_filters,org_expected",
[
({"id": "58070daee63f3b2e6e472c36"}, "GumGum"),
({"name": "GumGum"}, "GumGum"),
],
)
def test_list_filtered_pritunl_organization(
self,
pritunl_settings,
get_pritunl_organization_mock,
org_filters,
org_expected,
):
api._get_pritunl_organizations = get_pritunl_organization_mock()
response = api.list_pritunl_organizations(
**dict_merge(pritunl_settings, {"filters": org_filters})
)
assert len(response) == 1
assert response[0]["name"] == org_expected
@pytest.mark.parametrize(
"org_id,org_user_count",
[("58070daee63f3b2e6e472c36", 3)],
)
def test_list_all_pritunl_user(
self, pritunl_settings, get_pritunl_user_mock, org_id, org_user_count
):
api._get_pritunl_users = get_pritunl_user_mock()
response = api.list_pritunl_users(
**dict_merge(pritunl_settings, {"organization_id": org_id})
)
assert len(response) == org_user_count
@pytest.mark.parametrize(
"org_id,user_filters,user_expected",
[
("58070daee63f3b2e6e472c36", {"email": "bot@company.com"}, "bot"),
("58070daee63f3b2e6e472c36", {"name": "florian"}, "florian"),
],
)
def test_list_filtered_pritunl_user(
self,
pritunl_settings,
get_pritunl_user_mock,
org_id,
user_filters,
user_expected,
):
api._get_pritunl_users = get_pritunl_user_mock()
response = api.list_pritunl_users(
**dict_merge(
pritunl_settings, {"organization_id": org_id, "filters": user_filters}
)
)
assert len(response) > 0
for user in response:
assert user["organization"] == org_id
assert user["name"] == user_expected
# Test for POST operation on Pritunl API
@pytest.mark.parametrize("org_id", [("58070daee63f3b2e6e472c36")])
def test_add_and_update_pritunl_user(
self,
pritunl_settings,
pritunl_user_data,
post_pritunl_user_mock,
put_pritunl_user_mock,
org_id,
):
api._post_pritunl_user = post_pritunl_user_mock()
api._put_pritunl_user = put_pritunl_user_mock()
create_response = api.post_pritunl_user(
**dict_merge(
pritunl_settings,
{
"organization_id": org_id,
"user_data": pritunl_user_data,
},
)
)
# Ensure provided settings match with the ones returned by Pritunl
for k, v in iteritems(pritunl_user_data):
assert create_response[k] == v
# Update the newly created user to ensure only certain settings are changed
user_updates = {
"name": "bob",
"email": "bob@company.com",
"disabled": True,
}
update_response = api.post_pritunl_user(
**dict_merge(
pritunl_settings,
{
"organization_id": org_id,
"user_id": create_response["id"],
"user_data": dict_merge(pritunl_user_data, user_updates),
},
)
)
# Ensure only the requested settings changed.
for k, v in iteritems(user_updates):
assert update_response[k] == v
# Test for DELETE operation on Pritunl API
@pytest.mark.parametrize(
"org_id,user_id", [("58070daee63f3b2e6e472c36", "590add71e63f3b72d8bb951a")]
)
def test_delete_pritunl_user(
self, pritunl_settings, org_id, user_id, delete_pritunl_user_mock
):
api._delete_pritunl_user = delete_pritunl_user_mock()
response = api.delete_pritunl_user(
**dict_merge(
pritunl_settings,
{
"organization_id": org_id,
"user_id": user_id,
},
)
)
assert response == {}
# Test API call errors
def test_pritunl_error(self, pritunl_settings, get_pritunl_error_mock):
api.pritunl_auth_request = get_pritunl_error_mock()
with pytest.raises(api.PritunlException):
response = api.list_pritunl_organizations(**pritunl_settings)
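Throughout these tests, per-call parameters are layered over the shared `pritunl_settings` fixture with `dict_merge`. A hedged approximation of what such a recursive merge does — the real implementation lives in `ansible.module_utils.common.dict_transformations` and is not quoted here:

```python
def dict_merge(base, overlay):
    """Return a new dict with overlay recursively merged onto base."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            # Recurse so nested dicts are combined rather than replaced.
            merged[key] = dict_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


settings = {"api_token": "token", "base_url": "https://pritunl.example.com"}
call_args = dict_merge(settings, {"organization_id": "58070daee63f3b2e6e472c36"})
```

Neither input is mutated, so the same `pritunl_settings` dict can be reused across parametrized cases.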


@@ -0,0 +1,164 @@
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible_collections.community.general.plugins.module_utils import csv
VALID_CSV = [
(
'excel',
{},
None,
"id,name,role\n1,foo,bar\n2,bar,baz",
[
{
"id": "1",
"name": "foo",
"role": "bar",
},
{
"id": "2",
"name": "bar",
"role": "baz",
},
]
),
(
'excel',
{"skipinitialspace": True},
None,
"id,name,role\n1, foo, bar\n2, bar, baz",
[
{
"id": "1",
"name": "foo",
"role": "bar",
},
{
"id": "2",
"name": "bar",
"role": "baz",
},
]
),
(
'excel',
{"delimiter": '|'},
None,
"id|name|role\n1|foo|bar\n2|bar|baz",
[
{
"id": "1",
"name": "foo",
"role": "bar",
},
{
"id": "2",
"name": "bar",
"role": "baz",
},
]
),
(
'unix',
{},
None,
"id,name,role\n1,foo,bar\n2,bar,baz",
[
{
"id": "1",
"name": "foo",
"role": "bar",
},
{
"id": "2",
"name": "bar",
"role": "baz",
},
]
),
(
'excel',
{},
['id', 'name', 'role'],
"1,foo,bar\n2,bar,baz",
[
{
"id": "1",
"name": "foo",
"role": "bar",
},
{
"id": "2",
"name": "bar",
"role": "baz",
},
]
),
]
INVALID_CSV = [
(
'excel',
{'strict': True},
None,
'id,name,role\n1,"f"oo",bar\n2,bar,baz',
),
]
INVALID_DIALECT = [
(
'invalid',
{},
None,
"id,name,role\n1,foo,bar\n2,bar,baz",
),
]
@pytest.mark.parametrize("dialect,dialect_params,fieldnames,data,expected", VALID_CSV)
def test_valid_csv(data, dialect, dialect_params, fieldnames, expected):
dialect = csv.initialize_dialect(dialect, **dialect_params)
reader = csv.read_csv(data, dialect, fieldnames)
result = True
for idx, row in enumerate(reader):
for k, v in row.items():
if expected[idx][k] != v:
result = False
break
assert result
@pytest.mark.parametrize("dialect,dialect_params,fieldnames,data", INVALID_CSV)
def test_invalid_csv(data, dialect, dialect_params, fieldnames):
dialect = csv.initialize_dialect(dialect, **dialect_params)
reader = csv.read_csv(data, dialect, fieldnames)
result = False
try:
for row in reader:
continue
except csv.CSVError:
result = True
assert result
@pytest.mark.parametrize("dialect,dialect_params,fieldnames,data", INVALID_DIALECT)
def test_invalid_dialect(data, dialect, dialect_params, fieldnames):
result = False
try:
dialect = csv.initialize_dialect(dialect, **dialect_params)
except csv.DialectNotAvailableError:
result = True
assert result
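The `initialize_dialect`/`read_csv` helpers exercised above are a thin layer over Python's stdlib `csv` module. A sketch of the equivalent stdlib usage for the pipe-delimited case (the wrapper internals are assumed, not quoted from the module):

```python
import csv
import io

# Register a custom dialect, then read rows as dicts -- roughly what
# initialize_dialect(dialect, **dialect_params) plus read_csv() amount to.
csv.register_dialect("pipes", delimiter="|")

data = "id|name|role\n1|foo|bar\n2|bar|baz"
rows = list(csv.DictReader(io.StringIO(data), dialect="pipes"))
assert rows == [
    {"id": "1", "name": "foo", "role": "bar"},
    {"id": "2", "name": "bar", "role": "baz"},
]
```

An unknown dialect name raises `csv.Error` from the stdlib; the module wraps this in its own `DialectNotAvailableError`, which is what `test_invalid_dialect` checks.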


@@ -22,24 +22,38 @@ ARG_FORMATS = dict(
True, ["--superflag"]),
simple_boolean_false=("--superflag", ArgFormat.BOOLEAN, 0,
False, []),
simple_boolean_none=("--superflag", ArgFormat.BOOLEAN, 0,
None, []),
single_printf=("--param=%s", ArgFormat.PRINTF, 0,
"potatoes", ["--param=potatoes"]),
single_printf_no_substitution=("--param", ArgFormat.PRINTF, 0,
"potatoes", ["--param"]),
single_printf_none=("--param=%s", ArgFormat.PRINTF, 0,
None, []),
multiple_printf=(["--param", "free-%s"], ArgFormat.PRINTF, 0,
"potatoes", ["--param", "free-potatoes"]),
single_format=("--param={0}", ArgFormat.FORMAT, 0,
"potatoes", ["--param=potatoes"]),
single_format_none=("--param={0}", ArgFormat.FORMAT, 0,
None, []),
single_format_no_substitution=("--param", ArgFormat.FORMAT, 0,
"potatoes", ["--param"]),
multiple_format=(["--param", "free-{0}"], ArgFormat.FORMAT, 0,
"potatoes", ["--param", "free-potatoes"]),
multiple_format_none=(["--param", "free-{0}"], ArgFormat.FORMAT, 0,
None, []),
single_lambda_0star=((lambda v: ["piggies=[{0},{1},{2}]".format(v[0], v[1], v[2])]), None, 0,
['a', 'b', 'c'], ["piggies=[a,b,c]"]),
single_lambda_0star_none=((lambda v: ["piggies=[{0},{1},{2}]".format(v[0], v[1], v[2])]), None, 0,
None, []),
single_lambda_1star=((lambda a, b, c: ["piggies=[{0},{1},{2}]".format(a, b, c)]), None, 1,
['a', 'b', 'c'], ["piggies=[a,b,c]"]),
single_lambda_1star_none=((lambda a, b, c: ["piggies=[{0},{1},{2}]".format(a, b, c)]), None, 1,
None, []),
single_lambda_2star=(single_lambda_2star, None, 2,
dict(z='c', x='a', y='b'), ["piggies=[a,b,c]"])
dict(z='c', x='a', y='b'), ["piggies=[a,b,c]"]),
single_lambda_2star_none=(single_lambda_2star, None, 2,
None, []),
)
ARG_FORMATS_IDS = sorted(ARG_FORMATS.keys())
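The table above exercises three behaviors: printf-style templates (`--param=%s`), `str.format`-style templates (`--param={0}`), and the rule that a `None` value produces no argument at all (the `*_none` cases). A simplified illustration of those semantics — hypothetical function names, not the `ArgFormat` implementation:

```python
def render_printf(fmt, value):
    """printf-style: substitute if the template has %s, drop on None."""
    if value is None:
        return []
    return [fmt % value if "%s" in fmt else fmt]


def render_format(fmt, value):
    """str.format-style: same None handling, {0}-based substitution."""
    if value is None:
        return []
    return [fmt.format(value)]


assert render_printf("--param=%s", "potatoes") == ["--param=potatoes"]
assert render_printf("--param", "potatoes") == ["--param"]   # no substitution
assert render_printf("--param=%s", None) == []               # None drops the flag
assert render_format("--param={0}", "potatoes") == ["--param=potatoes"]
assert render_format("--param={0}", None) == []
```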


@@ -40,11 +40,14 @@ class TestNiosARecordModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_a_record.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosARecordModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -76,7 +79,7 @@ class TestNiosARecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__(),
wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__(),
'ipv4': '192.168.10.1'})
def test_nios_a_record_update_comment(self):


@@ -40,11 +40,14 @@ class TestNiosAAAARecordModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_aaaa_record.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosAAAARecordModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -76,7 +79,7 @@ class TestNiosAAAARecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__(),
wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__(),
'ipv6': '2001:0db8:85a3:0000:0000:8a2e:0370:7334'})
def test_nios_aaaa_record_update_comment(self):


@@ -40,11 +40,14 @@ class TestNiosCNameRecordModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_cname_record.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosCNameRecordModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -76,7 +79,7 @@ class TestNiosCNameRecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__(),
wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__(),
'canonical': 'realhost.ansible.com'})
def test_nios_a_record_update_comment(self):


@@ -40,11 +40,14 @@ class TestNiosDnsViewModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_dns_view.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosDnsViewModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -75,7 +78,7 @@ class TestNiosDnsViewModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__()})
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__()})
def test_nios_dns_view_update_comment(self):
self.module.params = {'provider': None, 'state': 'present', 'name': 'ansible-dns',

View File

@@ -41,10 +41,13 @@ class TestNiosHostRecordModule(TestNiosModule):
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosHostRecordModule, self).tearDown()
self.mock_wapi.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -74,7 +77,7 @@ class TestNiosHostRecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__()})
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__()})
def test_nios_host_record_remove(self):
self.module.params = {'provider': None, 'state': 'absent', 'name': 'ansible',

View File

@@ -40,11 +40,14 @@ class TestNiosMXRecordModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_mx_record.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosMXRecordModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -77,7 +80,7 @@ class TestNiosMXRecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__(),
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__(),
'mx': 'mailhost.ansible.com', 'preference': 0})
def test_nios_mx_record_update_comment(self):

View File

@@ -40,11 +40,14 @@ class TestNiosNAPTRRecordModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_naptr_record.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosNAPTRRecordModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -79,7 +82,7 @@ class TestNiosNAPTRRecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__(),
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__(),
'order': '1000', 'preference': '10',
'replacement': 'replacement1.network.ansiblezone.com'})

View File

@@ -40,11 +40,14 @@ class TestNiosNetworkViewModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_network_view.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosNetworkViewModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -75,7 +78,7 @@ class TestNiosNetworkViewModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__()})
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__()})
def test_nios_network_view_update_comment(self):
self.module.params = {'provider': None, 'state': 'present', 'name': 'default',

View File

@@ -42,9 +42,13 @@ class TestNiosNSGroupModule(TestNiosModule):
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosNSGroupModule, self).tearDown()
self.mock_wapi.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -73,7 +77,7 @@ class TestNiosNSGroupModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__()})
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__()})
def test_nios_nsgroup_remove(self):
self.module.params = {'provider': None, 'state': 'absent', 'name': 'my-simple-group',

View File

@@ -40,11 +40,14 @@ class TestNiosSRVRecordModule(TestNiosModule):
self.mock_wapi_run = patch('ansible_collections.community.general.plugins.modules.net_tools.nios.nios_srv_record.WapiModule.run')
self.mock_wapi_run.start()
self.load_config = self.mock_wapi_run.start()
+self.mock_check_type_dict = patch('ansible_collections.community.general.plugins.module_utils.net_tools.nios.api.check_type_dict')
+self.mock_check_type_dict_obj = self.mock_check_type_dict.start()
def tearDown(self):
super(TestNiosSRVRecordModule, self).tearDown()
self.mock_wapi.stop()
self.mock_wapi_run.stop()
+self.mock_check_type_dict.stop()
def _get_wapi(self, test_object):
wapi = api.WapiModule(self.module)
@@ -80,7 +83,7 @@ class TestNiosSRVRecordModule(TestNiosModule):
res = wapi.run('testobject', test_spec)
self.assertTrue(res['changed'])
-wapi.create_object.assert_called_once_with('testobject', {'name': self.module._check_type_dict().__getitem__(),
+wapi.create_object.assert_called_once_with('testobject', {'name': self.mock_check_type_dict_obj().__getitem__(),
'port': 5080, 'target': 'service1.ansible.com', 'priority': 10, 'weight': 10})
def test_nios_srv_record_update_comment(self):

View File

@@ -0,0 +1,208 @@
# -*- coding: utf-8 -*-
# (c) 2021 Florian Dambrine <android.florian@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
import sys
from ansible.module_utils.common.dict_transformations import dict_merge
from ansible.module_utils.six import iteritems
from ansible_collections.community.general.plugins.modules.net_tools.pritunl import (
pritunl_user,
)
from ansible_collections.community.general.tests.unit.compat.mock import patch
from ansible_collections.community.general.tests.unit.plugins.module_utils.net_tools.pritunl.test_api import (
PritunlDeleteUserMock,
PritunlListOrganizationMock,
PritunlListUserMock,
PritunlPostUserMock,
PritunlPutUserMock,
)
from ansible_collections.community.general.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,
ModuleTestCase,
set_module_args,
)
__metaclass__ = type
def mock_pritunl_api(func, **kwargs):
def wrapped(self=None):
with self.patch_get_pritunl_organizations(
side_effect=PritunlListOrganizationMock
):
with self.patch_get_pritunl_users(side_effect=PritunlListUserMock):
with self.patch_add_pritunl_users(side_effect=PritunlPostUserMock):
with self.patch_delete_pritunl_users(
side_effect=PritunlDeleteUserMock
):
func(self, **kwargs)
return wrapped
class TestPritunlUser(ModuleTestCase):
def setUp(self):
super(TestPritunlUser, self).setUp()
self.module = pritunl_user
# Add backward compatibility
if sys.version_info < (3, 2):
self.assertRegex = self.assertRegexpMatches
def tearDown(self):
super(TestPritunlUser, self).tearDown()
def patch_get_pritunl_users(self, **kwds):
return patch(
"ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api._get_pritunl_users",
autospec=True,
**kwds
)
def patch_add_pritunl_users(self, **kwds):
return patch(
"ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api._post_pritunl_user",
autospec=True,
**kwds
)
def patch_update_pritunl_users(self, **kwds):
return patch(
"ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api._put_pritunl_user",
autospec=True,
**kwds
)
def patch_delete_pritunl_users(self, **kwds):
return patch(
"ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api._delete_pritunl_user",
autospec=True,
**kwds
)
def patch_get_pritunl_organizations(self, **kwds):
return patch(
"ansible_collections.community.general.plugins.module_utils.net_tools.pritunl.api._get_pritunl_organizations",
autospec=True,
**kwds
)
def test_without_parameters(self):
"""Test without parameters"""
set_module_args({})
with self.assertRaises(AnsibleFailJson):
self.module.main()
@mock_pritunl_api
def test_present(self):
"""Test Pritunl user creation and update."""
user_params = {
"user_name": "alice",
"user_email": "alice@company.com",
}
set_module_args(
dict_merge(
{
"pritunl_api_token": "token",
"pritunl_api_secret": "secret",
"pritunl_url": "https://pritunl.domain.com",
"organization": "GumGum",
},
user_params,
)
)
with self.patch_update_pritunl_users(
side_effect=PritunlPostUserMock
) as post_mock:
with self.assertRaises(AnsibleExitJson) as create_result:
self.module.main()
create_exc = create_result.exception.args[0]
self.assertTrue(create_exc["changed"])
self.assertEqual(create_exc["response"]["name"], user_params["user_name"])
self.assertEqual(create_exc["response"]["email"], user_params["user_email"])
self.assertFalse(create_exc["response"]["disabled"])
# Changing user from alice to bob should update certain fields only
new_user_params = {
"user_name": "bob",
"user_email": "bob@company.com",
"user_disabled": True,
}
set_module_args(
dict_merge(
{
"pritunl_api_token": "token",
"pritunl_api_secret": "secret",
"pritunl_url": "https://pritunl.domain.com",
"organization": "GumGum",
},
new_user_params,
)
)
with self.patch_update_pritunl_users(
side_effect=PritunlPutUserMock
) as put_mock:
with self.assertRaises(AnsibleExitJson) as update_result:
self.module.main()
update_exc = update_result.exception.args[0]
# Ensure only certain settings changed and the rest remained untouched.
for k, v in iteritems(update_exc):
if k in new_user_params:
assert update_exc[k] == v
else:
assert update_exc[k] == create_exc[k]
@mock_pritunl_api
def test_absent(self):
"""Test user removal from Pritunl."""
set_module_args(
{
"state": "absent",
"pritunl_api_token": "token",
"pritunl_api_secret": "secret",
"pritunl_url": "https://pritunl.domain.com",
"organization": "GumGum",
"user_name": "florian",
}
)
with self.assertRaises(AnsibleExitJson) as result:
self.module.main()
exc = result.exception.args[0]
self.assertTrue(exc["changed"])
self.assertEqual(exc["response"], {})
@mock_pritunl_api
def test_absent_failure(self):
"""Test user removal from a non existing organization."""
set_module_args(
{
"state": "absent",
"pritunl_api_token": "token",
"pritunl_api_secret": "secret",
"pritunl_url": "https://pritunl.domain.com",
"organization": "Unknown",
"user_name": "floria@company.com",
}
)
with self.assertRaises(AnsibleFailJson) as result:
self.module.main()
exc = result.exception.args[0]
self.assertRegex(exc["msg"], "Can not remove user")
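The `mock_pritunl_api` decorator in the new test file stacks four nested `with` blocks to enter its patchers. On Python 3, the same effect can be had with a flat `contextlib.ExitStack`; a sketch with hypothetical local stand-ins for the pritunl helpers:

```python
from contextlib import ExitStack
from unittest.mock import patch

class Demo:
    # Hypothetical stand-ins for the patched pritunl api helpers.
    def helper_a(self):
        return "real a"

    def helper_b(self):
        return "real b"

def mock_all(func):
    def wrapped(*args, **kwargs):
        # One flat block replaces the pyramid of nested with-statements:
        # each patcher is entered on the stack and undone on exit.
        with ExitStack() as stack:
            for name in ("helper_a", "helper_b"):
                stack.enter_context(
                    patch.object(Demo, name, return_value="mocked")
                )
            return func(*args, **kwargs)
    return wrapped

@mock_all
def exercise():
    d = Demo()
    return (d.helper_a(), d.helper_b())

assert exercise() == ("mocked", "mocked")
assert Demo().helper_a() == "real a"  # patches are undone once the stack exits
```

The nested form in the diff is still the portable choice for code that must run on Python 2.6/2.7, where `ExitStack` is unavailable without the `contextlib2` backport.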

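The `assertRaises(AnsibleExitJson) as result` / `result.exception.args[0]` idiom used throughout the new tests works because the test scaffolding replaces `exit_json`/`fail_json` with functions that raise an exception carrying the result dict. A minimal sketch of that mechanism, with a local stand-in for the real `AnsibleExitJson` helper:

```python
# Local stand-in for the AnsibleExitJson helper from
# tests/unit/plugins/modules/utils -- not the real class.
class AnsibleExitJson(Exception):
    pass

def exit_json(**kwargs):
    # Mimics the patched module exit: normalise 'changed' and raise with
    # the would-be JSON payload as the first exception argument.
    kwargs.setdefault('changed', False)
    raise AnsibleExitJson(kwargs)

try:
    exit_json(changed=True, response={})
except AnsibleExitJson as e:
    exc = e.args[0]

assert exc['changed'] is True
assert exc['response'] == {}
```

Raising instead of calling `sys.exit()` lets a test capture the module's result dict and assert on `changed` and `response`, exactly as `test_absent` above does.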
Some files were not shown because too many files have changed in this diff.