Compare commits

298 Commits

Author SHA1 Message Date
Felix Fontein
a552266120 Release 11.4.4. 2026-01-26 18:22:40 +01:00
patchback[bot]
2c16874370 [PR #11440/53e1e86b backport][stable-11] Logstash plugin version fix (#11449)
Logstash plugin version fix (#11440)

* logstash_plugin: fix argument order when using version parameter

* logstash_plugin: add integration tests

* logstash_plugin: add changelog fragment

(cherry picked from commit 53e1e86bcc)

Co-authored-by: Nicolas Boutet <amd3002@gmail.com>
2026-01-26 06:26:54 +01:00
Felix Fontein
275961b4ad Prepare 11.4.4. 2026-01-20 22:42:33 +01:00
Felix Fontein
c083b2fa6c [stable-11] Update ignore.txt (#11428)
Update ignore.txt.
2026-01-15 22:02:29 +01:00
Felix Fontein
cc32ee2889 Fix markup in changelog. 2026-01-11 14:28:14 +01:00
Felix Fontein
e677c46329 Make sure stable-12 CI runs in cron.
(cherry picked from commit 28b16eab66)
2026-01-11 00:43:16 +01:00
patchback[bot]
e90b6d0184 [PR #11417/a689bb8e backport][stable-11] CI: Arch Linux switched to Python 3.14 (#11419)
CI: Arch Linux switched to Python 3.14 (#11417)

Arch Linux switched to Python 3.14.

(cherry picked from commit a689bb8e8d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-11 00:41:07 +01:00
patchback[bot]
c398c6bb96 [PR #11401/0e6ba072 backport][stable-11] Update CI pipelines (#11404)
Update CI pipelines (#11401)

Update CI pipelines:
- Fedora 42 -> 43 for devel
- RHEL 10.0 -> 10.1 for all ansible-core branches
- FreeBSD 13.5 -> 15.0 for devel
- Alpine 3.22 -> 3.23 for devel

(cherry picked from commit 0e6ba07261)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-08 12:28:37 +01:00
patchback[bot]
6185f06f64 [PR #11387/d4089ca2 backport][stable-11] Update RHEL 9.x to 9.7 in CI (#11392)
Update RHEL 9.x to 9.7 in CI (#11387)

* Update RHEL 9.x to 9.7 in CI.

* Add skips.

(cherry picked from commit d4089ca29a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-06 18:17:59 +01:00
Felix Fontein
91ba894643 [stable-11] cloudflare_dns: also allow 128 as a value for flag (#11377) (#11384)
cloudflare_dns: also allow 128 as a value for flag (#11377)

* Also allow 128 as a value for flag.

* Forgot to add changelog fragment.
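For context: the CAA flags field is an unsigned 8-bit value in which 128 (0x80) sets the issuer-critical bit per RFC 8659, so both 0 and 128 are legitimate values; a validator accepting only 0 would reject valid records. A minimal sketch of such a check (illustrative, not the module's actual validation code):

```python
# CAA record flags: an 8-bit field where 128 (0x80) marks the
# property as issuer-critical (RFC 8659); 0 means non-critical.
VALID_CAA_FLAGS = (0, 128)

def validate_caa_flag(flag):
    # Reject anything outside the two values Cloudflare accepts.
    if flag not in VALID_CAA_FLAGS:
        raise ValueError("CAA flag must be one of %s, got %r" % (VALID_CAA_FLAGS, flag))
    return flag

assert validate_caa_flag(0) == 0
assert validate_caa_flag(128) == 128
```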

(cherry picked from commit c00fb4fb5c)
2026-01-05 18:57:35 +01:00
patchback[bot]
5807791c80 [PR #11357/ddf05104 backport][stable-11] Add missing integration test aliases files (#11371)
Add missing integration test aliases files (#11357)

* Add missing aliases files.

* Fix directory name.

* Add another missing aliases file.

* Adjust test to also work with newer jsonpatch versions.

(cherry picked from commit ddf05104f3)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-02 15:03:05 +01:00
patchback[bot]
378c73a2a1 [PR #11369/20ba59cc backport][stable-11] Added "See Also" section (#11373)
Added "See Also" section (#11369)

* Added "See Also" section

* Corrected seealso documentation

* Update ini_file.py

Removed seealso descriptions

* Update to_ini.py

Removed seealso descriptions

* Update from_ini.py

Removed seealso descriptions

(cherry picked from commit 20ba59cce6)

Co-authored-by: daomah <129229601+daomah@users.noreply.github.com>
2026-01-02 15:02:54 +01:00
Felix Fontein
a8e60d0358 The next release will be 11.4.4. 2025-12-29 15:21:16 +01:00
Felix Fontein
f2d6ac54e9 Release 11.4.3. 2025-12-29 14:46:32 +01:00
patchback[bot]
9d7fe2f0ae [PR #11332/280d269d backport][stable-11] fix: listen_ports_facts return no facts when using with podman (#11334)
fix: listen_ports_facts return no facts when using with podman (#11332)

* fix: listen_ports_facts return no facts when using with podman

* Update changelogs/fragments/listen-ports-facts-return-no-facts.yml

---------

(cherry picked from commit 280d269d78)

Co-authored-by: Daniel Gonçalves <dangoncalves@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-28 21:15:32 +01:00
Felix Fontein
57b3ce9572 Prepare 11.4.3. 2025-12-23 21:39:27 +01:00
patchback[bot]
405435236f [PR #11316/3debc968 backport][stable-11] Fixing documentation for scaleway_private_network module. (#11318)
Fixing documentation for scaleway_private_network module. (#11316)

(cherry picked from commit 3debc968a4)

Co-authored-by: Greg Harvey <greg.harvey@gmail.com>
2025-12-23 14:19:57 +01:00
patchback[bot]
538a701f89 [PR #11295/a5aec7d6 backport][stable-11] Fix typo in auth_username in examples (#11298)
Fix typo in auth_username in examples (#11295)

(cherry picked from commit a5aec7d61a)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2025-12-19 21:10:10 +01:00
patchback[bot]
5d3132cfe0 [PR #11284/df349459 backport][stable-11] keycloak_authentication_required_actions: fix examples (#11287)
keycloak_authentication_required_actions: fix examples (#11284)

The correct parameter name is "required_actions" (plural).

(cherry picked from commit df34945991)

Co-authored-by: Samuli Seppänen <samuli.seppanen@puppeteers.net>
2025-12-15 19:25:04 +01:00
patchback[bot]
41690c84a2 [PR #11277/1b15e595 backport][stable-11] use FQCN for extending docs with files and url (#11282)
use FQCN for extending docs with files and url (#11277)

* use FQCN for extending docs with files and url

* remove typo

(cherry picked from commit 1b15e595e0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-14 12:16:23 +01:00
patchback[bot]
5d9f58b69d [PR #11276/a96a5c44 backport][stable-11] sysrc tests: skip FreeBSD 14.2 for ezjail tests (#11279)
sysrc tests: skip FreeBSD 14.2 for ezjail tests (#11276)

Looks like 14.2 no longer works.

(cherry picked from commit a96a5c44a5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-14 12:06:01 +01:00
Alexei Znamensky
b2f16f184a test(integration): monit: backport of PR 11255 (#11273)
* test(integration): monit: backport of PR 11255

* add changelog frag
2025-12-12 07:25:09 +01:00
patchback[bot]
bc61b2d656 [PR #11260/a977c6f7 backport][stable-11] fix(sanitize_cr): avoid crash when realmrep is empty (#11267)
fix(sanitize_cr): avoid crash when realmrep is empty (#11260)

* fix(docs): missing info on id when creating realms

* fix(sanitize_cr): avoid crash when realmrep is empty

* remove unrelated change

* remove unrelated change

* added changelog

* correct: changelogs

* Update changelogs

---------

(cherry picked from commit a977c6f7c1)

Co-authored-by: Guillaume Dorschner <44686652+GuillaumeDorschner@users.noreply.github.com>
Co-authored-by: Guillaume Dorschner <guillaume.dorschner@thalesgroup.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-08 23:03:21 +01:00
Alexei Znamensky
364e491b7e [stable-11] monit: fix check for pending (#11253)
* monit: fix check for pending

* add changelog frag

* adjust testcases
2025-12-03 19:19:55 +13:00
Felix Fontein
df3898b08c Next release will be 11.4.3. 2025-12-01 21:30:31 +01:00
Felix Fontein
aeb672e809 Release 11.4.2. 2025-12-01 20:47:18 +01:00
Felix Fontein
3724b36934 Prepare 11.4.2. 2025-11-30 08:37:53 +01:00
patchback[bot]
d9c09095c4 [PR #11216/6b4100d7 backport][stable-11] CONTRIBUTING.md: fixes/improvements (#11220)
CONTRIBUTING.md: fixes/improvements (#11216)

* CONTRIBUTING.md: fixes/improvements

* Update CONTRIBUTING.md

---------

(cherry picked from commit 6b4100d70f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-25 22:10:42 +01:00
Felix Fontein
43e709b9f2 [stable-11] Fix crash in module_utils.datetime.fromtimestamp() (#11206) (#11214)
Fix crash in module_utils.datetime.fromtimestamp() (#11206)

Fix crash in module_utils.datetime.fromtimestamp().

(cherry picked from commit cbf13ab6c9)
2025-11-25 21:41:49 +01:00
Michael Galati
a2042c9b93 [PR #11179/ebb53416 backport][stable-11] mas: Fix parsing on mas 3.0.0+. (#11211)
mas: Fix parsing on mas 3.0.0+. (#11179)

* mas: Fix parsing on mas 3.0.0+.

`mas` changed the formatting of `mas list` with version 3, which breaks
the parsing this module uses to determine which apps are installed.  In
particular, app IDs may now have leading space, which causes us to split
the string too early.
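The pitfall can be illustrated with plain string handling (the sample lines below are made up to mimic the described formatting, not actual `mas list` output):

```python
# Made-up lines mimicking `mas list` output where app IDs are
# right-aligned, i.e. may carry leading spaces (as with mas 3.x).
lines = [
    "  497799835  Xcode      (16.2)",
    "1480933944  OmniFocus  (4.5)",
]

# Splitting on the first space without stripping yields an empty
# app ID for the padded line:
assert lines[0].split(" ", 1)[0] == ""

# Stripping before splitting recovers the ID regardless of alignment:
app_ids = [line.strip().split(" ", 1)[0] for line in lines]
assert app_ids == ["497799835", "1480933944"]
```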

* Changelog fragment.

* Better format examples and changelog fragment.

(cherry picked from commit ebb534166e)
2025-11-25 06:46:29 +01:00
Felix Fontein
67bb94ae89 [stable-11] Bump actions/checkout from 5 to 6 in the ci group (#11200) (#11202)
Bump actions/checkout from 5 to 6 in the ci group (#11200)

Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).

Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

(cherry picked from commit a803156277)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 18:44:00 +01:00
patchback[bot]
4298003ac8 [PR #11185/4517b86e backport][stable-11] snmp_facts: update docs with dependency constraint (#11188)
snmp_facts: update docs with dependency constraint (#11185)

(cherry picked from commit 4517b86ed4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-22 22:44:05 +01:00
Felix Fontein
1b44e595a3 [stable-11] docs: migrate RTD URLs to docs.ansible.com (#11109) (#11176)
docs: migrate RTD URLs to docs.ansible.com (#11109)

* docs: update readthedocs.io URLs to docs.ansible.com equivalents

🤖 Generated with Claude Code
https://claude.ai/code

* Adjust favicon URL.

---------

(cherry picked from commit d98df2d3a5)

Co-authored-by: John Barker <john@johnrbarker.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2025-11-19 18:22:31 +01:00
patchback[bot]
d9f99fdf8d [PR #11159/6bf0780d backport][stable-11] xfconf: update state=absent doc (#11161)
xfconf: update state=absent doc (#11159)

(cherry picked from commit 6bf0780d23)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-15 21:11:01 +01:00
patchback[bot]
542772500b [PR #11150/32f0ad2f backport][stable-11] Fixed typo in decompress example documentation (#11152)
Fixed typo in decompress example documentation (#11150)

(cherry picked from commit 32f0ad2f97)

Co-authored-by: Thomas Löhr <tlhr@users.noreply.github.com>
2025-11-13 23:11:47 +01:00
patchback[bot]
b66b26259a [PR #11123/1a82e93c backport][stable-11] Re-enable Copr integration tests (#11125)
Re-enable Copr integration tests (#11123)

Fixes: https://github.com/ansible-collections/community.general/issues/10987
(cherry picked from commit 1a82e93c6d)

Co-authored-by: Maxwell G <maxwell@gtmx.me>
2025-11-12 19:46:16 +01:00
Felix Fontein
0567de50d8 [stable-11] Move ansible-core 2.17 to EOL CI (#11127)
Move ansible-core 2.17 to EOL CI.
2025-11-12 19:45:58 +01:00
Felix Fontein
147ffc6b48 [stable-11] Use Cobbler API version format to check version (#11045) (#11118)
Use Cobbler API version format to check version (#11045)

* Use Cobbler API version format to check version

Cobbler uses the formula below to return the version:

float(format(int(elems[0]) + 0.1 * int(elems[1]) + 0.001 * int(elems[2]), '.3f'))

This means that 3.3.7 becomes 3.307, which a segment-wise version-string comparison (LooseVersion) wrongly treats as greater than 3.4.
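The packed format and the comparison mismatch can be checked directly (function name is illustrative):

```python
def cobbler_version_to_float(version_string):
    # Cobbler packs major.minor.patch into a single float:
    # 3.3.7 -> 3 + 0.1 * 3 + 0.001 * 7 = 3.307
    elems = version_string.split(".")
    return float(format(int(elems[0]) + 0.1 * int(elems[1]) + 0.001 * int(elems[2]), ".3f"))

packed = cobbler_version_to_float("3.3.7")
assert packed == 3.307

# Compared as a float, the result is correct: 3.3.7 predates 3.4.0.
assert packed < 3.4

# A segment-wise version comparison of the packed value gets it backwards:
# "3.307" splits into [3, 307], and 307 > 4, so "3.307" sorts after "3.4".
assert [int(x) for x in "3.307".split(".")] > [int(x) for x in "3.4".split(".")]
```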

* Compare Cobbler version as a float

* Remove LooseVersion import

(cherry picked from commit 6f11d75047)

Co-authored-by: Bruno Travouillon <devel@travouillon.fr>
2025-11-12 06:59:31 +01:00
Felix Fontein
4594f7cd18 [stable-11] Add ignore.txt entries for bad-return-value-key (#11111) (#11116)
Add ignore.txt entries for bad-return-value-key (#11111)

Add ignore.txt entries.

(cherry picked from commit 62492fe742)
2025-11-12 06:53:58 +01:00
patchback[bot]
008de3e245 [PR #11089/c26a4e61 backport][stable-11] consul_kv: adjust RV in docs (#11091)
consul_kv: adjust RV in docs (#11089)

(cherry picked from commit c26a4e613b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-11 06:09:15 +01:00
Felix Fontein
943c021446 [stable-11] Migrate 1 RTD URLs to docs.ansible.com (#11081) (#11083)
Migrate 1 RTD URLs to docs.ansible.com (#11081)

Migrate RTD URLs to docs.ansible.com

Updated 1 ansible.readthedocs.io URL to its docs.ansible.com equivalent
as part of the Read the Docs migration.

🤖 Generated with Claude Code
https://claude.ai/code


(cherry picked from commit e8bdf46627)

Co-authored-by: John Barker <john@johnrbarker.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-10 20:35:15 +01:00
patchback[bot]
4ef7b7573b [PR #11031/3cbe44e2 backport][stable-11] Update TSS lookup plugin documentation and add Delinea Platform authentication examples (#11073)
Update TSS lookup plugin documentation and add Delinea Platform authentication examples (#11031)

* - Update documentation from Thycotic to Delinea branding
- Add comprehensive Platform authentication examples
- Enhance existing examples with clearer task names
- Improve RETURN section documentation
- Fix AccessTokenAuthorizer initialization with base_url parameter
- Add support for both Secret Server and Platform authentication methods

* Fixed linting issue and added changelog fragment file.

* Removed documentation changes from changelog file.

(cherry picked from commit 3cbe44e269)

Co-authored-by: delinea-sagar <131447653+delinea-sagar@users.noreply.github.com>
2025-11-10 06:47:40 +01:00
patchback[bot]
a0ae0a7c76 [PR #11057/0d8521c7 backport][stable-11] supervisorctl: investigate integration tests (#11064)
supervisorctl: investigate integration tests (#11057)

* supervisorctl: investigate integration tests

* wait for supervisord to complete stop

* adjust in module

(cherry picked from commit 0d8521c718)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-09 09:52:55 +01:00
patchback[bot]
4408972762 [PR #11053/ac4f657d backport][stable-11] opendj_backendprop: docs improvements (#11061)
opendj_backendprop: docs improvements (#11053)

(cherry picked from commit ac4f657d43)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-09 09:52:41 +01:00
Felix Fontein
dce8b507fd [stable-11] filesystem: xfs resize: minimal required increment (#11033) (#11042)
filesystem: xfs resize: minimal required increment (#11033)

Internally XFS uses allocation groups. Allocation groups have a maximum
size of 1 TiB - 1 block. For devices >= 4 TiB XFS uses max size
allocation groups. If a filesystem is extended and the last allocation
group is already at max size, a new allocation group is added. An
allocation group seems to require at least 64 4 KiB blocks.

For devices with an integer TiB size (>4), this initially creates a
filesystem with 1 unused block per TiB. The `resize` option detects this
unused space and tries to resize the filesystem. The xfs_growfs call
succeeds (exit 0) but does not increase the filesystem size, so the
task reports a change on every run.

Test case:
```
- hosts: localhost
  tasks:
    - ansible.builtin.command:
        cmd: truncate -s 4T /media/xfs.img
        creates: /media/xfs.img
      notify: loopdev xfs

    - ansible.builtin.meta: flush_handlers

    - name: pickup xfs.img resize
      ansible.builtin.command:
        cmd: losetup -c /dev/loop0
      changed_when: false

    - community.general.filesystem:
        dev: "/dev/loop0"
        fstype: "xfs"

    - ansible.posix.mount:
        src: "/dev/loop0"
        fstype: "xfs"
        path: "/media/xfs"
        state: "mounted"

    # always shows a diff even for newly created filesystems
    - community.general.filesystem:
        dev: "/dev/loop0"
        fstype: "xfs"
        resizefs: true

  handlers:
    - name: loopdev xfs
      ansible.builtin.command:
        cmd: losetup /dev/loop0 /media/xfs.img
```

NB: If the last allocation group is not yet at max size, the filesystem
can be resized. Detecting this requires considering the XFS topology.
Other filesystems (at least ext4) also seem to require a minimum
increment after the initial device size, but seem to use the entire
device after initial creation.

Fun observation: creating a 64(+) TiB filesystem leaves a 64(+) block
gap at the end, which is allocated in a subsequent xfs_growfs call.


(cherry picked from commit f5943201b9)

Co-authored-by: jnaab <25617714+jnaab@users.noreply.github.com>
Co-authored-by: Johannes Naab <johannes.naab@hetzner-cloud.de>
2025-11-08 10:00:56 +01:00
Alexei Znamensky
40eec12c2c xfconf: fix existing empty array case (#11026) (#11027)
* xfconf: fix existing empty array case

* fix xfconf_info as well

* add changelog frag

(cherry picked from commit b28ac655fc)
2025-11-02 22:07:52 +01:00
Felix Fontein
cd4a02605e Adjust CI schedules: remove stable-9, move stable-10 to weekly.
(cherry picked from commit 09d8b2bb77)
2025-11-02 14:10:30 +01:00
Felix Fontein
0929d24077 The next release will be 11.4.2. 2025-11-02 14:10:15 +01:00
Felix Fontein
ac7b95e710 Release 11.4.1. 2025-11-02 13:04:47 +01:00
patchback[bot]
cd50836977 [PR #11001/eb6337c0 backport][stable-11] omapi_host: fix bytes vs. str confusion (#11022)
omapi_host: fix bytes vs. str confusion (#11001)

* omapi_host: fix bytes vs. str confusion

After an update of the control node from Debian
bookworm to trixie, the omapi_host module fails to
work with the error message:

Key of type 'bytes' is not JSON serializable by the
'module_legacy_m2c' profile.

https://github.com/ansible/ansible/issues/85937 had the
same error, but the fix is a bit more intricate here
because the result dict is dynamically generated from
an API response object.

This also fixes unpacking the MAC and IP address and
hardware type, which were broken for Python3.
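The failure mode can be reproduced with the standard library alone (a sketch with made-up keys, not the module's actual result dict):

```python
import json

# Minimal reproduction: a result dict built from a raw API response,
# carrying bytes where the module result needs str.
result = {b"name": b"host1", b"hardware-address": b"52:54:00:12:34:56"}

try:
    json.dumps(result)
    raise AssertionError("expected serialization to fail")
except TypeError:
    pass  # bytes keys/values are not JSON serializable

# Decoding everything to str before returning fixes serialization:
decoded = {k.decode(): v.decode() for k, v in result.items()}
assert json.loads(json.dumps(decoded)) == {
    "name": "host1",
    "hardware-address": "52:54:00:12:34:56",
}
```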

* Merge suggestion for changelog fragment

* do not unpack_ip twice

Noticed by Felix Fontein <felix@fontein.de>

* mention py3k in changelog fragment, too

---------


(cherry picked from commit eb6337c0c9)

Co-authored-by: mirabilos <tg@mirbsd.org>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-30 20:30:00 +01:00
patchback[bot]
23cc57c9f6 [PR #11005/54af64ad backport][stable-11] keycloak_user: mark credentials[].value as no_log=True (#11012)
keycloak_user: mark credentials[].value as no_log=True (#11005)

Mark credentials[].value as no_log=True.
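In Ansible module terms, this means the nested `value` suboption carries `no_log=True` in the argument spec, so the secret is masked in logs and in the module invocation output. A sketch of the shape as a plain dict (option names mirror the module, but details are illustrative):

```python
# Shape of the relevant part of an AnsibleModule argument spec:
# no_log=True on the nested suboption tells Ansible to mask the value.
argument_spec = dict(
    credentials=dict(
        type="list",
        elements="dict",
        options=dict(
            type=dict(type="str"),
            value=dict(type="str", no_log=True),  # secret: masked by Ansible
            temporary=dict(type="bool", default=False),
        ),
    ),
)

assert argument_spec["credentials"]["options"]["value"]["no_log"] is True
```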

(cherry picked from commit 54af64ad36)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-29 17:15:36 +00:00
patchback[bot]
b9119335cd [PR #10955/e84f59a6 backport][stable-11] fix(pritunl_user): improve resilience to null or missing user parameters (#11014)
fix(pritunl_user): improve resilience to null or missing user parameters (#10955)

* fix(pritunl_user): improve resilience to null or missing user parameters

* added changelog fragment - 10955

* standardize 10955 changelog fragment content

* simplify user params comparison

* simplify list fetch

* simplify remote value retrieval

---------

(cherry picked from commit e84f59a62d)

Co-authored-by: David Jenkins <david.jenkins@twosixtech.com>
Co-authored-by: djenkins <djenkins@twosix.net>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-29 17:15:18 +00:00
patchback[bot]
e4e33e6824 [PR #10965/ce0d06b3 backport][stable-11] onepassword: extend CLI class initialization with additional parameters (#11007)
onepassword: extend CLI class initialization with additional parameters (#10965)

* onepassword: extend CLI class initialization with additional parameters

* add changelog fragment 10965-onepassword-bugfix.yml

* Update changelogs/fragments/10965-onepassword-bugfix.yml

---------

(cherry picked from commit ce0d06b306)

Co-authored-by: Matthew <mjmjelde@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-28 21:38:35 +01:00
Felix Fontein
9cb619ff6c [stable-11] terraform: Fix bug when None values aren't processed correctly (#10961) (#11003)
terraform: Fix bug when None values aren't processed correctly (#10961)

* terraform: Fix bug when None values aren't processed correctly

Just found that I can't pass null values as complex variables into terraform using this module, while I can do that with terraform itself. Fixed the undesired behavior.
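The gist of the fix can be sketched as follows: when serializing complex variables for terraform, Python's None must be emitted as terraform's null literal rather than being skipped or stringified (a simplified sketch under that assumption, not the module's actual code):

```python
import json

def format_complex_var(value):
    # Simplified sketch of serializing a complex -var value for terraform.
    # The key point: None must become terraform's `null` literal.
    if value is None:
        return "null"
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, (list, tuple)):
        return "[" + ", ".join(format_complex_var(v) for v in value) + "]"
    if isinstance(value, dict):
        return "{" + ", ".join(
            "%s = %s" % (k, format_complex_var(v)) for k, v in value.items()
        ) + "}"
    return json.dumps(value)

assert format_complex_var({"a": None, "b": [1, None]}) == "{a = null, b = [1, null]}"
```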

* chore: changelog fragment 10961-terraform-complexvars-null-bugfix.yaml

* Update changelogs/fragments/10961-terraform-complexvars-null-bugfix.yaml

* Update plugins/modules/terraform.py

* Update plugins/modules/terraform.py

* Fix condition to check for None type in terraform.py

---------


(cherry picked from commit af8c4fb95e)

Co-authored-by: nbragin4 <139489942+nbragin4@users.noreply.github.com>
2025-10-28 20:53:57 +01:00
Felix Fontein
47e808da51 Prepare 11.4.1. 2025-10-27 19:40:08 +01:00
patchback[bot]
01b25a8236 [PR #10988/f6781f65 backport][stable-11] CI: temporarily disable tests for copr (#10991)
CI: temporarily disable tests for copr (#10988)

Temporarily disable tests for copr.

(cherry picked from commit f6781f654e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-26 21:59:52 +01:00
patchback[bot]
6cf8ce06ca [PR #10953/258e65f5 backport][stable-11] keycloak_user_rolemapping: docs fixes and examples about mapping realm roles in keycloak_user_rolemapping (#10963)
keycloak_user_rolemapping: docs fixes and examples about mapping realm roles in keycloak_user_rolemapping (#10953)

* Fix docs and add examples about mapping realm roles for keycloak_user_rolemapping.py module (#7149)

* fix sanity tests

(cherry picked from commit 258e65f5fc)

Co-authored-by: Stanislav Shamilov <shamilovstas@protonmail.com>
2025-10-23 21:37:47 +02:00
Felix Fontein
31f130a56f Add ignore.txt entries. 2025-10-23 21:32:04 +02:00
patchback[bot]
7ef0705984 [PR #10956/4c7be8f2 backport][stable-11] cloudflare_dns: rollback validation for CAA records (#10957)
cloudflare_dns: rollback validation for CAA records (#10956)

* cloudflare_dns: rollback validation for CAA records

* add changelog frag

(cherry picked from commit 4c7be8f268)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-23 07:05:39 +02:00
patchback[bot]
5e326a25a4 [PR #10948/7572b46c backport][stable-11] filesystem: docs adjustments (#10952)
filesystem: docs adjustments (#10948)

(cherry picked from commit 7572b46c7b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-21 06:45:36 +02:00
patchback[bot]
6526e0196a [PR #10933/c850e209 backport][stable-11] Add support for client auth in Keycloak client secrets module (#10946)
Add support for client auth in Keycloak client secrets module (#10933)

* keycloak: add client authentication support for client_secret

* readd ['token', 'auth_realm']

---------

(cherry picked from commit c850e209ab)

Signed-off-by: Marius Bertram <marius@brtrm.de>
Co-authored-by: Marius Bertram <marius@brtrm.de>
2025-10-19 21:22:44 +02:00
patchback[bot]
e757adbfca [PR #10918/7e666a9c backport][stable-11] fix(modules/gitlab_runner): Fix exception in check mode on new runners (#10944)
fix(modules/gitlab_runner): Fix exception in check mode on new runners (#10918)

* fix(modules/gitlab_runner): Fix exception in check mode on new runners

When a new runner is added in check mode, the role used to throw an
exception. Fix this by returning a valid runner object instead of a
boolean.

Fixes #8854

* docs: Add changelog fragment

(cherry picked from commit 7e666a9c31)

Co-authored-by: carlfriedrich <carlfriedrich@posteo.de>
2025-10-19 09:31:24 +02:00
patchback[bot]
3a2ce4add5 [PR #10937/2bd44584 backport][stable-11] cloudflare_dns: rollback validation for SRV records (#10938)
cloudflare_dns: rollback validation for SRV records (#10937)

* cloudflare_dns: rollback validation for SRV records

* add changelog frag

(cherry picked from commit 2bd44584d3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-18 09:47:57 +02:00
patchback[bot]
3868664046 [PR #10926/9dedd774 backport][stable-11] Add __init__.py to work around ansible-test/pylint bug (#10928)
Add __init__.py to work around ansible-test/pylint bug (#10926)

Add __init__.py to work around ansible-test/pylint bug.

(cherry picked from commit 9dedd77459)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-15 21:55:32 +02:00
Felix Fontein
476914013d [stable-11] Add stable-2.20 to CI, bump version of devel branch (#10923) (#10924)
Add stable-2.20 to CI, bump version of devel branch (#10923)

Add stable-2.20 to CI, bump version of devel branch.

(cherry picked from commit 8472dc22ea)
2025-10-15 12:57:20 +02:00
patchback[bot]
dcfaee08a0 [PR #10914/c5253c50 backport][stable-11] build(deps): bump github/codeql-action from 3 to 4 in the ci group (#10915)
build(deps): bump github/codeql-action from 3 to 4 in the ci group (#10914)

Bumps the ci group with 1 update: [github/codeql-action](https://github.com/github/codeql-action).

Updates `github/codeql-action` from 3 to 4
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...



(cherry picked from commit c5253c5007)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 09:57:50 +02:00
patchback[bot]
78b61cc5cb [PR #10910/10bdd9c5 backport][stable-11] tests/unit/plugins/modules/test_composer.yaml: remove redundant lines (#10911)
tests/unit/plugins/modules/test_composer.yaml: remove redundant lines (#10910)

(cherry picked from commit 10bdd9c56b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-12 11:05:00 +02:00
patchback[bot]
32438bdf80 [PR #10891/5f471b8e backport][stable-11] refactor dict from literal list (#10895)
refactor dict from literal list (#10891)

* refactor dict from literal list

* add changelog frag

(cherry picked from commit 5f471b8e5b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-10 19:15:49 +02:00
patchback[bot]
8685d12996 [PR #10893/14a858fd backport][stable-11] random_string: replace random.SystemRandom() with secrets.SystemRandom() (#10894)
random_string: replace random.SystemRandom() with secrets.SystemRandom() (#10893)

* random_string: replace random.SystemRandom() with secrets.SystemRandom()
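Both names point at the same OS-entropy-backed generator; in CPython, `secrets` simply re-exports `random.SystemRandom`, so the swap documents cryptographic intent rather than changing behavior:

```python
import random
import secrets
import string

# secrets re-exports the same os.urandom-backed class:
assert secrets.SystemRandom is random.SystemRandom

# Typical use for generating a random string, as in the plugin:
rng = secrets.SystemRandom()
chars = string.ascii_letters + string.digits
token = "".join(rng.choice(chars) for _ in range(16))
assert len(token) == 16 and all(c in chars for c in token)
```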

* add the forgotten blank line

* Update changelogs/fragments/replace-random-with-secrets.yml

* readd the description

* Update changelogs/fragments/replace-random-with-secrets.yml

---------

(cherry picked from commit 14a858fd9c)

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-10 19:15:21 +02:00
patchback[bot]
f0c18ec730 [PR #10887/68b83451 backport][stable-11] pacman: link to yay bug report (#10890)
pacman: link to yay bug report (#10887)

Link to yay bug report.

(cherry picked from commit 68b8345199)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-10 08:10:22 +02:00
Felix Fontein
3d4dc21a68 The next release will be 11.4.1. 2025-10-06 19:04:32 +02:00
Felix Fontein
49e620cb6a Release 11.4.0. 2025-10-06 18:29:30 +02:00
patchback[bot]
e82c2ad80d [PR #10842/f34842b7 backport][stable-11] Keycloak client scope support (#10882)
Keycloak client scope support (#10842)

* first commit

* sanity

* fix test

* trailing white space

* sanity

* Fragment

* test sanity

* Update changelogs/fragments/10842-keycloak-client-scope-support.yml

* Update plugins/modules/keycloak_client.py

* add client_scopes_behavior

* Sanity

* Sanity

* Update plugins/modules/keycloak_client.py

* Fix typo.

* Update plugins/modules/keycloak_client.py

* Update plugins/modules/keycloak_client.py

* Update plugins/modules/keycloak_client.py

* Update plugins/modules/keycloak_client.py

---------

(cherry picked from commit f34842b7b2)

Co-authored-by: desand01 <desrosiers.a@hotmail.com>
Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-06 18:28:38 +02:00
patchback[bot]
74dfcae673 [PR #10880/30894f41 backport][stable-11] github_app_access_token: add support for GitHub Enterprise Server (#10881)
github_app_access_token: add support for GitHub Enterprise Server (#10880)

* github_app_access_token: add support for GitHub Enterprise Server (#10879)
Add an option to specify the API endpoint for a GitHub Enterprise Server.
If the option is not specified, it defaults to https://api.github.com.

* refactor: apply changes as suggested by felixfontein

* docs: fix nox check error and typo

nox check: plugins/lookup/github_app_access_token.py:57:1: DOCUMENTATION: error: too many blank lines (1 > 0)  (empty-lines)

* refactor: apply changes as suggested by russoz

* refactor: apply changes as suggested by felixfontein

(cherry picked from commit 30894f4144)

Co-authored-by: Chris <chodonne@gmail.com>
2025-10-06 18:28:31 +02:00
patchback[bot]
1e01aeacb4 [PR #10873/6cd46654 backport][stable-11] Avoid six in plugin code (#10875)
Avoid six in plugin code (#10873)

Avoid six in plugin code.

(cherry picked from commit 6cd4665412)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-05 07:36:47 +02:00
patchback[bot]
3b207ba0fd [PR #10874/750adb43 backport][stable-11] pipx: adjustments for pipx 1.8.0 (#10876)
pipx: adjustments for pipx 1.8.0 (#10874)

* pipx: adjustments for pipx 1.8.0

* add changelog frag

* typo

(cherry picked from commit 750adb431a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-05 07:23:57 +02:00
patchback[bot]
90850b3763 [PR #10689/cc41d9da backport][stable-11] gem: fix soundness issue when uninstalling default gems on Ubuntu (#10878)
gem: fix soundness issue when uninstalling default gems on Ubuntu (#10689)

* Attempt to fix gem soundness issue

* Return command execution

* Fix value error

* Attempt to fix failing tests

* Fix minor issues

* Update changelog

* Update tests/integration/targets/gem/tasks/main.yml

* Update changelogs/fragments/10689-gem-prevent-soundness-issue.yml

* Remove state and name from gem error message

* Improve gem uninstall check

* Make unit tests pass

* Fix linting issues

* gem: Remove length check and adapt unit tests

* Adapt gem unit tests

* gem: improve error msg

* Fix sanity error

* Fix linting issue

---------


(cherry picked from commit cc41d9da60)

Co-authored-by: Giorgos Drosos <56369797+gdrosos@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-05 07:23:48 +02:00
patchback[bot]
5af8d6132d [PR #10829/7c40c6b6 backport][stable-11] Keycloak role fix changed status (#10839)
Keycloak role fix changed status (#10829)

* Exclude aliases before comparison

* add test

* fragment

* Update changelogs/fragments/10829-fix-keycloak-role-changed-status.yml

---------

(cherry picked from commit 7c40c6b6b5)

Co-authored-by: desand01 <desrosiers.a@hotmail.com>
Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-05 07:08:09 +02:00
patchback[bot]
bce7efb866 [PR #10863/9d0150b2 backport][stable-11] [doc] update requirements for all consul modules/lookups (#10872)
[doc] update requirements for all consul modules/lookups (#10863)

* [doc] update requirements for consul_kv module

python-consul has been unmaintained for a while. It uses a legacy way of passing the Consul token when sending requests. This leads to warning messages in the Consul log and will eventually break communication. Using the maintained py-consul library ensures compatibility with newer Consul versions.

* [doc] replace all python-consul occurrences with py-consul

* [fix] tests and possible pip server errors

* [chore] remove reference to python-consul in comment

---------


(cherry picked from commit 9d0150b2c3)

Co-authored-by: Sebastian Damm <SipSeb@users.noreply.github.com>
Co-authored-by: Sebastian Damm <sebastian.damm@pascom.net>
2025-10-03 07:53:16 +02:00
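The token-passing difference this commit describes can be sketched with the standard library alone: modern Consul expects the ACL token in the `X-Consul-Token` header, while the legacy `?token=` query parameter used by the unmaintained python-consul triggers deprecation warnings. The helper below is illustrative only; py-consul handles this internally.

```python
import urllib.request


def consul_kv_request(host, key, token, use_header=True):
    """Build a Consul KV GET request, passing the ACL token either via the
    X-Consul-Token header (modern) or via the deprecated ?token= query
    parameter (legacy python-consul style). Illustrative sketch only."""
    url = "http://{0}:8500/v1/kv/{1}".format(host, key)
    if use_header:
        req = urllib.request.Request(url)
        req.add_header("X-Consul-Token", token)
    else:
        # Legacy style: Consul logs a warning and will eventually reject this.
        req = urllib.request.Request("{0}?token={1}".format(url, token))
    return req
```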
patchback[bot]
edd8981af6 [PR #10867/41b65161 backport][stable-11] Fix typos: s/the the/the/ (#10868)
Fix typos: s/the the/the/ (#10867)

(cherry picked from commit 41b65161bd)

Co-authored-by: Pierre Riteau <pierre@stackhpc.com>
2025-09-30 21:54:43 +02:00
patchback[bot]
c2adcfa51d [PR #10864/4b644ae4 backport][stable-11] docs: fix sphinx warnings in uthelper guide (#10865)
docs: fix sphinx warnings in uthelper guide (#10864)

(cherry picked from commit 4b644ae41b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-09-28 12:42:23 +02:00
Felix Fontein
fd72e9b2a3 Add repository configuration to antsibull-nox.toml.
(cherry picked from commit e9b1788bb9)
2025-09-26 07:04:11 +02:00
patchback[bot]
96ed253c79 [PR #10861/8b5f4b05 backport][stable-11] Fix RST syntax error (#10862)
Fix RST syntax error (#10861)

Fix RST syntax error.

(cherry picked from commit 8b5f4b055f)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-25 21:13:55 +02:00
patchback[bot]
b21e6466c7 [PR #10857/68684a7a backport][stable-11] github_deploy_key: make sure variable exists before use (#10860)
github_deploy_key: make sure variable exists before use (#10857)

Make sure variable exists before use.

(cherry picked from commit 68684a7a4c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-25 20:49:03 +02:00
patchback[bot]
de8d1760c4 [PR #10852/648ff7db backport][stable-11] yaml cache plugin: make compatible with ansible-core 2.19 (#10856)
yaml cache plugin: make compatible with ansible-core 2.19 (#10852)

Make compatible with ansible-core 2.19.

(cherry picked from commit 648ff7db02)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-25 20:34:42 +02:00
Felix Fontein
a098845a0f Prepare 11.4.0. 2025-09-21 20:46:10 +02:00
patchback[bot]
ff6735f0ce [PR #10840/b865bf57 backport][stable-11] Fix keycloak sub-group search (#10846)
Fix keycloak sub-group search (#10840)

* fix bug in missing realm argument when searching for groups

* MR change fragment

* 39+1=40

(cherry picked from commit b865bf5751)

Co-authored-by: Jakub Danek <danekja@users.noreply.github.com>
2025-09-21 20:44:22 +02:00
patchback[bot]
657268120c [PR #10832/0f23b9e3 backport][stable-11] Force Content-type header to application/json if is_pre740 is false (#10848)
Force Content-type header to application/json if is_pre740 is false (#10832)

* Force Content-type header to application/json if is_pre740 is false

* Remove response variable from fail_json module

* Add a missing blank line to match pep8 requirement

* Add changelog fragment of issue #10796

* Rename fragment section

* Improve fragment readability



---------



(cherry picked from commit 0f23b9e391)

Co-authored-by: X <2465124+broferek@users.noreply.github.com>
Co-authored-by: ludovic <ludovic.petetin@aleph-networks.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-21 20:44:12 +02:00
patchback[bot]
e541b5b709 [PR #10830/2bf8ae88 backport][stable-11] timezone: mention that Debian 13 also needs util-linux-extra (#10837)
timezone: mention that Debian 13 also needs util-linux-extra (#10830)

Mention that Debian 13 also needs util-linux-extra.

(cherry picked from commit 2bf8ae88be)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-18 22:14:58 +02:00
patchback[bot]
9a3fd8fabe [PR #10812/7a231a24 backport][stable-11] gitlab_*_variable: add description option (#10834)
gitlab_*_variable: add `description` option (#10812)

(cherry picked from commit 7a231a248e)

Co-authored-by: David Phillips <phillid@users.noreply.github.com>
2025-09-18 22:14:48 +02:00
patchback[bot]
baf124bc17 [PR #10805/833e6e36 backport][stable-11] homebrew: Support old_tokens and oldnames in homebrew package data (#10831)
homebrew: Support old_tokens and oldnames in homebrew package data (#10805)

* homebrew: Support old_tokens and oldnames in homebrew package data

Fixes #10804

Since brew info will accept old_tokens (for casks) and oldnames (for formulae) when provided by the homebrew module "name" argument, the module also needs to consider these old names as valid for the given package. This commit updates _extract_package_name to do that.

All existing package name tests, including existing tests for name aliases and tap prefixing, have been consolidated with new name tests into package_names.yml.

* Added changelog fragment.

* homebrew: replace non-py2 compliant f-string usage

* code formatting lint, and py2 compatibility fixes

* homebrew: added licenses to new files, nox lint

* Update plugins/modules/homebrew.py

use str.format() instead of string addition



* Update tests/integration/targets/homebrew/tasks/casks.yml



* Update tests/integration/targets/homebrew/tasks/package_names_item.yml



* Update tests/integration/targets/homebrew/tasks/formulae.yml



* Fixes for performance concerns on new homebrew tests.
1) tests for alternate package names are commented out in main.yml.
2) the "install via alternate name, uninstall via base name" test
   case was deemed duplicative, and has been deleted.
3) minor fixes to use jinja2 "~" for string concat instead of "+"

* Fix nox lint

---------


(cherry picked from commit 833e6e36de)

Co-authored-by: brad2014 <brad2014@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-15 19:39:57 +02:00
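The name-matching behaviour described above can be sketched as follows. Here `info` mimics one entry of `brew info --json=v2` output (aliases, oldnames, and old_tokens are real homebrew JSON fields), but the helper itself is a simplified stand-in for the module's `_extract_package_name`, not its actual code:

```python
def name_matches_package(requested, info):
    """Return True if the requested name refers to this package, accepting
    current names, aliases, and historical names (oldnames/old_tokens)."""
    candidates = set()
    for key in ("name", "token", "full_name", "full_token"):
        if info.get(key):
            candidates.add(info[key])
    for key in ("aliases", "oldnames", "old_tokens"):
        candidates.update(info.get(key) or [])
    # Also accept tap-prefixed requests such as "homebrew/core/wget".
    short = requested.rsplit("/", 1)[-1]
    return requested in candidates or short in candidates
```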
patchback[bot]
712bd5194d [PR #10810/c1e877d2 backport][stable-11] github_app_access_token: fix compatibility import of jwt (#10826)
github_app_access_token: fix compatibility import of jwt (#10810)

Fix compatibility import of jwt.

(cherry picked from commit c1e877d254)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-13 10:31:01 +02:00
patchback[bot]
0dfa80d386 [PR #10822/0911db45 backport][stable-11] pipx: review tests (#10824)
pipx: review tests (#10822)

(cherry picked from commit 0911db457e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-09-13 10:30:52 +02:00
patchback[bot]
e1eb88def5 [PR #10823/562d2ae5 backport][stable-11] parted: join command list for fail_json message (#10827)
parted: join command list for fail_json message (#10823)

* parted: join command list for fail_json message

* add changelog frag

(cherry picked from commit 562d2ae5b1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-09-13 10:30:42 +02:00
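The fix above amounts to rendering the command list as one string before it reaches the error message; a raw list would repr as `['parted', '-s', ...]`. A minimal sketch using the standard library (`shlex.join`, Python 3.8+, also quotes arguments containing spaces):

```python
import shlex


def format_failed_command(cmd):
    """Render a command argument list as a single shell-style string for an
    error message, instead of the raw list repr."""
    return shlex.join(cmd)
```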
patchback[bot]
bdf0c4e0bf [PR #10818/d2e2395a backport][stable-11] Speed up tests in android_sdk module (#10821)
Speed up tests in android_sdk module (#10818)

Changed the dependency used to test functionality in the android_sdk module. The previous dependency was ~100MB; the current one is ~6MB. This should speed up the tests a bit and reduce traffic.

(cherry picked from commit d2e2395ae3)

Co-authored-by: Stanislav Shamilov <shamilovstas@protonmail.com>
2025-09-12 19:42:53 +02:00
patchback[bot]
0b2f50a3ed [PR #10813/a7e4cee4 backport][stable-11] Remove obsolete test conditions (#10815)
Remove obsolete test conditions (#10813)

* Fedora 31 and 32 are EOL, remove conditions related


(cherry picked from commit a7e4cee47d)

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-09-12 06:41:19 +02:00
Felix Fontein
f9b7938cf6 Release 11.3.0. 2025-09-08 19:07:07 +02:00
patchback[bot]
053d0aec28 [PR #10795/f772bcda backport][stable-11] gitlab_protected_branch: refactor, add allow_force_push, code_owner_approval_required (#10803)
gitlab_protected_branch: refactor, add `allow_force_push`, `code_owner_approval_required` (#10795)

* gitlab_protected_branch: fix typo

* gitlab_protected_branch: lump parameters into options dictionary

Hardcoding parameter lists gets repetitive. Refactor this module to use
an options dictionary like many other gitlab_* modules. This makes it
cleaner to add new options.

* gitlab_protected_branch: update when possible

Until now, the module deletes and re-creates the protected branch if any
change is detected. This makes sense for the access level parameters, as
these are not easily mutated after creation.

However, in order to add further options which _can_ easily be updated,
we should support updating by default, unless known-immutable parameters
are changing.

* gitlab_protected_branch: add `allow_force_push` option

* gitlab_protected_branch: add `code_owner_approval_required` option

* gitlab_protected_branch: add issues to changelog

* Update changelog.

---------


(cherry picked from commit f772bcda88)

Co-authored-by: David Phillips <phillid@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-08 19:06:38 +02:00
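The refactor described above (an options dictionary plus update-when-possible) can be sketched as below. The option table and its `immutable` flags are hypothetical stand-ins for the module's actual schema; the point is that a recreate is only needed when a known-immutable parameter changes:

```python
# Hypothetical option table mapping module parameters to API attributes,
# flagging which ones cannot be changed in place after creation.
OPTIONS = {
    "merge_access_levels": {"api": "merge_access_level", "immutable": True},
    "push_access_levels": {"api": "push_access_level", "immutable": True},
    "allow_force_push": {"api": "allow_force_push", "immutable": False},
    "code_owner_approval_required": {"api": "code_owner_approval_required", "immutable": False},
}


def plan_change(existing, desired):
    """Return 'none', 'update', or 'recreate' depending on which options differ."""
    changed = [k for k, v in desired.items()
               if v is not None and existing.get(OPTIONS[k]["api"]) != v]
    if not changed:
        return "none"
    if any(OPTIONS[k]["immutable"] for k in changed):
        return "recreate"
    return "update"
```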
Felix Fontein
f9baa999a8 Revert "Release 11.3.0."
This reverts commit ab10b6ba36.
2025-09-08 19:02:54 +02:00
Felix Fontein
ab10b6ba36 Release 11.3.0. 2025-09-08 18:58:07 +02:00
patchback[bot]
e8a6fabf4c [PR #10791/cb84a0e9 backport][stable-11] Add Option to configure webAuthnPolicies for Keycloak (#10800)
Add Option to configure webAuthnPolicies for Keycloak (#10791)

* Add Option to configure webAuthnPolicies for Keycloak

* Mark webauth properties as noLog false

* fix line length

* rename webauthn stuff to match api of keycloak

* rename webauthn stuff to match api of keycloak

* Update changelogs/fragments/keycloak-realm-webauthn-policies.yml



* add version for each type

* Update plugins/modules/keycloak_realm.py



---------




(cherry picked from commit cb84a0e99f)

Co-authored-by: Julian Thanner <62133932+Juoper@users.noreply.github.com>
Co-authored-by: Julian Thanner <julian.thanner@check24.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-08 18:54:56 +02:00
patchback[bot]
5fca1f641b [PR #10784/062b63bd backport][stable-11] Add filters to_yaml and to_nice_yaml (#10802)
Add filters to_yaml and to_nice_yaml (#10784)

* Add filters to_yaml and to_nice_yaml.

* Allow to redact sensitive values.

* Add basic tests.

* Work around https://github.com/ansible/ansible/issues/85783.

* Cleanup.

(cherry picked from commit 062b63bda5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-08 18:54:47 +02:00
patchback[bot]
88f0a4c770 [PR #10787/3574b3fa backport][stable-11] gitlab_*_variable: support masked-and-hidden variables (#10801)
gitlab_*_variable: support masked-and-hidden variables (#10787)

* gitlab_*_variable: support masked-and-hidden variables

Support masking and hiding GitLab project and group variables. In the
GitLab API, variables that are hidden are also masked by implication.
Note gitlab_instance_variable is unmodified since instance variables
cannot be hidden.

* gitlab_*_variable: add `hidden` to legacy `vars` syntax

* gitlab_*_variable: address review comments in doc

(cherry picked from commit 3574b3fa93)

Co-authored-by: David Phillips <phillid@users.noreply.github.com>
2025-09-08 18:54:38 +02:00
patchback[bot]
9a565f356c [PR #10665/3baa13a3 backport][stable-11] pacemaker_resource: Add cloning support for resources and groups (#10798)
pacemaker_resource: Add cloning support for resources and groups (#10665)

* add clone state for pacemaker_resource

* add changelog fragment

* Additional description entry for comment header

* Apply suggestions from code review



* Update plugins/modules/pacemaker_resource.py



* fix formatting for yamllint

* Apply code review suggestions

* refactor state name to cloned

* Update plugins/modules/pacemaker_resource.py



* Apply suggestions from code review



* Apply suggestions from code review

---------



(cherry picked from commit 3baa13a3e4)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-09-07 21:36:35 +02:00
Felix Fontein
3c0a9d7826 Prepare 11.3.0. 2025-09-04 07:09:50 +02:00
patchback[bot]
12cf3dc19a [PR #10726/d0123a10 backport][stable-11] django_dumpdata, django_loaddata: new modules (#10790)
django_dumpdata, django_loaddata: new modules (#10726)

* django module, module_utils: adjustments

* more fixes

* more fixes

* further simplification

* django_dumpdata/django_loaddata: new modules

* Update plugins/modules/django_dumpdata.py



* add note about idempotency

---------


(cherry picked from commit d0123a1038)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-03 22:29:17 +02:00
patchback[bot]
6e98e3d3eb [PR #10785/aed763da backport][stable-11] gitlab_*_access_token: add missing scopes (#10789)
gitlab_*_access_token: add missing scopes (#10785)

Over time, GitLab added extra scopes to the API. I'm in here to add
self_rotate, but may as well add all other missing scopes while I'm
here.

(cherry picked from commit aed763dae7)

Co-authored-by: David Phillips <phillid@users.noreply.github.com>
2025-09-03 21:46:35 +02:00
patchback[bot]
716a1b924e [PR #10783/f1f167e3 backport][stable-11] dnf_versionlock: minor refactor (#10788)
dnf_versionlock: minor refactor (#10783)

* dnf_versionlock: minor refactor

* Python 2 does not appreciate clever syntax

* Update plugins/modules/dnf_versionlock.py

* Update plugins/modules/dnf_versionlock.py

* rollback raw patterns adjustment

(cherry picked from commit f1f167e3fc)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-09-03 21:46:21 +02:00
patchback[bot]
fa4bf56fed [PR #10638/07ce0041 backport][stable-11] CI: Add Debian 13 Trixie (#10782)
CI: Add Debian 13 Trixie (#10638)

* Add Debian 13 Trixie to CI.

* Add adjustments.

* Disable one apache2_module test for Debian 13.

* Disable ejabberd_user test on Debian 13.

* Fix paramiko install.

* Skip cloud_init_data_facts on Debian 13.

* Fix postgresql setup.

* Fix timezone tests.

(cherry picked from commit 07ce00417d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-31 16:53:54 +02:00
patchback[bot]
2c385cfab5 [PR #10779/4a70d409 backport][stable-11] Deprecate hiera lookup (#10781)
Deprecate hiera lookup (#10779)

Deprecate hiera lookup.

(cherry picked from commit 4a70d4091d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-31 16:51:55 +02:00
patchback[bot]
e44011ff94 [PR #10768/b4984350 backport][stable-11] zpool: fix broken example (#10772)
zpool: fix broken example (#10768)

Fix broken example.

(cherry picked from commit b498435066)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-31 12:11:34 +02:00
patchback[bot]
25ffe69b51 [PR #10727/6f40eff6 backport][stable-11] simplify string formatting in some modules (#10773)
simplify string formatting in some modules (#10727)

* simplify string formatting in some modules

* add changelog frag

(cherry picked from commit 6f40eff632)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-31 12:11:27 +02:00
patchback[bot]
144945894f [PR #10642/f6e1d908 backport][stable-11] parted: command args as list rather than string (#10774)
parted: command args as list rather than string (#10642)

* parted: command args as list rather than string

* add changelog frag

* add missing command line dash args

* make scripts as lists as well

* Apply suggestions from code review



---------


(cherry picked from commit f6e1d90870)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-31 12:11:16 +02:00
patchback[bot]
4c85efd807 [PR #10769/e6502a8e backport][stable-11] xenserver: remove required=false from arg spec (#10775)
xenserver: remove required=false from arg spec (#10769)

* xenserver: remove required=false from arg spec

* add changelog frag

(cherry picked from commit e6502a8e51)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-31 12:11:02 +02:00
patchback[bot]
cdadfa979e [PR #10770/3cc4f28f backport][stable-11] minor fixes in doc guides (#10777)
minor fixes in doc guides (#10770)

(cherry picked from commit 3cc4f28fd7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-31 12:10:41 +02:00
patchback[bot]
e3a8d238a8 [PR #10752/f6003f61 backport][stable-11] selective: don't hard code ansible_loop_var 'item' (#10764)
selective: don't hard code ansible_loop_var 'item' (#10752)

* selective: don't hard code ansible_loop_var 'item'

* Add changelog fragment

* Update changelog message



---------


(cherry picked from commit f6003f61cc)

Co-authored-by: Hoang Nguyen <folliekazetani@protonmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-29 07:04:00 +02:00
patchback[bot]
93a1aa4e38 [PR #10751/d6ad9beb backport][stable-11] kdeconfig: add support for kwriteconfig6 (#10762)
kdeconfig: add support for kwriteconfig6 (#10751)

* kdeconfig: add support for kwriteconfig6

Rationale:
With a minimal install of KDE Plasma 6, the kdeconfig module would systematically fail with the following error: `kwriteconfig is not installed.`
In this configuration, kwriteconfig6 is the only version of kwriteconfig installed, and the kdeconfig module did not find it.

Fixes #10746

* Add changelog fragment

* Update changelogs/fragments/10751-kdeconfig-support-kwriteconfig6.yml



---------


(cherry picked from commit d6ad9beb58)

Co-authored-by: Thibault Geoffroy <33561374+nebularnoise@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-29 06:59:58 +02:00
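The lookup this fix implies can be sketched as a simple PATH probe over candidate binary names. The candidate list and its order are illustrative (the commit only confirms kwriteconfig6 is now found); `which` is injectable so the sketch can be exercised without KDE installed:

```python
import shutil


def find_kwriteconfig(which=shutil.which):
    """Return the path of the first kwriteconfig binary found on PATH,
    probing newer versions first. Returns None if none is installed."""
    for name in ("kwriteconfig6", "kwriteconfig5", "kwriteconfig"):
        path = which(name)
        if path:
            return path
    return None
```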
patchback[bot]
cf1b02c6b9 [PR #10705/b1c75339 backport][stable-11] openbsd_pkg: add support for removing unused dependencies (#10758)
openbsd_pkg: add support for removing unused dependencies (#10705)

* openbsd_pkg: add support for removing unused dependencies

Add new state 'rm_unused_deps' that uses 'pkg_delete -a' to remove
packages that are no longer required by any other packages.

Features:
- Requires name='*' to avoid accidental usage
- Supports check mode, diff mode, clean and quick flags
- Follows existing module patterns for error handling
- Integrates with existing package list comparison for change detection

* Update the PR number in the frgment link

* Fix the changelog fragment name to include the PR #

* Force non-interactive mode like most of the other modes

* Fix PEP8 E302: add missing blank line before function definition

* Ensure that, no matter what, if the package list is unchanged then there was no change

Also removed some unused vars from the original code.

* Standardize names in the PR

* Swap over from a new state to implementing an autoremove option

Added code to handle the case where you git a name or list of names as
pkg_delete will correctly filter what it autoremove by the names

* Update the fragment to match the new code

* typo in EXAMPLES

* Fix up a yamllint complaint.

I do note the following:

```
$ ansible-lint tests/test_openbsd_pkg.yml

Passed: 0 failure(s), 0 warning(s) on 1 files. Last profile that met the validation criteria was 'production'.
```

Although that could be due to local config

* While here add realistic examples of packages that might be autoinstalled

* Clean up docs.



* Autoremove is an option, work like the other package managers

* Update changelog for openbsd_pkg autoremove parameter

Clarified the behavior of the `autoremove` parameter to specify it removes autoinstalled packages. Removed flowery text that isn't needed.

* Cut the rest of the cruft out of the changelog fragment

Make it obvious how '*' can be used as a 'name:'
Be more pythonic in the package list comparison.

* Update changelogs/fragments/10705-openbsd-pkg-remove-unused.yml



---------


(cherry picked from commit b1c75339c0)

Co-authored-by: Allen Smith <lazlor@lotaris.org>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-28 22:18:07 +02:00
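The autoremove behaviour described above boils down to command assembly: `pkg_delete -a` removes packages that were installed automatically and are no longer required, and non-interactive mode is forced (both per the commit text; the `-I` flag here is an assumption). A sketch, not the module's code:

```python
def build_pkg_delete_cmd(names, autoremove=False):
    """Assemble a pkg_delete command line. With autoremove and name '*',
    every unused auto-installed package is removed; with explicit names,
    pkg_delete filters the autoremoval to those names."""
    cmd = ["pkg_delete", "-I"]  # -I: non-interactive (assumed flag)
    if autoremove:
        cmd.append("-a")
    cmd.extend(n for n in names if n != "*")
    return cmd
```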
patchback[bot]
dda872c3e6 [PR #10743/469e557b backport][stable-11] monit: handle arbitrary error status (#10760)
monit: handle arbitrary error status (#10743)

* handle arbitrary error status

* add changelog fragment

* mock module in test

* Update changelogs/fragments/10743-monit-handle-unknown-status.yml



---------


(cherry picked from commit 469e557b95)

Co-authored-by: Simon Kelly <skelly@dimagi.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-28 22:17:56 +02:00
patchback[bot]
d90eb6444a [PR #10755/9d0866bf backport][stable-11] Add ignores necessary for ansible-core 2.20 (#10757)
Add ignores necessary for ansible-core 2.20 (#10755)

Add ignores necessary for ansible-core 2.20 if Python 2.7 is still supported by the collection.

(cherry picked from commit 9d0866bfb8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-28 21:47:08 +02:00
patchback[bot]
93782ffb35 [PR #10684/ded43714 backport][stable-11] django module, module_utils: adjustments (#10747)
django module, module_utils: adjustments (#10684)

* django module, module_utils: adjustments

* fix name

* more fixes

* more fixes

* further simplification

* add changelog frag

(cherry picked from commit ded43714d3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-27 22:09:53 +02:00
patchback[bot]
dac26d12bd [PR #10710/b5a2c581 backport][stable-11] random_string: Specify seed while generating random string (#10748)
random_string: Specify seed while generating random string (#10710)

* random_string: Specify seed while generating random string

* Allow user to specify seed to generate random string

Fixes: #5362



* Apply suggestions from code review



---------



(cherry picked from commit b5a2c5812c)

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-27 22:09:41 +02:00
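The seeding behaviour can be sketched with the standard library: a fixed seed drives a dedicated `random.Random` instance so repeated runs produce the same string, while omitting the seed falls back to `SystemRandom`. This is an illustrative sketch, not the module's implementation:

```python
import random
import string


def random_string(length=8, seed=None):
    """Generate a random alphanumeric string; a fixed seed makes the
    result reproducible across runs."""
    rng = random.Random(seed) if seed is not None else random.SystemRandom()
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))
```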
patchback[bot]
22946365fc [PR #10413/3b09e9d9 backport][stable-11] pacemaker_resource: add cleanup state (#10750)
pacemaker_resource: add cleanup state (#10413)

* refactor(deprecate): Add cleanup deprecations for pacemaker_cluster

* Additional code review changes

* Add changelog fragment

(cherry picked from commit 3b09e9d9ed)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
2025-08-27 22:09:33 +02:00
patchback[bot]
ddc546596e [PR #10707/63321754 backport][stable-11] pacemaker: Add regex checking for maintenance-mode (#10749)
pacemaker: Add regex checking for maintenance-mode (#10707)

* Add regex checking for maintenance-mode

* Add changelog fragment

* Apply suggestions from code review




---------



(cherry picked from commit 6332175493)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-27 22:09:23 +02:00
patchback[bot]
a618aa6b0a [PR #10732/5ee02297 backport][stable-11] ssh_config tests: remove paramiko version restriction (#10735)
ssh_config tests: remove paramiko version restriction (#10732)

Remove paramiko version restriction for ssh_config tests.

(cherry picked from commit 5ee02297b0)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-25 07:25:56 +02:00
patchback[bot]
49598ac93a [PR #10728/82b37bdb backport][stable-11] pacman: re-enable yay test (#10731)
pacman: re-enable yay test (#10728)

Re-enable yay test.

(cherry picked from commit 82b37bdb56)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-25 06:57:35 +02:00
patchback[bot]
de7afabe17 [PR #10661/177b385d backport][stable-11] Add support for gpg-auto-import-keys option to zypper (#10721)
Add support for gpg-auto-import-keys option to zypper (#10661)

* Add support for gpg-auto-import-keys option to zypper

* Add changelog fragment

* Add missing module argument_spec

* Improving documentation

* Improve changelog fragment

(cherry picked from commit 177b385dfb)

Co-authored-by: Marc Urben <aegnor@mittelerde.ch>
2025-08-23 19:45:11 +02:00
patchback[bot]
fafe6ef87b [PR #10664/65bc4706 backport][stable-11] GitHub app access token lookup: allow to use PyJWT + cryptography instead of jwt (#10720)
GitHub app access token lookup: allow to use PyJWT + cryptography instead of jwt (#10664)

* Fix issue #10299

* Fix issue #10299

* Fix blank lines

* Fix blank lines

* Add compatibility changes for jwt

* Bump to a higher magic number

* Update change log fragment

* Update changelogs/fragments/10299-github_app_access_token-lookup.yml



* Update changelogs/fragments/10299-github_app_access_token-lookup.yml



* Update changelogs/fragments/10299-github_app_access_token-lookup.yml



* Update plugins/lookup/github_app_access_token.py



* Update plugins/lookup/github_app_access_token.py



* Update requirement document

* Remove a whitespace

---------



(cherry picked from commit 65bc47068e)

Co-authored-by: weisheng-p <weisheng-p@users.noreply.github.com>
Co-authored-by: Bruno Lavoie <bruno.lavoie@dti.ulaval.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 19:45:03 +02:00
patchback[bot]
148c133248 [PR #10195/e4373565 backport][stable-11] pacemaker_stonith: new module (#10719)
pacemaker_stonith: new module (#10195)

* feat(initial): Add pacemaker_stonith module and unit tests

* feat(initial): Add working changes to pacemaker_stonith

* refactor(review): Apply code review suggestions

* Apply suggestions from code review




* refactor(review): Additional code review items

* bug(cli_action): Add missing runner arguments

* Apply code review suggestions

* Apply suggestions from code review



* Apply suggestions from code review



* WIP

* Apply doc changes to pacemaker stonith

* Update plugins/modules/pacemaker_stonith.py

---------



(cherry picked from commit e43735659a)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-23 19:44:54 +02:00
patchback[bot]
3bf688be39 [PR #10646/09f11523 backport][stable-11] Add cpu limit argument to scaleway_container (#10718)
Add cpu limit argument to scaleway_container (#10646)

Add cpu limit arguments

And document the units used for memory_limit and cpu_limit.

(cherry picked from commit 09f11523d1)

Co-authored-by: mscherer <mscherer@users.noreply.github.com>
2025-08-23 19:44:45 +02:00
patchback[bot]
bc8721c37c [PR #10652/9e86d239 backport][stable-11] oci/oracle: deprecation (#10717)
oci/oracle: deprecation (#10652)

* oci/oracle: deprecation

* add changelog frag

* add doc frags to changelog frag

* Update changelogs/fragments/10652-oracle-deprecation.yml



---------


(cherry picked from commit 9e86d239d2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 19:44:37 +02:00
patchback[bot]
579bd879c1 [PR #10679/1c0eb9dd backport][stable-11] gitlab_*_access_token: add planner access level (#10716)
gitlab_*_access_token: add `planner` access level (#10679)

The Planner role was introduced in December 2024 with GitLab 17.7 [1].
Allow its use in gitlab_project_access_token and
gitlab_group_access_token.

[1]: https://about.gitlab.com/releases/2024/12/19/gitlab-17-7-released/

(cherry picked from commit 1c0eb9ddf4)

Co-authored-by: David Phillips <phillid@users.noreply.github.com>
2025-08-23 19:44:29 +02:00
patchback[bot]
d83a835a3c [PR #10647/29b35022 backport][stable-11] Add a scaleway group to be able to use module_defaults (#10715)
Add a scaleway group to be able to use module_defaults (#10647)

(cherry picked from commit 29b35022cf)

Co-authored-by: mscherer <mscherer@users.noreply.github.com>
2025-08-23 19:44:19 +02:00
patchback[bot]
89e6e6c626 [PR #10696/db7757ed backport][stable-11] Update documentation (#10714)
Update documentation (#10696)

* Update documentation

Added an explanation of the mode of operation and the protocol being used to the description.
This improves the user experience and saves time for the user.

* use single quotes around colon contained list element to satisfy linter

* Apply suggestions from code review



* documentation of nagios module - included all nagios configuration paths in plugins/modules/nagios.py

* used italic code I(...) for paths

* added trailing comma to nagios.cfg path listing



* added trailing period after icinga path listing.



---------



(cherry picked from commit db7757ed4b)

Co-authored-by: bofo540 <bjoern.foersterling@cpb-software.com>
Co-authored-by: bjt-user <bjoern.foersterling@web.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 19:44:09 +02:00
patchback[bot]
510a9228c0 [PR #10700/9f4bb3a7 backport][stable-11] django_check: rename database param, add alias (#10713)
django_check: rename database param, add alias (#10700)

* django_check: rename database param, add alias

* add changelog frag

* Update plugins/modules/django_check.py



---------


(cherry picked from commit 9f4bb3a788)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 19:44:00 +02:00
patchback[bot]
8a43df548c [PR #10706/5eab0f24 backport][stable-11] CI: Remove no longer necessary constraints (#10723)
CI: Remove no longer necessary constraints (#10706)

Remove no longer necessary constraints.

(cherry picked from commit 5eab0f2419)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 19:43:45 +02:00
patchback[bot]
491ba1b1a3 [PR #10711/62fa3e6f backport][stable-11] remove trailing comma in dict(parameters,) (#10724)
remove trailing comma in dict(parameters,) (#10711)

* remove trailing comma in dict(parameters,)

* add changelog frag

(cherry picked from commit 62fa3e6f2b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-23 19:43:36 +02:00
patchback[bot]
4d6f4c82e2 [PR #10712/cb84fa74 backport][stable-11] remove extra brackets when params are given by a comprehension (#10725)
remove extra brackets when params are given by a comprehension (#10712)

* remove extra brackets when function params are given by a comprehension

* add changelog frag

(cherry picked from commit cb84fa740a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-23 19:43:27 +02:00
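The cleanup in this commit amounts to dropping redundant list brackets when a comprehension is a function's sole argument; a bare generator expression avoids building an intermediate list. A minimal illustration:

```python
def total_name_length(names):
    # sum([len(n) for n in names]) would build a throwaway list first;
    # the bare generator expression below does not.
    return sum(len(n) for n in names)
```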
patchback[bot]
f5ad2cee8d [PR #10701/3b9acafc backport][stable-11] update requirements for Python versions currently used (#10703)
update requirements for Python versions currently used (#10701)

(cherry picked from commit 3b9acafc72)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-19 07:16:23 +02:00
Felix Fontein
5e6a7cab92 The next expected release is 11.2.2. 2025-08-18 21:45:19 +02:00
Felix Fontein
48c50fa335 Release 11.2.1. 2025-08-18 21:17:43 +02:00
patchback[bot]
a1e2ada993 [PR #10663/b9385d7f backport][stable-11] pacemaker_resource: Fix resource_type parameter (#10699)
pacemaker_resource: Fix resource_type parameter (#10663)

* Ensure resource standard, provider, and name are proper format

* Add changelog fragment

* Update changelogs/fragments/10663-pacemaker-resource-fix-resource-type.yml



---------


(cherry picked from commit b9385d7fe8)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-18 20:26:48 +02:00
patchback[bot]
d7eb5432f3 [PR #10695/6827680c backport][stable-11] build(deps): bump actions/checkout from 4 to 5 in the ci group (#10698)
build(deps): bump actions/checkout from 4 to 5 in the ci group (#10695)

Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).

Updates `actions/checkout` from 4 to 5
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...



(cherry picked from commit 6827680cda)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 18:34:01 +02:00
patchback[bot]
26f19db2f8 [PR #10687/47e8a3c1 backport][stable-11] ansible-core 2.20: avoid deprecated functionality (#10693)
ansible-core 2.20: avoid deprecated functionality (#10687)

Avoid deprecated functionality.

(cherry picked from commit 47e8a3c193)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-18 05:18:31 +00:00
patchback[bot]
8bc0c103ad [PR #10688/ceba0cbe backport][stable-11] pids: avoid type error if name is empty (#10692)
pids: avoid type error if name is empty (#10688)

Avoid type error if name is empty.

(cherry picked from commit ceba0cbedb)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-18 06:41:18 +02:00
Felix Fontein
0e495aae75 Add missing changelog fragment. 2025-08-17 22:35:33 +02:00
patchback[bot]
840b1b82ac [PR #10617/c84f16c5 backport][stable-11] scaleway_lb: fix RETURN docs (#10686)
scaleway_lb: fix RETURN docs (#10617)

* scaleway_lb: fix RETURN docs

* remove outer dict from sample content

(cherry picked from commit c84f16c5e9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-17 17:28:39 +02:00
patchback[bot]
2b08a308bc [PR #10423/735a066d backport][stable-11] apache2_module: updated cgi action conditions (#10682)
apache2_module: updated cgi action conditions (#10423)

* apache2_module: updated cgi action conditions

Only the activation of the cgi module in threaded mode should be
restricted due to apache2 limitations, not the deactivation,
especially when the cgi module is not enabled at all yet. Fixes #9140

* bug(fix): apache2_module fails to disable cgi module

* Update changelog fragment.

---------


(cherry picked from commit 735a066d92)

Co-authored-by: Daniel Hoffend <dh@dotlan.net>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-17 13:12:55 +02:00
patchback[bot]
04305e8d9d [PR #10669/13bd4b5d backport][stable-11] composer: fix command args as list rather than string (#10680)
composer: fix command args as list rather than string (#10669)

(cherry picked from commit 13bd4b5d82)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-17 12:53:22 +02:00
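Many of the entries in this log ("command args as list rather than string") apply the same fix: Ansible's `AnsibleModule.run_command` accepts either a single string, which it splits with shlex before execution, or a list, which is passed straight through as argv. The list form keeps values containing spaces or shell metacharacters intact. A minimal illustration of the underlying principle using plain `subprocess` (illustrative only, not the collection's actual code):

```python
import subprocess

# When arguments are passed as a list, each element reaches the program
# as exactly one argv entry: no word splitting, no shell interpretation.
# The awkward value below stays a single, inert argument.
result = subprocess.run(
    ["printf", "%s", "a b; rm -rf ~"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```

Had the same command been built as one string and run through a shell, the embedded `;` would have been treated as a command separator.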
Felix Fontein
15109a26fd Prepare 11.2.1. 2025-08-17 12:45:16 +02:00
patchback[bot]
d1730adce0 [PR #10674/dfc2a54d backport][stable-11] pacman: temporarily disable yay test (#10678)
pacman: temporarily disable yay test (#10674)

Temporarily disable pacman yay test.

(cherry picked from commit dfc2a54d16)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-15 21:03:35 +02:00
patchback[bot]
69d7cce55c [PR #10668/d84d2397 backport][stable-11] ipa_*: adjust common connection notes to modules (#10671)
ipa_*: adjust common connection notes to modules (#10668)

(cherry picked from commit d84d2397b9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-15 20:14:13 +02:00
patchback[bot]
d80aca951c [PR #10657/3c0d6074 backport][stable-11] jc filter: remove skips for FreeBSD (#10659)
jc filter: remove skips for FreeBSD (#10657)

(cherry picked from commit 3c0d60740c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-12 09:58:11 +02:00
patchback[bot]
6f5462fb27 [PR #10653/eb5708a1 backport][stable-11] CI: Make sure to install Java in Debian Bullseye (#10656)
CI: Make sure to install Java in Debian Bullseye (#10653)

Make sure to install Java in Debian Bullseye.

(cherry picked from commit eb5708a125)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-12 01:21:35 +02:00
Felix Fontein
1c5c622ae8 There might be an 11.2.1 coming up next. 2025-08-11 22:34:04 +02:00
Felix Fontein
0b9abdf3de Release 11.2.0. 2025-08-11 21:50:25 +02:00
patchback[bot]
f077c1e104 [PR #10649/bc90635e backport][stable-11] pipx examples and tests: fix terminology (#10651)
pipx examples and tests: fix terminology (#10649)

Fix terminology.

(cherry picked from commit bc90635e66)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-11 21:24:16 +02:00
patchback[bot]
ba789d71ec [PR #10643/2aa53706 backport][stable-11] jc filter: remove redundant noqa comment (#10648)
jc filter: remove redundant noqa comment (#10643)

(cherry picked from commit 2aa53706f5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-11 19:49:32 +02:00
patchback[bot]
897729b507 [PR #10615/993e3a73 backport][stable-11] ipa_*: add common connection notes to modules (#10641)
ipa_*: add common connection notes to modules (#10615)

* ipa_*: add common connection notes to modules

* Update plugins/doc_fragments/ipa.py



* Update plugins/doc_fragments/ipa.py



---------


(cherry picked from commit 993e3a736e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-11 07:18:10 +02:00
patchback[bot]
cf8107b628 [PR #10596/92ca3793 backport][stable-11] lvm_pv - Fixes #10444 - Partition device not found (#10639)
lvm_pv - Fixes #10444 - Partition device not found (#10596)

* Skip rescan for partition devices in LVM PV module

Adds a check to prevent unnecessary rescan attempts on partition devices in the LVM physical volume module. When a device is actually a partition, attempting to rescan it via sysfs would fail since partitions don't have a rescan interface.

This change improves error handling by gracefully skipping the rescan operation when dealing with partition devices, avoiding misleading warning messages.

* Rewrote device rescan logic
Added changelog fragment

* Add issue reference to lvm_pv changelog entry

(cherry picked from commit 92ca379319)

Co-authored-by: Klention Mali <45871249+klention@users.noreply.github.com>
2025-08-11 07:17:59 +02:00
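The lvm_pv fix above hinges on telling partitions apart from whole disks before attempting a sysfs rescan, since only whole disks expose a rescan interface. On Linux, a block device that is a partition (for example `sda1`) has a `partition` attribute under `/sys/class/block/<name>/`, while a whole disk (`sda`) does not. A sketch of that check with a hypothetical helper name, not the module's actual code:

```python
import os

def is_partition(device_name):
    # Hypothetical helper, not the module's actual code: on Linux a
    # partition exposes a "partition" attribute in sysfs, a whole disk
    # does not. A caller can skip the rescan step for partitions instead
    # of emitting a misleading warning when the sysfs write fails.
    return os.path.exists(
        os.path.join("/sys/class/block", device_name, "partition")
    )
```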
patchback[bot]
fe922a26f0 [PR #8647/2321d272 backport][stable-11] Docs. Remove helpers. (#10637)
Docs. Remove helpers. (#8647)

(cherry picked from commit 2321d27288)

Co-authored-by: Vladimir Botka <vbotka@gmail.com>
2025-08-10 14:24:31 +02:00
patchback[bot]
485a3cc11e [PR #10608/c16cf774 backport][stable-11] xbps: command args as list rather than string (#10636)
xbps: command args as list rather than string (#10608)

* xbps: command args as list rather than string

* add changelog frag

(cherry picked from commit c16cf774d7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:52:13 +02:00
patchback[bot]
bdfa91b3df [PR #10415/f50b52b4 backport][stable-11] keycloak_realm: Add missing brute force attributes (#10635)
keycloak_realm: Add missing brute force attributes (#10415)

* Add brute_force_strategy

* Add max_temporary_lockouts

* Add changelog

* Update changelogs/fragments/10415-keycloak-realm-brute-force-attributes.yml



* Update plugins/modules/keycloak_realm.py



* Update plugins/modules/keycloak_realm.py



---------


(cherry picked from commit f50b52b462)

Co-authored-by: maxblome <53860633+maxblome@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-10 13:52:06 +02:00
patchback[bot]
0123222ba8 [PR #10620/a68ba504 backport][stable-11] homectl, maven_artifact: removed redundant comments (#10622)
homectl, maven_artifact: removed redundant comments (#10620)

* homectl, maven_artifact: removed redundant comments

* stacki_hosts: one more redundant comment

(cherry picked from commit a68ba50466)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:58 +02:00
patchback[bot]
3406288644 [PR #10618/4e8a6c03 backport][stable-11] infinity: improve RV descriptions (#10624)
infinity: improve RV descriptions (#10618)

(cherry picked from commit 4e8a6c03dd)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:52 +02:00
patchback[bot]
c44fc97d6c [PR #10616/8960a57d backport][stable-11] Add binary_file lookup (#10625)
Add binary_file lookup (#10616)

* Add binary_file lookup.

* Remove sentence on deprecation.

(cherry picked from commit 8960a57d53)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-10 13:51:45 +02:00
patchback[bot]
7901287dd3 [PR #10612/5d3662b2 backport][stable-11] timezone: command args as list rather than string (#10626)
timezone: command args as list rather than string (#10612)

* timezone: command args as list rather than string

* adjust attr `update_timezone`

* add changelog frag

(cherry picked from commit 5d3662b23c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:29 +02:00
patchback[bot]
f2d1099b83 [PR #10609/9fc5d2ec backport][stable-11] xfs_quota: command args as list rather than string (#10627)
xfs_quota: command args as list rather than string (#10609)

(cherry picked from commit 9fc5d2ec4d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:23 +02:00
patchback[bot]
438ed7ea0e [PR #10606/83ce5313 backport][stable-11] urpmi: command args as list rather than string (#10628)
urpmi: command args as list rather than string (#10606)

* urpmi: command args as list rather than string

* add changelog frag

(cherry picked from commit 83ce53136c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:15 +02:00
patchback[bot]
a119ae2833 [PR #10605/2dd74b3f backport][stable-11] swupd: command args as list rather than string (#10629)
swupd: command args as list rather than string (#10605)

* swupd: command args as list rather than string

* add changelog frag

(cherry picked from commit 2dd74b3f3c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:10 +02:00
patchback[bot]
bcf984ec1c [PR #10604/b1bb034b backport][stable-11] solaris_zone: command args as list rather than string (#10630)
solaris_zone: command args as list rather than string (#10604)

* solaris_zone: command args as list rather than string

* add changelog frag

(cherry picked from commit b1bb034b50)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:51:03 +02:00
patchback[bot]
f28375eeb0 [PR #10602/a90759d9 backport][stable-11] portage: command args as list rather than string (#10631)
portage: command args as list rather than string (#10602)

* portage: command args as list rather than string

* add changelog frag

* fix pr number in chglog frag

(cherry picked from commit a90759d949)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:50:56 +02:00
patchback[bot]
682469b9b8 [PR #10603/6b7ec564 backport][stable-11] riak: command args as list rather than string (#10632)
riak: command args as list rather than string (#10603)

* riak: command args as list rather than string

* add changelog frag

(cherry picked from commit 6b7ec5648d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:50:49 +02:00
patchback[bot]
3fa1c3ac2c [PR #10599/1bd7aac0 backport][stable-11] open_iscsi: command args as list rather than string (#10633)
open_iscsi: command args as list rather than string (#10599)

* open_iscsi: command args as list rather than string

* add changelog frag

(cherry picked from commit 1bd7aac07e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:50:43 +02:00
patchback[bot]
0d96b65b4b [PR #10601/25dc0907 backport][stable-11] pear: command args as list rather than string (#10634)
pear: command args as list rather than string (#10601)

* pear: command args as list rather than string

* add changelog frag

(cherry picked from commit 25dc09074e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-10 13:50:36 +02:00
Felix Fontein
6daf178146 Prepare 11.2.0. 2025-08-10 13:49:25 +02:00
patchback[bot]
f8d0e5448d [PR #10610/9155bc2e backport][stable-11] random_string: add docs to use min_* (#10611)
random_string: add docs to use min_* (#10610)

* random_string: add docs to use min_*

* Update docs for min_* usage

Fixes: #10576



* Review requests



---------


(cherry picked from commit 9155bc2e53)

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-08-06 21:11:08 +02:00
patchback[bot]
2aa9fc7528 [PR #10435/25163ed8 backport][stable-11] github_repo: deprecate force_defaults=true (#10600)
github_repo: deprecate force_defaults=true (#10435)

* github_repo: deprecate force_defaults=true

* add changelog frag

(cherry picked from commit 25163ed87a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-06 07:02:09 +02:00
patchback[bot]
0317d506b8 [PR #10490/88bd44ae backport][stable-11] rocketchat: deprecate default value of is_pre740 (#10597)
rocketchat: deprecate default value of is_pre740 (#10490)

* Deprecate default value of is_pre740.

* Use correct markup.



---------


(cherry picked from commit 88bd44aea7)

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:51:58 +02:00
patchback[bot]
274ab506ca [PR #10574/1518b43b backport][stable-11] django module utils: remove deprecated function arg ignore_value_none (#10595)
django module utils: remove deprecated function arg `ignore_value_none` (#10574)

* django module utils: remove deprecated function arg ignore_value_none

* fix argument order in call from _DjangoRunner to superclass

* add changelog frag

(cherry picked from commit 1518b43b85)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:51:51 +02:00
patchback[bot]
4bef90fc7e [PR #10573/47ebde33 backport][stable-11] logstash_plugin: command args as list rather than string (#10594)
logstash_plugin: command args as list rather than string (#10573)

* logstash_plugin: command args as list rather than string

* add changelog frag

(cherry picked from commit 47ebde3339)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:51:33 +02:00
patchback[bot]
438d38ddfe [PR #10536/40bcfd96 backport][stable-11] imgadm: command args as list rather than string (#10592)
imgadm: command args as list rather than string (#10536)

* imgadm: command args as list rather than string

* add changelog frag

* Update plugins/modules/imgadm.py



* Update plugins/modules/imgadm.py



---------


(cherry picked from commit 40bcfd9646)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:51:24 +02:00
patchback[bot]
8b277cbe61 [PR #10538/85f6a07b backport][stable-11] Keycloak realm add support for some missing options (#10593)
Keycloak realm add support for some missing options (#10538)

* First commit

* fix

* changelog

---------


(cherry picked from commit 85f6a07b19)

Co-authored-by: desand01 <desrosiers.a@hotmail.com>
Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
2025-08-04 20:51:15 +02:00
patchback[bot]
f1e0e590ab [PR #10527/7ffeaaa1 backport][stable-11] Keycloak idp well known url support (#10591)
Keycloak idp well known url support (#10527)

* first commit

* add and fix test

* add example

* fragment and sanity

* sanity

* sanity

* Update plugins/modules/keycloak_identity_provider.py



* Update changelogs/fragments/10527-keycloak-idp-well-known-url-support.yml

---------



(cherry picked from commit 7ffeaaa16d)

Co-authored-by: desand01 <desrosiers.a@hotmail.com>
Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:51:06 +02:00
patchback[bot]
d65b6edfaf [PR #10525/5bdd82fb backport][stable-11] composer: command args as list rather than string (#10590)
composer: command args as list rather than string (#10525)

* composer: command args as list rather than string

* add changelog frag

(cherry picked from commit 5bdd82fbf5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:50:58 +02:00
patchback[bot]
e0a86f172f [PR #10526/4918ecd4 backport][stable-11] easy_install: command args as list rather than string (#10589)
easy_install: command args as list rather than string (#10526)

* easy_install: command args as list rather than string

* add changelog frag

(cherry picked from commit 4918ecd4c5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:50:48 +02:00
patchback[bot]
81f66feea4 [PR #10524/7e2d91e5 backport][stable-11] capabilities: command args as list rather than string (#10588)
capabilities: command args as list rather than string (#10524)

* capabilities: command args as list rather than string

* add changelog frag

(cherry picked from commit 7e2d91e53d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:50:41 +02:00
patchback[bot]
124f465819 [PR #10523/a96684ef backport][stable-11] bzr: command args as list rather than string (#10587)
bzr: command args as list rather than string (#10523)

* bzr: command args as list rather than string

* add changelog frag

(cherry picked from commit a96684ef40)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:50:30 +02:00
patchback[bot]
85af92810c [PR #10520/2a4222c0 backport][stable-11] apk: command args as list rather than string (#10586)
apk: command args as list rather than string (#10520)

* apk: command args as list rather than string

* add changelog frag

* APK_PATH itself should be a list not a string

* fix mock values in unit tests

* keep package names as list

* add package names as list to cmd line

(cherry picked from commit 2a4222c0f6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:50:17 +02:00
patchback[bot]
b8fdfdc644 [PR #10346/d0a1a617 backport][stable-11] Addressing multiple jenkins_plugins module issue (#10585)
Addressing multiple jenkins_plugins module issue (#10346)

* Fix version compatibility issue

* Add dependencies installation to specific versions

* Separate Jenkins and updates_url credentials

* Create changelog fragment

* Added a test and some adjustments

* Return to fetch_url

* Add pull link to changelog and modify install latest deps function

* Use updates_url for plugin version if it exists

* Change version number

(cherry picked from commit d0a1a617af)

Co-authored-by: Youssef Ali <154611350+YoussefKhalidAli@users.noreply.github.com>
2025-08-04 20:50:10 +02:00
patchback[bot]
3d6227d1e2 [PR #10291/47aec260 backport][stable-11] pacemaker_info: new module and enhance cli_action (#10584)
pacemaker_info: new module and enhance cli_action (#10291)

* feat(info): Add pacemaker_info module and enhance cli_action util

This commit adds in the pacemaker_info module which is responsible for
retrieving pacemaker facts. Additionally, the cli_action variable of the
pacemaker.py util, which is passed through the runner, has been
refactored.

* refactor(version): Bump version_added to 11.2.0

* Apply suggestions from code review



* Update plugins/modules/pacemaker_info.py



* refactor(process): Simplify command output

---------


(cherry picked from commit 47aec26001)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:49:59 +02:00
patchback[bot]
423509769d [PR #10416/e91e2ef6 backport][stable-11] lvm_pv_move_data: new module (#10583)
lvm_pv_move_data: new module (#10416)

* Added lvm_pv_move_data module

* Removed trailing whitespace

* Decreased loop devices file size

* Remove test VG if exists

* Force remove test VG if exists

* Renamed test VG and LV names

* Updated assert conditions

* Added .ansible to .gitignore

* Force extending VG

* Wiping LVM metadata from PVs before creating VG

* Clean FS, LV, VG and PVs before run

* Migrated to CmdRunner

* Added more detailed info in case of failure and cosmetic changes

* Remove redundant params from CmdRunner call

* Updates the RETURN documentation block to properly specify the return type
of the 'actions' field:
- Changes return status from 'always' to 'success'
- Adds missing 'elements: str' type specification

(cherry picked from commit e91e2ef6f8)

Co-authored-by: Klention Mali <45871249+klention@users.noreply.github.com>
2025-08-04 20:49:51 +02:00
patchback[bot]
cdaf6d9493 [PR #10424/658af61e backport][stable-11] scaleway: update zone list (#10582)
scaleway: update zone list (#10424)

* changelog fragment

* add new zones

* add new zones to choices for instance resources

* add new zones to doc in inventory plugin

* Apply suggestions from code review



* Update changelogs/fragments/10424-scaleway-update-zones.yml



---------


(cherry picked from commit 658af61e17)

Co-authored-by: Mia-Cross <lmarabese@scaleway.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:49:44 +02:00
patchback[bot]
e1017afe4a [PR #10493/6e1821e5 backport][stable-11] nagios: make services param a list (#10580)
nagios: make services param a list (#10493)

* nagios: make services param a list

* add changelog frag

* nagios: update docs

(cherry picked from commit 6e1821e557)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:49:36 +02:00
patchback[bot]
86dfc731ad [PR #10434/e3467385 backport][stable-11] cpanm: deprecate mode=compatibility (#10579)
cpanm: deprecate mode=compatibility (#10434)

* cpanm: deprecate mode=compatibility

* adjust docs

* add changelog frag

(cherry picked from commit e3467385fb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:49:28 +02:00
patchback[bot]
58037799e4 [PR #10483/32fbacd9 backport][stable-11] sensu_subscription: normalize quotes in return message (#10577)
sensu_subscription: normalize quotes in return message (#10483)

* sensu_subscription: normalize quotes in return message

* add changelog frag

(cherry picked from commit 32fbacd9ae)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:49:20 +02:00
patchback[bot]
1e5c0d5f42 [PR #10422/710c02ec backport][stable-11] tasks_only callback: add result_format_callback docs fragment (#10578)
tasks_only callback: add result_format_callback docs fragment (#10422)

Add result_format_callback docs fragment.

(cherry picked from commit 710c02ec01)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:49:13 +02:00
patchback[bot]
d6a0c914d1 [PR #10514/158f64ca backport][stable-11] bearychat: deprecation (#10581)
bearychat: deprecation (#10514)

* deprecation: bearychat

* add changelog frag

* fix chglog file placement

(cherry picked from commit 158f64ca77)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:48:56 +02:00
Felix Fontein
f190897687 The next expected release will be 11.2.0. 2025-08-04 19:57:38 +02:00
Felix Fontein
b3995081a2 Release 11.1.2. 2025-08-04 19:34:35 +02:00
patchback[bot]
c2083c8034 [PR #10570/c7e18306 backport][stable-11] CI: python-jenkins 1.8.3 fails to import on Python 2.7 (#10572)
CI: python-jenkins 1.8.3 fails to import on Python 2.7 (#10570)

python-jenkins 1.8.3 fails to import on Python 2.7.

(cherry picked from commit c7e18306fb)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-03 14:13:22 +02:00
patchback[bot]
e3e0bf3cfb [PR #10566/14f706c5 backport][stable-11] merge_variables lookup: avoid deprecated Templar.set_temporary_context (#10569)
merge_variables lookup: avoid deprecated Templar.set_temporary_context (#10566)

Avoid deprecated Templar.set_temporary_context.

(cherry picked from commit 14f706c5dd)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-03 13:08:52 +02:00
Felix Fontein
9246704874 Prepare 11.1.2. 2025-08-03 08:57:31 +02:00
patchback[bot]
2394a2dd5b [PR #10537/9a296225 backport][stable-11] Disable pipelining for doas and machinectl on ansible-core 2.19+ (#10557)
Disable pipelining for doas and machinectl on ansible-core 2.19+ (#10537)

Disable pipelining for doas and machinectl.

(cherry picked from commit 9a29622584)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-02 17:18:52 +02:00
patchback[bot]
c99873bd30 [PR #10550/ac4aca20 backport][stable-11] diy callback: add test for on_any_msg (#10552)
diy callback: add test for on_any_msg (#10550)

Add test for on_any_msg.

(cherry picked from commit ac4aca2004)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-02 17:01:09 +02:00
patchback[bot]
9380f9ef7d [PR #10539/3de073fb backport][stable-11] json_query: extend list of type aliases for compatibility with ansible-core 2.19 (#10560)
json_query: extend list of type aliases for compatibility with ansible-core 2.19 (#10539)

* Extend list of type aliases for json_query.

* Improve tests.



---------


(cherry picked from commit 3de073fb6f)

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-08-02 17:00:57 +02:00
patchback[bot]
fc2e2db6e4 [PR #10532/abfe1e61 backport][stable-11] apk: fix empty/whitespace-only package name check (#10556)
apk: fix empty/whitespace-only package name check (#10532)

* Fix empty/whitespace-only package name check.

* Adjust test.

(cherry picked from commit abfe1e6180)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-02 17:00:48 +02:00
patchback[bot]
87a213225d [PR #10455/bd84f654 backport][stable-11] Improve capabilities module by detecting /sbin/getcap error message and stop early with a meaningful error message (#10563)
Improve capabilities module by detecting /sbin/getcap error message and stop early with a meaningful error message (#10455)

* modules/capabilities.py: fail & propagate if getcap command error

* Fix comment spacing (pep8)

* Add changelogs fragment for PR 10455

* Update changelogs/fragments/10455-capabilities-improve-error-detection.yml



---------



(cherry picked from commit bd84f65456)

Co-authored-by: hakril <github@hakril.net>
Co-authored-by: clement rouault <clement.rouault@exatrack.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-02 17:00:34 +02:00
patchback[bot]
78a4efae28 [PR #10543/7298f25f backport][stable-11] Fix no longer valid constructs in tests (#10547)
Fix no longer valid constructs in tests (#10543)

Fix no longer valid constructs in tests.

(cherry picked from commit 7298f25fe0)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-02 09:04:37 +02:00
patchback[bot]
ccc2fcefdd [PR #10513/3b551f92 backport][stable-11] arg_spec adjustments: modules [t-z]* (#10535)
arg_spec adjustments: modules [t-z]* (#10513)

* arg_spec adjustments: modules [t-z]*

* add changelog frag

(cherry picked from commit 3b551f92fc)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-01 11:18:58 +02:00
patchback[bot]
57c523af55 [PR #10531/d0b0aff5 backport][stable-11] wsl connection: import paramiko directly (#10534)
wsl connection: import paramiko directly (#10531)

Import paramiko directly.

(cherry picked from commit d0b0aff5bc)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-01 11:18:50 +02:00
patchback[bot]
e7567e854b [PR #10505/0f7cd547 backport][stable-11] arg_spec adjustments: modules [g-j]* (#10528)
arg_spec adjustments: modules [g-j]* (#10505)

* arg_spec adjustments: modules [g-j]*

* add changelog frag

(cherry picked from commit 0f7cd5473f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-31 23:03:54 +02:00
patchback[bot]
2dbcfaa650 [PR #10512/3bb7a77b backport][stable-11] arg_spec adjustments: modules [o-s]* (#10530)
arg_spec adjustments: modules [o-s]* (#10512)

* arg_spec adjustments: modules [o-s]*

* add changelog frag

(cherry picked from commit 3bb7a77b14)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-31 22:58:33 +02:00
patchback[bot]
5c5774b7b5 [PR #10507/5601ef4c backport][stable-11] arg_spec adjustments: modules [k-n]* (#10529)
arg_spec adjustments: modules [k-n]* (#10507)

* arg_spec adjustments: modules [k-n]*

* adjust lxca tests

* add changelog frag

(cherry picked from commit 5601ef4c57)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-31 22:58:26 +02:00
patchback[bot]
402fdcec6a [PR #10506/84b5d38c backport][stable-11] Change description of nopasswd parameter for sudoers to be more clear (#10519)
Change description of nopasswd parameter for sudoers to be more clear (#10506)

Update sudoers.py

Made the description of nopasswd more clear

(cherry picked from commit 84b5d38c51)

Co-authored-by: freyja <github.com.tidy739@passinbox.com>
2025-07-30 06:53:47 +02:00
patchback[bot]
302d88b33d [PR #10511/6ce9f805 backport][stable-11] CI: Add Python 3.14 unit tests (#10516)
CI: Add Python 3.14 unit tests (#10511)

* Add Python 3.14 unit tests.

* Skip test if github cannot be imported.

It currently cannot be imported because nacl isn't compatible with Python 3.14 yet,
and importing github indirectly tries to import nacl, which fails as it uses a
type from typing that got removed in 3.14.

* Skip test if paramiko cannot be imported.

(cherry picked from commit 6ce9f805a8)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-30 06:10:06 +02:00
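The CI commit above skips unit tests whose optional dependencies (`github` via `nacl`, and `paramiko`) fail to import on Python 3.14. A plain-stdlib sketch of that "skip when a dependency won't import" pattern; the collection's own tests may instead use pytest's `importorskip` or their own helpers:

```python
import importlib
import unittest

def import_or_skip(name):
    # Sketch only: try to import the optional dependency and, if that
    # fails, raise SkipTest so the test is reported as skipped rather
    # than failed. (pytest also honors unittest.SkipTest.)
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        raise unittest.SkipTest(f"{name} not importable: {exc}")
```

A test module would call `import_or_skip("github")` at import time so every test in the file is skipped when the dependency chain is broken.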
patchback[bot]
6ff737dc87 [PR #10508/69bcb88e backport][stable-11] Update Python versions for CI (#10510)
Update Python versions for CI (#10508)

* Update Python versions for CI.

* Disable Python 3.14 temporarily.

(cherry picked from commit 69bcb88efe)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-29 17:38:40 +02:00
Felix Fontein
b5ea91259b The next planned release will be 11.2.0. 2025-07-28 20:39:56 +02:00
Felix Fontein
5b6bb1776c Release 11.1.1. 2025-07-28 19:55:21 +02:00
patchback[bot]
9772fb291c [PR #10485/15d3ea12 backport][stable-11] remove common return values from docs (#10503)
remove common return values from docs (#10485)

* remove common return values from docs

* pacman: add note about version added of RV

(cherry picked from commit 15d3ea123d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-28 19:22:07 +02:00
patchback[bot]
6e0c62d3e2 [PR #10417/44ca3661 backport][stable-11] sysrc: refactor (#10504)
sysrc: refactor (#10417)

* sysrc: refactor

* sysrc: refactor changelog fragment

* sysrc: forgot the os import

* sysrc: update test to edit the correct file

* sysrc: Added copyright info to the test conf file

* sysrc: Added full copyright info to the test conf file

* sysrc: Detect permission denied when using sysrc

* sysrc: Fixed the permission check and 2.7 compatibility

* sysrc: Fix typo of import

* sysrc: Fix err.find check

* sysrc: Add bugfixes changelog fragment

* sysrc: Use `StateModuleHelper`

* sysrc: updated imports

* sysrc: remove re import and set errno.EACCES on the OSError

* sysrc: format code properly

* sysrc: fix Python 2.7 compatibility and set changed manually

* sysrc: add missing name format check

Also use `self.module.fail_json` throughout

* sysrc: Removed os import by accident

* sysrc: updated per review; changed the way the existing value is retrieved

(cherry picked from commit 44ca366173)

Co-authored-by: David Lundgren <dlundgren@syberisle.net>
2025-07-28 19:21:55 +02:00
patchback[bot]
a5ae69c701 [PR #10494/736ce198 backport][stable-11] arg_spec adjustments: modules [a-f]* (#10501)
arg_spec adjustments: modules [a-f]* (#10494)

* arg_spec adjustments: modules [a-f]*

* add changelog frag

* Update changelogs/fragments/10494-rfdn-1.yml



---------


(cherry picked from commit 736ce1983d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-28 18:59:32 +02:00
patchback[bot]
9d8fac08bb [PR #10445/1f8b5eea backport][stable-11] cronvar: Handle empty value string properly (#10496)
cronvar: Handle empty value string properly (#10445)

* Fix empty value issue in cronvar

* Update changelog

* Update plugins/modules/cronvar.py



* Update changelogs/fragments/10445-cronvar-reject-empty-values.yml



* Update tests/integration/targets/cronvar/tasks/main.yml



* Update tests/integration/targets/cronvar/tasks/main.yml



* Accept empty strings on cronvar

* Update plugins/modules/cronvar.py



* Update main.yml



---------


(cherry picked from commit 1f8b5eea4c)

Co-authored-by: Giorgos Drosos <56369797+gdrosos@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-28 06:47:08 +02:00
patchback[bot]
6dd19450bd [PR #10491/de0618b8 backport][stable-11] irc: fix wrap_socket() call when validate_certs=true and use_tls=true (#10499)
irc: fix wrap_socket() call when validate_certs=true and use_tls=true (#10491)

Fix wrap_socket() call when validate_certs=true and use_tls=true.

(cherry picked from commit de0618b843)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-28 06:46:54 +02:00
Felix Fontein
14432bd760 Normalize changelog configs.
(cherry picked from commit a692888478)
2025-07-27 16:36:53 +02:00
patchback[bot]
1c31fa1ff3 [PR #10481/dc7d791d backport][stable-11] doc style adjustments: modules [yz]* (#10486)
doc style adjustments: modules [yz]* (#10481)

* doc style adjustments: modules y*

* doc style adjustments: modules z*

(cherry picked from commit dc7d791d12)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-27 16:34:41 +02:00
patchback[bot]
f7aa319704 [PR #10466/7b05484d backport][stable-11] doc style adjustments: modules [rtuvx]* (#10489)
doc style adjustments: modules [rtuvx]* (#10466)

* doc style adjustments: modules r*

* doc style adjustments: modules t*

* doc style adjustments: modules u*

* doc style adjustments: modules v*

* doc style adjustments: modules x*

* Update plugins/modules/redis_data.py



---------


(cherry picked from commit 7b05484d8f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 16:34:34 +02:00
patchback[bot]
9e91fb0704 [PR #10480/c1bd4611 backport][stable-11] doc style adjustments: modules s* (#10487)
doc style adjustments: modules s* (#10480)

* doc style adjustments: modules s*

* adjust comment indentation

* remove empty RETURN section in stacki_host

* spectrum_model_attrs: improve formatting of example

* Apply suggestions from code review



* Update plugins/modules/spotinst_aws_elastigroup.py



* Update plugins/modules/swdepot.py



---------


(cherry picked from commit c1bd461173)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 14:34:24 +00:00
Felix Fontein
f8eca8a209 Prepare 11.1.1. 2025-07-27 12:14:54 +02:00
patchback[bot]
6d8ae5d639 [PR #10463/d288555f backport][stable-11] doc style adjustments: modules p* (#10468)
doc style adjustments: modules p* (#10463)

* doc style adjustments: modules p*

* Update plugins/modules/pacemaker_resource.py

* Update plugins/modules/pagerduty_alert.py

* Update plugins/modules/pear.py

* Update plugins/modules/portage.py

* reformat

* adjustment from review

* Update plugins/modules/pkg5_publisher.py



---------


(cherry picked from commit d288555fd9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Peter Oliver <github.com@mavit.org.uk>
2025-07-27 12:13:47 +02:00
patchback[bot]
62e852f421 [PR #10459/ee783066 backport][stable-11] Fix ansible-core 2.19 deprecations (#10471)
Fix ansible-core 2.19 deprecations (#10459)

Do not return warnings.

(cherry picked from commit ee7830667a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 12:13:35 +02:00
patchback[bot]
58803e62fe [PR #10461/cc13f42b backport][stable-11] Fix cronvar crash when parent dir of cron_file is missing (#10474)
Fix cronvar crash when parent dir of cron_file is missing (#10461)

* Fix cronvar crash on non-existent directories

* Update changelog

* Fix small variable bug

* Fix trailing whitespace

* Fix CI issues

* Update changelogs/fragments/10461-cronvar-non-existent-dir-crash-fix.yml



* Update plugins/modules/cronvar.py



---------


(cherry picked from commit cc13f42be4)

Co-authored-by: Giorgos Drosos <56369797+gdrosos@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 12:13:20 +02:00
patchback[bot]
5eff6e779a [PR #10458/fe59c6d2 backport][stable-11] listen_ports_facts: Avoid crash when required commands are missing (#10476)
listen_ports_facts: Avoid crash when required commands are missing (#10458)

* Fix listen-port-facts crash

* Update changelog

* Update tests/integration/targets/listen_ports_facts/tasks/main.yml



* Fix sanity tests

* Update changelogs/fragments/10458-listen_port_facts-prevent-type-error.yml



---------


(cherry picked from commit fe59c6d29e)

Co-authored-by: Giorgos Drosos <56369797+gdrosos@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 12:13:10 +02:00
patchback[bot]
e39c887508 [PR #10442/3ad57ffa backport][stable-11] Ensure apk handles empty name strings properly (#10478)
Ensure apk handles empty name strings properly (#10442)

* Ensure apk handles empty name strings

* Update changelog

* Update tests/integration/targets/apk/tasks/main.yml



* Update changelogs/fragments/10442-apk-fix-empty-names.yml



* Remove redundant conditional

* Remove redundant ignore errors

* Reject apk with update cache for empty package names

---------


(cherry picked from commit 3ad57ffa67)

Co-authored-by: Giorgos Drosos <56369797+gdrosos@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 12:12:54 +02:00
patchback[bot]
e660f3e8d3 [PR #10462/b458ee85 backport][stable-11] CI: Bump Alpine 3.21 to 3.22, Fedora 41 to 42, and FreeBSD 14.2 to 14.3 (#10465)
CI: Bump Alpine 3.21 to 3.22, Fedora 41 to 42, and FreeBSD 14.2 to 14.3 (#10462)

* Bump Alpine 3.21 to 3.22, Fedora 41 to 42, RHEL 9.5 to 9.6, and FreeBSD 14.2 to 14.3.

Add old versions to stable-2.19 if not present yet.

* Add some expected skips.

* Add more restrictions.

* Another try for Android tests.

* Another try.

* Another try.

(cherry picked from commit b458ee85ce)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-26 15:16:38 +02:00
patchback[bot]
ab7b199af9 [PR #10443/6d675469 backport][stable-11] doc style adjustments: modules [no]* (#10454)
doc style adjustments: modules [no]* (#10443)

* doc style adjustments: modules n*

* doc style adjustments: modules o*

* Apply suggestions from code review

* Apply suggestions from code review



---------


(cherry picked from commit 6d67546902)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-25 09:45:53 +02:00
patchback[bot]
a34df7dc49 [PR #10449/f1f7d9b0 backport][stable-11] CI: Disable zpool tests on Alpine (#10452)
CI: Disable zpool tests on Alpine (#10449)

Disable zpool tests on Alpine.

(cherry picked from commit f1f7d9b038)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-24 22:45:32 +02:00
patchback[bot]
953058b518 [PR #10446/01f3248a backport][stable-11] CI: Replace FreeBSD 13.3 with 13.5 (#10450)
CI: Replace FreeBSD 13.3 with 13.5 (#10446)

Replace FreeBSD 13.3 with 13.5.

(cherry picked from commit 01f3248a12)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-24 22:29:06 +02:00
patchback[bot]
5c7feec6f7 [PR #10433/69d479f0 backport][stable-11] doc style adjustments: modules [lm]* (#10438)
doc style adjustments: modules [lm]* (#10433)

* doc style adjustments: modules l*

* doc style adjustments: modules m*

* Apply suggestions from code review



* Update plugins/modules/logstash_plugin.py



---------


(cherry picked from commit 69d479f06c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-21 22:32:10 +02:00
patchback[bot]
eab6f4c6ff [PR #10428/bc4d06ef backport][stable-11] Fix dnf_versionlock examples (#10431)
Fix dnf_versionlock examples (#10428)

Fix dnf_versionlock examples.

(cherry picked from commit bc4d06ef34)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-18 23:23:44 +02:00
patchback[bot]
4999521c11 [PR #10420/14f13daa backport][stable-11] doc style adjustments: modules [jk]* (#10425)
doc style adjustments: modules [jk]* (#10420)

* doc style adjustments: modules j*

* doc style adjustments: modules k*

* Apply suggestions from code review



* Update plugins/modules/keycloak_realm_key.py

---------


(cherry picked from commit 14f13daa99)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-18 01:38:22 +02:00
Felix Fontein
e733b486b8 The next release will likely be 11.1.1. 2025-07-14 16:33:52 +02:00
Felix Fontein
10f1f690e4 Release 11.1.0. 2025-07-14 15:40:07 +02:00
patchback[bot]
27377140d0 [PR #10409/a36ad54b backport][stable-11] doc style adjustments: modules i* (#10411)
doc style adjustments: modules i* (#10409)

(cherry picked from commit a36ad54b53)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-14 15:34:39 +02:00
patchback[bot]
ffa1436f05 [PR #10227/283d947f backport][stable-11] pacemaker_cluster: enhancements and add unit tests (#10408)
pacemaker_cluster: enhancements and add unit tests (#10227)

* feat(initial): Add unit tests and rewrite pacemaker_cluster

This commit introduces unit tests and pacemaker_cluster module rewrite
to use the pacemaker module utils.

* feat(cleanup): Various fixes and add resource state

This commit migrates the pacemaker_cluster's cleanup state to the
pacemaker_resource module. Additionally, the unit tests for
pacemaker_cluster have been corrected to proper mock run command order.

* doc(botmeta): Add author to pacemaker_cluster

* style(whitespace): Cleanup test files

* refactor(cleanup): Remove unused state value

* bug(fix): Parse apply_all as separate option

* refactor(review): Apply code review suggestions

This commit refactors breaking changes in pacemaker_cluster module into
deprecated features. The following will be scheduled for deprecation:
`state: cleanup` and `state: None`.

* Apply suggestions from code review



* refactor(review): Additional review suggestions

* refactor(deprecations): Remove all deprecation changes

* refactor(review): Enhance rename changelog entry and fix empty string logic

* refactor(cleanup): Remove from pacemaker_resource

* Apply suggestions from code review



* refactor(review): Add changelog and revert required name

* revert(default): Use default state=present

* Update changelogs/fragments/10227-pacemaker-cluster-and-resource-enhancement.yml



* Update changelog fragment.

---------


(cherry picked from commit 283d947f17)

Co-authored-by: Dexter <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-14 09:55:32 +02:00
patchback[bot]
115f4b5c51 [PR #10399/4801b0fc backport][stable-11] manageiq_provider: fix docs markup (#10407)
manageiq_provider: fix docs markup (#10399)

* Fix docs markup.

* Add one more.



* Update plugins/modules/manageiq_provider.py



* More fixes.

---------



(cherry picked from commit 4801b0fc00)

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-14 07:47:52 +02:00
patchback[bot]
d385c47d0b [PR #10397/5e2ffb84 backport][stable-11] doc style adjustments: modules [cd]* (#10405)
doc style adjustments: modules [cd]* (#10397)

* doc style adjustments: modules c*

* doc style adjustments: modules d*

* Update plugins/modules/consul_agent_check.py



---------


(cherry picked from commit 5e2ffb845f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-14 07:23:02 +02:00
patchback[bot]
e19e69a07e [PR #10396/3787808e backport][stable-11] iocage inventory guide: adjust filenames, fix typo (#10403)
iocage inventory guide: adjust filenames, fix typo (#10396)

* Rename iocage inventory guide files.

* Fix typo.

(cherry picked from commit 3787808e72)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-13 22:34:53 +02:00
patchback[bot]
1ba0a31328 [PR #10398/717ef511 backport][stable-11] doc style adjustments: modules [efgh]* (#10401)
doc style adjustments: modules [efgh]* (#10398)

* doc style adjustments: modules e*

* doc style adjustments: modules f*

* doc style adjustments: modules g*

* doc style adjustments: modules h*

* Update plugins/modules/easy_install.py



---------


(cherry picked from commit 717ef51137)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-13 17:36:27 +02:00
patchback[bot]
47f3922c51 [PR #10239/563b29e1 backport][stable-11] Added docs Inventory Guide. (#10395)
Added docs Inventory Guide. (#10239)

* Added docs Inventory Guide.

* Errata docs Inventory Guide.

* Fix docs Inventory Guide error: use ASCII quotes.

* Fix docs Inventory Guide various lint errors.

* Added docs Inventory Guide BOTMETA entries.

* Fix docs Inventory Guide lint errors: trailing whitespace

* Fix docs Inventory Guide lint errors: force yaml pygment

* Fix docs Inventory Guide lint errors: No way to force yaml pygment in code-block

* Update docs/docsite/rst/inventory_guide_iocage.rst



* Update docs/docsite/rst/inventory_guide_iocage_aliases.rst

Thank you for the explanation!



* Update docs/docsite/rst/inventory_guide_iocage_aliases.rst



* Updated docs Inventory Guide.

* Problematic pygments changed to 'console'.

* Update docs/docsite/rst/inventory_guide_iocage_hooks.rst
  Update docs/docsite/rst/inventory_guide_iocage_properties.rst
  Update docs/docsite/rst/inventory_guide_iocage_hooks.rst



* Put dhclient-exit-hooks into the sh code-block.

* Fix the code-block.

* Update docs/docsite/rst/inventory_guide_iocage.rst
  Update docs/docsite/rst/inventory_guide_iocage_aliases.rst
  Update docs/docsite/rst/inventory_guide_iocage_basics.rst



* Remove tabs.

* Update docs/docsite/rst/inventory_guide_iocage_basics.rst



* Indent the note block.

* Update docs/docsite/rst/inventory_guide_iocage_hooks.rst
  Update docs/docsite/rst/inventory_guide_iocage_dhcp.rst
  Update docs/docsite/rst/inventory_guide_iocage_hooks.rst



* Fix ansval.

* Add guide_iocage.rst and inventory_guide_iocage*.rst

* Fix 'disallowed language sh found'.

* Remove note block.

* Remove include which triggers a bug in rstcheck.

* Update docs/docsite/extra-docs.yml
  Update docs/docsite/rst/iocage_inventory_guide_basics.rst
  Update docs/docsite/rst/iocage_inventory_guide_dhcp.rst
  Update docs/docsite/rst/iocage_inventory_guide_hooks.rst
  Update docs/docsite/rst/iocage_inventory_guide_properties.rst
  Update docs/docsite/rst/iocage_inventory_guide_tags.rst
  Update docs/docsite/rst/iocage_inventory_guide_hooks.rst
  Update docs/docsite/rst/iocage_inventory_guide_properties.rst



* Put man iocage quotation into the text code block.

---------



(cherry picked from commit 563b29e12a)

Co-authored-by: Vladimir Botka <vbotka@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-12 20:47:12 +02:00
patchback[bot]
1b6a3efa31 [PR #10333/731f0be3 backport][stable-11] Configure LUKS encrypted volume using crypttab (#10390)
Configure LUKS encrypted volume using crypttab (#10333)

(cherry picked from commit 731f0be3f4)

Co-authored-by: Aditya Putta <puttaa@yahoo.com>
2025-07-12 12:50:14 +02:00
patchback[bot]
4850c3b2b4 [PR #10385/baf1cdec backport][stable-11] Enable hg integration test (#10392)
Enable hg integration test (#10385)

Fixes: #10044


(cherry picked from commit baf1cdec09)

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-07-12 12:43:06 +02:00
patchback[bot]
7bda6f1df7 [PR #10380/20e9ef87 backport][stable-11] community.general.easy_install: use of the virtualenv_command parameter (#10388)
community.general.easy_install: use of the virtualenv_command parameter (#10380)

* community.general.easy_install: use of the virtualenv_command parameter

* Apply suggestions from code review

---------


(cherry picked from commit 20e9ef877f)

Co-authored-by: Aditya Putta <puttaa@yahoo.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-07-12 12:33:09 +02:00
patchback[bot]
6200dbaedf [PR #10363/1a7aafc0 backport][stable-11] lvg examples: use YAML lists (#10382)
lvg examples: use YAML lists (#10363)

Use YAML lists.

(cherry picked from commit 1a7aafc037)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-11 07:26:09 +02:00
patchback[bot]
bce01f325a [PR #10374/a0200d11 backport][stable-11] Disable lmdb_kv integration tests (#10378)
Disable lmdb_kv integration tests (#10374)

Disable lmdb_kv integration tests.

(cherry picked from commit a0200d1130)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-10 22:07:29 +02:00
patchback[bot]
ebba59d2ee [PR #10345/096fa388 backport][stable-11] logstash: Remove reference to Python 2 library (#10370)
logstash: Remove reference to Python 2 library (#10345)

* logstash: Remove reference to Python 2 library



* Review requests



* Apply suggestions from code review



---------



(cherry picked from commit 096fa388ac)

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-09 06:31:16 +02:00
patchback[bot]
82506a10ba [PR #10339/e5b37c3f backport][stable-11] github_release - support multiple types of tokens (#10371)
github_release - support multiple types of tokens (#10339)

* Support multiple types of tokens

* Add missing spaces around operator.

* Add changelog fragments.

* fix logic, missing NOT

* Update changelogs/fragments/10339-github_app_access_token.yml



---------


(cherry picked from commit e5b37c3ffd)

Co-authored-by: Bruno Lavoie <bl@brunol.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-08 22:29:54 +02:00
Felix Fontein
5f77312888 Prepare 11.1.0. 2025-07-08 21:10:56 +02:00
patchback[bot]
93cfbaf2a4 [PR #10347/f2286701 backport][stable-11] Add tasks_only callback (#10367)
Add tasks_only callback (#10347)

* Add tasks_only callback.

* Improve tests.

* Fix option name.

* Add missing s.



* Add ignore.txt entry.

---------


(cherry picked from commit f2286701c8)

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-08 17:14:20 +02:00
patchback[bot]
7773289ceb [PR #10359/16d6e4a8 backport][stable-11] dependent lookup: avoid deprecated ansible-core 2.19 functionality (#10366)
dependent lookup: avoid deprecated ansible-core 2.19 functionality (#10359)

* Avoid deprecated ansible-core 2.19 functionality.

* Adjust unit tests.

(cherry picked from commit 16d6e4a8e5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-08 06:56:04 +02:00
Felix Fontein
3bd1fa77c2 Remove no longer needed ignore-2.15.txt.
(cherry picked from commit 49975b383a)
2025-07-08 06:42:37 +02:00
patchback[bot]
c164c2634c [PR #10350/7a4448d4 backport][stable-11] doc style adjustments: modules [ab]* (#10360)
doc style adjustments: modules [ab]* (#10350)

* doc style adjustments: modules [ab]*

* Update plugins/modules/btrfs_subvolume.py

* Update plugins/modules/aerospike_migrations.py



* Update plugins/modules/aix_filesystem.py



* Update plugins/modules/bigpanda.py



* aix_filesystems: roll back wording for `filesystem` description

---------


(cherry picked from commit 7a4448d45c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-07 21:21:27 +02:00
patchback[bot]
0781f49673 [PR #10349/4195cbb3 backport][stable-11] incus_connection: Improve error handling (#10362)
incus_connection: Improve error handling (#10349)

Related to #10344

This tweaks the error handling logic to work with more versions of Incus
and catches some of the project and instance access errors.

The full context (instance name, project name and remote name) is now
included so that the user can easily diagnose access problems.


(cherry picked from commit 4195cbb364)

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>
Co-authored-by: Stéphane Graber <stgraber@stgraber.org>
2025-07-07 21:06:14 +02:00
patchback[bot]
bd64ddc570 [PR #10334/79509a53 backport][stable-11] flatpak: add docs example for install using custom executable path (#10357)
flatpak: add docs example for install using custom executable path (#10334)

(cherry picked from commit 79509a533d)

Co-authored-by: Aditya Putta <puttaa@yahoo.com>
2025-07-06 13:34:39 +02:00
patchback[bot]
0072cb27d4 [PR #10336/dd135920 backport][stable-11] lvg: add docs example for preserving existing PVs in a volume group using remove_extra_pvs: false (#10356)
lvg: add docs example for preserving existing PVs in a volume group using `remove_extra_pvs: false` (#10336)

(cherry picked from commit dd13592034)

Co-authored-by: Aditya Putta <puttaa@yahoo.com>
2025-07-06 13:34:30 +02:00
patchback[bot]
a68a8511c8 [PR #10335/2ec3d022 backport][stable-11] jenkins_build: docs example for trigger with custom polling interval (#10353)
jenkins_build: docs example for trigger with custom polling interval (#10335)

(cherry picked from commit 2ec3d02215)

Co-authored-by: Aditya Putta <puttaa@yahoo.com>
2025-07-06 13:34:15 +02:00
patchback[bot]
42181abb51 [PR #10337/5ef1cad6 backport][stable-11] Using add_keys_to_agent in ssh_config module (#10352)
Using add_keys_to_agent in ssh_config module (#10337)

* Using add_keys_to_agent in ssh_config module

* removed white space

* Apply suggestion

---------


(cherry picked from commit 5ef1cad64f)

Co-authored-by: Aditya Putta <puttaa@yahoo.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-07-06 13:34:08 +02:00
patchback[bot]
9bf247bb5f [PR #10323/7959d971 backport][stable-11] nmcli: improvements (#10348)
nmcli: improvements (#10323)

* better handling of parameter validation

* execute_command is always called with list arg

* minor improvements

* add changelog frag

(cherry picked from commit 7959d971a4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-05 17:51:33 +02:00
patchback[bot]
723665b0d3 [PR #10329/66139679 backport][stable-11] catapult: deprecation (#10340)
catapult: deprecation (#10329)

* catapult: deprecation

* add changelog frag

* Update changelogs/fragments/10329-catapult-deprecation.yml



* Update meta/runtime.yml



* Update plugins/modules/catapult.py



* Update plugins/modules/catapult.py



---------


(cherry picked from commit 66139679e1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-04 09:16:55 +02:00
patchback[bot]
100cfd1592 [PR #10328/682a89cd backport][stable-11] remove unnecessary brackets in conditions (#10331)
remove unnecessary brackets in conditions (#10328)

* remove unnecessary brackets in conditions

* add changelog frag

(cherry picked from commit 682a89cdf5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-04 06:09:54 +02:00
patchback[bot]
6b57b2bb74 [PR #10327/5a5b2d2e backport][stable-11] remove unnecessary checks for unsupported python versions (#10330)
remove unnecessary checks for unsupported python versions (#10327)

(cherry picked from commit 5a5b2d2eed)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-02 06:58:13 +02:00
patchback[bot]
da3874c96d [PR #10302/580ac1e3 backport][stable-11] fix style in plugins (#10325)
fix style in plugins (#10302)


(cherry picked from commit 580ac1e30d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-01 22:58:46 +02:00
Felix Fontein
1c4556dc4c Adjust README.
(cherry picked from commit 4323058809)
2025-07-01 22:36:39 +02:00
patchback[bot]
a7ec516be3 [PR #10303/329c2222 backport][stable-11] fix style in plugins (#10324)
fix style in plugins (#10303)

(cherry picked from commit 329c2222fc)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-01 22:09:45 +02:00
patchback[bot]
1d6d8bdf7f [PR #10319/dd3c253b backport][stable-11] CI: Add stable-2.19 (#10321)
CI: Add stable-2.19 (#10319)

* Add ignore-2.20.txt.

* Add stable-2.19 to CI.

(cherry picked from commit dd3c253b78)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-01 21:31:31 +02:00
patchback[bot]
d7c5b35b32 [PR #10279/7e66fb05 backport][stable-11] CI: Add yamllint for YAML files, plugin/module docs, and YAML in extra docs (#10317)
CI: Add yamllint for YAML files, plugin/module docs, and YAML in extra docs (#10279)

* Add yamllint to CI.

* Fix more YAML booleans.

(cherry picked from commit 7e66fb052e)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-30 21:21:08 +02:00
patchback[bot]
a3e07bd083 [PR #10313/cc2e0679 backport][stable-11] htpasswd: doc adjustment (#10315)
htpasswd: doc adjustment (#10313)

(cherry picked from commit cc2e067907)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-06-30 20:32:57 +02:00
patchback[bot]
a4a1f1240e [PR #10280/41855418 backport][stable-11] CI: add checks for code block types in extra docs (#10314)
CI: add checks for code block types in extra docs (#10280)

* Add checks for code block types in extra docs.

* Add 'ini' and 'text' to allowlist.

(cherry picked from commit 41855418bb)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-30 20:25:49 +02:00
Felix Fontein
ee601d3036 Add comment that transform_recursively should no longer be needed.
(cherry picked from commit 3b5a9779b4)
2025-06-29 09:51:02 +02:00
patchback[bot]
3fdb4d4afb [PR #10311/5462b1cf backport][stable-11] xfconf: small refactor (#10312)
xfconf: small refactor (#10311)

* xfconf: small refactor

* add changelog frag

* Update changelogs/fragments/10311-xfconf-refactor.yml



---------


(cherry picked from commit 5462b1cff8)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-28 13:19:27 +02:00
patchback[bot]
685bdc0dc7 [PR #10304/7d06be1c backport][stable-11] fix typo in ipa_dnsrecord module examples (#10308)
fix typo in ipa_dnsrecord module examples (#10304)

[FIX] Typo in ipa_dnsrecord example

Simple comma instead of a period, easy mistake.

(cherry picked from commit 7d06be1c20)

Co-authored-by: alice seaborn <seaborn@lavabit.com>
2025-06-26 20:14:41 +00:00
patchback[bot]
fbe9e5ba3e [PR #10297/af8c586e backport][stable-11] Docs: use :anscollection: (#10301)
Docs: use :anscollection: (#10297)

Use :anscollection:.
(cherry picked from commit af8c586e29)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-25 22:03:57 +02:00
patchback[bot]
94c04e4a8f [PR #10270/1ed0f329 backport][stable-11] slack: support slack-gov.com (#10296)
slack: support slack-gov.com (#10270)

* slack: support slack-gov.com

Allow the slack module to work with GovSlack, hosted at https://slack-gov.com/

This re-uses the existing `domain` option so that users can set it to
`slack-gov.com` to use GovSlack. To maintain backwards compatibility,
any setting of `domain` for WebAPI tokens that is not `slack.com` or
`slack-gov.com` is ignored.

* fixup

* cleanup

* fix pep8

* clean up docs and better function name

* document default value

* try to fix yaml, not sure what is wrong

* Update plugins/modules/slack.py



* Update plugins/modules/slack.py



* Update plugins/modules/slack.py



---------


(cherry picked from commit 1ed0f329bc)

Co-authored-by: Wade Simmons <wsimmons@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-25 08:21:51 +02:00
patchback[bot]
dc30d33d64 [PR #10269/dd53a2ce backport][stable-11] cloudflare_dns: some refactoring (#10295)
cloudflare_dns: some refactoring (#10269)

* cloudflare_dns: remove extraneous validation

* further improvements

* revert the first validation removed

* simplify validation for types SRC and CAA

* add changelog frag

(cherry picked from commit dd53a2cee0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-06-25 08:21:41 +02:00
patchback[bot]
ffe55564f0 [PR #10286/e37cd1a0 backport][stable-11] fix YAML docs in multiple plugins (#10293)
fix YAML docs in multiple plugins (#10286)

* fix YAML docs in multiple plugins

* pfexec: fix short description

* adjust callback plugins

* fix wsl connection

* fix filter plugins

* fix inventory plugins

* minor adjustments in diy, print_task, xen_orchestra

(cherry picked from commit e37cd1a015)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-06-24 06:58:12 +02:00
patchback[bot]
ec87b44816 [PR #10170/52cd1049 backport][stable-11] jenkins_credentials: new module to manage Jenkins credentials (#10294)
jenkins_credentials: new module to manage Jenkins credentials (#10170)

* Added Jenkins credentials module to manage Jenkins credentials

* Added Jenkins credentials module to manage Jenkins credentials

* Added import error detection, adjusted indentation, and general enhancements.

* Added py3 requirement and set files value to avoid errors

* Added username to BOTMETA. Switched to format() instead of f-strings to support py 2.7, improved delete function, and added function to read private key

* Remove redundant message



* Replaced requests with ansible.module_utils.urls, merged check domain and credential functions, and made minor adjustments to documentation

* Adjusted for py 2.7 compatibility

* Replaced command with state.

* Added managing credentials within a folder and made adjustments to documentation

* Added unit and integration tests, added token management, and adjusted documentation.

* Added unit and integration tests, added token management, and adjusted documentation.(fix)

* Fix BOTMETA.yml

* Removed files and generate them at runtime.

* moved id and token checks to required_if

* Documentation changes, different test setup, and switched to Ansible testing tools

* Fixed typos

* Correct indentation.



---------


(cherry picked from commit 52cd104962)

Co-authored-by: YoussefKhalidAli <154611350+YoussefKhalidAli@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-24 06:51:33 +02:00
patchback[bot]
30e2d9f26f [PR #10285/3ab7a898 backport][stable-11] replace concatenations with f-string in plugins (#10290)
replace concatenations with f-string in plugins (#10285)

* replace concatenations with f-string in plugins

* add changelog frag

(cherry picked from commit 3ab7a898c6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-06-23 21:42:06 +02:00
patchback[bot]
874b00aebb [PR #10282/d4f2b2fb backport][stable-11] sl_vm: update docs about requirements (#10284)
sl_vm: update docs about requirements (#10282)

* sl_vm: update docs about requirements

* Update plugins/modules/sl_vm.py

(cherry picked from commit d4f2b2fb55)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-06-19 21:36:08 +02:00
patchback[bot]
274d7984c7 [PR #10267/b7f9f24f backport][stable-11] cloudflare_dns: Add PTR record support (#10281)
cloudflare_dns: Add PTR record support (#10267)

* cloudflare_dns: Add PTR record support

* Add changelog fragment

* Apply suggestions from code review



---------


(cherry picked from commit b7f9f24ffe)

Co-authored-by: Titus Sanchez <titusjo@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-19 07:34:31 +02:00
patchback[bot]
991b3cbb04 [PR #10271/40fb0f0c backport][stable-11] Inventory plugins: remove deprecated disable_lookups parameter (which was set to its default anyway) (#10278)
Inventory plugins: remove deprecated disable_lookups parameter (which was set to its default anyway) (#10271)

* Remove default value for keyword argument that is deprecated since ansible-core 2.19.

* Add changelog fragment.

(cherry picked from commit 40fb0f0c75)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-18 21:52:54 +02:00
patchback[bot]
66112e7c90 [PR #10272/5b14129c backport][stable-11] sysrc jail tests: FreeBSD 14.1 stopped working (#10276)
sysrc jail tests: FreeBSD 14.1 stopped working (#10272)

FreeBSD 14.1 stopped working.

(cherry picked from commit 5b14129c8f)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-18 21:52:40 +02:00
patchback[bot]
83a48d9fdc [PR #10231/f44ca23d backport][stable-11] keycloak: add support for client_credentials authentication (#10268)
keycloak: add support for client_credentials authentication (#10231)

* add client_credentials authentication for keycloak tasks incl. test case

* support client credentials in all keycloak modules

* Add changelog fragment

* fix typos in required list

* Update changelogs/fragments/10231-keycloak-add-client-credentials-authentication.yml



* revert keycloak url in test environment

---------


(cherry picked from commit f44ca23d7a)

Co-authored-by: divinity666 <65871511+divinity666@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-18 07:57:07 +02:00
patchback[bot]
9402741c63 [PR #10264/74ed0fc4 backport][stable-11] import mocks from community.internal_test_tools (#10266)
import mocks from community.internal_test_tools (#10264)

(cherry picked from commit 74ed0fc438)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-06-17 21:24:13 +02:00
patchback[bot]
c44a33e15d [PR #10261/38ab1fbb backport][stable-11] Extra docs: normalize code block language (#10263)
Extra docs: normalize code block language (#10261)

Extra docs: normalize code block language.

(cherry picked from commit 38ab1fbb88)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-17 07:03:10 +02:00
Felix Fontein
0b18c977df The next expected release is 11.1.0. 2025-06-16 20:57:48 +02:00
Felix Fontein
5da06a75aa Release 11.0.0. 2025-06-16 20:18:30 +02:00
Felix Fontein
d456d25d6b Add 11.0.0 release summary; update branch names. 2025-06-16 20:16:20 +02:00
Felix Fontein
abc2be0bf6 Update CI schedule.
(cherry picked from commit 49d84e7b97)
2025-06-16 20:12:10 +02:00
1563 changed files with 78741 additions and 98471 deletions


@@ -70,19 +70,6 @@ stages:
- test: 2
- test: 3
- test: 4
- stage: Sanity_2_21
displayName: Sanity 2.21
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: 2.21/sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- test: 4
- stage: Sanity_2_20
displayName: Sanity 2.20
dependsOn: []
@@ -109,6 +96,19 @@ stages:
- test: 2
- test: 3
- test: 4
- stage: Sanity_2_18
displayName: Sanity 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: 2.18/sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- test: 4
### Units
- stage: Units_devel
displayName: Units devel
@@ -125,19 +125,6 @@ stages:
- test: '3.12'
- test: '3.13'
- test: '3.14'
- test: '3.15'
- stage: Units_2_21
displayName: Units 2.21
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.21/units/{0}/1
targets:
- test: 3.9
- test: "3.12"
- test: "3.14"
- stage: Units_2_20
displayName: Units 2.20
dependsOn: []
@@ -162,6 +149,18 @@ stages:
- test: 3.8
- test: "3.11"
- test: "3.13"
- stage: Units_2_18
displayName: Units 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: 2.18/units/{0}/1
targets:
- test: 3.8
- test: "3.11"
- test: "3.13"
## Remote
- stage: Remote_devel_extra_vms
@@ -174,8 +173,8 @@ stages:
targets:
- name: Alpine 3.23
test: alpine/3.23
# - name: Fedora 44
# test: fedora/44
# - name: Fedora 43
# test: fedora/43
- name: Ubuntu 22.04
test: ubuntu/22.04
- name: Ubuntu 24.04
@@ -190,8 +189,8 @@ stages:
parameters:
testFormat: devel/{0}
targets:
- name: macOS 26.3
test: macos/26.3
- name: macOS 15.3
test: macos/15.3
- name: RHEL 10.1
test: rhel/10.1
- name: RHEL 9.7
@@ -199,27 +198,8 @@ stages:
# TODO: enable this ASAP!
# - name: FreeBSD 15.0
# test: freebsd/15.0
# TODO: enable this ASAP!
# - name: FreeBSD 14.4
# test: freebsd/14.4
groups:
- 1
- 2
- 3
- stage: Remote_2_21
displayName: Remote 2.21
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.21/{0}
targets:
# - name: macOS 26.3
# test: macos/26.3
- name: RHEL 10.1
test: rhel/10.1
# - name: RHEL 9.7
# test: rhel/9.7
- name: FreeBSD 14.3
test: freebsd/14.3
groups:
- 1
- 2
@@ -232,8 +212,6 @@ stages:
parameters:
testFormat: 2.20/{0}
targets:
- name: macOS 15.3
test: macos/15.3
- name: RHEL 10.1
test: rhel/10.1
- name: FreeBSD 14.3
@@ -258,6 +236,22 @@ stages:
- 1
- 2
- 3
- stage: Remote_2_18
displayName: Remote 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.18/{0}
targets:
- name: macOS 14.3
test: macos/14.3
- name: FreeBSD 14.1
test: freebsd/14.1
groups:
- 1
- 2
- 3
### Docker
- stage: Docker_devel
@@ -268,8 +262,8 @@ stages:
parameters:
testFormat: devel/linux/{0}
targets:
- name: Fedora 44
test: fedora44
- name: Fedora 43
test: fedora43
- name: Alpine 3.23
test: alpine323
- name: Ubuntu 22.04
@@ -280,26 +274,6 @@ stages:
- 1
- 2
- 3
- stage: Docker_2_21
displayName: Docker 2.21
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.21/linux/{0}
targets:
- name: Fedora 43
test: fedora43
# - name: Alpine 3.23
# test: alpine323
# - name: Ubuntu 22.04
# test: ubuntu2204
- name: Ubuntu 24.04
test: ubuntu2404
groups:
- 1
- 2
- 3
- stage: Docker_2_20
displayName: Docker 2.20
dependsOn: []
@@ -332,6 +306,24 @@ stages:
- 1
- 2
- 3
- stage: Docker_2_18
displayName: Docker 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.18/linux/{0}
targets:
- name: Fedora 40
test: fedora40
- name: Alpine 3.20
test: alpine320
- name: Ubuntu 24.04
test: ubuntu2404
groups:
- 1
- 2
- 3
### Community Docker
- stage: Docker_community_devel
@@ -367,18 +359,6 @@ stages:
# testFormat: devel/generic/{0}/1
# targets:
# - test: '3.9'
# - test: '3.13'
# - test: '3.15'
# - stage: Generic_2_21
# displayName: Generic 2.21
# dependsOn: []
# jobs:
# - template: templates/matrix.yml
# parameters:
# nameFormat: Python {0}
# testFormat: 2.21/generic/{0}/1
# targets:
# - test: '3.9'
# - test: '3.12'
# - test: '3.14'
# - stage: Generic_2_20
@@ -402,33 +382,44 @@ stages:
# testFormat: 2.19/generic/{0}/1
# targets:
# - test: '3.9'
# - test: '3.13'
# - stage: Generic_2_18
# displayName: Generic 2.18
# dependsOn: []
# jobs:
# - template: templates/matrix.yml
# parameters:
# nameFormat: Python {0}
# testFormat: 2.18/generic/{0}/1
# targets:
# - test: '3.8'
# - test: '3.13'
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity_devel
- Sanity_2_21
- Sanity_2_20
- Sanity_2_19
- Sanity_2_18
- Units_devel
- Units_2_21
- Units_2_20
- Units_2_19
- Units_2_18
- Remote_devel_extra_vms
- Remote_devel
- Remote_2_21
- Remote_2_20
- Remote_2_19
- Remote_2_18
- Docker_devel
- Docker_2_21
- Docker_2_20
- Docker_2_19
- Docker_2_18
- Docker_community_devel
# Right now all generic tests are disabled. Uncomment when at least one of them is re-enabled.
# - Generic_devel
# - Generic_2_21
# - Generic_2_20
# - Generic_2_19
# - Generic_2_18
jobs:
- template: templates/coverage.yml


@@ -11,7 +11,8 @@ Keep in mind that Azure Pipelines does not enforce unique job display names (onl
It is up to pipeline authors to avoid name collisions when deviating from the recommended format.
"""
from __future__ import annotations
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
@@ -23,12 +24,12 @@ def main():
"""Main program entry point."""
source_directory = sys.argv[1]
if "/ansible_collections/" in os.getcwd():
if '/ansible_collections/' in os.getcwd():
output_path = "tests/output"
else:
output_path = "test/results"
destination_directory = os.path.join(output_path, "coverage")
destination_directory = os.path.join(output_path, 'coverage')
if not os.path.exists(destination_directory):
os.makedirs(destination_directory)
@@ -37,27 +38,27 @@ def main():
count = 0
for name in os.listdir(source_directory):
match = re.search("^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$", name)
label = match.group("label")
attempt = int(match.group("attempt"))
match = re.search('^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$', name)
label = match.group('label')
attempt = int(match.group('attempt'))
jobs[label] = max(attempt, jobs.get(label, 0))
for label, attempt in jobs.items():
name = f"Coverage {attempt} {label}"
name = 'Coverage {attempt} {label}'.format(label=label, attempt=attempt)
source = os.path.join(source_directory, name)
source_files = os.listdir(source)
for source_file in source_files:
source_path = os.path.join(source, source_file)
destination_path = os.path.join(destination_directory, source_file + "." + label)
print(f'"{source_path}" -> "{destination_path}"')
destination_path = os.path.join(destination_directory, source_file + '.' + label)
print('"%s" -> "%s"' % (source_path, destination_path))
shutil.copyfile(source_path, destination_path)
count += 1
print(f"Coverage file count: {count}")
print(f"##vso[task.setVariable variable=coverageFileCount]{count}")
print(f"##vso[task.setVariable variable=outputPath]{output_path}")
print('Coverage file count: %d' % count)
print('##vso[task.setVariable variable=coverageFileCount]%d' % count)
print('##vso[task.setVariable variable=outputPath]%s' % output_path)
if __name__ == "__main__":
if __name__ == '__main__':
main()


@@ -15,6 +15,7 @@ import pathlib
import shutil
import subprocess
import tempfile
import typing as t
import urllib.request
@@ -22,7 +23,7 @@ import urllib.request
class CoverageFile:
name: str
path: pathlib.Path
flags: list[str]
flags: t.List[str]
@dataclasses.dataclass(frozen=True)
@@ -33,8 +34,8 @@ class Args:
def parse_args() -> Args:
parser = argparse.ArgumentParser()
parser.add_argument("-n", "--dry-run", action="store_true")
parser.add_argument("path", type=pathlib.Path)
parser.add_argument('-n', '--dry-run', action='store_true')
parser.add_argument('path', type=pathlib.Path)
args = parser.parse_args()
@@ -45,36 +46,32 @@ def parse_args() -> Args:
return Args(**kwargs)
def process_files(directory: pathlib.Path) -> tuple[CoverageFile, ...]:
def process_files(directory: pathlib.Path) -> t.Tuple[CoverageFile, ...]:
processed = []
for file in directory.joinpath("reports").glob("coverage*.xml"):
name = file.stem.replace("coverage=", "")
for file in directory.joinpath('reports').glob('coverage*.xml'):
name = file.stem.replace('coverage=', '')
# Get flags from name
flags = name.replace("-powershell", "").split("=") # Drop '-powershell' suffix
flags = [
flag if not flag.startswith("stub") else flag.split("-")[0] for flag in flags
] # Remove "-01" from stub files
flags = name.replace('-powershell', '').split('=') # Drop '-powershell' suffix
flags = [flag if not flag.startswith('stub') else flag.split('-')[0] for flag in flags] # Remove "-01" from stub files
processed.append(CoverageFile(name, file, flags))
return tuple(processed)
def upload_files(codecov_bin: pathlib.Path, files: tuple[CoverageFile, ...], dry_run: bool = False) -> None:
def upload_files(codecov_bin: pathlib.Path, files: t.Tuple[CoverageFile, ...], dry_run: bool = False) -> None:
for file in files:
cmd = [
str(codecov_bin),
"--name",
file.name,
"--file",
str(file.path),
'--name', file.name,
'--file', str(file.path),
]
for flag in file.flags:
cmd.extend(["--flags", flag])
cmd.extend(['--flags', flag])
if dry_run:
print(f"DRY-RUN: Would run command: {cmd}")
print(f'DRY-RUN: Would run command: {cmd}')
continue
subprocess.run(cmd, check=True)
@@ -82,11 +79,11 @@ def upload_files(codecov_bin: pathlib.Path, files: tuple[CoverageFile, ...], dry
def download_file(url: str, dest: pathlib.Path, flags: int, dry_run: bool = False) -> None:
if dry_run:
print(f"DRY-RUN: Would download {url} to {dest} and set mode to {flags:o}")
print(f'DRY-RUN: Would download {url} to {dest} and set mode to {flags:o}')
return
with urllib.request.urlopen(url) as resp:
with dest.open("w+b") as f:
with dest.open('w+b') as f:
# Read data in chunks rather than all at once
shutil.copyfileobj(resp, f, 64 * 1024)
@@ -95,14 +92,14 @@ def download_file(url: str, dest: pathlib.Path, flags: int, dry_run: bool = Fals
def main():
args = parse_args()
url = "https://ansible-ci-files.s3.amazonaws.com/codecov/linux/codecov"
with tempfile.TemporaryDirectory(prefix="codecov-") as tmpdir:
codecov_bin = pathlib.Path(tmpdir) / "codecov"
url = 'https://ansible-ci-files.s3.amazonaws.com/codecov/linux/codecov'
with tempfile.TemporaryDirectory(prefix='codecov-') as tmpdir:
codecov_bin = pathlib.Path(tmpdir) / 'codecov'
download_file(url, codecov_bin, 0o755, args.dry_run)
files = process_files(args.path)
upload_files(codecov_bin, files, args.dry_run)
if __name__ == "__main__":
if __name__ == '__main__':
main()


@@ -5,7 +5,8 @@
"""Prepends a relative timestamp to each input line from stdin and writes it to stdout."""
from __future__ import annotations
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import time
@@ -15,14 +16,14 @@ def main():
"""Main program entry point."""
start = time.time()
sys.stdin.reconfigure(errors="surrogateescape")
sys.stdout.reconfigure(errors="surrogateescape")
sys.stdin.reconfigure(errors='surrogateescape')
sys.stdout.reconfigure(errors='surrogateescape')
for line in sys.stdin:
seconds = int(time.time() - start)
sys.stdout.write(f"{seconds // 60:02}:{seconds % 60:02} {line}")
seconds = time.time() - start
sys.stdout.write('%02d:%02d %s' % (seconds // 60, seconds % 60, line))
sys.stdout.flush()
if __name__ == "__main__":
if __name__ == '__main__':
main()


@@ -1,34 +0,0 @@
{
"name": "community.general devcontainer",
"image": "mcr.microsoft.com/devcontainers/python:3.14-bookworm",
"features": {
"ghcr.io/devcontainers/features/docker-in-docker:2": {}
},
"customizations": {
"vscode": {
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
"python.pythonPath": "/usr/local/bin/python",
"editor.defaultFormatter": "charliermarsh.ruff",
"editor.formatOnSave": true,
"files.autoSave": "afterDelay",
"files.eol": "\n",
"files.insertFinalNewline": true,
"files.trimFinalNewlines": true,
"files.trimTrailingWhitespace": true
},
"extensions": [
"charliermarsh.ruff",
"ms-python.python",
"ms-python.vscode-pylance",
"redhat.ansible",
"redhat.vscode-yaml",
"trond-snekvik.simple-rst"
]
}
},
"remoteUser": "vscode",
"postCreateCommand": ".devcontainer/setup.sh",
"workspaceFolder": "/workspace/ansible_collections/community/general",
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace/ansible_collections/community/general,type=bind"
}


@@ -1,3 +0,0 @@
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
SPDX-FileCopyrightText: 2025 Alexei Znamensky <russoz@gmail.com>


@@ -1,10 +0,0 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Alexei Znamensky <russoz@gmail.com>
nox
ruff
antsibull-nox
pre-commit
ansible-core
andebox


@@ -1,17 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -x
sudo chown -R vscode:vscode /workspace/
pip install -U pip
pip install -r .devcontainer/requirements-dev.txt
pip install -r tests/unit/requirements.txt
export ANSIBLE_COLLECTIONS_PATH=/workspace:${ANSIBLE_COLLECTIONS_PATH}
ansible-galaxy collection install -v -r tests/unit/requirements.yml
ansible-galaxy collection install -v -r tests/integration/requirements.yml
pre-commit install


@@ -7,9 +7,3 @@ d032de3b16eed11ea3a31cd3d96d78f7c46a2ee0
e8f965fbf8154ea177c6622da149f2ae8533bd3c
e938ca5f20651abc160ee6aba10014013d04dcc1
eaa5e07b2866e05b6c7b5628ca92e9cb1142d008
# Code reformatting
340ff8586d4f1cb6a0f3c934eb42589bcc29c0ea
e530d2906a1f61df89861286ac57c951a247f32c
b769b0bc01520d12699d3911e1fc290b813cde40
dd9c86dfc094131f223ffb59e5a3d9f2dfc5875d

.github/BOTMETA.yml

@@ -65,9 +65,6 @@ files:
$callbacks/log_plays.py: {}
$callbacks/loganalytics.py:
maintainers: zhcli
$callbacks/loganalytics_ingestion.py:
ignore: zhcli
maintainers: pboushy vsh47 wtcline-intc
$callbacks/logdna.py: {}
$callbacks/logentries.py: {}
$callbacks/logstash.py:
@@ -102,6 +99,7 @@ files:
$callbacks/unixy.py:
labels: unixy
maintainers: akatch
$callbacks/yaml.py: {}
$connections/:
labels: connections
$connections/chroot.py: {}
@@ -136,8 +134,6 @@ files:
$doc_fragments/hwc.py:
labels: hwc
maintainers: $team_huawei
$doc_fragments/_icinga2_api.py:
maintainers: cfiehe
$doc_fragments/nomad.py:
maintainers: chris93111 apecnascimento
$doc_fragments/pipx.py:
@@ -221,10 +217,6 @@ files:
maintainers: resmo
$filters/to_time_unit.yml:
maintainers: resmo
$filters/to_toml.py:
maintainers: milliams
$filters/to_toml.yml:
maintainers: milliams
$filters/to_weeks.yml:
maintainers: resmo
$filters/to_yaml.py:
@@ -247,9 +239,6 @@ files:
maintainers: vbotka
$inventories/icinga2.py:
maintainers: BongoEADGC6
$inventories/incus.py:
labels: incus
maintainers: stgraber
$inventories/linode.py:
keywords: linode dynamic inventory script
labels: cloud linode
@@ -312,7 +301,7 @@ files:
$lookups/lmdb_kv.py:
maintainers: jpmens
$lookups/merge_variables.py:
maintainers: rlenferink m-a-r-k-e alpex8 cfiehe
maintainers: rlenferink m-a-r-k-e alpex8
$lookups/onepass:
labels: onepassword
maintainers: samdoran
@@ -367,8 +356,6 @@ files:
keywords: cloud huawei hwc
labels: huawei hwc_utils networking
maintainers: $team_huawei
$module_utils/_icinga2.py:
maintainers: cfiehe
$module_utils/identity/keycloak/keycloak.py:
maintainers: $team_keycloak
$module_utils/identity/keycloak/keycloak_clientsecret.py:
@@ -379,13 +366,6 @@ files:
$module_utils/jenkins.py:
labels: jenkins
maintainers: russoz
$module_utils/_crypt.py:
maintainers: russoz
$module_utils/_lxc.py:
maintainers: russoz
$module_utils/_lvm.py:
labels: lvm
maintainers: russoz
$module_utils/manageiq.py:
labels: manageiq
maintainers: $team_manageiq
@@ -414,6 +394,9 @@ files:
$module_utils/puppet.py:
labels: puppet
maintainers: russoz
$module_utils/pure.py:
labels: pure pure_storage
maintainers: $team_purestorage
$module_utils/redfish_utils.py:
labels: redfish_utils
maintainers: $team_redfish
@@ -497,6 +480,8 @@ files:
keywords: beadm dladm illumos ipadm nexenta omnios openindiana pfexec smartos solaris sunos zfs zpool
labels: beadm solaris
maintainers: $team_solaris
$modules/bearychat.py:
maintainers: tonyseek
$modules/bigpanda.py:
ignore: hkariti
$modules/bitbucket_:
@@ -602,6 +587,9 @@ files:
$modules/etcd3.py:
ignore: vfauth
maintainers: evrardjp
$modules/facter.py:
labels: facter
maintainers: $team_ansible_core gamethis
$modules/facter_facts.py:
labels: facter
maintainers: russoz $team_ansible_core gamethis
@@ -610,8 +598,6 @@ files:
$modules/filesystem.py:
labels: filesystem
maintainers: pilou- abulimov quidame
$modules/file_remove.py:
maintainers: shahargolshani
$modules/flatpak.py:
maintainers: $team_flatpak
$modules/flatpak_remote.py:
@@ -647,10 +633,6 @@ files:
maintainers: adrianmoisey
$modules/github_repo.py:
maintainers: atorrescogollo
$modules/github_secrets.py:
maintainers: konstruktoid
$modules/github_secrets_info.py:
maintainers: konstruktoid
$modules/gitlab_:
keywords: gitlab source_control
maintainers: $team_gitlab
@@ -666,10 +648,10 @@ files:
maintainers: zvaraondrej
$modules/gitlab_milestone.py:
maintainers: gpongelli
$modules/gitlab_instance_variable.py:
maintainers: benibr
$modules/gitlab_project_variable.py:
maintainers: markuman
$modules/gitlab_instance_variable.py:
maintainers: benibr
$modules/gitlab_runner.py:
maintainers: SamyCoenen
$modules/gitlab_user.py:
@@ -729,8 +711,6 @@ files:
maintainers: $team_huawei huaweicloud
$modules/ibm_sa_:
maintainers: tzure
$modules/icinga2_downtime.py:
maintainers: cfiehe
$modules/icinga2_feature.py:
maintainers: nerzhul
$modules/icinga2_host.py:
@@ -769,8 +749,6 @@ files:
maintainers: obourdon hryamzik
$modules/ip_netns.py:
maintainers: bregman-arie
$modules/ip2location_info.py:
maintainers: ip2location
$modules/ipa_:
maintainers: $team_ipa
ignore: fxfitz
@@ -778,14 +756,14 @@ files:
maintainers: abakanovskii
$modules/ipa_dnsrecord.py:
maintainers: $team_ipa jwbernin
$modules/ipbase_info.py:
maintainers: dominikkukacka
$modules/ipa_pwpolicy.py:
maintainers: adralioh
$modules/ipa_service.py:
maintainers: cprh
$modules/ipa_vault.py:
maintainers: jparrill
$modules/ipbase_info.py:
maintainers: dominikkukacka
$modules/ipify_facts.py:
maintainers: resmo
$modules/ipinfoio_facts.py:
@@ -835,8 +813,6 @@ files:
maintainers: Slezhuk pertoft
$modules/kdeconfig.py:
maintainers: smeso
$modules/kea_command.py:
maintainers: mirabilos
$modules/kernel_blacklist.py:
maintainers: matze
$modules/keycloak_:
@@ -845,22 +821,16 @@ files:
maintainers: elfelip Gaetan2907
$modules/keycloak_authentication_required_actions.py:
maintainers: Skrekulko
$modules/keycloak_authentication_v2.py:
maintainers: thomasbargetz
$modules/keycloak_authz_authorization_scope.py:
maintainers: mattock
$modules/keycloak_authz_custom_policy.py:
maintainers: mattock
$modules/keycloak_authz_permission.py:
maintainers: mattock
$modules/keycloak_authz_custom_policy.py:
maintainers: mattock
$modules/keycloak_authz_permission_info.py:
maintainers: mattock
$modules/keycloak_client.py:
maintainers: koke1997
$modules/keycloak_client_rolemapping.py:
maintainers: Gaetan2907
$modules/keycloak_client_rolescope.py:
maintainers: desand01
$modules/keycloak_clientscope.py:
maintainers: Gaetan2907
$modules/keycloak_clientscope_type.py:
@@ -871,8 +841,6 @@ files:
maintainers: fynncfchen johncant
$modules/keycloak_component.py:
maintainers: fivetide
$modules/keycloak_component_info.py:
maintainers: desand01
$modules/keycloak_group.py:
maintainers: adamgoossens
$modules/keycloak_identity_provider.py:
@@ -882,23 +850,23 @@ files:
$modules/keycloak_realm_info.py:
maintainers: fynncfchen
$modules/keycloak_realm_key.py:
maintainers: mattock koke1997
$modules/keycloak_realm_localization.py:
maintainers: danekja
$modules/keycloak_realm_rolemapping.py:
maintainers: agross mhuysamen Gaetan2907
maintainers: mattock
$modules/keycloak_role.py:
maintainers: laurpaum
$modules/keycloak_user.py:
maintainers: elfelip
$modules/keycloak_user_execute_actions_email.py:
maintainers: mariusbertram
$modules/keycloak_user_federation.py:
maintainers: laurpaum
$modules/keycloak_user_rolemapping.py:
maintainers: bratwurzt koke1997
$modules/keycloak_userprofile.py:
maintainers: yeoldegrove
$modules/keycloak_component_info.py:
maintainers: desand01
$modules/keycloak_client_rolescope.py:
maintainers: desand01
$modules/keycloak_user_rolemapping.py:
maintainers: bratwurzt
$modules/keycloak_realm_rolemapping.py:
maintainers: agross mhuysamen Gaetan2907
$modules/keyring.py:
maintainers: ahussey-redhat
$modules/keyring_info.py:
@@ -941,8 +909,6 @@ files:
labels: logentries
$modules/logentries_msg.py:
maintainers: jcftang
$modules/logrotate.py:
maintainers: a-gabidullin
$modules/logstash_plugin.py:
maintainers: nerzhul
$modules/lvg.py:
@@ -965,10 +931,6 @@ files:
maintainers: conloos
$modules/lxd_project.py:
maintainers: we10710aa
$modules/lxd_storage_pool_info.py:
maintainers: smcavoy
$modules/lxd_storage_volume_info.py:
maintainers: smcavoy
$modules/macports.py:
ignore: ryansb
keywords: brew cask darwin homebrew macosx macports osx
@@ -1345,11 +1307,8 @@ files:
$modules/snap_alias.py:
labels: snap
maintainers: russoz
$modules/snap_connect.py:
labels: snap
maintainers: russoz
$modules/snmp_facts.py:
maintainers: ogenstad ujwalkomarla lalten
maintainers: ogenstad ujwalkomarla
$modules/solaris_zone.py:
keywords: beadm dladm illumos ipadm nexenta omnios openindiana pfexec smartos solaris sunos zfs zpool
labels: solaris
@@ -1366,8 +1325,6 @@ files:
maintainers: farhan7500 gautamphegde
$modules/ssh_config.py:
maintainers: gaqzi Akasurde
$modules/sssd_info.py:
maintainers: a-gabidullin
$modules/stacki_host.py:
labels: stacki_host
maintainers: bsanders bbyhuy
@@ -1524,24 +1481,22 @@ files:
ignore: matze
labels: zypper
maintainers: $team_suse
$plugin_utils/ansible_type.py:
maintainers: vbotka
$modules/zypper_repository_info.py:
labels: zypper
maintainers: $team_suse TobiasZeuch181
$plugin_utils/ansible_type.py:
maintainers: vbotka
$plugin_utils/keys_filter.py:
maintainers: vbotka
$plugin_utils/unsafe.py:
maintainers: felixfontein
$plugin_utils/_tags.py:
maintainers: felixfontein
$tests/a_module.py:
maintainers: felixfontein
$tests/ansible_type.py:
maintainers: vbotka
$tests/fqdn_valid.py:
maintainers: vbotka
#########################
#########################
docs/docsite/rst/filter_guide.rst: {}
docs/docsite/rst/filter_guide_abstract_informations.rst: {}
docs/docsite/rst/filter_guide_abstract_informations_counting_elements_in_sequence.rst:
@@ -1580,8 +1535,6 @@ files:
maintainers: russoz
docs/docsite/rst/guide_deps.rst:
maintainers: russoz
docs/docsite/rst/guide_ee.rst:
maintainers: russoz
docs/docsite/rst/guide_iocage.rst:
maintainers: russoz felixfontein
docs/docsite/rst/guide_iocage_inventory.rst:
@@ -1612,7 +1565,7 @@ files:
maintainers: russoz
docs/docsite/rst/test_guide.rst:
maintainers: felixfontein
#########################
#########################
tests/:
labels: tests
tests/integration:
@@ -1639,7 +1592,7 @@ macros:
plugin_utils: plugins/plugin_utils
tests: plugins/test
team_ansible_core:
team_aix: MorrisA bcoca d-little flynn1973 gforster kairoaraujo marvin-sinister molekuul ramooncamacho wtcross
team_aix: MorrisA bcoca d-little flynn1973 gforster kairoaraujo marvin-sinister mator molekuul ramooncamacho wtcross
team_bsd: JoergFiedler MacLemon bcoca dch jasperla mekanix opoplawski overhacked tuxillo
team_consul: sgargan apollo13 Ilgmi
team_cyberark_conjur: jvanderhoof ryanprior
@@ -1657,10 +1610,11 @@ macros:
team_networking: NilashishC Qalthos danielmellado ganeshrn justjais trishnaguha sganesh-infoblox privateip
team_opennebula: ilicmilan meerkampdvv rsmontero xorel nilsding
team_oracle: manojmeda mross22 nalsaber
team_purestorage: bannaych dnix101 genegr lionmax opslounge raekins sdodsley sile16
team_redfish: mraineri tomasg2012 xmadsen renxulei rajeevkallur bhavya06 jyundt
team_rhsm: cnsnyder ptoscano
team_scaleway: remyleone abarbare
team_solaris: bcoca fishman jasperla jpdasma scathatheworm troy2914 xen0l
team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l
team_suse: commel evrardjp lrupp AnderEnder alxgu andytom sealor
team_virt: joshainglis karmab Thulium-Drake Ajpantuso
team_wdc: mikemoerk


@@ -29,8 +29,8 @@ jobs:
strategy:
matrix:
ansible:
- '2.16'
- '2.17'
- '2.18'
runs-on: ubuntu-latest
steps:
- name: Perform sanity testing
@@ -58,18 +58,18 @@ jobs:
exclude:
- ansible: ''
include:
- ansible: '2.16'
python: '2.7'
- ansible: '2.16'
python: '3.6'
- ansible: '2.16'
python: '3.11'
- ansible: '2.17'
python: '3.7'
- ansible: '2.17'
python: '3.10'
- ansible: '2.17'
python: '3.12'
- ansible: '2.18'
python: '3.8'
- ansible: '2.18'
python: '3.11'
- ansible: '2.18'
python: '3.13'
steps:
- name: >-
@@ -105,6 +105,44 @@ jobs:
exclude:
- ansible: ''
include:
# 2.16
# CentOS 7 does not work in GHA, that's why it's not listed here.
- ansible: '2.16'
docker: fedora38
python: ''
target: azp/posix/1/
- ansible: '2.16'
docker: fedora38
python: ''
target: azp/posix/2/
- ansible: '2.16'
docker: fedora38
python: ''
target: azp/posix/3/
- ansible: '2.16'
docker: opensuse15
python: ''
target: azp/posix/1/
- ansible: '2.16'
docker: opensuse15
python: ''
target: azp/posix/2/
- ansible: '2.16'
docker: opensuse15
python: ''
target: azp/posix/3/
- ansible: '2.16'
docker: alpine3
python: ''
target: azp/posix/1/
- ansible: '2.16'
docker: alpine3
python: ''
target: azp/posix/2/
- ansible: '2.16'
docker: alpine3
python: ''
target: azp/posix/3/
# 2.17
- ansible: '2.17'
docker: fedora39
@@ -118,18 +156,6 @@ jobs:
docker: fedora39
python: ''
target: azp/posix/3/
- ansible: '2.17'
docker: ubuntu2004
python: ''
target: azp/posix/1/
- ansible: '2.17'
docker: ubuntu2004
python: ''
target: azp/posix/2/
- ansible: '2.17'
docker: ubuntu2004
python: ''
target: azp/posix/3/
- ansible: '2.17'
docker: alpine319
python: ''
@@ -142,61 +168,18 @@ jobs:
docker: alpine319
python: ''
target: azp/posix/3/
# Right now all generic tests are disabled. Uncomment when at least one of them is re-enabled.
# - ansible: '2.17'
# docker: default
# python: '3.7'
# target: azp/generic/1/
# - ansible: '2.17'
# docker: default
# python: '3.12'
# target: azp/generic/1/
# 2.18
- ansible: '2.18'
docker: fedora40
- ansible: '2.17'
docker: ubuntu2004
python: ''
target: azp/posix/1/
- ansible: '2.18'
docker: fedora40
- ansible: '2.17'
docker: ubuntu2004
python: ''
target: azp/posix/2/
- ansible: '2.18'
docker: fedora40
- ansible: '2.17'
docker: ubuntu2004
python: ''
target: azp/posix/3/
- ansible: '2.18'
docker: ubuntu2404
python: ''
target: azp/posix/1/
- ansible: '2.18'
docker: ubuntu2404
python: ''
target: azp/posix/2/
- ansible: '2.18'
docker: ubuntu2404
python: ''
target: azp/posix/3/
- ansible: '2.18'
docker: alpine320
python: ''
target: azp/posix/1/
- ansible: '2.18'
docker: alpine320
python: ''
target: azp/posix/2/
- ansible: '2.18'
docker: alpine320
python: ''
target: azp/posix/3/
# Right now all generic tests are disabled. Uncomment when at least one of them is re-enabled.
# - ansible: '2.18'
# docker: default
# python: '3.8'
# target: azp/generic/1/
# - ansible: '2.18'
# docker: default
# python: '3.13'
# target: azp/generic/1/
steps:
- name: >-


@@ -1,34 +0,0 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
name: nox
'on':
push:
branches:
- main
- stable-*
paths:
- docs/**
pull_request:
paths:
- docs/**
# Run CI once per day (at 08:00 UTC)
schedule:
- cron: '0 8 * * *'
workflow_dispatch:
jobs:
nox:
runs-on: ubuntu-latest
name: "Validate generated Ansible output"
steps:
- name: Check out collection
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Run nox
uses: ansible-community/antsibull-nox@main
with:
sessions: ansible-output

.mypy.ini

@@ -1,242 +0,0 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
[mypy]
# check_untyped_defs = True
# disallow_untyped_defs = True
# strict = True -- only try to enable once everything (including dependencies!) is typed
strict_equality = True
strict_bytes = True
warn_redundant_casts = True
# warn_return_any = True
warn_unreachable = True
exclude = tests/integration/targets/django_.*/files/.*
[mypy-ansible.*]
# ansible-core has partial typing information
follow_untyped_imports = True
# The following imports are Python packages that:
# 1. We do not install (we can't install everything!);
# 2. That have type stubs, but we don't install them (again, we can't install everything!); or
# 3. That have no types and type stubs.
[mypy-aerospike.*]
ignore_missing_imports = True
[mypy-antsibull_nox.*]
ignore_missing_imports = True
[mypy-asyncore.*]
ignore_missing_imports = True
[mypy-boto3.*]
ignore_missing_imports = True
[mypy-bs4.*]
ignore_missing_imports = True
[mypy-cgi.*]
ignore_missing_imports = True
[mypy-chef.*]
ignore_missing_imports = True
[mypy-consul.*]
ignore_missing_imports = True
[mypy-credstash.*]
ignore_missing_imports = True
[mypy-crypt.*]
ignore_missing_imports = True
[mypy-daemon.*]
ignore_missing_imports = True
[mypy-datadog.*]
ignore_missing_imports = True
[mypy-dbus.*]
ignore_missing_imports = True
[mypy-delinea.*]
ignore_missing_imports = True
[mypy-dnf.*]
ignore_missing_imports = True
[mypy-dnsimple.*]
ignore_missing_imports = True
[mypy-etcd3.*]
ignore_missing_imports = True
[mypy-flatdict.*]
ignore_missing_imports = True
[mypy-footmark.*]
ignore_missing_imports = True
[mypy-fqdn.*]
ignore_missing_imports = True
[mypy-func.*]
ignore_missing_imports = True
[mypy-gi.*]
ignore_missing_imports = True
[mypy-github3.*]
ignore_missing_imports = True
[mypy-gssapi.*]
ignore_missing_imports = True
[mypy-hashids.*]
ignore_missing_imports = True
[mypy-heroku3.*]
ignore_missing_imports = True
[mypy-hpe3parclient.*]
ignore_missing_imports = True
[mypy-hpe3par_sdk.*]
ignore_missing_imports = True
[mypy-hpilo.*]
ignore_missing_imports = True
[mypy-hpOneView.*]
ignore_missing_imports = True
[mypy-httmock.*] # TODO!
ignore_missing_imports = True
[mypy-influxdb.*]
ignore_missing_imports = True
[mypy-jc.*]
ignore_missing_imports = True
[mypy-jenkins.*]
ignore_missing_imports = True
[mypy-jmespath.*]
ignore_missing_imports = True
[mypy-jsonpatch.*]
ignore_missing_imports = True
[mypy-kazoo.*]
ignore_missing_imports = True
[mypy-keyring.*]
ignore_missing_imports = True
[mypy-keystoneauth1.*]
ignore_missing_imports = True
[mypy-layman.*]
ignore_missing_imports = True
[mypy-ldap.*]
ignore_missing_imports = True
[mypy-legacycrypt.*]
ignore_missing_imports = True
[mypy-libcloud.*]
ignore_missing_imports = True
[mypy-linode.*]
ignore_missing_imports = True
[mypy-linode_api4.*]
ignore_missing_imports = True
[mypy-lmdb.*]
ignore_missing_imports = True
[mypy-logdna.*]
ignore_missing_imports = True
[mypy-logstash.*]
ignore_missing_imports = True
[mypy-lxc.*]
ignore_missing_imports = True
[mypy-manageiq_client.*]
ignore_missing_imports = True
[mypy-matrix_client.*]
ignore_missing_imports = True
[mypy-memcache.*]
ignore_missing_imports = True
[mypy-nc_dnsapi.*]
ignore_missing_imports = True
[mypy-nomad.*]
ignore_missing_imports = True
[mypy-nopackagewiththisname.*]
ignore_missing_imports = True
[mypy-nox.*]
ignore_missing_imports = True
[mypy-oci.*]
ignore_missing_imports = True
[mypy-oneandone.*]
ignore_missing_imports = True
[mypy-opentelemetry.*]
ignore_missing_imports = True
[mypy-ovh.*]
ignore_missing_imports = True
[mypy-ovirtsdk.*]
ignore_missing_imports = True
[mypy-packet.*]
ignore_missing_imports = True
[mypy-paho.*]
ignore_missing_imports = True
[mypy-pam.*]
ignore_missing_imports = True
[mypy-pdpyras.*]
ignore_missing_imports = True
[mypy-petname.*]
ignore_missing_imports = True
[mypy-pingdom.*]
ignore_missing_imports = True
[mypy-pkg_resources.*]
ignore_missing_imports = True
[mypy-portage.*]
ignore_missing_imports = True
[mypy-potatoes_that_will_never_be_there.*]
ignore_missing_imports = True
[mypy-prettytable.*]
ignore_missing_imports = True
[mypy-pubnub_blocks_client.*]
ignore_missing_imports = True
[mypy-pushbullet.*]
ignore_missing_imports = True
[mypy-pycdlib.*]
ignore_missing_imports = True
[mypy-pyghmi.*]
ignore_missing_imports = True
[mypy-pylxca.*]
ignore_missing_imports = True
[mypy-pymssql.*]
ignore_missing_imports = True
[mypy-pyodbc.*]
ignore_missing_imports = True
[mypy-pyone.*]
ignore_missing_imports = True
[mypy-pypureomapi.*]
ignore_missing_imports = True
[mypy-pysnmp.*]
ignore_missing_imports = True
[mypy-pyxcli.*]
ignore_missing_imports = True
[mypy-rpm.*]
ignore_missing_imports = True
[mypy-ruamel.yaml.*]
ignore_missing_imports = True
[mypy-salt.*]
ignore_missing_imports = True
[mypy-selinux.*]
ignore_missing_imports = True
[mypy-semantic_version.*]
ignore_missing_imports = True
[mypy-sendgrid.*]
ignore_missing_imports = True
[mypy-seobject.*]
ignore_missing_imports = True
[mypy-sha.*]
ignore_missing_imports = True
[mypy-smtpd.*]
ignore_missing_imports = True
[mypy-smtpd_tls.*]
ignore_missing_imports = True
[mypy-SoftLayer.*]
ignore_missing_imports = True
[mypy-spotinst_sdk.*]
ignore_missing_imports = True
[mypy-statsd.*]
ignore_missing_imports = True
[mypy-storops.*]
ignore_missing_imports = True
[mypy-taiga.*]
ignore_missing_imports = True
[mypy-thycotic.*]
ignore_missing_imports = True
[mypy-tomlkit.*]
ignore_missing_imports = True
[mypy-univention.*]
ignore_missing_imports = True
[mypy-vexatapi.*]
ignore_missing_imports = True
[mypy-voluptuous.*]
ignore_missing_imports = True
[mypy-websocket.*]
ignore_missing_imports = True
[mypy-XenAPI.*]
ignore_missing_imports = True
[mypy-xkcdpass.*]
ignore_missing_imports = True
[mypy-xmljson.*]
ignore_missing_imports = True
[mypy-xmltodict.*]
ignore_missing_imports = True
[mypy-xmpp.*]
ignore_missing_imports = True


@@ -1,13 +0,0 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Alexei Znamensky <russoz@gmail.com>
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.15.9
hooks:
# Run the linter.
- id: ruff-check
# Run the formatter.
- id: ruff-format



@@ -39,7 +39,7 @@ Please read our ['Contributing to collections'](https://docs.ansible.com/project
* Make sure your PR includes a [changelog fragment](https://docs.ansible.com/projects/ansible/devel/community/collection_development_process.html#creating-a-changelog-fragment).
* You must not include a fragment for new modules or new plugins. Also you shouldn't include one for docs-only changes. (If you're not sure, simply don't include one, we'll tell you whether one is needed or not :) )
* Please always include a link to the pull request itself, and if the PR is about an issue, also a link to the issue. Also make sure the fragment ends with a period, and begins with a lower-case letter after `-`. (Again, if you don't do this, we'll add suggestions to fix it, so don't worry too much :) )
* Note that we format the code with `ruff format`. If your change does not match the formatter's expectations, CI will fail and your PR will not get merged. See below for how to format code with antsibull-nox.
* Avoid reformatting unrelated parts of the codebase in your PR. These types of changes will likely be requested for reversion, create additional work for reviewers, and may cause approval to be delayed.
You can also read the Ansible community's [Quick-start development guide](https://docs.ansible.com/projects/ansible/devel/community/create_pr_quick_start.html).
@@ -49,24 +49,11 @@ If you want to test a PR locally, refer to [our testing guide](https://docs.ansi
If you find any inconsistencies or places in this document which can be improved, feel free to raise an issue or pull request to fix it.
## Format code; and run sanity or unit tests locally (with antsibull-nox)
## Run sanity or unit locally (with antsibull-nox)
The easiest way to format the code, and to run sanity and unit tests locally is to use [antsibull-nox](https://docs.ansible.com/projects/antsibull-nox/).
The easiest way to run sanity and unit tests locally is to use [antsibull-nox](https://docs.ansible.com/projects/antsibull-nox/).
(If you have [nox](https://nox.thea.codes/en/stable/) installed, it will automatically install antsibull-nox in a virtual environment for you.)
### Format code
The following commands show how to run ruff format:
```.bash
# Run all configured formatters:
nox -Re formatters
# If you notice discrepancies between your local formatter and CI, you might
# need to re-generate the virtual environment:
nox -e formatters
```
### Sanity tests
The following commands show how to run ansible-test sanity tests:
@@ -133,7 +120,6 @@ ansible-test sanity --docker -v plugins/modules/system/pids.py tests/integration
Note that for running unit tests, you need to install required collections in the same folder structure that `community.general` is checked out in.
Right now, you need to install [`community.internal_test_tools`](https://github.com/ansible-collections/community.internal_test_tools).
If you want to use the latest version from GitHub, you can run:
```
git clone https://github.com/ansible-collections/community.internal_test_tools.git ~/dev/ansible_collections/community/internal_test_tools
```
@@ -156,7 +142,6 @@ ansible-test units --docker -v --python 3.8 tests/unit/plugins/modules/net_tools
Note that for running integration tests, you need to install required collections in the same folder structure that `community.general` is checked out in.
Right now, depending on the test, you need to install [`ansible.posix`](https://github.com/ansible-collections/ansible.posix), [`community.crypto`](https://github.com/ansible-collections/community.crypto), and [`community.docker`](https://github.com/ansible-collections/community.docker):
If you want to use the latest versions from GitHub, you can run:
```
mkdir -p ~/dev/ansible_collections/ansible
git clone https://github.com/ansible-collections/ansible.posix.git ~/dev/ansible_collections/ansible/posix
@@ -169,13 +154,11 @@ The following commands show how to run integration tests:
#### In Docker
Integration tests on Docker have the following parameters:
- `image_name` (required): The name of the Docker image. To get the list of supported Docker images, run
`ansible-test integration --help` and look for _target docker images_.
- `test_name` (optional): The name of the integration test.
For modules, this equals the short name of the module; for example, `pacman` in case of `community.general.pacman`.
For plugins, the plugin type is added before the plugin's short name, for example `callback_yaml` for the `community.general.yaml` callback.
```.bash
# Test all plugins/modules on fedora40
ansible-test integration -v --docker fedora40
@@ -196,31 +179,6 @@ ansible-test integration -v lookup_flattened
If you are unsure about the integration test target name for a module or plugin, you can take a look in `tests/integration/targets/`. Tests for plugins have the plugin type prepended.
## Devcontainer
Since community.general 12.2.0, the project repository supports [devcontainers](https://containers.dev/). In short, it is a standard mechanism to
create a container that is then used during the development cycle. Many tools are pre-installed in the container and are already available
to you as a developer. A number of different IDEs support that configuration, the most prominent ones being VSCode and PyCharm.
See the files under [.devcontainer](.devcontainer) for details on what is deployed inside that container.
Beware of:
- By default, the devcontainer installs the latest version of `ansible-core`.
When testing your changes locally, keep in mind that the collection must support older versions of
`ansible-core` and, depending on what is being tested, results may vary.
- Integration tests executed directly inside the devcontainer without isolation (see above) may fail if
they expect to be run in full-fledged VMs. On the other hand, the devcontainer setup allows running
containers inside the container (the `docker-in-docker` feature).
- The devcontainer is built with a directory structure such that
`.../ansible_collections/community/general` contains the project repository, so `ansible-test` and
other standard tools should work without any additional setup.
- By default, the devcontainer installs `pre-commit` and configures it to perform `ruff check` and
`ruff format` on the Python files prior to committing. That configuration is also used by
`git` outside the devcontainer. To prevent errors, you have to either install `pre-commit` on
your computer, outside the devcontainer, or run `pre-commit uninstall` from within the devcontainer
before quitting it.
## Creating new modules or plugins
Creating new modules and plugins requires a bit more work than other Pull Requests.
@@ -230,7 +188,7 @@ Creating new modules and plugins requires a bit more work than other Pull Reques
2. Please do not add more than one plugin/module in one PR, especially if it is the first plugin/module you are contributing.
That makes it easier for reviewers, and increases the chance that your PR will get merged. If you plan to contribute a group
of plugins/modules (say, more than a module and a corresponding `_info` module), please mention that in the first PR. In
of plugins/modules (say, more than a module and a corresponding ``_info`` module), please mention that in the first PR. In
such cases, you also have to think whether it is better to publish the group of plugins/modules in a new collection.
3. When creating a new module or plugin, please make sure that you follow various guidelines:

LICENSES/PSF-2.0.txt

@@ -0,0 +1,48 @@
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.


@@ -7,9 +7,9 @@ SPDX-License-Identifier: GPL-3.0-or-later
# Community General Collection
[![Documentation](https://img.shields.io/badge/docs-brightgreen.svg)](https://docs.ansible.com/projects/ansible/devel/collections/community/general/)
[![Build Status](https://dev.azure.com/ansible/community.general/_apis/build/status/CI?branchName=stable-12)](https://dev.azure.com/ansible/community.general/_build?definitionId=31)
[![EOL CI](https://github.com/ansible-collections/community.general/actions/workflows/ansible-test.yml/badge.svg?branch=stable-12)](https://github.com/ansible-collections/community.general/actions)
[![Nox CI](https://github.com/ansible-collections/community.general/actions/workflows/nox.yml/badge.svg?branch=stable-12)](https://github.com/ansible-collections/community.general/actions)
[![Build Status](https://dev.azure.com/ansible/community.general/_apis/build/status/CI?branchName=stable-11)](https://dev.azure.com/ansible/community.general/_build?definitionId=31)
[![EOL CI](https://github.com/ansible-collections/community.general/actions/workflows/ansible-test.yml/badge.svg?branch=stable-11)](https://github.com/ansible-collections/community.general/actions)
[![Nox CI](https://github.com/ansible-collections/community.general/actions/workflows/nox.yml/badge.svg?branch=stable-11)](https://github.com/ansible-collections/community.general/actions)
[![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.general)](https://codecov.io/gh/ansible-collections/community.general)
[![REUSE status](https://api.reuse.software/badge/github.com/ansible-collections/community.general)](https://api.reuse.software/info/github.com/ansible-collections/community.general)
@@ -39,7 +39,7 @@ For more information about communication, see the [Ansible communication guide](
## Tested with Ansible
Tested with the current ansible-core 2.17, ansible-core 2.18, ansible-core 2.19, ansible-core 2.20, ansible-core 2.21 releases and the current development version of ansible-core. Ansible-core versions before 2.17.0 are not supported. This includes all ansible-base 2.10 and Ansible 2.9 releases.
Tested with the current ansible-core 2.16, ansible-core 2.17, ansible-core 2.18, ansible-core 2.19, ansible-core 2.20 releases and the current development version of ansible-core. Ansible-core versions before 2.16.0 are not supported. This includes all ansible-base 2.10 and Ansible 2.9 releases.
## External requirements
@@ -86,13 +86,13 @@ We are actively accepting new contributors.
All types of contributions are very welcome.
You don't know how to start? Refer to our [contribution guide](https://github.com/ansible-collections/community.general/blob/stable-12/CONTRIBUTING.md)!
You don't know how to start? Refer to our [contribution guide](https://github.com/ansible-collections/community.general/blob/main/CONTRIBUTING.md)!
The current maintainers are listed in the [commit-rights.md](https://github.com/ansible-collections/community.general/blob/stable-12/commit-rights.md#people) file. If you have questions or need help, feel free to mention them in the proposals.
The current maintainers are listed in the [commit-rights.md](https://github.com/ansible-collections/community.general/blob/main/commit-rights.md#people) file. If you have questions or need help, feel free to mention them in the proposals.
You can find more information in the [developer guide for collections](https://docs.ansible.com/projects/ansible/devel/dev_guide/developing_collections.html#contributing-to-collections), and in the [Ansible Community Guide](https://docs.ansible.com/projects/ansible/latest/community/index.html).
Also for some notes specific to this collection see [our CONTRIBUTING documentation](https://github.com/ansible-collections/community.general/blob/stable-12/CONTRIBUTING.md).
Also for some notes specific to this collection see [our CONTRIBUTING documentation](https://github.com/ansible-collections/community.general/blob/main/CONTRIBUTING.md).
### Running tests
@@ -102,8 +102,8 @@ See [here](https://docs.ansible.com/projects/ansible/devel/dev_guide/developing_
To learn how to maintain / become a maintainer of this collection, refer to:
* [Committer guidelines](https://github.com/ansible-collections/community.general/blob/stable-12/commit-rights.md).
* [Maintainer guidelines](https://github.com/ansible/community-docs/blob/stable-12/maintaining.rst).
* [Committer guidelines](https://github.com/ansible-collections/community.general/blob/main/commit-rights.md).
* [Maintainer guidelines](https://github.com/ansible/community-docs/blob/main/maintaining.rst).
It is necessary for maintainers of this collection to be subscribed to:
@@ -118,7 +118,7 @@ See the [Releasing guidelines](https://github.com/ansible/community-docs/blob/ma
## Release notes
See the [changelog](https://github.com/ansible-collections/community.general/blob/stable-12/CHANGELOG.md).
See the [changelog](https://github.com/ansible-collections/community.general/blob/stable-11/CHANGELOG.md).
## Roadmap
@@ -137,8 +137,8 @@ See [this issue](https://github.com/ansible-collections/community.general/issues
This collection is primarily licensed and distributed as a whole under the GNU General Public License v3.0 or later.
See [LICENSES/GPL-3.0-or-later.txt](https://github.com/ansible-collections/community.general/blob/stable-12/COPYING) for the full text.
See [LICENSES/GPL-3.0-or-later.txt](https://github.com/ansible-collections/community.general/blob/stable-11/COPYING) for the full text.
Parts of the collection are licensed under the [BSD 2-Clause license](https://github.com/ansible-collections/community.general/blob/stable-12/LICENSES/BSD-2-Clause.txt) and the [MIT license](https://github.com/ansible-collections/community.general/blob/stable-12/LICENSES/MIT.txt).
Parts of the collection are licensed under the [BSD 2-Clause license](https://github.com/ansible-collections/community.general/blob/stable-11/LICENSES/BSD-2-Clause.txt), the [MIT license](https://github.com/ansible-collections/community.general/blob/stable-11/LICENSES/MIT.txt), and the [PSF 2.0 license](https://github.com/ansible-collections/community.general/blob/stable-11/LICENSES/PSF-2.0.txt).
All files have a machine-readable `SPDX-License-Identifier:` comment denoting their respective license(s) or an equivalent entry in an accompanying `.license` file. Only changelog fragments (which will not be part of a release) are covered by a blanket statement in `REUSE.toml`. This conforms to the [REUSE specification](https://reuse.software/spec/).


@@ -20,39 +20,15 @@ stable_branches = [ "stable-*" ]
[sessions]
[sessions.lint]
code_files = ["."] # consider all Python files in the collection
run_isort = false
run_black = false
run_ruff_autofix = true
ruff_autofix_config = "ruff.toml"
ruff_autofix_select = [
"I",
"RUF022",
]
run_ruff_check = true
ruff_check_config = "ruff.toml"
run_ruff_format = true
ruff_format_config = "ruff.toml"
run_flake8 = false
run_pylint = false
run_yamllint = true
yamllint_config = ".yamllint"
# yamllint_config_plugins = ".yamllint-docs"
# yamllint_config_plugins_examples = ".yamllint-examples"
run_mypy = true
mypy_ansible_core_package = "ansible-core>=2.19.0"
mypy_config = ".mypy.ini"
mypy_extra_deps = [
"cryptography",
"dnspython",
"lxml-stubs",
"types-mock",
"types-paramiko",
"types-passlib",
"types-psutil",
"types-PyYAML",
"types-requests",
]
run_mypy = false
[sessions.docs_check]
validate_collection_refs="all"



@@ -1,2 +0,0 @@
bugfixes:
- scaleway_image_info, scaleway_ip_info, scaleway_organization_info, scaleway_security_group_info, scaleway_server_info, scaleway_snapshot_info, scaleway_volume_info - fix ``NoneType`` error when the Scaleway API returns an empty or non-JSON response body (https://github.com/ansible-collections/community.general/issues/11361, https://github.com/ansible-collections/community.general/pull/11918).


@@ -1,2 +0,0 @@
minor_changes:
- "mattermost, rocketchat, slack - update default ``icon_url`` to ansible favicon (https://github.com/ansible-collections/community.general/pull/11909)."


@@ -1 +0,0 @@
release_summary: Regular bugfix release.


@@ -5,13 +5,3 @@
changelog:
write_changelog: true
ansible_output:
global_env:
ANSIBLE_STDOUT_CALLBACK: community.general.tasks_only
ANSIBLE_COLLECTIONS_TASKS_ONLY_NUMBER_OF_COLUMNS: 90
global_postprocessors:
reformat-yaml:
command:
- python
- docs/docsite/reformat-yaml.py


@@ -8,9 +8,6 @@ sections:
toctree:
- filter_guide
- test_guide
- title: Deployment Guides
toctree:
- guide_ee
- title: Technology Guides
toctree:
- guide_alicloud


@@ -1,26 +0,0 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
import sys
from io import StringIO
from ruamel.yaml import YAML # type: ignore[import-not-found]
def main() -> None:
yaml = YAML(typ="rt")
yaml.indent(mapping=2, sequence=4, offset=2)
# Load
data = yaml.load(sys.stdin)
# Dump
sio = StringIO()
yaml.dump(data, sio)
print(sio.getvalue().rstrip("\n"))
if __name__ == "__main__":
main()


@@ -13,34 +13,6 @@ Use the filter :ansplugin:`community.general.keep_keys#filter` if you have a lis
Let us use the below list in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
input:
@@ -65,48 +37,24 @@ Let us use the below list in the following examples:
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k0_x0: A0
k1_x1: B0
- k0_x0: A1
k1_x1: B1
- {k0_x0: A0, k1_x1: B0}
- {k0_x0: A1, k1_x1: B1}
.. versionadded:: 9.1.0
* The results of the below examples 1-5 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: ['k0_x0', 'k1_x1']
result: "{{ input | community.general.keep_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k0_x0: A0
k1_x1: B0
- k0_x0: A1
k1_x1: B1
- {k0_x0: A0, k1_x1: B0}
- {k0_x0: A1, k1_x1: B1}
1. Match keys that equal any of the items in the target.
@@ -157,28 +105,12 @@ gives
* The results of the below examples 6-9 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: k0_x0
result: "{{ input | community.general.keep_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k0_x0: A0
- k0_x0: A1
- {k0_x0: A0}
- {k0_x0: A1}
6. Match keys that equal the target.
@@ -216,3 +148,4 @@ gives
mp: regex
target: ^.*0_x.*$
result: "{{ input | community.general.keep_keys(target=target, matching_parameter=mp) }}"


@@ -13,34 +13,6 @@ Use the filter :ansplugin:`community.general.remove_keys#filter` if you have a l
Let us use the below list in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
input:
@@ -65,19 +37,13 @@ Let us use the below list in the following examples:
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k2_x2:
- C0
- k2_x2: [C0]
k3_x3: foo
- k2_x2:
- C1
- k2_x2: [C1]
k3_x3: bar
@@ -85,31 +51,13 @@ gives
* The results of the below examples 1-5 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: ['k0_x0', 'k1_x1']
result: "{{ input | community.general.remove_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k2_x2:
- C0
- k2_x2: [C0]
k3_x3: foo
- k2_x2:
- C1
- k2_x2: [C1]
k3_x3: bar
@@ -161,33 +109,15 @@ gives
* The results of the below examples 6-9 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: k0_x0
result: "{{ input | community.general.remove_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k1_x1: B0
k2_x2:
- C0
k2_x2: [C0]
k3_x3: foo
- k1_x1: B1
k2_x2:
- C1
k2_x2: [C1]
k3_x3: bar
@@ -226,3 +156,4 @@ gives
mp: regex
target: ^.*0_x.*$
result: "{{ input | community.general.remove_keys(target=target, matching_parameter=mp) }}"


@@ -13,34 +13,6 @@ Use the filter :ansplugin:`community.general.replace_keys#filter` if you have a
Let us use the below list in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
input:
@@ -68,23 +40,17 @@ Let us use the below list in the following examples:
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- a0: A0
a1: B0
k2_x2:
- C0
k2_x2: [C0]
k3_x3: foo
- a0: A1
a1: B1
k2_x2:
- C1
k2_x2: [C1]
k3_x3: bar
@@ -92,37 +58,17 @@ gives
* The results of the below examples 1-3 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: starts_with
target:
- {after: a0, before: k0}
- {after: a1, before: k1}
result: "{{ input | community.general.replace_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- a0: A0
a1: B0
k2_x2:
- C0
k2_x2: [C0]
k3_x3: foo
- a0: A1
a1: B1
k2_x2:
- C1
k2_x2: [C1]
k3_x3: bar
@@ -165,29 +111,12 @@ gives
* The results of the below examples 4-5 are the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: regex
target:
- {after: X, before: ^.*_x.*$}
result: "{{ input | community.general.replace_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- X: foo
- X: bar
- {X: foo}
- {X: bar}
4. If more keys match the same ``before`` attribute, the last one will be used.
@@ -216,11 +145,6 @@ gives
6. If there are multiple matches for a key, the first one will be used.
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
.. code-block:: yaml
:emphasize-lines: 1-
@@ -241,17 +165,11 @@ gives
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- X: A
bbb1: B
ccc1: C
- X: D
bbb2: E
ccc2: F
- {X: A, bbb1: B, ccc1: C}
- {X: D, bbb2: E, ccc2: F}
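The renaming behavior can likewise be sketched in plain Python. This is an illustrative simplification, not the collection's implementation, and it only covers the basic case where the first matching ``before`` entry renames a key:

```python
import re


def replace_keys(items, target, matching_parameter="equal"):
    """Rename keys according to a list of {before, after} mappings."""
    matchers = {
        "equal": lambda key, pat: key == pat,
        "starts_with": lambda key, pat: key.startswith(pat),
        "regex": lambda key, pat: re.match(pat, key) is not None,
    }
    match = matchers[matching_parameter]

    def rename(key):
        # First target entry that matches the key wins.
        for entry in target:
            if match(key, entry["before"]):
                return entry["after"]
        return key

    return [{rename(k): v for k, v in item.items()} for item in items]


input_list = [
    {"k0_x0": "A0", "k1_x1": "B0", "k2_x2": ["C0"], "k3_x3": "foo"},
    {"k0_x0": "A1", "k1_x1": "B1", "k2_x2": ["C1"], "k3_x3": "bar"},
]
# Rename by prefix: k0* -> a0, k1* -> a1 (examples 1-3 above).
print(replace_keys(
    input_list,
    [{"before": "k0", "after": "a0"}, {"before": "k1", "after": "a1"}],
    matching_parameter="starts_with",
))
```

Note that when a regex such as ``^.*_x.*$`` renames every key to the same name, the resulting dictionary keeps only the last value, matching examples 4-5 above.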


@@ -20,17 +20,6 @@ The :ansplugin:`community.general.counter filter plugin <community.general.count
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Count character occurrences in a string] ********************************************
@@ -83,20 +72,9 @@ This plugin is useful for selecting resources based on current allocation:
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Get ID of SCSI controller(s) with less than 4 disks attached and choose the one with the least disks] ***
TASK [Get ID of SCSI controller(s) with less than 4 disks attached and choose the one with the least disks]
ok: [localhost] => {
"msg": "scsi_2"
}
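The counting behind the "Count character occurrences in a string" task above maps directly to Python's ``collections.Counter``; a minimal sketch of the filter's semantics:

```python
from collections import Counter

# Rough equivalent of the community.general.counter filter: count how
# often each element occurs in a sequence and return a dictionary.
def counter(sequence):
    return dict(Counter(sequence))

out = counter("abca")
```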


@@ -31,27 +31,16 @@ You can use the :ansplugin:`community.general.dict_kv filter <community.general.
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Create a single-entry dictionary] ***************************************************
TASK [Create a single-entry dictionary] **************************************************
ok: [localhost] => {
"msg": {
"thatsmyvar": "myvalue"
}
}
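The single-entry dictionary produced above is simple enough to sketch in one line of Python; the filter wraps a value in a dictionary under the given key:

```python
# Sketch of the community.general.dict_kv filter: wrap a value in a
# single-entry dictionary under the given key.
def dict_kv(value, key):
    return {key: value}

single = dict_kv("myvalue", "thatsmyvar")
# Mapping it over a list yields a list of single-entry dictionaries,
# as in the second task above.
servers = [dict_kv(name, "server") for name in ["server1", "server2"]]
```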
TASK [Create a list of dictionaries where the 'server' field is taken from a list] ********
TASK [Create a list of dictionaries where the 'server' field is taken from a list] *******
ok: [localhost] => {
"msg": [
{
@@ -98,20 +87,9 @@ If you need to convert a list of key-value pairs to a dictionary, you can use th
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Create a dictionary with the dict function] *****************************************
TASK [Create a dictionary with the dict function] ****************************************
ok: [localhost] => {
"msg": {
"1": 2,
@@ -119,7 +97,7 @@ This produces:
}
}
TASK [Create a dictionary with the community.general.dict filter] *************************
TASK [Create a dictionary with the community.general.dict filter] ************************
ok: [localhost] => {
"msg": {
"1": 2,
@@ -127,7 +105,7 @@ This produces:
}
}
TASK [Create a list of dictionaries with map and the community.general.dict filter] *******
TASK [Create a list of dictionaries with map and the community.general.dict filter] ******
ok: [localhost] => {
"msg": [
{


@@ -22,49 +22,6 @@ One example is ``ansible_facts.mounts``, which is a list of dictionaries where e
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
skip_first_lines: 3 # the set_fact task
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- ansible.builtin.set_fact:
ansible_facts:
mounts:
- block_available: 2000
block_size: 4096
block_total: 2345
block_used: 345
device: "/dev/sda1"
fstype: "ext4"
inode_available: 500
inode_total: 512
inode_used: 12
mount: "/boot"
options: "rw,relatime,data=ordered"
size_available: 56821
size_total: 543210
uuid: "ab31cade-d9c1-484d-8482-8a4cbee5241a"
- block_available: 1234
block_size: 4096
block_total: 12345
block_used: 11111
device: "/dev/sda2"
fstype: "ext4"
inode_available: 1111
inode_total: 1234
inode_used: 123
mount: "/"
options: "rw,relatime"
size_available: 42143
size_total: 543210
uuid: "abcdef01-2345-6789-0abc-def012345678"
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Output mount facts grouped by device name] ******************************************
@@ -122,7 +79,7 @@ This produces:
"options": "rw,relatime",
"size_available": 42143,
"size_total": 543210,
"uuid": "abcdef01-2345-6789-0abc-def012345678"
"uuid": "bdf50b7d-4859-40af-8665-c637ee7a7808"
},
"/boot": {
"block_available": 2000,


@@ -21,34 +21,6 @@ These filters preserve the item order, eliminate duplicates and are an extended
Let us use the lists below in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
A: [9, 5, 7, 1, 9, 4, 10, 5, 9, 7]
@@ -63,22 +35,9 @@ The union of ``A`` and ``B`` can be written as:
This statement produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
result:
- 9
- 5
- 7
- 1
- 4
- 10
- 2
- 8
- 3
result: [9, 5, 7, 1, 4, 10, 2, 8, 3]
If you want to calculate the intersection of ``A``, ``B`` and ``C``, you can use the following statement:
@@ -100,14 +59,9 @@ or
All three statements are equivalent and give:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
result:
- 1
result: [1]
.. note:: Be aware that in most cases, filter calls without any arguments require ``flatten=true``; otherwise the input is returned unchanged. The reason is that the input is treated as a variable argument and is wrapped in an additional outer list. ``flatten=true`` ensures that this list is removed before the input is processed by the filter logic.
@@ -121,14 +75,7 @@ For example, the symmetric difference of ``A``, ``B`` and ``C`` may be written a
This gives:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
result:
- 5
- 8
- 3
- 1
result: [5, 8, 3, 1]
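The order-preserving, duplicate-eliminating behaviour of these filters can be sketched in plain Python. The lists used here are small hypothetical examples, not the ``A``, ``B``, ``C`` from above, and the multi-list symmetric difference is modelled as a left fold over pairs, which is an assumption about the filter's semantics:

```python
# Order-preserving set operations in the spirit of
# community.general.lists_union / lists_intersect /
# lists_symmetric_difference.
def union(*lists):
    seen = []
    for lst in lists:
        for item in lst:
            if item not in seen:
                seen.append(item)
    return seen

def intersect(*lists):
    return [item for item in union(lists[0]) if all(item in lst for lst in lists[1:])]

def symmetric_difference(*lists):
    result = lists[0]
    for lst in lists[1:]:
        result = ([x for x in union(result) if x not in lst]
                  + [x for x in union(lst) if x not in result])
    return result

A = [1, 2, 3, 2]
B = [3, 4, 1]
```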


@@ -12,34 +12,6 @@ If you have two or more lists of dictionaries and want to combine them into a li
Let us use the lists below in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: list3
.. code-block:: yaml
list1:
@@ -62,22 +34,13 @@ In the example below the lists are merged by the attribute ``name``:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- extra: false
name: bar
- name: baz
path: /baz
- extra: true
name: foo
path: /foo
- extra: true
name: meh
- {name: bar, extra: false}
- {name: baz, path: /baz}
- {name: foo, extra: true, path: /foo}
- {name: meh, extra: true}
.. versionadded:: 2.0.0
@@ -93,22 +56,13 @@ It is possible to use a list of lists as an input of the filter:
This produces the same result as in the previous example:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- extra: false
name: bar
- name: baz
path: /baz
- extra: true
name: foo
path: /foo
- extra: true
name: meh
- {name: bar, extra: false}
- {name: baz, path: /baz}
- {name: foo, extra: true, path: /foo}
- {name: meh, extra: true}
Single list
"""""""""""
@@ -121,22 +75,13 @@ It is possible to merge single list:
This produces the same result as in the previous example:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- extra: false
name: bar
- name: baz
path: /baz
- extra: true
name: foo
path: /foo
- extra: true
name: meh
- {name: bar, extra: false}
- {name: baz, path: /baz}
- {name: foo, extra: true, path: /foo}
- {name: meh, extra: true}
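The merge-by-attribute behaviour can be sketched in plain Python. This is an approximation of the filter's default (non-recursive, ``list_merge=replace``) semantics; the input lists are reconstructed from the result shown above, and the sketch assumes the output is sorted by the merge attribute, which matches the printed order:

```python
# Sketch of community.general.lists_mergeby: combine lists of
# dictionaries, merging entries that share the same value for the
# given attribute; later lists override earlier ones key by key.
def lists_mergeby(lists, attribute):
    merged = {}
    for lst in lists:
        for item in lst:
            merged.setdefault(item[attribute], {}).update(item)
    return [merged[key] for key in sorted(merged)]

list1 = [
    {"name": "foo", "extra": True},
    {"name": "bar", "extra": False},
    {"name": "meh", "extra": True},
]
list2 = [
    {"name": "foo", "path": "/foo"},
    {"name": "baz", "path": "/baz"},
]
out = lists_mergeby([list1, list2], "name")
```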
The filter also accepts two optional parameters: :ansopt:`community.general.lists_mergeby#filter:recursive` and :ansopt:`community.general.lists_mergeby#filter:list_merge`. This is available since community.general 4.4.0.
@@ -151,11 +96,6 @@ The examples below set :ansopt:`community.general.lists_mergeby#filter:recursive
Let us use the lists below in the following examples
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
.. code-block:: yaml
list1:
@@ -188,25 +128,17 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=replace` (def
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- patch_value
x: default_value
y: patch_value
list: [patch_value]
z: patch_value
- name: myname02
param01:
- 3
- 4
- 4
param01: [3, 4, 4]
list_merge=keep
"""""""""""""""
@@ -221,26 +153,17 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=keep`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- default_value
x: default_value
y: patch_value
list: [default_value]
z: patch_value
- name: myname02
param01:
- 1
- 1
- 2
- 3
param01: [1, 1, 2, 3]
list_merge=append
"""""""""""""""""
@@ -255,30 +178,17 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=append`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- default_value
- patch_value
x: default_value
y: patch_value
list: [default_value, patch_value]
z: patch_value
- name: myname02
param01:
- 1
- 1
- 2
- 3
- 3
- 4
- 4
param01: [1, 1, 2, 3, 3, 4, 4]
list_merge=prepend
""""""""""""""""""
@@ -293,30 +203,17 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=prepend`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- patch_value
- default_value
x: default_value
y: patch_value
list: [patch_value, default_value]
z: patch_value
- name: myname02
param01:
- 3
- 4
- 4
- 1
- 1
- 2
- 3
param01: [3, 4, 4, 1, 1, 2, 3]
list_merge=append_rp
""""""""""""""""""""
@@ -331,29 +228,17 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=append_rp`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- default_value
- patch_value
x: default_value
y: patch_value
list: [default_value, patch_value]
z: patch_value
- name: myname02
param01:
- 1
- 1
- 2
- 3
- 4
- 4
param01: [1, 1, 2, 3, 4, 4]
list_merge=prepend_rp
"""""""""""""""""""""
@@ -368,26 +253,15 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=prepend_rp`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- patch_value
- default_value
x: default_value
y: patch_value
list: [patch_value, default_value]
z: patch_value
- name: myname02
param01:
- 3
- 4
- 4
- 1
- 1
- 2
param01: [3, 4, 4, 1, 1, 2]
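The list-merging modes shown in the examples above follow a simple rule: ``append``/``prepend`` concatenate the lists, while the ``_rp`` ("remove present") variants first drop elements of the default list that also occur in the patch list. A plain-Python sketch of that rule, using the ``param01`` lists from the examples:

```python
# Sketch of the list_merge behaviours of community.general.lists_mergeby.
def merge_lists(default, patch, list_merge):
    if list_merge == "append":
        return default + patch
    if list_merge == "prepend":
        return patch + default
    if list_merge == "append_rp":
        return [x for x in default if x not in patch] + patch
    if list_merge == "prepend_rp":
        return patch + [x for x in default if x not in patch]
    raise ValueError(f"unknown list_merge mode: {list_merge!r}")

default = [1, 1, 2, 3]
patch = [3, 4, 4]
```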


@@ -24,17 +24,6 @@ Ansible offers the :ansplugin:`community.general.read_csv module <community.gene
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Parse CSV from string] **************************************************************
@@ -80,34 +69,6 @@ Converting to JSON
This produces:
.. ansible-output-data::
skip_first_lines: 3
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- ansible.builtin.set_fact:
result_stdout: |-
bin
boot
dev
etc
home
lib
proc
root
run
tmp
- name: Run 'ls' to list files in /
command: ls /
register: result
- name: Parse the ls output
debug:
msg: "{{ result_stdout | community.general.jc('ls') }}"
.. code-block:: ansible-output
TASK [Run 'ls' to list files in /] ********************************************************
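For single-column ``ls`` output like the example above, the parsing jc performs can be sketched with a few lines of Python. This is a deliberate simplification: jc's real ``ls`` parser also extracts ownership, size, and date fields from ``ls -l`` output.

```python
# Simplified sketch of jc's 'ls' parser for plain single-column output:
# one dictionary per line with a 'filename' key.
def parse_ls(stdout):
    return [{"filename": line} for line in stdout.splitlines() if line]

out = parse_ls("bin\nboot\ndev\netc\n")
```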


@@ -25,17 +25,6 @@ Hashids
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Create hashid] **********************************************************************
@@ -77,32 +66,16 @@ You can use the :ansplugin:`community.general.random_mac filter <community.gener
This produces:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- name: "Create a random MAC starting with ff:"
debug:
# We're using a seed here to avoid randomness in the output
msg: "{{ 'FF' | community.general.random_mac(seed='') }}"
- name: "Create a random MAC starting with 00:11:22:"
debug:
# We're using a seed here to avoid randomness in the output
msg: "{{ '00:11:22' | community.general.random_mac(seed='') }}"
.. code-block:: ansible-output
TASK [Create a random MAC starting with ff:] **********************************************
ok: [localhost] => {
"msg": "ff:84:f5:d1:59:20"
"msg": "ff:69:d3:78:7f:b4"
}
TASK [Create a random MAC starting with 00:11:22:] ****************************************
ok: [localhost] => {
"msg": "00:11:22:84:f5:d1"
"msg": "00:11:22:71:5d:3b"
}
You can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
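The idea behind seeding can be sketched in plain Python. Note this is a hedged illustration, not the plugin's actual algorithm: the remaining octets are drawn from a PRNG initialized with the seed, so the same prefix and seed always yield the same address:

```python
import random

# Sketch: fill the remaining octets of a MAC prefix from a seeded PRNG
# so the result is reproducible for a given seed.
def random_mac(prefix, seed=None):
    octets = prefix.lower().split(":")
    rng = random.Random(seed)
    while len(octets) < 6:
        octets.append(f"{rng.randint(0, 255):02x}")
    return ":".join(octets)

mac = random_mac("00:11:22", seed="demo")
```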


@@ -69,32 +69,21 @@ Note that months and years are using a simplified representation: a month is 30
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Convert string to seconds] **********************************************************
ok: [localhost] => {
"msg": 109210.123
"msg": "109210.123"
}
TASK [Convert string to hours] ************************************************************
ok: [localhost] => {
"msg": 30.336145277778
"msg": "30.336145277778"
}
TASK [Convert string to years (using 365.25 days == 1 year)] ******************************
ok: [localhost] => {
"msg": 1.096851471595
"msg": "1.096851471595"
}
.. versionadded: 0.2.0
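The conversions above are straightforward divisions once the input is expressed in seconds; a sketch of the arithmetic, using the simplified 30-day month mentioned earlier and the 365.25-day year from the third task:

```python
# Sketch of the unit arithmetic behind community.general.to_time_unit.
UNITS_IN_SECONDS = {
    "seconds": 1,
    "minutes": 60,
    "hours": 3600,
    "days": 86400,
    "months": 30 * 86400,  # simplified representation: 1 month == 30 days
}

def convert(seconds, unit, days_per_year=365.25):
    if unit == "years":
        return seconds / (days_per_year * 86400)
    return seconds / UNITS_IN_SECONDS[unit]

# 109210.123 seconds, as in the first two tasks above:
hours = convert(109210.123, "hours")
```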


@@ -21,20 +21,9 @@ You can use the :ansplugin:`community.general.unicode_normalize filter <communit
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Compare Unicode representations] ****************************************************
TASK [Compare Unicode representations] ********************************************************
ok: [localhost] => {
"msg": true
}
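The comparison above relies on Unicode normalization, which Python exposes as ``unicodedata.normalize``. A minimal sketch of why two visually identical strings can differ until normalized:

```python
import unicodedata

# 'ñ' can be a single code point or 'n' followed by a combining tilde;
# the raw strings differ, but their NFC normalizations compare equal.
composed = "\u00f1"      # ñ as one code point
decomposed = "n\u0303"   # n + combining tilde
same = unicodedata.normalize("NFC", composed) == unicodedata.normalize("NFC", decomposed)
```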


@@ -23,17 +23,6 @@ If you need to sort a list of version numbers, the Jinja ``sort`` filter is prob
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Sort list by version number] ********************************************************
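The pitfall that makes a version-aware sort necessary can be shown with a short Python sketch: a plain string sort puts ``2.10`` before ``2.9``. This sketch assumes purely numeric dotted versions; the actual filter handles more general version strings:

```python
# Version-aware sorting: compare versions as tuples of integers
# instead of as strings.
def version_key(version):
    return tuple(int(part) for part in version.split("."))

versions = ["2.10.0", "2.9.0", "2.11.0", "2.1.0"]
out = sorted(versions, key=version_key)
```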


@@ -1,114 +0,0 @@
..
Copyright (c) Ansible Project
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
.. _ansible_collections.community.general.docsite.guide_ee:
Execution Environment Guide
===========================
`Ansible Execution Environments <https://docs.ansible.com/projects/ansible/latest/getting_started_ee/index.html>`_
(EEs) are container images that bundle ansible-core, collections, and their Python and system dependencies.
They are the standard runtime for Red Hat Ansible Automation Platform and AWX, replacing the older virtualenv model.
They can also be used outside of the Ansible Automation Platform and AWX with `ansible-navigator <https://docs.ansible.com/projects/navigator/>`__, or with ansible-runner directly.
What runs in the EE
^^^^^^^^^^^^^^^^^^^
Only **controller-side plugins** run inside the EE. Their Python and system dependencies must be installed there.
This includes: lookup plugins, inventory plugins, callback plugins, connection plugins, become plugins, and filter plugins.
Modules run on the managed nodes and are transferred there at runtime — their dependencies must be present on the
target, not in the EE.
.. note::
Modules delegated to ``localhost`` (for example, those that interact with a remote API) are an exception:
they run on the controller and their dependencies must therefore be available in the EE.
Why community.general does not provide EE metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``community.general`` ships dozens of controller-side plugins covering a very broad range of technologies.
Bundling the dependencies for all of them into a single EE image would almost certainly create irreconcilable
conflicts — both within the collection and with other collections or tools (such as ``ansible-lint``) that
share the same image.
For that reason, ``community.general`` does **not** provide Python or system package dependency metadata.
Users are expected to build purpose-built, minimal EEs containing only the dependencies
required by the specific plugins they actually use.
Finding the dependencies you need
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Every plugin that has external dependencies documents them in its ``requirements`` field.
You can inspect those with ``ansible-doc``:
.. code-block:: shell
$ ansible-doc -t lookup community.general.some_lookup | grep -A 10 "REQUIREMENTS"
Or browse the plugin's documentation page on `docs.ansible.com <https://docs.ansible.com/ansible/latest/collections/community/general/>`_.
For example, a lookup plugin that wraps an external service might list:
.. code-block:: yaml
requirements:
- some-python-library >= 1.2
An inventory plugin backed by a REST API might list:
.. code-block:: yaml
requirements:
- requests
- some-sdk
These are the packages you need to add to your EE.
Building a minimal EE with ansible-builder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`ansible-builder <https://docs.ansible.com/projects/builder/en/latest/>`_ is the standard tool for creating EEs.
Install it with:
.. code-block:: shell
$ pip install ansible-builder
Create an ``execution-environment.yml`` **in your own project** (not inside ``community.general``)
that includes only the dependencies needed for the plugins you use:
.. code-block:: yaml
version: 3
dependencies:
galaxy:
collections:
- name: community.general
python:
- some-python-library>=1.2
- requests
system:
- libxml2-devel [platform:rpm]
images:
base_image:
name: ghcr.io/ansible/community-ee-base:latest
Then build the image:
.. code-block:: shell
$ ansible-builder build -t my-custom-ee:latest
.. seealso::
- `ansible-builder documentation <https://docs.ansible.com/projects/builder/en/latest/>`_
- `Building EEs with ansible-builder <https://ansible-builder.readthedocs.io/en/latest/definition/>`_
- `Issue #2968 — original request for EE requirements support <https://github.com/ansible-collections/community.general/issues/2968>`_
- `Issue #4512 — design discussion for EE support in community.general <https://github.com/ansible-collections/community.general/issues/4512>`_


@@ -12,7 +12,7 @@ The inventory plugin :ansplugin:`community.general.iocage#inventory` gets the in
See:
* `iocage - A FreeBSD Jail Manager <https://freebsd.github.io/iocage/>`_
* `iocage - A FreeBSD Jail Manager <https://iocage.readthedocs.io/en/latest>`_
* `man iocage <https://man.freebsd.org/cgi/man.cgi?query=iocage>`_
* `Jails and Containers <https://docs.freebsd.org/en/books/handbook/jails>`_


@@ -20,7 +20,7 @@ As root at the iocage host, create three VNET jails with a DHCP interface from t
shell> iocage create --template ansible_client --name srv_3 bpf=1 dhcp=1 vnet=1
srv_3 successfully created!
See: `Configuring VNET <https://freebsd.github.io/iocage/networking.html#vimage-vnet>`_.
See: `Configuring a VNET Jail <https://iocage.readthedocs.io/en/latest/networking.html#configuring-a-vnet-jail>`_.
As admin at the controller, list the jails:
@@ -115,7 +115,7 @@ Optionally, create shared IP jails:
| None | srv_3 | off | down | jail | 14.2-RELEASE-p3 | em0|10.1.0.103/24 | - | ansible_client | no |
+------+-------+------+-------+------+-----------------+-------------------+-----+----------------+----------+
See: `Configuring a Shared IP Jail <https://freebsd.github.io/iocage/networking.html#shared-ip>`_
See: `Configuring a Shared IP Jail <https://iocage.readthedocs.io/en/latest/networking.html#configuring-a-shared-ip-jail>`_
If iocage needs environment variable(s), use the option :ansopt:`community.general.iocage#inventory:env`. For example,


@@ -5,7 +5,7 @@
namespace: community
name: general
version: 12.6.1
version: 11.4.4
readme: README.md
authors:
- Ansible (https://github.com/ansible)
@@ -19,5 +19,3 @@ repository: https://github.com/ansible-collections/community.general
documentation: https://docs.ansible.com/projects/ansible/latest/collections/community/general/
homepage: https://github.com/ansible-collections/community.general
issues: https://github.com/ansible-collections/community.general/issues
build_ignore:
- .nox


@@ -3,7 +3,7 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
requires_ansible: '>=2.17.0'
requires_ansible: '>=2.16.0'
action_groups:
consul:
- consul_agent_check
@@ -21,7 +21,6 @@ action_groups:
keycloak:
- keycloak_authentication
- keycloak_authentication_required_actions
- keycloak_authentication_v2
- keycloak_authz_authorization_scope
- keycloak_authz_custom_policy
- keycloak_authz_permission
@@ -41,14 +40,12 @@ action_groups:
- keycloak_realm
- keycloak_realm_key
- keycloak_realm_keys_metadata_info
- keycloak_realm_localization
- keycloak_realm_rolemapping
- keycloak_role
- keycloak_user
- keycloak_user_federation
- keycloak_user_rolemapping
- keycloak_userprofile
- keycloak_user_execute_actions_email
scaleway:
- scaleway_compute
- scaleway_compute_private_network
@@ -103,7 +100,7 @@ plugin_routing:
warning_text: Use the 'default' callback plugin with 'display_failed_stderr
= yes' option.
yaml:
tombstone:
deprecation:
removal_version: 12.0.0
warning_text: >-
The plugin has been superseded by the option `result_format=yaml` in callback plugin ansible.builtin.default from ansible-core 2.13 onwards.
@@ -156,7 +153,7 @@ plugin_routing:
removal_version: 13.0.0
warning_text: Project Atomic was sunset by the end of 2019.
bearychat:
tombstone:
deprecation:
removal_version: 12.0.0
warning_text: Chat service is no longer available.
catapult:
@@ -205,14 +202,6 @@ plugin_routing:
tombstone:
removal_version: 10.0.0
warning_text: Use community.general.consul_token and/or community.general.consul_policy instead.
dimensiondata_network:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
dimensiondata_vlan:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
docker_compose:
redirect: community.docker.docker_compose
docker_config:
@@ -268,7 +257,7 @@ plugin_routing:
docker_volume_info:
redirect: community.docker.docker_volume_info
facter:
tombstone:
deprecation:
removal_version: 12.0.0
warning_text: Use community.general.facter_facts instead.
flowdock:
@@ -372,26 +361,6 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.hpilo_info instead.
aix_devices:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.devices instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_filesystem:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.filesystem instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_inittab:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.inittab instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_lvg:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.lvg instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_lvol:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.lvol instead. The C(ibm.power_aix) collection is actively maintained by IBM.
idrac_firmware:
redirect: dellemc.openmanage.idrac_firmware
idrac_redfish_facts:
@@ -400,10 +369,6 @@ plugin_routing:
warning_text: Use community.general.idrac_redfish_info instead.
idrac_server_config_profile:
redirect: dellemc.openmanage.idrac_server_config_profile
jboss:
deprecation:
removal_version: 14.0.0
warning_text: Use role middleware_automation.wildfly.wildfly_app_deploy instead.
jenkins_job_facts:
tombstone:
removal_version: 3.0.0
@@ -424,10 +389,6 @@ plugin_routing:
redirect: community.kubevirt.kubevirt_template
kubevirt_vm:
redirect: community.kubevirt.kubevirt_vm
layman:
deprecation:
removal_version: 14.0.0
warning_text: Gentoo deprecated C(layman) in mid-2023.
ldap_attr:
tombstone:
removal_version: 3.0.0
@@ -532,30 +493,6 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.one_image_info instead.
oneandone_firewall_policy:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_load_balancer:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_monitoring_policy:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_private_network:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_public_ip:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_server:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
onepassword_facts:
tombstone:
removal_version: 3.0.0
@@ -863,10 +800,6 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use purestorage.flashblade.purefb_info instead.
pushbullet:
deprecation:
removal_version: 13.0.0
warning_text: Module relies on Python package pushbullet.py which is not maintained and supports only up to Python 3.2.
python_requirements_facts:
tombstone:
removal_version: 3.0.0
@@ -1063,24 +996,12 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.smartos_image_info instead.
spotinst_aws_elastigroup:
deprecation:
removal_version: 13.0.0
warning_text: Module relies on unsupported Python package. Use the module spot.cloud_modules.aws_elastigroup instead.
stackdriver:
tombstone:
removal_version: 9.0.0
warning_text: This module relied on HTTPS APIs that do not exist anymore,
and any new development in the direction of providing an alternative should
happen in the context of the google.cloud collection.
swupd:
deprecation:
removal_version: 15.0.0
warning_text: ClearLinux was made EOL in July 2025. If you think the module is still useful for another distribution, please create an issue in the community.general repository.
typetalk:
deprecation:
removal_version: 13.0.0
warning_text: The typetalk service will be discontinued on Dec 2025.
vertica_facts:
tombstone:
removal_version: 3.0.0
@@ -1117,14 +1038,6 @@ plugin_routing:
doc_fragments:
_gcp:
redirect: community.google._gcp
dimensiondata:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
dimensiondata_wait:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
docker:
redirect: community.docker.docker
hetzner:
@@ -1167,7 +1080,7 @@ plugin_routing:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
purestorage:
tombstone:
deprecation:
removal_version: 12.0.0
warning_text: The modules for purestorage were removed in community.general 3.0.0, this document fragment was left behind.
rackspace:
@@ -1176,18 +1089,6 @@ plugin_routing:
warning_text: This doc fragment was used by rax modules, that relied on the deprecated
package pyrax.
module_utils:
cloud:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
database:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
dimensiondata:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
docker.common:
redirect: community.docker.common
docker.swarm:
@@ -1200,10 +1101,6 @@ plugin_routing:
redirect: community.google.gcp
hetzner:
redirect: community.hrobot.robot
known_hosts:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
kubevirt:
redirect: community.kubevirt.kubevirt
net_tools.nios.api:
@@ -1212,10 +1109,6 @@ plugin_routing:
deprecation:
removal_version: 13.0.0
warning_text: Code is unmaintained here and official Oracle collection is available for a number of years.
oneandone:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
postgresql:
redirect: community.postgresql.postgresql
proxmox:
@@ -1224,7 +1117,7 @@ plugin_routing:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
pure:
tombstone:
deprecation:
removal_version: 12.0.0
warning_text: The modules for purestorage were removed in community.general 3.0.0, this module util was left behind.
rax:
@@ -1235,10 +1128,6 @@ plugin_routing:
redirect: dellemc.openmanage.dellemc_idrac
remote_management.dellemc.ome:
redirect: dellemc.openmanage.ome
saslprep:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
inventory:
docker_machine:
redirect: community.docker.docker_machine


@@ -6,17 +6,13 @@
# dependencies = ["nox>=2025.02.09", "antsibull-nox"]
# ///
import os
import sys
import nox # type: ignore[import-not-found]
# Whether the noxfile is running in CI:
IN_CI = os.environ.get("CI") == "true"
import nox
try:
import antsibull_nox # type: ignore[import-not-found]
import antsibull_nox
except ImportError:
print("You need to install antsibull-nox in the same Python environment as nox.")
sys.exit(1)
@@ -36,23 +32,6 @@ def botmeta(session: nox.Session) -> None:
session.run("python", "tests/sanity/extra/botmeta.py")
@nox.session(name="ansible-output", default=False)
def ansible_output(session: nox.Session) -> None:
session.install(
"ansible-core",
"antsibull-docs",
# Needed libs for some code blocks:
"jc",
"hashids",
# Tools for post-processing
"ruamel.yaml", # used by docs/docsite/reformat-yaml.py
)
args = []
if IN_CI:
args.append("--check")
session.run("antsibull-docs", "ansible-output", *args, *session.posargs)
# Allow to run the noxfile with `python noxfile.py`, `pipx run noxfile.py`, or similar.
# Requires nox >= 2025.02.09
if __name__ == "__main__":


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, quidame <quidame@poivron.org>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -5,96 +6,85 @@
from __future__ import annotations
import time
import typing as t
from ansible.errors import AnsibleActionFail, AnsibleConnectionFailure
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
from ansible.errors import AnsibleActionFail, AnsibleConnectionFailure
from ansible.utils.vars import merge_hash
from ansible.utils.display import Display
display = Display()
class ActionModule(ActionBase):
# Keep internal params away from user interactions
_VALID_ARGS = frozenset(("path", "state", "table", "noflush", "counters", "modprobe", "ip_version", "wait"))
_VALID_ARGS = frozenset(('path', 'state', 'table', 'noflush', 'counters', 'modprobe', 'ip_version', 'wait'))
DEFAULT_SUDOABLE = True
@staticmethod
def msg_error__async_and_poll_not_zero(task_poll, task_async, max_timeout) -> str:
def msg_error__async_and_poll_not_zero(task_poll, task_async, max_timeout):
return (
"This module doesn't support async>0 and poll>0 when its 'state' param "
"is set to 'restored'. To enable its rollback feature (that needs the "
"module to run asynchronously on the remote), please set task attribute "
f"'poll' (={task_poll}) to 0, and 'async' (={task_async}) to a value >2 and not greater than "
f"'ansible_timeout' (={max_timeout}) (recommended)."
)
f"'ansible_timeout' (={max_timeout}) (recommended).")
@staticmethod
def msg_warning__no_async_is_no_rollback(task_poll, task_async, max_timeout) -> str:
def msg_warning__no_async_is_no_rollback(task_poll, task_async, max_timeout):
return (
"Attempts to restore iptables state without rollback in case of mistake "
"may lead the ansible controller to lose access to the hosts and never "
"regain it before fixing firewall rules through a serial console, or any "
f"other way except SSH. Please set task attribute 'poll' (={task_poll}) to 0, and "
f"'async' (={task_async}) to a value >2 and not greater than 'ansible_timeout' (={max_timeout}) "
"(recommended)."
)
"(recommended).")
@staticmethod
def msg_warning__async_greater_than_timeout(task_poll, task_async, max_timeout) -> str:
def msg_warning__async_greater_than_timeout(task_poll, task_async, max_timeout):
return (
"You attempt to restore iptables state with rollback in case of mistake, "
"but with settings that will lead this rollback to happen AFTER the "
"controller reaches its own timeout. Please set task attribute 'poll' "
f"(={task_poll}) to 0, and 'async' (={task_async}) to a value >2 and not greater than "
f"'ansible_timeout' (={max_timeout}) (recommended)."
)
f"'ansible_timeout' (={max_timeout}) (recommended).")
def _async_result(
self, async_status_args: dict[str, t.Any], task_vars: dict[str, t.Any], timeout: int
) -> dict[str, t.Any]:
"""
def _async_result(self, async_status_args, task_vars, timeout):
'''
Retrieve results of the asynchronous task, and display them in place of
the async wrapper results (those with the ansible_job_id key).
"""
'''
async_status = self._task.copy()
async_status.args = async_status_args
async_status.action = "ansible.builtin.async_status"
async_status.action = 'ansible.builtin.async_status'
async_status.async_val = 0
async_action = self._shared_loader_obj.action_loader.get(
async_status.action,
task=async_status,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=self._templar,
shared_loader_obj=self._shared_loader_obj,
)
async_status.action, task=async_status, connection=self._connection,
play_context=self._play_context, loader=self._loader, templar=self._templar,
shared_loader_obj=self._shared_loader_obj)
if async_status.args["mode"] == "cleanup":
if async_status.args['mode'] == 'cleanup':
return async_action.run(task_vars=task_vars)
# At least one iteration is required, even if timeout is 0.
for dummy in range(max(1, timeout)):
async_result = async_action.run(task_vars=task_vars)
if async_result.get("finished", 0) == 1:
if async_result.get('finished', 0) == 1:
break
time.sleep(min(1, timeout))
return async_result
def run(self, tmp: str | None = None, task_vars: dict[str, t.Any] | None = None) -> dict[str, t.Any]:
def run(self, tmp=None, task_vars=None):
self._supports_check_mode = True
self._supports_async = True
if task_vars is None:
task_vars = {}
result = super().run(tmp, task_vars)
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
if not result.get("skipped"):
if not result.get('skipped'):
# FUTURE: better to let _execute_module calculate this internally?
wrap_async = self._task.async_val and not self._connection.has_native_async
@@ -109,38 +99,41 @@ class ActionModule(ActionBase):
starter_cmd = None
confirm_cmd = None
if module_args.get("state", None) == "restored":
if module_args.get('state', None) == 'restored':
if not wrap_async:
if not check_mode:
display.warning(self.msg_error__async_and_poll_not_zero(task_poll, task_async, max_timeout))
display.warning(self.msg_error__async_and_poll_not_zero(
task_poll,
task_async,
max_timeout))
elif task_poll:
raise AnsibleActionFail(
self.msg_warning__no_async_is_no_rollback(task_poll, task_async, max_timeout)
)
raise AnsibleActionFail(self.msg_warning__no_async_is_no_rollback(
task_poll,
task_async,
max_timeout))
else:
if task_async > max_timeout and not check_mode:
display.warning(
self.msg_warning__async_greater_than_timeout(task_poll, task_async, max_timeout)
)
display.warning(self.msg_warning__async_greater_than_timeout(
task_poll,
task_async,
max_timeout))
# inject the async directory based on the shell option into the
# module args
async_dir = self.get_shell_option("async_dir", default="~/.ansible_async")
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
# Bind the loop max duration to consistent values on both
# remote and local sides (if not the same, make the loop
# longer on the controller); and set a backup file path.
module_args["_timeout"] = task_async
module_args["_back"] = f"{async_dir}/iptables.state"
async_status_args = dict(mode="status")
module_args['_timeout'] = task_async
module_args['_back'] = f'{async_dir}/iptables.state'
async_status_args = dict(mode='status')
confirm_cmd = f"rm -f {module_args['_back']}"
starter_cmd = f"touch {module_args['_back']}.starter"
remaining_time = max(task_async, max_timeout)
# do work!
result = merge_hash(
result, self._execute_module(module_args=module_args, task_vars=task_vars, wrap_async=wrap_async)
)
result = merge_hash(result, self._execute_module(module_args=module_args, task_vars=task_vars, wrap_async=wrap_async))
# Then the 3-steps "go ahead or rollback":
# 1. Catch early errors of the module (in asynchronous task) if any.
@@ -148,9 +141,9 @@ class ActionModule(ActionBase):
# 2. Reset connection to ensure a persistent one will not be reused.
# 3. Confirm the restored state by removing the backup on the remote.
# Retrieve the results of the asynchronous task to return them.
if "_back" in module_args:
async_status_args["jid"] = result.get("ansible_job_id", None)
if async_status_args["jid"] is None:
if '_back' in module_args:
async_status_args['jid'] = result.get('ansible_job_id', None)
if async_status_args['jid'] is None:
raise AnsibleActionFail("Unable to get 'ansible_job_id'.")
# Catch early errors due to missing mandatory option, bad
@@ -164,7 +157,7 @@ class ActionModule(ActionBase):
# As the main command is not yet executed on the target, here
# 'finished' means 'failed before main command be executed'.
if not result["finished"]:
if not result['finished']:
try:
self._connection.reset()
except AttributeError:
@@ -186,16 +179,16 @@ class ActionModule(ActionBase):
result = merge_hash(result, self._async_result(async_status_args, task_vars, remaining_time))
# Cleanup async related stuff and internal params
for key in ("ansible_job_id", "results_file", "started", "finished"):
for key in ('ansible_job_id', 'results_file', 'started', 'finished'):
if result.get(key):
del result[key]
if result.get("invocation", {}).get("module_args"):
for key in ("_back", "_timeout", "_async_dir", "jid"):
if result["invocation"]["module_args"].get(key):
del result["invocation"]["module_args"][key]
if result.get('invocation', {}).get('module_args'):
for key in ('_back', '_timeout', '_async_dir', 'jid'):
if result['invocation']['module_args'].get(key):
del result['invocation']['module_args'][key]
async_status_args["mode"] = "cleanup"
async_status_args['mode'] = 'cleanup'
dummy = self._async_result(async_status_args, task_vars, 0)
if not wrap_async:

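The `_async_result` loop in the iptables_state plugin above polls `async_status` until the job reports `finished`, with at least one attempt even when the timeout is 0. A self-contained sketch of that bounded-polling pattern (`fetch` stands in for `async_action.run(task_vars=...)`):

```python
import time

def poll_until_finished(fetch, timeout):
    """Run fetch() up to max(1, timeout) times -- at least once even when
    timeout is 0 -- stopping early once the result reports finished == 1."""
    result = {}
    for _ in range(max(1, timeout)):
        result = fetch()
        if result.get("finished", 0) == 1:
            break
        time.sleep(min(1, timeout))
    return result

# Simulated async job that finishes on the second poll:
calls = iter([{"finished": 0}, {"finished": 1, "rc": 0}])
print(poll_until_finished(lambda: next(calls), 3))  # → {'finished': 1, 'rc': 0}
```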

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Amin Vakil <info@aminvakil.com>
# Copyright (c) 2016-2018, Matt Davis <mdavis@ansible.com>
# Copyright (c) 2018, Sam Doran <sdoran@redhat.com>
@@ -6,117 +7,121 @@
from __future__ import annotations
import typing as t
from ansible.errors import AnsibleConnectionFailure, AnsibleError
from ansible.module_utils.common.collections import is_string
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.common.collections import is_string
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
if t.TYPE_CHECKING:
class Distribution(t.TypedDict):
name: str
version: str
family: str
display = Display()
def fmt(mapping, key):
return to_native(mapping[key]).strip()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset(("msg", "delay", "search_paths"))
_VALID_ARGS = frozenset((
'msg',
'delay',
'search_paths'
))
DEFAULT_CONNECT_TIMEOUT = None
DEFAULT_PRE_SHUTDOWN_DELAY = 0
DEFAULT_SHUTDOWN_MESSAGE = "Shut down initiated by Ansible"
DEFAULT_SHUTDOWN_COMMAND = "shutdown"
DEFAULT_SHUTDOWN_MESSAGE = 'Shut down initiated by Ansible'
DEFAULT_SHUTDOWN_COMMAND = 'shutdown'
DEFAULT_SHUTDOWN_COMMAND_ARGS = '-h {delay_min} "{message}"'
DEFAULT_SUDOABLE = True
SHUTDOWN_COMMANDS = {
"alpine": "poweroff",
"vmkernel": "halt",
'alpine': 'poweroff',
'vmkernel': 'halt',
}
SHUTDOWN_COMMAND_ARGS = {
"alpine": "",
"void": '-h +{delay_min} "{message}"',
"freebsd": '-p +{delay_sec}s "{message}"',
"linux": DEFAULT_SHUTDOWN_COMMAND_ARGS,
"macosx": '-h +{delay_min} "{message}"',
"openbsd": '-h +{delay_min} "{message}"',
"solaris": '-y -g {delay_sec} -i 5 "{message}"',
"sunos": '-y -g {delay_sec} -i 5 "{message}"',
"vmkernel": "-d {delay_sec}",
"aix": "-Fh",
'alpine': '',
'void': '-h +{delay_min} "{message}"',
'freebsd': '-p +{delay_sec}s "{message}"',
'linux': DEFAULT_SHUTDOWN_COMMAND_ARGS,
'macosx': '-h +{delay_min} "{message}"',
'openbsd': '-h +{delay_min} "{message}"',
'solaris': '-y -g {delay_sec} -i 5 "{message}"',
'sunos': '-y -g {delay_sec} -i 5 "{message}"',
'vmkernel': '-d {delay_sec}',
'aix': '-Fh',
}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
super(ActionModule, self).__init__(*args, **kwargs)
@property
def delay(self):
return self._check_delay("delay", self.DEFAULT_PRE_SHUTDOWN_DELAY)
return self._check_delay('delay', self.DEFAULT_PRE_SHUTDOWN_DELAY)
def _check_delay(self, key: str, default: int) -> int:
def _check_delay(self, key, default):
"""Ensure that the value is positive or zero"""
value = int(self._task.args.get(key, default))
if value < 0:
value = 0
return value
@staticmethod
def _get_value_from_facts(data: dict[str, str], distribution: Distribution, default_value: str) -> str:
def _get_value_from_facts(self, variable_name, distribution, default_value):
"""Get dist+version specific args first, then distribution, then family, lastly use default"""
return data.get(
distribution["name"] + distribution["version"],
data.get(distribution["name"], data.get(distribution["family"], default_value)),
)
attr = getattr(self, variable_name)
value = attr.get(
distribution['name'] + distribution['version'],
attr.get(
distribution['name'],
attr.get(
distribution['family'],
getattr(self, default_value))))
return value
def get_distribution(self, task_vars: dict[str, t.Any]) -> Distribution:
def get_distribution(self, task_vars):
# FIXME: only execute the module if we don't already have the facts we need
display.debug(f"{self._task.action}: running setup module to get distribution")
distribution = {}
display.debug(f'{self._task.action}: running setup module to get distribution')
module_output = self._execute_module(
task_vars=task_vars, module_name="ansible.legacy.setup", module_args={"gather_subset": "min"}
)
task_vars=task_vars,
module_name='ansible.legacy.setup',
module_args={'gather_subset': 'min'})
try:
if module_output.get("failed", False):
raise AnsibleError(
f"Failed to determine system distribution. {to_native(module_output['module_stdout'])}, {to_native(module_output['module_stderr'])}"
)
distribution: Distribution = {
"name": module_output["ansible_facts"]["ansible_distribution"].lower(),
"version": to_text(module_output["ansible_facts"]["ansible_distribution_version"].split(".")[0]),
"family": to_text(module_output["ansible_facts"]["ansible_os_family"].lower()),
}
if module_output.get('failed', False):
raise AnsibleError(f"Failed to determine system distribution. {fmt(module_output, 'module_stdout')}, {fmt(module_output, 'module_stderr')}")
distribution['name'] = module_output['ansible_facts']['ansible_distribution'].lower()
distribution['version'] = to_text(
module_output['ansible_facts']['ansible_distribution_version'].split('.')[0])
distribution['family'] = to_text(module_output['ansible_facts']['ansible_os_family'].lower())
display.debug(f"{self._task.action}: distribution: {distribution}")
return distribution
except KeyError as ke:
raise AnsibleError(f'Failed to get distribution information. Missing "{ke.args[0]}" in output.') from ke
raise AnsibleError(f'Failed to get distribution information. Missing "{ke.args[0]}" in output.')
def get_shutdown_command(self, task_vars: dict[str, t.Any], distribution: Distribution) -> str:
def find_command(command: str, find_search_paths: list[str]) -> list[str]:
display.debug(
f'{self._task.action}: running find module looking in {find_search_paths} to get path for "{command}"'
)
def get_shutdown_command(self, task_vars, distribution):
def find_command(command, find_search_paths):
display.debug(f'{self._task.action}: running find module looking in {find_search_paths} to get path for "{command}"')
find_result = self._execute_module(
task_vars=task_vars,
# prevent collection search by calling with ansible.legacy (still allows library/ override of find)
module_name="ansible.legacy.find",
module_args={"paths": find_search_paths, "patterns": [command], "file_type": "any"},
module_name='ansible.legacy.find',
module_args={
'paths': find_search_paths,
'patterns': [command],
'file_type': 'any'
}
)
return [x["path"] for x in find_result["files"]]
return [x['path'] for x in find_result['files']]
shutdown_bin = self._get_value_from_facts(self.SHUTDOWN_COMMANDS, distribution, self.DEFAULT_SHUTDOWN_COMMAND)
default_search_paths = ["/sbin", "/usr/sbin", "/usr/local/sbin"]
search_paths = self._task.args.get("search_paths", default_search_paths)
shutdown_bin = self._get_value_from_facts('SHUTDOWN_COMMANDS', distribution, 'DEFAULT_SHUTDOWN_COMMAND')
default_search_paths = ['/sbin', '/usr/sbin', '/usr/local/sbin']
search_paths = self._task.args.get('search_paths', default_search_paths)
# FIXME: switch all this to user arg spec validation methods when they are available
# Convert bare strings to a list
@@ -127,38 +132,36 @@ class ActionModule(ActionBase):
incorrect_type = any(not is_string(x) for x in search_paths)
if not isinstance(search_paths, list) or incorrect_type:
raise TypeError
except TypeError as e:
except TypeError:
# Error if we didn't get a list
err_msg = f"'search_paths' must be a string or flat list of strings, got {search_paths}"
raise AnsibleError(err_msg) from e
raise AnsibleError(err_msg)
full_path = find_command(shutdown_bin, search_paths) # find the path to the shutdown command
if not full_path: # if we could not find the shutdown command
# tell the user we will try with systemd
display.vvv(
f'Unable to find command "{shutdown_bin}" in search paths: {search_paths}, will attempt a shutdown using systemd directly.'
)
systemctl_search_paths = ["/bin", "/usr/bin"]
full_path = find_command("systemctl", systemctl_search_paths) # find the path to the systemctl command
display.vvv(f'Unable to find command "{shutdown_bin}" in search paths: {search_paths}, will attempt a shutdown using systemd directly.')
systemctl_search_paths = ['/bin', '/usr/bin']
full_path = find_command('systemctl', systemctl_search_paths) # find the path to the systemctl command
if not full_path: # if we couldn't find systemctl
raise AnsibleError(
f'Could not find command "{shutdown_bin}" in search paths: {search_paths} or systemctl'
f" command in search paths: {systemctl_search_paths}, unable to shutdown."
) # we give up here
f' command in search paths: {systemctl_search_paths}, unable to shutdown.') # we give up here
else:
return f"{full_path[0]} poweroff" # done, since we cannot use args with systemd shutdown
# systemd case taken care of, here we add args to the command
args = self._get_value_from_facts(self.SHUTDOWN_COMMAND_ARGS, distribution, self.DEFAULT_SHUTDOWN_COMMAND_ARGS)
args = self._get_value_from_facts('SHUTDOWN_COMMAND_ARGS', distribution, 'DEFAULT_SHUTDOWN_COMMAND_ARGS')
# Convert seconds to minutes. If less than 60, set it to 0.
delay_sec = self.delay
shutdown_message = self._task.args.get("msg", self.DEFAULT_SHUTDOWN_MESSAGE)
shutdown_message = self._task.args.get('msg', self.DEFAULT_SHUTDOWN_MESSAGE)
af = args.format(delay_sec=delay_sec, delay_min=delay_sec // 60, message=shutdown_message)
return f"{full_path[0]} {af}"
return f'{full_path[0]} {af}'
def perform_shutdown(self, task_vars, distribution) -> dict[str, t.Any]:
result: dict[str, t.Any] = {}
def perform_shutdown(self, task_vars, distribution):
result = {}
shutdown_result = {}
shutdown_command_exec = self.get_shutdown_command(task_vars, distribution)
@@ -167,41 +170,40 @@ class ActionModule(ActionBase):
display.vvv(f"{self._task.action}: shutting down server...")
display.debug(f"{self._task.action}: shutting down server with command '{shutdown_command_exec}'")
if self._play_context.check_mode:
shutdown_result["rc"] = 0
shutdown_result['rc'] = 0
else:
shutdown_result = self._low_level_execute_command(shutdown_command_exec, sudoable=self.DEFAULT_SUDOABLE)
except AnsibleConnectionFailure as e:
# If the connection is closed too quickly due to the system being shutdown, carry on
display.debug(f"{self._task.action}: AnsibleConnectionFailure caught and handled: {e}")
shutdown_result["rc"] = 0
display.debug(
f'{self._task.action}: AnsibleConnectionFailure caught and handled: {e}')
shutdown_result['rc'] = 0
if shutdown_result["rc"] != 0:
result["failed"] = True
result["shutdown"] = False
result["msg"] = (
f"Shutdown command failed. Error was {to_native(shutdown_result['stdout'])}, {to_native(shutdown_result['stderr'])}"
)
if shutdown_result['rc'] != 0:
result['failed'] = True
result['shutdown'] = False
result['msg'] = f"Shutdown command failed. Error was {fmt(shutdown_result, 'stdout')}, {fmt(shutdown_result, 'stderr')}"
return result
result["failed"] = False
result["shutdown_command"] = shutdown_command_exec
result['failed'] = False
result['shutdown_command'] = shutdown_command_exec
return result
def run(self, tmp: str | None = None, task_vars: dict[str, t.Any] | None = None) -> dict[str, t.Any]:
def run(self, tmp=None, task_vars=None):
self._supports_check_mode = True
self._supports_async = True
# If running with local connection, fail so we don't shut down ourselves
if self._connection.transport == "local" and (not self._play_context.check_mode):
msg = f"Running {self._task.action} with local connection would shutdown the control node."
return {"changed": False, "elapsed": 0, "shutdown": False, "failed": True, "msg": msg}
if self._connection.transport == 'local' and (not self._play_context.check_mode):
msg = f'Running {self._task.action} with local connection would shutdown the control node.'
return {'changed': False, 'elapsed': 0, 'shutdown': False, 'failed': True, 'msg': msg}
if task_vars is None:
task_vars = {}
result = super().run(tmp, task_vars)
result = super(ActionModule, self).run(tmp, task_vars)
if result.get("skipped", False) or result.get("failed", False):
if result.get('skipped', False) or result.get('failed', False):
return result
distribution = self.get_distribution(task_vars)
@@ -209,12 +211,12 @@ class ActionModule(ActionBase):
# Initiate shutdown
shutdown_result = self.perform_shutdown(task_vars, distribution)
if shutdown_result["failed"]:
if shutdown_result['failed']:
result = shutdown_result
return result
result["shutdown"] = True
result["changed"] = True
result["shutdown_command"] = shutdown_result["shutdown_command"]
result['shutdown'] = True
result['changed'] = True
result['shutdown_command'] = shutdown_result['shutdown_command']
return result

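The `_get_value_from_facts` refactor above reduces the lookup to nested `dict.get` calls with a most-specific-key-wins order: distribution name plus major version, then name, then OS family, then the default. A standalone sketch using a trimmed copy of the `SHUTDOWN_COMMAND_ARGS` table from the plugin:

```python
def value_from_facts(data, distribution, default):
    """Most-specific-wins lookup: dist+version, then dist, then family."""
    return data.get(
        distribution["name"] + distribution["version"],
        data.get(distribution["name"], data.get(distribution["family"], default)),
    )

SHUTDOWN_COMMAND_ARGS = {
    "alpine": "",
    "freebsd": '-p +{delay_sec}s "{message}"',
    "linux": '-h {delay_min} "{message}"',
}
dist = {"name": "ubuntu", "version": "22", "family": "linux"}
# No "ubuntu22" or "ubuntu" key, so the family entry is used:
print(value_from_facts(SHUTDOWN_COMMAND_ARGS, dist, "-h"))  # -h {delay_min} "{message}"
```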

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -82,26 +83,9 @@ options:
- name: ansible_doas_prompt_l10n
env:
- name: ANSIBLE_DOAS_PROMPT_L10N
allow_pipelining:
description:
- When set to V(true), do allow pipelining with ansible-core 2.19+.
- This should only be used when doas is configured to not ask for a password (C(nopass)).
type: boolean
default: false
version_added: 12.4.0
ini:
- section: doas_become_plugin
key: allow_pipelining
vars:
- name: ansible_doas_allow_pipelining
env:
- name: ANSIBLE_DOAS_ALLOW_PIPELINING
notes:
- This become plugin does not work when connection pipelining is enabled
and doas requests a password.
With ansible-core 2.19+, using this plugin automatically disables pipelining,
unless O(allow_pipelining=true) is explicitly set by the user.
On ansible-core 2.18 and before, pipelining must explicitly be disabled by the user.
- This become plugin does not work when connection pipelining is enabled. With ansible-core 2.19+, using it automatically
disables pipelining. On ansible-core 2.18 and before, pipelining must explicitly be disabled by the user.
"""
import re
@@ -111,47 +95,45 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.doas"
name = 'community.general.doas'
# messages for detecting prompted password issues
fail = ("Permission denied",)
missing = ("Authorization required",)
fail = ('Permission denied',)
missing = ('Authorization required',)
# See https://github.com/ansible-collections/community.general/issues/9977,
# https://github.com/ansible/ansible/pull/78111,
# https://github.com/ansible-collections/community.general/issues/11411
@property
def pipelining(self) -> bool: # type: ignore[override]
return self.get_option("allow_pipelining")
# https://github.com/ansible/ansible/pull/78111
pipelining = False
def check_password_prompt(self, b_output):
"""checks if the expected password prompt exists in b_output"""
''' checks if the expected password prompt exists in b_output '''
# FIXME: more accurate would be: 'doas (%s@' % remote_user
# however become plugins don't have that information currently
b_prompts = [to_bytes(p) for p in self.get_option("prompt_l10n")] or [rb"doas \(", rb"Password:"]
b_prompts = [to_bytes(p) for p in self.get_option('prompt_l10n')] or [br'doas \(', br'Password:']
b_prompt = b"|".join(b_prompts)
return bool(re.match(b_prompt, b_output))
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
self.prompt = True
become_exe = self.get_option("become_exe")
become_exe = self.get_option('become_exe')
flags = self.get_option("become_flags")
if not self.get_option("become_pass") and "-n" not in flags:
flags += " -n"
flags = self.get_option('become_flags')
if not self.get_option('become_pass') and '-n' not in flags:
flags += ' -n'
become_user = self.get_option("become_user")
user = f"-u {become_user}" if become_user else ""
become_user = self.get_option('become_user')
user = f'-u {become_user}' if become_user else ''
success_cmd = self._build_success_command(cmd, shell, noexe=True)
executable = getattr(shell, "executable", shell.SHELL_FAMILY)
executable = getattr(shell, 'executable', shell.SHELL_FAMILY)
return f"{become_exe} {flags} {user} {executable} -c {success_cmd}"
return f'{become_exe} {flags} {user} {executable} -c {success_cmd}'

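The doas plugin above appends `-n` to `become_flags` when no become password is set, so doas fails immediately instead of hanging on a prompt it can never answer. A minimal sketch of that flag handling (the function name is illustrative):

```python
def doas_flags(become_pass, flags=""):
    """Add -n (non-interactive) when no password is available, unless the
    user already passed it in become_flags."""
    if not become_pass and "-n" not in flags:
        flags += " -n"
    return flags.strip()

print(doas_flags(None))             # -n
print(doas_flags(None, "-n -u x"))  # -n -u x  (not duplicated)
print(doas_flags("secret"))         # empty: doas is allowed to prompt
```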

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -74,25 +75,26 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.dzdo"
name = 'community.general.dzdo'
# messages for detecting prompted password issues
fail = ("Sorry, try again.",)
fail = ('Sorry, try again.',)
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
becomecmd = self.get_option("become_exe")
becomecmd = self.get_option('become_exe')
flags = self.get_option("become_flags")
if self.get_option("become_pass"):
self.prompt = f"[dzdo via ansible, key={self._id}] password:"
flags = f'{flags.replace("-n", "")} -p "{self.prompt}"'
flags = self.get_option('become_flags')
if self.get_option('become_pass'):
self.prompt = f'[dzdo via ansible, key={self._id}] password:'
flags = f"{flags.replace('-n', '')} -p \"{self.prompt}\""
become_user = self.get_option("become_user")
user = f"-u {become_user}" if become_user else ""
become_user = self.get_option('become_user')
user = f'-u {become_user}' if become_user else ''
return f"{becomecmd} {flags} {user} {self._build_success_command(cmd, shell)}"


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -92,22 +93,24 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.ksu"
name = 'community.general.ksu'
# messages for detecting prompted password issues
fail = ("Password incorrect",)
missing = ("No password given",)
fail = ('Password incorrect',)
missing = ('No password given',)
def check_password_prompt(self, b_output):
"""checks if the expected password prompt exists in b_output"""
''' checks if the expected password prompt exists in b_output '''
prompts = self.get_option("prompt_l10n") or ["Kerberos password for .*@.*:"]
prompts = self.get_option('prompt_l10n') or ["Kerberos password for .*@.*:"]
b_prompt = b"|".join(to_bytes(p) for p in prompts)
return bool(re.match(b_prompt, b_output))
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
# Prompt handling for ``ksu`` is more complicated, this
# is used to satisfy the connection plugin
@@ -116,8 +119,8 @@ class BecomeModule(BecomeBase):
if not cmd:
return cmd
exe = self.get_option("become_exe")
exe = self.get_option('become_exe')
flags = self.get_option("become_flags")
user = self.get_option("become_user")
return f"{exe} {user} {flags} -e {self._build_success_command(cmd, shell)} "
flags = self.get_option('become_flags')
user = self.get_option('become_user')
return f'{exe} {user} {flags} -e {self._build_success_command(cmd, shell)} '


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -92,18 +93,20 @@ EXAMPLES = r"""
from re import compile as re_compile
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.become import BecomeBase
from ansible.module_utils.common.text.converters import to_bytes
ansi_color_codes = re_compile(to_bytes(r"\x1B\[[0-9;]+m"))
ansi_color_codes = re_compile(to_bytes(r'\x1B\[[0-9;]+m'))
class BecomeModule(BecomeBase):
name = "community.general.machinectl"
prompt = "Password: "
fail = ("==== AUTHENTICATION FAILED ====",)
success = ("==== AUTHENTICATION COMPLETE ====",)
name = 'community.general.machinectl'
prompt = 'Password: '
fail = ('==== AUTHENTICATION FAILED ====',)
success = ('==== AUTHENTICATION COMPLETE ====',)
require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932
# See https://github.com/ansible/ansible/issues/81254,
@@ -115,19 +118,16 @@ class BecomeModule(BecomeBase):
return ansi_color_codes.sub(b"", line)
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
become = self.get_option("become_exe")
become = self.get_option('become_exe')
flags = self.get_option("become_flags")
user = self.get_option("become_user")
# SYSTEMD_COLORS=0 stops machinectl from appending ANSI reset
# sequences (ESC[0m, ESC[J) after the child exits, which would
# otherwise land after the module JSON and break result parsing.
return f"SYSTEMD_COLORS=0 {become} -q shell {flags} {user}@ {self._build_success_command(cmd, shell)}"
flags = self.get_option('become_flags')
user = self.get_option('become_user')
return f'{become} -q shell {flags} {user}@ {self._build_success_command(cmd, shell)}'
def check_success(self, b_output):
b_output = self.remove_ansi_codes(b_output)

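Both machinectl and run0 scrub ANSI color codes before matching the polkit `==== AUTHENTICATION ... ====` banners, using the bytes pattern compiled above via `re_compile(to_bytes(...))`. As a standalone sketch:

```python
import re

# Same bytes pattern the become plugins compile to strip color sequences
# so banner matching and module JSON parsing see plain text:
ANSI_COLOR_CODES = re.compile(rb"\x1B\[[0-9;]+m")

def remove_ansi_codes(line: bytes) -> bytes:
    return ANSI_COLOR_CODES.sub(b"", line)

raw = b"\x1b[0;1;31m==== AUTHENTICATION COMPLETE ====\x1b[0m"
print(remove_ansi_codes(raw))  # b'==== AUTHENTICATION COMPLETE ===='
```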

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -86,21 +87,22 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.pbrun"
prompt = "Password:"
name = 'community.general.pbrun'
prompt = 'Password:'
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
become_exe = self.get_option("become_exe")
become_exe = self.get_option('become_exe')
flags = self.get_option("become_flags")
become_user = self.get_option("become_user")
user = f"-u {become_user}" if become_user else ""
noexe = not self.get_option("wrap_exe")
flags = self.get_option('become_flags')
become_user = self.get_option('become_user')
user = f'-u {become_user}' if become_user else ''
noexe = not self.get_option('wrap_exe')
return f"{become_exe} {flags} {user} {self._build_success_command(cmd, shell, noexe=noexe)}"


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -91,16 +92,17 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.pfexec"
name = 'community.general.pfexec'
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
exe = self.get_option("become_exe")
exe = self.get_option('become_exe')
flags = self.get_option("become_flags")
noexe = not self.get_option("wrap_exe")
return f"{exe} {flags} {self._build_success_command(cmd, shell, noexe=noexe)}"
flags = self.get_option('become_flags')
noexe = not self.get_option('wrap_exe')
return f'{exe} {flags} {self._build_success_command(cmd, shell, noexe=noexe)}'


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -59,21 +60,21 @@ notes:
"""
from shlex import quote as shlex_quote
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.pmrun"
prompt = "Enter UPM user password:"
name = 'community.general.pmrun'
prompt = 'Enter UPM user password:'
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
become = self.get_option("become_exe")
become = self.get_option('become_exe')
flags = self.get_option("become_flags")
return f"{become} {flags} {shlex_quote(self._build_success_command(cmd, shell))}"
flags = self.get_option('become_flags')
return f'{become} {flags} {shlex_quote(self._build_success_command(cmd, shell))}'
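Unlike the other become plugins, pmrun passes the success command as one shell-quoted argument via `shlex.quote`, so the whole command survives shell word splitting as a single word. A small sketch of the quoting behavior (the command string here is illustrative):

```python
from shlex import quote as shlex_quote

# pmrun receives the command to run as a single argument, so the whole
# success command must be quoted into one shell word:
success_cmd = 'echo BECOME-SUCCESS; /bin/sh -c "id -u"'
print(f"pmrun {shlex_quote(success_cmd)}")
```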


@@ -1,9 +1,11 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
name: run0
short_description: Systemd's run0
@@ -60,8 +62,6 @@ options:
type: string
notes:
- This plugin only works when a C(polkit) rule is in place.
- This become plugin does not work when connection pipelining is enabled. With ansible-core 2.19+, using it automatically
disables pipelining. On ansible-core 2.18 and before, pipelining must explicitly be disabled by the user.
"""
EXAMPLES = r"""
@@ -79,23 +79,22 @@ EXAMPLES = r"""
from re import compile as re_compile
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.become import BecomeBase
from ansible.module_utils.common.text.converters import to_bytes
ansi_color_codes = re_compile(to_bytes(r"\x1B\[[0-9;]+m"))
class BecomeModule(BecomeBase):
name = "community.general.run0"
prompt = "Password: "
fail = ("==== AUTHENTICATION FAILED ====",)
success = ("==== AUTHENTICATION COMPLETE ====",)
require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932
# See https://github.com/ansible/ansible/issues/81254,
# https://github.com/ansible/ansible/pull/78111
pipelining = False
require_tty = (
True # see https://github.com/ansible-collections/community.general/issues/6932
)
@staticmethod
def remove_ansi_codes(line):
@@ -111,11 +110,9 @@ class BecomeModule(BecomeBase):
flags = self.get_option("become_flags")
user = self.get_option("become_user")
# SYSTEMD_COLORS=0 stops run0 from emitting terminal control
# sequences (window title OSC, ANSI reset) around the child
# command, which would otherwise corrupt the module JSON and
# break result parsing.
return f"SYSTEMD_COLORS=0 {become} --user={user} {flags} {self._build_success_command(cmd, shell)}"
return (
f"{become} --user={user} {flags} {self._build_success_command(cmd, shell)}"
)
def check_success(self, b_output):
b_output = self.remove_ansi_codes(b_output)
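The run0 hunk strips ANSI color escapes from the captured output before matching the authentication banners. A self-contained sketch of the same regex operating on bytes (using a bytes literal in place of the plugin's `to_bytes` helper):

```python
from re import compile as re_compile

# Same pattern as the run0 hunk above: ANSI SGR color sequences.
ansi_color_codes = re_compile(rb"\x1B\[[0-9;]+m")

def remove_ansi_codes(line: bytes) -> bytes:
    """Strip ANSI color escapes so banner matching sees plain text."""
    return ansi_color_codes.sub(b"", line)

# A colorized success banner as run0 might emit it.
colored = b"\x1b[1;31m==== AUTHENTICATION COMPLETE ====\x1b[0m"
print(remove_ansi_codes(colored))  # b'==== AUTHENTICATION COMPLETE ===='
```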


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -75,19 +76,20 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.sesu"
prompt = "Please enter your password:"
fail = missing = ("Sorry, try again with sesu.",)
name = 'community.general.sesu'
prompt = 'Please enter your password:'
fail = missing = ('Sorry, try again with sesu.',)
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
become = self.get_option("become_exe")
become = self.get_option('become_exe')
flags = self.get_option("become_flags")
user = self.get_option("become_user")
return f"{become} {flags} {user} -c {self._build_success_command(cmd, shell)}"
flags = self.get_option('become_flags')
user = self.get_option('become_user')
return f'{become} {flags} {user} -c {self._build_success_command(cmd, shell)}'


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2021, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -79,33 +80,34 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.sudosu"
name = 'community.general.sudosu'
# messages for detecting prompted password issues
fail = ("Sorry, try again.",)
missing = ("Sorry, a password is required to run sudo", "sudo: a password is required")
fail = ('Sorry, try again.',)
missing = ('Sorry, a password is required to run sudo', 'sudo: a password is required')
def build_become_command(self, cmd, shell):
super().build_become_command(cmd, shell)
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
becomecmd = "sudo"
becomecmd = 'sudo'
flags = self.get_option("become_flags") or ""
prompt = ""
if self.get_option("become_pass"):
self.prompt = f"[sudo via ansible, key={self._id}] password:"
flags = self.get_option('become_flags') or ''
prompt = ''
if self.get_option('become_pass'):
self.prompt = f'[sudo via ansible, key={self._id}] password:'
if flags: # this could be simplified, but kept as is for now for backwards string matching
flags = flags.replace("-n", "")
flags = flags.replace('-n', '')
prompt = f'-p "{self.prompt}"'
user = self.get_option("become_user") or ""
user = self.get_option('become_user') or ''
if user:
user = f"{user}"
user = f'{user}'
if self.get_option("alt_method"):
if self.get_option('alt_method'):
return f"{becomecmd} {flags} {prompt} su -l {user} -c {self._build_success_command(cmd, shell, True)}"
else:
return f"{becomecmd} {flags} {prompt} su -l {user} {self._build_success_command(cmd, shell)}"


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Brian Coca, Josh Drake, et al
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -49,17 +50,16 @@ options:
import collections
import os
import time
from collections.abc import MutableSet
from itertools import chain
from multiprocessing import Lock
from itertools import chain
from ansible.errors import AnsibleError
from collections.abc import MutableSet
from ansible.plugins.cache import BaseCacheModule
from ansible.utils.display import Display
try:
import memcache
HAS_MEMCACHE = True
except ImportError:
HAS_MEMCACHE = False
@@ -67,7 +67,7 @@ except ImportError:
display = Display()
class ProxyClientPool:
class ProxyClientPool(object):
"""
Memcached connection pooling for thread/fork safety. Inspired by py-redis
connection pool.
@@ -76,7 +76,7 @@ class ProxyClientPool:
"""
def __init__(self, *args, **kwargs):
self.max_connections = kwargs.pop("max_connections", 1024)
self.max_connections = kwargs.pop('max_connections', 1024)
self.connection_args = args
self.connection_kwargs = kwargs
self.reset()
@@ -124,7 +124,6 @@ class ProxyClientPool:
def __getattr__(self, name):
def wrapped(*args, **kwargs):
return self._proxy_client(name, *args, **kwargs)
return wrapped
def _proxy_client(self, name, *args, **kwargs):
@@ -141,8 +140,7 @@ class CacheModuleKeys(MutableSet):
A set subclass that keeps track of insertion time and persists
the set in memcached.
"""
PREFIX = "ansible_cache_keys"
PREFIX = 'ansible_cache_keys'
def __init__(self, cache, *args, **kwargs):
self._cache = cache
@@ -174,14 +172,15 @@ class CacheModuleKeys(MutableSet):
class CacheModule(BaseCacheModule):
def __init__(self, *args, **kwargs):
connection = ["127.0.0.1:11211"]
super().__init__(*args, **kwargs)
if self.get_option("_uri"):
connection = self.get_option("_uri")
self._timeout = self.get_option("_timeout")
self._prefix = self.get_option("_prefix")
def __init__(self, *args, **kwargs):
connection = ['127.0.0.1:11211']
super(CacheModule, self).__init__(*args, **kwargs)
if self.get_option('_uri'):
connection = self.get_option('_uri')
self._timeout = self.get_option('_timeout')
self._prefix = self.get_option('_prefix')
if not HAS_MEMCACHE:
raise AnsibleError("python-memcached is required for the memcached fact cache")
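The `ProxyClientPool.__getattr__` shown in the memcached hunk forwards arbitrary method calls through a wrapper so every cache operation goes via the pool. A minimal sketch of that delegation pattern with a hypothetical stand-in backend (connection checkout/release omitted):

```python
class Backend:
    """Stand-in for a pooled memcache client (hypothetical)."""
    def get(self, key):
        return f"value-for-{key}"

class Proxy:
    """Forward any unknown attribute access to the backend, mirroring
    ProxyClientPool.__getattr__ in the hunk above."""
    def __init__(self, backend):
        self._backend = backend

    def __getattr__(self, name):
        # Only called for attributes Proxy itself lacks, e.g. 'get'.
        def wrapped(*args, **kwargs):
            # The real pool checks a connection out here, calls it,
            # and releases it afterwards.
            return getattr(self._backend, name)(*args, **kwargs)
        return wrapped

p = Proxy(Backend())
print(p.get("host1"))  # value-for-host1
```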


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017, Brian Coca
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -42,7 +43,10 @@ options:
type: float
"""
import pickle
try:
import cPickle as pickle
except ImportError:
import pickle
from ansible.plugins.cache import BaseFileCacheModule
@@ -51,15 +55,14 @@ class CacheModule(BaseFileCacheModule):
"""
A caching module backed by pickle files.
"""
_persistent = False # prevent unnecessary JSON serialization and key munging
def _load(self, filepath):
# Pickle is a binary format
with open(filepath, "rb") as f:
return pickle.load(f, encoding="bytes")
with open(filepath, 'rb') as f:
return pickle.load(f, encoding='bytes')
def _dump(self, value, filepath):
with open(filepath, "wb") as f:
with open(filepath, 'wb') as f:
# Use pickle protocol 2 which is compatible with Python 2.3+.
pickle.dump(value, f, protocol=2)
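The pickle cache hunk writes binary files with protocol 2 and reads them back with `encoding="bytes"`. A round-trip sketch of the same pattern (the facts dict is illustrative, not from the plugin):

```python
import os
import pickle
import tempfile

value = {"ansible_facts": {"os_family": "RedHat"}}

# Mirror the cache plugin above: binary mode, protocol 2 for
# compatibility with older Python pickles.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    pickle.dump(value, f, protocol=2)
with open(path, "rb") as f:
    loaded = pickle.load(f, encoding="bytes")
os.remove(path)
print(loaded == value)  # True
```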


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Brian Coca, Josh Drake, et al
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -66,18 +67,17 @@ options:
section: defaults
"""
import json
import re
import time
import json
from ansible.errors import AnsibleError
from ansible.parsing.ajson import AnsibleJSONDecoder, AnsibleJSONEncoder
from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
from ansible.plugins.cache import BaseCacheModule
from ansible.utils.display import Display
try:
from redis import VERSION, StrictRedis
from redis import StrictRedis, VERSION
HAS_REDIS = True
except ImportError:
HAS_REDIS = False
@@ -94,35 +94,32 @@ class CacheModule(BaseCacheModule):
to expire keys. This mechanism is used or a pattern matched 'scan' for
performance.
"""
_sentinel_service_name = None
re_url_conn = re.compile(r"^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$")
re_sent_conn = re.compile(r"^(.*):(\d+)$")
re_url_conn = re.compile(r'^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$')
re_sent_conn = re.compile(r'^(.*):(\d+)$')
def __init__(self, *args, **kwargs):
uri = ""
uri = ''
super().__init__(*args, **kwargs)
if self.get_option("_uri"):
uri = self.get_option("_uri")
self._timeout = float(self.get_option("_timeout"))
self._prefix = self.get_option("_prefix")
self._keys_set = self.get_option("_keyset_name")
self._sentinel_service_name = self.get_option("_sentinel_service_name")
super(CacheModule, self).__init__(*args, **kwargs)
if self.get_option('_uri'):
uri = self.get_option('_uri')
self._timeout = float(self.get_option('_timeout'))
self._prefix = self.get_option('_prefix')
self._keys_set = self.get_option('_keyset_name')
self._sentinel_service_name = self.get_option('_sentinel_service_name')
if not HAS_REDIS:
raise AnsibleError(
"The 'redis' python module (version 2.4.5 or newer) is required for the redis fact cache, 'pip install redis'"
)
raise AnsibleError("The 'redis' python module (version 2.4.5 or newer) is required for the redis fact cache, 'pip install redis'")
self._cache = {}
kw = {}
# tls connection
tlsprefix = "tls://"
tlsprefix = 'tls://'
if uri.startswith(tlsprefix):
kw["ssl"] = True
uri = uri[len(tlsprefix) :]
kw['ssl'] = True
uri = uri[len(tlsprefix):]
# redis sentinel connection
if self._sentinel_service_name:
@@ -132,7 +129,7 @@ class CacheModule(BaseCacheModule):
connection = self._parse_connection(self.re_url_conn, uri)
self._db = StrictRedis(*connection, **kw)
display.vv(f"Redis connection: {self._db}")
display.vv(f'Redis connection: {self._db}')
@staticmethod
def _parse_connection(re_patt, uri):
@@ -147,37 +144,36 @@ class CacheModule(BaseCacheModule):
"""
try:
from redis.sentinel import Sentinel
except ImportError as e:
raise AnsibleError(
"The 'redis' python module (version 2.9.0 or newer) is required to use redis sentinel."
) from e
except ImportError:
raise AnsibleError("The 'redis' python module (version 2.9.0 or newer) is required to use redis sentinel.")
if ";" not in uri:
raise AnsibleError("_uri does not have sentinel syntax.")
if ';' not in uri:
raise AnsibleError('_uri does not have sentinel syntax.')
# format: "localhost:26379;localhost2:26379;0:changeme"
connections = uri.split(";")
connections = uri.split(';')
connection_args = connections.pop(-1)
if len(connection_args) > 0: # handle if no db nr is given
connection_args = connection_args.split(":")
kw["db"] = connection_args.pop(0)
connection_args = connection_args.split(':')
kw['db'] = connection_args.pop(0)
try:
kw["password"] = connection_args.pop(0)
kw['password'] = connection_args.pop(0)
except IndexError:
pass # password is optional
sentinels = [self._parse_connection(self.re_sent_conn, shost) for shost in connections]
display.vv(f"\nUsing redis sentinels: {sentinels}")
display.vv(f'\nUsing redis sentinels: {sentinels}')
scon = Sentinel(sentinels, **kw)
try:
return scon.master_for(self._sentinel_service_name, socket_timeout=0.2)
except Exception as exc:
raise AnsibleError(f"Could not connect to redis sentinel: {exc}") from exc
raise AnsibleError(f'Could not connect to redis sentinel: {exc}')
def _make_key(self, key):
return self._prefix + key
def get(self, key):
if key not in self._cache:
value = self._db.get(self._make_key(key))
# guard against the key not being removed from the zset;
@@ -191,6 +187,7 @@ class CacheModule(BaseCacheModule):
return self._cache.get(key)
def set(self, key, value):
value2 = json.dumps(value, cls=AnsibleJSONEncoder, sort_keys=True, indent=4)
if self._timeout > 0: # a timeout of 0 is handled as meaning 'never expire'
self._db.setex(self._make_key(key), int(self._timeout), value2)
@@ -214,7 +211,7 @@ class CacheModule(BaseCacheModule):
def contains(self, key):
self._expire_keys()
return self._db.zrank(self._keys_set, key) is not None
return (self._db.zrank(self._keys_set, key) is not None)
def delete(self, key):
if key in self._cache:
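The redis cache hunks above parse `_uri` two ways: a plain `host:port:db[:password]` form matched by `re_url_conn`, and a sentinel form split on `;` with the final segment carrying `db:password` (format per the hunk's own comment). A sketch of both parses, reusing the same regexes:

```python
import re

# Same connection regexes as the redis cache hunk above.
re_url_conn = re.compile(r"^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$")
re_sent_conn = re.compile(r"^(.*):(\d+)$")

# Plain connection: host:port:db[:password]
m = re_url_conn.match("localhost:6379:0:changeme")
print(m.groups())  # ('localhost', '6379', '0', 'changeme')

# Sentinel format from the hunk's comment:
# "localhost:26379;localhost2:26379;0:changeme"
uri = "localhost:26379;localhost2:26379;0:changeme"
connections = uri.split(";")
db_and_password = connections.pop(-1).split(":")
sentinels = [re_sent_conn.match(c).groups() for c in connections]
print(sentinels)        # [('localhost', '26379'), ('localhost2', '26379')]
print(db_and_password)  # ['0', 'changeme']
```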


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017, Brian Coca
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -46,8 +47,9 @@ options:
import os
import yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.plugins.cache import BaseFileCacheModule
@@ -57,9 +59,9 @@ class CacheModule(BaseFileCacheModule):
"""
def _load(self, filepath):
with open(os.path.abspath(filepath), encoding="utf-8") as f:
with open(os.path.abspath(filepath), 'r', encoding='utf-8') as f:
return AnsibleLoader(f).get_single_data()
def _dump(self, value, filepath):
with open(os.path.abspath(filepath), "w", encoding="utf-8") as f:
with open(os.path.abspath(filepath), 'w', encoding='utf-8') as f:
yaml.dump(value, f, Dumper=AnsibleDumper, default_flow_style=False)


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018 Matt Martz <matt@sivel.net>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -41,15 +42,14 @@ options:
key: cur_mem_file
"""
import threading
import time
import threading
from ansible.plugins.callback import CallbackBase
class MemProf(threading.Thread):
"""Python thread for recording memory usage"""
def __init__(self, path, obj=None):
threading.Thread.__init__(self)
self.obj = obj
@@ -67,25 +67,25 @@ class MemProf(threading.Thread):
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "aggregate"
CALLBACK_NAME = "community.general.cgroup_memory_recap"
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'community.general.cgroup_memory_recap'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display)
super(CallbackModule, self).__init__(display)
self._task_memprof = None
self.task_results = []
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.cgroup_max_file = self.get_option("max_mem_file")
self.cgroup_current_file = self.get_option("cur_mem_file")
self.cgroup_max_file = self.get_option('max_mem_file')
self.cgroup_current_file = self.get_option('cur_mem_file')
with open(self.cgroup_max_file, "w+") as f:
f.write("0")
with open(self.cgroup_max_file, 'w+') as f:
f.write('0')
def _profile_memory(self, obj=None):
prev_task = None
@@ -113,8 +113,8 @@ class CallbackModule(CallbackBase):
with open(self.cgroup_max_file) as f:
max_results = int(f.read().strip()) / 1024 / 1024
self._display.banner("CGROUP MEMORY RECAP")
self._display.display(f"Execution Maximum: {max_results:0.2f}MB\n\n")
self._display.banner('CGROUP MEMORY RECAP')
self._display.display(f'Execution Maximum: {max_results:0.2f}MB\n\n')
for task, memory in self.task_results:
self._display.display(f"{task.get_name()} ({task._uuid}): {memory:0.2f}MB")
self._display.display(f'{task.get_name()} ({task._uuid}): {memory:0.2f}MB')
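The recap hunk above converts the cgroup byte counter to megabytes with two divisions and a `:0.2f` format. A tiny sketch with a hypothetical counter value in place of the cgroup file contents:

```python
# Hypothetical contents of the cgroup max-memory file (bytes).
raw = "268435456"

# Same conversion as the cgroup_memory_recap hunk above.
max_results = int(raw.strip()) / 1024 / 1024
print(f"Execution Maximum: {max_results:0.2f}MB")  # Execution Maximum: 256.00MB
```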


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -25,14 +26,13 @@ class CallbackModule(CallbackBase):
This is a very trivial example of how any callback function can get at play and task objects.
play will be 'None' for runner invocations, and task will be None for 'setup' invocations.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "aggregate"
CALLBACK_NAME = "community.general.context_demo"
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'community.general.context_demo'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
super(CallbackModule, self).__init__(*args, **kwargs)
self.task = None
self.play = None
@@ -41,11 +41,11 @@ class CallbackModule(CallbackBase):
self._display.display(" --- ARGS ")
for i, a in enumerate(args):
self._display.display(f" {i}: {a}")
self._display.display(f' {i}: {a}')
self._display.display(" --- KWARGS ")
for k in kwargs:
self._display.display(f" {k}: {kwargs[k]}")
self._display.display(f' {k}: {kwargs[k]}')
def v2_playbook_on_play_start(self, play):
self.play = play


@@ -1,9 +1,10 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ivan Aragones Muniesa <ivan.aragones.muniesa@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
"""
Counter enabled Ansible callback plugin (See DOCUMENTATION for more information)
"""
'''
Counter enabled Ansible callback plugin (See DOCUMENTATION for more information)
'''
from __future__ import annotations
@@ -23,20 +24,21 @@ requirements:
"""
from ansible import constants as C
from ansible.playbook.task_include import TaskInclude
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
from ansible.playbook.task_include import TaskInclude
class CallbackModule(CallbackBase):
"""
'''
This is the default callback interface, which simply prints messages
to stdout when new callback events are received.
"""
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.counter_enabled"
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.counter_enabled'
_task_counter = 1
_task_total = 0
@@ -46,7 +48,7 @@ class CallbackModule(CallbackBase):
_previous_batch_total = 0
def __init__(self):
super().__init__()
super(CallbackModule, self).__init__()
self._playbook = ""
self._play = ""
@@ -54,7 +56,11 @@ class CallbackModule(CallbackBase):
def _all_vars(self, host=None, task=None):
# host and task need to be specified in case 'magic variables' (host vars, group vars, etc)
# need to be loaded as well
return self._play.get_variable_manager().get_vars(play=self._play, host=host, task=task)
return self._play.get_variable_manager().get_vars(
play=self._play,
host=host,
task=task
)
def v2_playbook_on_start(self, playbook):
self._playbook = playbook
@@ -62,7 +68,7 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if not name:
msg = "play"
msg = u"play"
else:
msg = f"PLAY [{name}]"
@@ -72,8 +78,8 @@ class CallbackModule(CallbackBase):
self._play = play
self._previous_batch_total = self._current_batch_total
self._current_batch_total = self._previous_batch_total + len(self._all_vars()["vars"]["ansible_play_batch"])
self._host_total = len(self._all_vars()["vars"]["ansible_play_hosts_all"])
self._current_batch_total = self._previous_batch_total + len(self._all_vars()['vars']['ansible_play_batch'])
self._host_total = len(self._all_vars()['vars']['ansible_play_hosts_all'])
self._task_total = len(self._play.get_tasks()[0])
self._task_counter = 1
@@ -88,39 +94,39 @@ class CallbackModule(CallbackBase):
f"{hostcolor(host, stat)} : {colorize('ok', stat['ok'], C.COLOR_OK)} {colorize('changed', stat['changed'], C.COLOR_CHANGED)} "
f"{colorize('unreachable', stat['unreachable'], C.COLOR_UNREACHABLE)} {colorize('failed', stat['failures'], C.COLOR_ERROR)} "
f"{colorize('rescued', stat['rescued'], C.COLOR_OK)} {colorize('ignored', stat['ignored'], C.COLOR_WARN)}",
screen_only=True,
screen_only=True
)
self._display.display(
f"{hostcolor(host, stat, False)} : {colorize('ok', stat['ok'], None)} {colorize('changed', stat['changed'], None)} "
f"{colorize('unreachable', stat['unreachable'], None)} {colorize('failed', stat['failures'], None)} "
f"{colorize('rescued', stat['rescued'], None)} {colorize('ignored', stat['ignored'], None)}",
log_only=True,
log_only=True
)
self._display.display("", screen_only=True)
# print custom stats
if self._plugin_options.get("show_custom_stats", C.SHOW_CUSTOM_STATS) and stats.custom:
if self._plugin_options.get('show_custom_stats', C.SHOW_CUSTOM_STATS) and stats.custom:
# fallback on constants for inherited plugins missing docs
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == "_run":
if k == '_run':
continue
_custom_stats = self._dump_results(stats.custom[k], indent=1).replace("\n", "")
self._display.display(f"\t{k}: {_custom_stats}")
_custom_stats = self._dump_results(stats.custom[k], indent=1).replace('\n', '')
self._display.display(f'\t{k}: {_custom_stats}')
# print per run custom stats
if "_run" in stats.custom:
if '_run' in stats.custom:
self._display.display("", screen_only=True)
_custom_stats_run = self._dump_results(stats.custom["_run"], indent=1).replace("\n", "")
self._display.display(f"\tRUN: {_custom_stats_run}")
_custom_stats_run = self._dump_results(stats.custom['_run'], indent=1).replace('\n', '')
self._display.display(f'\tRUN: {_custom_stats_run}')
self._display.display("", screen_only=True)
def v2_playbook_on_task_start(self, task, is_conditional):
args = ""
args = ''
# args can be specified as no_log in several places: in the task or in
# the argument spec. We can check whether the task is no_log but the
# argument spec can't be because that is only run on the target
@@ -130,8 +136,8 @@ class CallbackModule(CallbackBase):
# that they can secure this if they feel that their stdout is insecure
# (shoulder surfing, logging stdout straight to a file, etc).
if not task.no_log and C.DISPLAY_ARGS_TO_STDOUT:
args = ", ".join(("{k}={v}" for k, v in task.args.items()))
args = f" {args}"
args = ', '.join(('{k}={v}' for k, v in task.args.items()))
args = f' {args}'
self._display.banner(f"TASK {self._task_counter}/{self._task_total} [{task.get_name().strip()}{args}]")
if self._display.verbosity >= 2:
path = task.get_path()
@@ -141,24 +147,23 @@ class CallbackModule(CallbackBase):
self._task_counter += 1
def v2_runner_on_ok(self, result):
self._host_counter += 1
delegated_vars = result._result.get("_ansible_delegated_vars", None)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if isinstance(result._task, TaskInclude):
return
elif result._result.get("changed", False):
elif result._result.get('changed', False):
if delegated_vars:
msg = f"changed: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> {delegated_vars['ansible_host']}]"
else:
msg = f"changed: {self._host_counter}/{self._host_total} [{result._host.get_name()}]"
color = C.COLOR_CHANGED
else:
if not self._plugin_options.get("display_ok_hosts", True):
return
if delegated_vars:
msg = f"ok: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> {delegated_vars['ansible_host']}]"
else:
@@ -167,7 +172,7 @@ class CallbackModule(CallbackBase):
self._handle_warnings(result._result)
if result._task.loop and "results" in result._result:
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
self._clean_results(result._result, result._task.action)
@@ -177,18 +182,19 @@ class CallbackModule(CallbackBase):
self._display.display(msg, color=color)
def v2_runner_on_failed(self, result, ignore_errors=False):
self._host_counter += 1
delegated_vars = result._result.get("_ansible_delegated_vars", None)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
self._clean_results(result._result, result._task.action)
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._handle_exception(result._result)
self._handle_warnings(result._result)
if result._task.loop and "results" in result._result:
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
@@ -196,12 +202,12 @@ class CallbackModule(CallbackBase):
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> "
f"{delegated_vars['ansible_host']}]: FAILED! => {self._dump_results(result._result)}",
color=C.COLOR_ERROR,
color=C.COLOR_ERROR
)
else:
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()}]: FAILED! => {self._dump_results(result._result)}",
color=C.COLOR_ERROR,
color=C.COLOR_ERROR
)
if ignore_errors:
@@ -210,15 +216,14 @@ class CallbackModule(CallbackBase):
def v2_runner_on_skipped(self, result):
self._host_counter += 1
if self._plugin_options.get(
"show_skipped_hosts", C.DISPLAY_SKIPPED_HOSTS
): # fallback on constants for inherited plugins missing docs
if self._plugin_options.get('show_skipped_hosts', C.DISPLAY_SKIPPED_HOSTS): # fallback on constants for inherited plugins missing docs
self._clean_results(result._result, result._task.action)
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if result._task.loop and "results" in result._result:
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
msg = f"skipping: {self._host_counter}/{self._host_total} [{result._host.get_name()}]"
@@ -229,18 +234,18 @@ class CallbackModule(CallbackBase):
def v2_runner_on_unreachable(self, result):
self._host_counter += 1
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
delegated_vars = result._result.get("_ansible_delegated_vars", None)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if delegated_vars:
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> "
f"{delegated_vars['ansible_host']}]: UNREACHABLE! => {self._dump_results(result._result)}",
color=C.COLOR_UNREACHABLE,
color=C.COLOR_UNREACHABLE
)
else:
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()}]: UNREACHABLE! => {self._dump_results(result._result)}",
color=C.COLOR_UNREACHABLE,
color=C.COLOR_UNREACHABLE
)


@@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -35,8 +37,8 @@ from ansible.plugins.callback.default import CallbackModule as Default
class CallbackModule(Default):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.default_without_diff"
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.default_without_diff'
def v2_on_file_diff(self, result):
pass


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2016, Dag Wieers <dag@wieers.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -22,18 +23,17 @@ requirements:
HAS_OD = False
try:
from collections import OrderedDict
HAS_OD = True
except ImportError:
pass
import sys
from collections.abc import MutableMapping, MutableSequence
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
from ansible.utils.color import colorize, hostcolor
from ansible.utils.display import Display
import sys
display = Display()
@@ -70,66 +70,66 @@ display = Display()
# FIXME: Importing constants as C simply does not work, beats me :-/
# from ansible import constants as C
class C:
COLOR_HIGHLIGHT = "white"
COLOR_VERBOSE = "blue"
COLOR_WARN = "bright purple"
COLOR_ERROR = "red"
COLOR_DEBUG = "dark gray"
COLOR_DEPRECATE = "purple"
COLOR_SKIP = "cyan"
COLOR_UNREACHABLE = "bright red"
COLOR_OK = "green"
COLOR_CHANGED = "yellow"
COLOR_HIGHLIGHT = 'white'
COLOR_VERBOSE = 'blue'
COLOR_WARN = 'bright purple'
COLOR_ERROR = 'red'
COLOR_DEBUG = 'dark gray'
COLOR_DEPRECATE = 'purple'
COLOR_SKIP = 'cyan'
COLOR_UNREACHABLE = 'bright red'
COLOR_OK = 'green'
COLOR_CHANGED = 'yellow'
# Taken from Dstat
class vt100:
black = "\033[0;30m"
darkred = "\033[0;31m"
darkgreen = "\033[0;32m"
darkyellow = "\033[0;33m"
darkblue = "\033[0;34m"
darkmagenta = "\033[0;35m"
darkcyan = "\033[0;36m"
gray = "\033[0;37m"
black = '\033[0;30m'
darkred = '\033[0;31m'
darkgreen = '\033[0;32m'
darkyellow = '\033[0;33m'
darkblue = '\033[0;34m'
darkmagenta = '\033[0;35m'
darkcyan = '\033[0;36m'
gray = '\033[0;37m'
darkgray = "\033[1;30m"
red = "\033[1;31m"
green = "\033[1;32m"
yellow = "\033[1;33m"
blue = "\033[1;34m"
magenta = "\033[1;35m"
cyan = "\033[1;36m"
white = "\033[1;37m"
darkgray = '\033[1;30m'
red = '\033[1;31m'
green = '\033[1;32m'
yellow = '\033[1;33m'
blue = '\033[1;34m'
magenta = '\033[1;35m'
cyan = '\033[1;36m'
white = '\033[1;37m'
blackbg = "\033[40m"
redbg = "\033[41m"
greenbg = "\033[42m"
yellowbg = "\033[43m"
bluebg = "\033[44m"
magentabg = "\033[45m"
cyanbg = "\033[46m"
whitebg = "\033[47m"
blackbg = '\033[40m'
redbg = '\033[41m'
greenbg = '\033[42m'
yellowbg = '\033[43m'
bluebg = '\033[44m'
magentabg = '\033[45m'
cyanbg = '\033[46m'
whitebg = '\033[47m'
reset = "\033[0;0m"
bold = "\033[1m"
reverse = "\033[2m"
underline = "\033[4m"
reset = '\033[0;0m'
bold = '\033[1m'
reverse = '\033[2m'
underline = '\033[4m'
clear = "\033[2J"
# clearline = '\033[K'
clearline = "\033[2K"
save = "\033[s"
restore = "\033[u"
save_all = "\0337"
restore_all = "\0338"
linewrap = "\033[7h"
nolinewrap = "\033[7l"
clear = '\033[2J'
# clearline = '\033[K'
clearline = '\033[2K'
save = '\033[s'
restore = '\033[u'
save_all = '\0337'
restore_all = '\0338'
linewrap = '\033[7h'
nolinewrap = '\033[7l'
up = "\033[1A"
down = "\033[1B"
right = "\033[1C"
left = "\033[1D"
up = '\033[1A'
down = '\033[1B'
right = '\033[1C'
left = '\033[1D'
colors = dict(
@@ -141,38 +141,41 @@ colors = dict(
unreachable=vt100.red,
)
states = ("skipped", "ok", "changed", "failed", "unreachable")
states = ('skipped', 'ok', 'changed', 'failed', 'unreachable')
class CallbackModule(CallbackModule_default):
"""
'''
This is the dense callback interface, where screen estate is still valued.
"""
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "dense"
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'dense'
def __init__(self):
# From CallbackModule
self._display = display
if HAS_OD:
self.disabled = False
self.super_ref = super()
self.super_ref = super(CallbackModule, self)
self.super_ref.__init__()
# Attributes to remove from results for more density
self.removed_attributes = (
# 'changed',
"delta",
'delta',
# 'diff',
"end",
"failed",
"failed_when_result",
"invocation",
"start",
"stdout_lines",
'end',
'failed',
'failed_when_result',
'invocation',
'start',
'stdout_lines',
)
# Initiate data structures
@@ -180,15 +183,13 @@ class CallbackModule(CallbackModule_default):
self.keep = False
self.shown_title = False
self.count = dict(play=0, handler=0, task=0)
self.type = "foo"
self.type = 'foo'
# Start immediately on the first line
sys.stdout.write(vt100.reset + vt100.save + vt100.clearline)
sys.stdout.flush()
else:
display.warning(
"The 'dense' callback plugin requires OrderedDict which is not available in this version of python, disabling."
)
display.warning("The 'dense' callback plugin requires OrderedDict which is not available in this version of python, disabling.")
self.disabled = True
def __del__(self):
@@ -198,27 +199,27 @@ class CallbackModule(CallbackModule_default):
name = result._host.get_name()
# Add a new status in case a failed task is ignored
if status == "failed" and result._task.ignore_errors:
status = "ignored"
if status == 'failed' and result._task.ignore_errors:
status = 'ignored'
# Check if we have to update an existing state (when looping over items)
if name not in self.hosts:
self.hosts[name] = dict(state=status)
elif states.index(self.hosts[name]["state"]) < states.index(status):
self.hosts[name]["state"] = status
elif states.index(self.hosts[name]['state']) < states.index(status):
self.hosts[name]['state'] = status
# Store delegated hostname, if needed
delegated_vars = result._result.get("_ansible_delegated_vars", None)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if delegated_vars:
self.hosts[name]["delegate"] = delegated_vars["ansible_host"]
self.hosts[name]['delegate'] = delegated_vars['ansible_host']
# Print progress bar
self._display_progress(result)
# # Ensure that tasks with changes/failures stay on-screen, and during diff-mode
# if status in ['changed', 'failed', 'unreachable'] or (result.get('_diff_mode', False) and result._result.get('diff', False)):
# Ensure that tasks with changes/failures stay on-screen
if status in ["changed", "failed", "unreachable"]:
if status in ['changed', 'failed', 'unreachable']:
self.keep = True
if self._display.verbosity == 1:
@@ -239,9 +240,9 @@ class CallbackModule(CallbackModule_default):
del result[attr]
def _handle_exceptions(self, result):
if "exception" in result:
if 'exception' in result:
# Remove the exception from the result so it is not shown every time
del result["exception"]
del result['exception']
if self._display.verbosity == 1:
return "An exception occurred during task execution. To see the full traceback, use -vvv."
@@ -249,16 +250,16 @@ class CallbackModule(CallbackModule_default):
def _display_progress(self, result=None):
# Always rewrite the complete line
sys.stdout.write(vt100.restore + vt100.reset + vt100.clearline + vt100.nolinewrap + vt100.underline)
sys.stdout.write(f"{self.type} {self.count[self.type]}:")
sys.stdout.write(f'{self.type} {self.count[self.type]}:')
sys.stdout.write(vt100.reset)
sys.stdout.flush()
# Print out each host in its own status-color
for name in self.hosts:
sys.stdout.write(" ")
if self.hosts[name].get("delegate", None):
sys.stdout.write(' ')
if self.hosts[name].get('delegate', None):
sys.stdout.write(f"{self.hosts[name]['delegate']}>")
sys.stdout.write(colors[self.hosts[name]["state"]] + name + vt100.reset)
sys.stdout.write(colors[self.hosts[name]['state']] + name + vt100.reset)
sys.stdout.flush()
sys.stdout.write(vt100.linewrap)
@@ -267,7 +268,7 @@ class CallbackModule(CallbackModule_default):
if not self.shown_title:
self.shown_title = True
sys.stdout.write(vt100.restore + vt100.reset + vt100.clearline + vt100.underline)
sys.stdout.write(f"{self.type} {self.count[self.type]}: {self.task.get_name().strip()}")
sys.stdout.write(f'{self.type} {self.count[self.type]}: {self.task.get_name().strip()}')
sys.stdout.write(f"{vt100.restore}{vt100.reset}\n{vt100.save}{vt100.clearline}")
sys.stdout.flush()
else:
@@ -284,31 +285,29 @@ class CallbackModule(CallbackModule_default):
self._clean_results(result._result)
dump = ""
if result._task.action == "include":
dump = ''
if result._task.action == 'include':
return
elif status == "ok":
elif status == 'ok':
return
elif status == "ignored":
elif status == 'ignored':
dump = self._handle_exceptions(result._result)
elif status == "failed":
elif status == 'failed':
dump = self._handle_exceptions(result._result)
elif status == "unreachable":
dump = result._result["msg"]
elif status == 'unreachable':
dump = result._result['msg']
if not dump:
dump = self._dump_results(result._result)
if result._task.loop and "results" in result._result:
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
sys.stdout.write(f"{colors[status] + status}: ")
delegated_vars = result._result.get("_ansible_delegated_vars", None)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if delegated_vars:
sys.stdout.write(
f"{vt100.reset}{result._host.get_name()}>{colors[status]}{delegated_vars['ansible_host']}"
)
sys.stdout.write(f"{vt100.reset}{result._host.get_name()}>{colors[status]}{delegated_vars['ansible_host']}")
else:
sys.stdout.write(result._host.get_name())
@@ -316,7 +315,7 @@ class CallbackModule(CallbackModule_default):
sys.stdout.write(f"{vt100.reset}{vt100.save}{vt100.clearline}")
sys.stdout.flush()
if status == "changed":
if status == 'changed':
self._handle_warnings(result._result)
def v2_playbook_on_play_start(self, play):
@@ -329,13 +328,13 @@ class CallbackModule(CallbackModule_default):
# Reset at the start of each play
self.keep = False
self.count.update(dict(handler=0, task=0))
self.count["play"] += 1
self.count['play'] += 1
self.play = play
# Write the next play on screen IN UPPERCASE, and make it permanent
name = play.get_name().strip()
if not name:
name = "unnamed"
name = 'unnamed'
sys.stdout.write(f"PLAY {self.count['play']}: {name.upper()}")
sys.stdout.write(f"{vt100.restore}{vt100.reset}\n{vt100.save}{vt100.clearline}")
sys.stdout.flush()
@@ -353,14 +352,14 @@ class CallbackModule(CallbackModule_default):
self.shown_title = False
self.hosts = OrderedDict()
self.task = task
self.type = "task"
self.type = 'task'
# Enumerate task if not setup (task names are too long for dense output)
if task.get_name() != "setup":
self.count["task"] += 1
if task.get_name() != 'setup':
self.count['task'] += 1
# Write the next task on screen (behind the prompt is the previous output)
sys.stdout.write(f"{self.type} {self.count[self.type]}.")
sys.stdout.write(f'{self.type} {self.count[self.type]}.')
sys.stdout.write(vt100.reset)
sys.stdout.flush()
@@ -376,36 +375,36 @@ class CallbackModule(CallbackModule_default):
self.shown_title = False
self.hosts = OrderedDict()
self.task = task
self.type = "handler"
self.type = 'handler'
# Enumerate handler if not setup (handler names may be too long for dense output)
if task.get_name() != "setup":
if task.get_name() != 'setup':
self.count[self.type] += 1
# Write the next task on screen (behind the prompt is the previous output)
sys.stdout.write(f"{self.type} {self.count[self.type]}.")
sys.stdout.write(f'{self.type} {self.count[self.type]}.')
sys.stdout.write(vt100.reset)
sys.stdout.flush()
def v2_playbook_on_cleanup_task_start(self, task):
# TBD
sys.stdout.write("cleanup.")
sys.stdout.write('cleanup.')
sys.stdout.flush()
def v2_runner_on_failed(self, result, ignore_errors=False):
self._add_host(result, "failed")
self._add_host(result, 'failed')
def v2_runner_on_ok(self, result):
if result._result.get("changed", False):
self._add_host(result, "changed")
if result._result.get('changed', False):
self._add_host(result, 'changed')
else:
self._add_host(result, "ok")
self._add_host(result, 'ok')
def v2_runner_on_skipped(self, result):
self._add_host(result, "skipped")
self._add_host(result, 'skipped')
def v2_runner_on_unreachable(self, result):
self._add_host(result, "unreachable")
self._add_host(result, 'unreachable')
def v2_runner_on_include(self, included_file):
pass
@@ -425,24 +424,24 @@ class CallbackModule(CallbackModule_default):
self.v2_runner_item_on_ok(result)
def v2_runner_item_on_ok(self, result):
if result._result.get("changed", False):
self._add_host(result, "changed")
if result._result.get('changed', False):
self._add_host(result, 'changed')
else:
self._add_host(result, "ok")
self._add_host(result, 'ok')
# Old definition in v2.0
def v2_playbook_item_on_failed(self, result):
self.v2_runner_item_on_failed(result)
def v2_runner_item_on_failed(self, result):
self._add_host(result, "failed")
self._add_host(result, 'failed')
# Old definition in v2.0
def v2_playbook_item_on_skipped(self, result):
self.v2_runner_item_on_skipped(result)
def v2_runner_item_on_skipped(self, result):
self._add_host(result, "skipped")
self._add_host(result, 'skipped')
def v2_playbook_on_no_hosts_remaining(self):
if self._display.verbosity == 0 and self.keep:
@@ -469,7 +468,7 @@ class CallbackModule(CallbackModule_default):
return
sys.stdout.write(vt100.bold + vt100.underline)
sys.stdout.write("SUMMARY")
sys.stdout.write('SUMMARY')
sys.stdout.write(f"{vt100.restore}{vt100.reset}\n{vt100.save}{vt100.clearline}")
sys.stdout.flush()
@@ -481,10 +480,10 @@ class CallbackModule(CallbackModule_default):
f"{hostcolor(h, t)} : {colorize('ok', t['ok'], C.COLOR_OK)} {colorize('changed', t['changed'], C.COLOR_CHANGED)} "
f"{colorize('unreachable', t['unreachable'], C.COLOR_UNREACHABLE)} {colorize('failed', t['failures'], C.COLOR_ERROR)} "
f"{colorize('rescued', t['rescued'], C.COLOR_OK)} {colorize('ignored', t['ignored'], C.COLOR_WARN)}",
screen_only=True,
screen_only=True
)
# When using -vv or higher, simply do the default action
if display.verbosity >= 2 or not HAS_OD:
CallbackModule = CallbackModule_default # type: ignore
CallbackModule = CallbackModule_default


@@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2019, Trevor Highfill <trevor.highfill@outlook.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -778,21 +780,19 @@ playbook.yml: >-
import sys
from contextlib import contextmanager
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback.default import CallbackModule as Default
from ansible.template import Templar
from ansible.vars.manager import VariableManager
from ansible.plugins.callback.default import CallbackModule as Default
from ansible.module_utils.common.text.converters import to_text
try:
from ansible.template import trust_as_template # noqa: F401, pylint: disable=unused-import
SUPPORTS_DATA_TAGGING = True
except ImportError:
SUPPORTS_DATA_TAGGING = False
class DummyStdout:
class DummyStdout(object):
def flush(self):
pass
@@ -807,12 +807,11 @@ class CallbackModule(Default):
"""
Callback plugin that allows you to supply your own custom callback templates to be output.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.diy"
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.diy'
DIY_NS = "ansible_callback_diy"
DIY_NS = 'ansible_callback_diy'
@contextmanager
def _suppress_stdout(self, enabled):
@@ -825,48 +824,50 @@ class CallbackModule(Default):
def _get_output_specification(self, loader, variables):
_ret = {}
_calling_method = sys._getframe(1).f_code.co_name
_callback_type = _calling_method[3:] if _calling_method[:3] == "v2_" else _calling_method
_callback_options = ["msg", "msg_color"]
_callback_type = (_calling_method[3:] if _calling_method[:3] == "v2_" else _calling_method)
_callback_options = ['msg', 'msg_color']
for option in _callback_options:
_option_name = f"{_callback_type}_{option}"
_option_template = variables.get(f"{self.DIY_NS}_{_option_name}", self.get_option(_option_name))
_ret.update({option: self._template(loader=loader, template=_option_template, variables=variables)})
_option_name = f'{_callback_type}_{option}'
_option_template = variables.get(
f"{self.DIY_NS}_{_option_name}",
self.get_option(_option_name)
)
_ret.update({option: self._template(
loader=loader,
template=_option_template,
variables=variables
)})
_ret.update({"vars": variables})
_ret.update({'vars': variables})
return _ret
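The option lookup above resolves each `msg`/`msg_color` value with a two-level precedence: a play variable named `<DIY_NS>_<callback>_<option>` overrides the plugin's configured option. A standalone sketch of that rule (the `resolve` helper is hypothetical; the namespace and lookup mirror the code):

```python
# Precedence rule from _get_output_specification(): a namespaced play
# variable beats the plugin's configured option value.
DIY_NS = 'ansible_callback_diy'

def resolve(option_name, variables, configured):
    # mirrors: variables.get(f"{DIY_NS}_{option_name}", self.get_option(...))
    return variables.get(f"{DIY_NS}_{option_name}", configured)

default = resolve('runner_on_ok_msg', {}, 'configured value')
override = resolve('runner_on_ok_msg',
                   {'ansible_callback_diy_runner_on_ok_msg': 'play value'},
                   'configured value')
```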
def _using_diy(self, spec):
sentinel = object()
omit = spec["vars"].get("omit", sentinel)
omit = spec['vars'].get('omit', sentinel)
# With Data Tagging, omit is sentinel
return (spec["msg"] is not None) and (spec["msg"] != omit or omit is sentinel)
return (spec['msg'] is not None) and (spec['msg'] != omit or omit is sentinel)
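The sentinel check above distinguishes "no template configured" from "the template rendered to the play's `omit` placeholder". A runnable sketch of the same predicate (function name is ours):

```python
# Sketch of _using_diy(): DIY output is used only when a msg template exists
# and did not render to the play's special "omit" value. Under Data Tagging
# there is no omit variable, so the sentinel branch always passes.
sentinel = object()

def using_diy(msg, variables):
    omit = variables.get('omit', sentinel)
    return (msg is not None) and (msg != omit or omit is sentinel)
```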
def _parent_has_callback(self):
return hasattr(super(), sys._getframe(1).f_code.co_name)
return hasattr(super(CallbackModule, self), sys._getframe(1).f_code.co_name)
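Both `_parent_has_callback()` and `_get_output_specification()` rely on `sys._getframe(1)` to discover the name of the calling callback method instead of passing it explicitly. The trick in isolation (note that `sys._getframe` is CPython-specific):

```python
# The caller's function name via frame introspection, as used above.
import sys

def caller_name():
    # frame 1 is the function that called this helper
    return sys._getframe(1).f_code.co_name

def v2_runner_on_ok():
    # a helper invoked from here sees "v2_runner_on_ok"
    return caller_name()
```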
def _template(self, loader, template, variables):
_templar = Templar(loader=loader, variables=variables)
return _templar.template(template, preserve_trailing_newlines=True, convert_data=False, escape_backslashes=True)
return _templar.template(
template,
preserve_trailing_newlines=True,
convert_data=False,
escape_backslashes=True
)
def _output(self, spec, stderr=False):
_msg = to_text(spec["msg"])
_msg = to_text(spec['msg'])
if len(_msg) > 0:
self._display.display(msg=_msg, color=spec["msg_color"], stderr=stderr)
self._display.display(msg=_msg, color=spec['msg_color'], stderr=stderr)
def _get_vars(
self,
playbook,
play=None,
host=None,
task=None,
included_file=None,
handler=None,
result=None,
stats=None,
remove_attr_ref_loop=True,
):
def _get_vars(self, playbook, play=None, host=None, task=None, included_file=None,
handler=None, result=None, stats=None, remove_attr_ref_loop=True):
def _get_value(obj, attr=None, method=None):
if attr:
return getattr(obj, attr, getattr(obj, f"_{attr}", None))
@@ -876,8 +877,8 @@ class CallbackModule(Default):
return _method()
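`_get_value()` reads an attribute with a public-then-private fallback, which is what lets the DIY namespace expose internals such as `result._result`. The pattern on its own (demo classes are hypothetical):

```python
# Attribute fallback from _get_value(): try "attr", then "_attr", else None.
def get_value(obj, attr):
    return getattr(obj, attr, getattr(obj, f"_{attr}", None))

class Public:
    name = 'visible'

class Private:
    _name = 'internal'
```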
def _remove_attr_ref_loop(obj, attributes):
_loop_var = getattr(obj, "loop_control", None)
_loop_var = _loop_var or "item"
_loop_var = getattr(obj, 'loop_control', None)
_loop_var = (_loop_var or 'item')
for attr in attributes:
if str(_loop_var) in str(_get_value(obj=obj, attr=attr)):
@@ -896,128 +897,56 @@ class CallbackModule(Default):
_all = _variable_manager.get_vars()
if play:
_all = play.get_variable_manager().get_vars(
play=play, host=(host if host else getattr(result, "_host", None)), task=(handler if handler else task)
play=play,
host=(host if host else getattr(result, '_host', None)),
task=(handler if handler else task)
)
_ret.update(_all)
_ret.update(_ret.get(self.DIY_NS, {self.DIY_NS: {} if SUPPORTS_DATA_TAGGING else CallbackDIYDict()}))
_ret[self.DIY_NS].update({"playbook": {}})
_playbook_attributes = ["entries", "file_name", "basedir"]
_ret[self.DIY_NS].update({'playbook': {}})
_playbook_attributes = ['entries', 'file_name', 'basedir']
for attr in _playbook_attributes:
_ret[self.DIY_NS]["playbook"].update({attr: _get_value(obj=playbook, attr=attr)})
_ret[self.DIY_NS]['playbook'].update({attr: _get_value(obj=playbook, attr=attr)})
if play:
_ret[self.DIY_NS].update({"play": {}})
_play_attributes = [
"any_errors_fatal",
"become",
"become_flags",
"become_method",
"become_user",
"check_mode",
"collections",
"connection",
"debugger",
"diff",
"environment",
"fact_path",
"finalized",
"force_handlers",
"gather_facts",
"gather_subset",
"gather_timeout",
"handlers",
"hosts",
"ignore_errors",
"ignore_unreachable",
"included_conditional",
"included_path",
"max_fail_percentage",
"module_defaults",
"name",
"no_log",
"only_tags",
"order",
"port",
"post_tasks",
"pre_tasks",
"remote_user",
"removed_hosts",
"roles",
"run_once",
"serial",
"skip_tags",
"squashed",
"strategy",
"tags",
"tasks",
"uuid",
"validated",
"vars_files",
"vars_prompt",
]
_ret[self.DIY_NS].update({'play': {}})
_play_attributes = ['any_errors_fatal', 'become', 'become_flags', 'become_method',
'become_user', 'check_mode', 'collections', 'connection',
'debugger', 'diff', 'environment', 'fact_path', 'finalized',
'force_handlers', 'gather_facts', 'gather_subset',
'gather_timeout', 'handlers', 'hosts', 'ignore_errors',
'ignore_unreachable', 'included_conditional', 'included_path',
'max_fail_percentage', 'module_defaults', 'name', 'no_log',
'only_tags', 'order', 'port', 'post_tasks', 'pre_tasks',
'remote_user', 'removed_hosts', 'roles', 'run_once', 'serial',
'skip_tags', 'squashed', 'strategy', 'tags', 'tasks', 'uuid',
'validated', 'vars_files', 'vars_prompt']
for attr in _play_attributes:
_ret[self.DIY_NS]["play"].update({attr: _get_value(obj=play, attr=attr)})
_ret[self.DIY_NS]['play'].update({attr: _get_value(obj=play, attr=attr)})
if host:
_ret[self.DIY_NS].update({"host": {}})
_host_attributes = ["name", "uuid", "address", "implicit"]
_ret[self.DIY_NS].update({'host': {}})
_host_attributes = ['name', 'uuid', 'address', 'implicit']
for attr in _host_attributes:
_ret[self.DIY_NS]["host"].update({attr: _get_value(obj=host, attr=attr)})
_ret[self.DIY_NS]['host'].update({attr: _get_value(obj=host, attr=attr)})
if task:
_ret[self.DIY_NS].update({"task": {}})
_task_attributes = [
"action",
"any_errors_fatal",
"args",
"async",
"async_val",
"become",
"become_flags",
"become_method",
"become_user",
"changed_when",
"check_mode",
"collections",
"connection",
"debugger",
"delay",
"delegate_facts",
"delegate_to",
"diff",
"environment",
"failed_when",
"finalized",
"ignore_errors",
"ignore_unreachable",
"loop",
"loop_control",
"loop_with",
"module_defaults",
"name",
"no_log",
"notify",
"parent",
"poll",
"port",
"register",
"remote_user",
"retries",
"role",
"run_once",
"squashed",
"tags",
"untagged",
"until",
"uuid",
"validated",
"when",
]
_ret[self.DIY_NS].update({'task': {}})
_task_attributes = ['action', 'any_errors_fatal', 'args', 'async', 'async_val',
'become', 'become_flags', 'become_method', 'become_user',
'changed_when', 'check_mode', 'collections', 'connection',
'debugger', 'delay', 'delegate_facts', 'delegate_to', 'diff',
'environment', 'failed_when', 'finalized', 'ignore_errors',
'ignore_unreachable', 'loop', 'loop_control', 'loop_with',
'module_defaults', 'name', 'no_log', 'notify', 'parent', 'poll',
'port', 'register', 'remote_user', 'retries', 'role', 'run_once',
'squashed', 'tags', 'untagged', 'until', 'uuid', 'validated',
'when']
# remove arguments that reference a loop var because they cause templating issues in
# callbacks that do not have the loop context (e.g. playbook_on_task_start)
@@ -1025,128 +954,91 @@ class CallbackModule(Default):
_task_attributes = _remove_attr_ref_loop(obj=task, attributes=_task_attributes)
for attr in _task_attributes:
_ret[self.DIY_NS]["task"].update({attr: _get_value(obj=task, attr=attr)})
_ret[self.DIY_NS]['task'].update({attr: _get_value(obj=task, attr=attr)})
if included_file:
_ret[self.DIY_NS].update({"included_file": {}})
_included_file_attributes = ["args", "filename", "hosts", "is_role", "task"]
_ret[self.DIY_NS].update({'included_file': {}})
_included_file_attributes = ['args', 'filename', 'hosts', 'is_role', 'task']
for attr in _included_file_attributes:
_ret[self.DIY_NS]["included_file"].update({attr: _get_value(obj=included_file, attr=attr)})
_ret[self.DIY_NS]['included_file'].update({attr: _get_value(
obj=included_file,
attr=attr
)})
if handler:
_ret[self.DIY_NS].update({"handler": {}})
_handler_attributes = [
"action",
"any_errors_fatal",
"args",
"async",
"async_val",
"become",
"become_flags",
"become_method",
"become_user",
"changed_when",
"check_mode",
"collections",
"connection",
"debugger",
"delay",
"delegate_facts",
"delegate_to",
"diff",
"environment",
"failed_when",
"finalized",
"ignore_errors",
"ignore_unreachable",
"listen",
"loop",
"loop_control",
"loop_with",
"module_defaults",
"name",
"no_log",
"notified_hosts",
"notify",
"parent",
"poll",
"port",
"register",
"remote_user",
"retries",
"role",
"run_once",
"squashed",
"tags",
"untagged",
"until",
"uuid",
"validated",
"when",
]
_ret[self.DIY_NS].update({'handler': {}})
_handler_attributes = ['action', 'any_errors_fatal', 'args', 'async', 'async_val',
'become', 'become_flags', 'become_method', 'become_user',
'changed_when', 'check_mode', 'collections', 'connection',
'debugger', 'delay', 'delegate_facts', 'delegate_to', 'diff',
'environment', 'failed_when', 'finalized', 'ignore_errors',
'ignore_unreachable', 'listen', 'loop', 'loop_control',
'loop_with', 'module_defaults', 'name', 'no_log',
'notified_hosts', 'notify', 'parent', 'poll', 'port',
'register', 'remote_user', 'retries', 'role', 'run_once',
'squashed', 'tags', 'untagged', 'until', 'uuid', 'validated',
'when']
if handler.loop and remove_attr_ref_loop:
_handler_attributes = _remove_attr_ref_loop(obj=handler, attributes=_handler_attributes)
_handler_attributes = _remove_attr_ref_loop(obj=handler,
attributes=_handler_attributes)
for attr in _handler_attributes:
_ret[self.DIY_NS]["handler"].update({attr: _get_value(obj=handler, attr=attr)})
_ret[self.DIY_NS]['handler'].update({attr: _get_value(obj=handler, attr=attr)})
_ret[self.DIY_NS]["handler"].update({"is_host_notified": handler.is_host_notified(host)})
_ret[self.DIY_NS]['handler'].update({'is_host_notified': handler.is_host_notified(host)})
if result:
_ret[self.DIY_NS].update({"result": {}})
_result_attributes = ["host", "task", "task_name"]
_ret[self.DIY_NS].update({'result': {}})
_result_attributes = ['host', 'task', 'task_name']
for attr in _result_attributes:
_ret[self.DIY_NS]["result"].update({attr: _get_value(obj=result, attr=attr)})
_ret[self.DIY_NS]['result'].update({attr: _get_value(obj=result, attr=attr)})
_result_methods = ["is_changed", "is_failed", "is_skipped", "is_unreachable"]
_result_methods = ['is_changed', 'is_failed', 'is_skipped', 'is_unreachable']
for method in _result_methods:
_ret[self.DIY_NS]["result"].update({method: _get_value(obj=result, method=method)})
_ret[self.DIY_NS]['result'].update({method: _get_value(obj=result, method=method)})
_ret[self.DIY_NS]["result"].update({"output": getattr(result, "_result", None)})
_ret[self.DIY_NS]['result'].update({'output': getattr(result, '_result', None)})
_ret.update(result._result)
if stats:
_ret[self.DIY_NS].update({"stats": {}})
_stats_attributes = [
"changed",
"custom",
"dark",
"failures",
"ignored",
"ok",
"processed",
"rescued",
"skipped",
]
_ret[self.DIY_NS].update({'stats': {}})
_stats_attributes = ['changed', 'custom', 'dark', 'failures', 'ignored',
'ok', 'processed', 'rescued', 'skipped']
for attr in _stats_attributes:
_ret[self.DIY_NS]["stats"].update({attr: _get_value(obj=stats, attr=attr)})
_ret[self.DIY_NS]['stats'].update({attr: _get_value(obj=stats, attr=attr)})
_ret[self.DIY_NS].update({"top_level_var_names": list(_ret.keys())})
_ret[self.DIY_NS].update({'top_level_var_names': list(_ret.keys())})
return _ret
def v2_on_any(self, *args, **kwargs):
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_on_any(*args, **kwargs)
super(CallbackModule, self).v2_on_any(*args, **kwargs)
def v2_runner_on_failed(self, result, ignore_errors=False):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1154,14 +1046,17 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_on_failed(result, ignore_errors)
super(CallbackModule, self).v2_runner_on_failed(result, ignore_errors)
def v2_runner_on_ok(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1169,14 +1064,17 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_on_ok(result)
super(CallbackModule, self).v2_runner_on_ok(result)
def v2_runner_on_skipped(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1184,14 +1082,17 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_on_skipped(result)
super(CallbackModule, self).v2_runner_on_skipped(result)
def v2_runner_on_unreachable(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1199,7 +1100,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_on_unreachable(result)
super(CallbackModule, self).v2_runner_on_unreachable(result)
# not implemented as the call to this is not implemented yet
def v2_runner_on_async_poll(self, result):
@@ -1221,8 +1122,8 @@ class CallbackModule(Default):
play=self._diy_play,
task=self._diy_task,
result=result,
remove_attr_ref_loop=False,
),
remove_attr_ref_loop=False
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1230,7 +1131,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_item_on_ok(result)
super(CallbackModule, self).v2_runner_item_on_ok(result)
def v2_runner_item_on_failed(self, result):
self._diy_spec = self._get_output_specification(
@@ -1240,8 +1141,8 @@ class CallbackModule(Default):
play=self._diy_play,
task=self._diy_task,
result=result,
remove_attr_ref_loop=False,
),
remove_attr_ref_loop=False
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1249,7 +1150,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_item_on_failed(result)
super(CallbackModule, self).v2_runner_item_on_failed(result)
def v2_runner_item_on_skipped(self, result):
self._diy_spec = self._get_output_specification(
@@ -1259,8 +1160,8 @@ class CallbackModule(Default):
play=self._diy_play,
task=self._diy_task,
result=result,
remove_attr_ref_loop=False,
),
remove_attr_ref_loop=False
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1268,14 +1169,17 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_item_on_skipped(result)
super(CallbackModule, self).v2_runner_item_on_skipped(result)
def v2_runner_retry(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1283,7 +1187,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_retry(result)
super(CallbackModule, self).v2_runner_retry(result)
def v2_runner_on_start(self, host, task):
self._diy_host = host
@@ -1292,8 +1196,11 @@ class CallbackModule(Default):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, host=self._diy_host, task=self._diy_task
),
playbook=self._diy_playbook,
play=self._diy_play,
host=self._diy_host,
task=self._diy_task
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1301,14 +1208,17 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_runner_on_start(host, task)
super(CallbackModule, self).v2_runner_on_start(host, task)
def v2_playbook_on_start(self, playbook):
self._diy_playbook = playbook
self._diy_loader = self._diy_playbook.get_loader()
self._diy_spec = self._get_output_specification(
loader=self._diy_loader, variables=self._get_vars(playbook=self._diy_playbook)
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1316,7 +1226,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_start(playbook)
super(CallbackModule, self).v2_playbook_on_start(playbook)
def v2_playbook_on_notify(self, handler, host):
self._diy_handler = handler
@@ -1325,8 +1235,11 @@ class CallbackModule(Default):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, host=self._diy_host, handler=self._diy_handler
),
playbook=self._diy_playbook,
play=self._diy_play,
host=self._diy_host,
handler=self._diy_handler
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1334,34 +1247,44 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_notify(handler, host)
super(CallbackModule, self).v2_playbook_on_notify(handler, host)
def v2_playbook_on_no_hosts_matched(self):
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_no_hosts_matched()
super(CallbackModule, self).v2_playbook_on_no_hosts_matched()
def v2_playbook_on_no_hosts_remaining(self):
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_no_hosts_remaining()
super(CallbackModule, self).v2_playbook_on_no_hosts_remaining()
def v2_playbook_on_task_start(self, task, is_conditional):
self._diy_task = task
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task),
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1369,7 +1292,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_task_start(task, is_conditional)
super(CallbackModule, self).v2_playbook_on_task_start(task, is_conditional)
# not implemented as the call to this is not implemented yet
def v2_playbook_on_cleanup_task_start(self, task):
@@ -1380,7 +1303,11 @@ class CallbackModule(Default):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task),
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1388,29 +1315,25 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_handler_task_start(task)
super(CallbackModule, self).v2_playbook_on_handler_task_start(task)
def v2_playbook_on_vars_prompt(
self,
varname,
private=True,
prompt=None,
encrypt=None,
confirm=False,
salt_size=None,
salt=None,
default=None,
unsafe=None,
):
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
def v2_playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None,
confirm=False, salt_size=None, salt=None, default=None,
unsafe=None):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_vars_prompt(
varname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe
super(CallbackModule, self).v2_playbook_on_vars_prompt(
varname, private, prompt, encrypt,
confirm, salt_size, salt, default,
unsafe
)
# not implemented as the call to this is not implemented yet
@@ -1425,7 +1348,11 @@ class CallbackModule(Default):
self._diy_play = play
self._diy_spec = self._get_output_specification(
loader=self._diy_loader, variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play)
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1433,14 +1360,18 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_play_start(play)
super(CallbackModule, self).v2_playbook_on_play_start(play)
def v2_playbook_on_stats(self, stats):
self._diy_stats = stats
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play, stats=self._diy_stats),
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
stats=self._diy_stats
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1448,7 +1379,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_stats(stats)
super(CallbackModule, self).v2_playbook_on_stats(stats)
def v2_playbook_on_include(self, included_file):
self._diy_included_file = included_file
@@ -1459,8 +1390,8 @@ class CallbackModule(Default):
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_included_file._task,
included_file=self._diy_included_file,
),
included_file=self._diy_included_file
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1468,14 +1399,17 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_playbook_on_include(included_file)
super(CallbackModule, self).v2_playbook_on_include(included_file)
def v2_on_file_diff(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
)
if self._using_diy(spec=self._diy_spec):
@@ -1483,4 +1417,4 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super().v2_on_file_diff(result)
super(CallbackModule, self).v2_on_file_diff(result)
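The DIY callback above repeatedly wraps its parent hook calls in `with self._suppress_stdout(enabled=self._using_diy(...))`. A minimal standalone sketch of that idea follows; the helper name and mechanics here are assumptions for illustration, not the plugin's actual implementation:

```python
import contextlib
import io
import sys


@contextlib.contextmanager
def suppress_stdout(enabled):
    # Swallow writes to stdout only while enabled, so the wrapped
    # (parent) callback can still run its side effects silently.
    if not enabled:
        yield
        return
    saved = sys.stdout
    sys.stdout = io.StringIO()  # discard output written inside the block
    try:
        yield
    finally:
        sys.stdout = saved  # always restore, even on exceptions
```

Used this way, output printed inside the `enabled=True` block is dropped while output outside it still reaches the real stream.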



@@ -81,6 +81,7 @@ import getpass
import socket
import time
import uuid
from collections import OrderedDict
from contextlib import closing
from os.path import basename
@@ -89,9 +90,8 @@ from ansible.errors import AnsibleError, AnsibleRuntimeError
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.plugins.callback import CallbackBase
ELASTIC_LIBRARY_IMPORT_ERROR: ImportError | None
try:
from elasticapm import Client, capture_span, instrument, label, trace_parent_from_string
from elasticapm import Client, capture_span, trace_parent_from_string, instrument, label
except ImportError as imp_exc:
ELASTIC_LIBRARY_IMPORT_ERROR = imp_exc
else:
@@ -115,9 +115,9 @@ class TaskData:
def add_host(self, host):
if host.uuid in self.host_data:
if host.status == "included":
if host.status == 'included':
# concatenate task include output from multiple items
host.result = f"{self.host_data[host.uuid].result}\n{host.result}"
host.result = f'{self.host_data[host.uuid].result}\n{host.result}'
else:
return
@@ -137,21 +137,21 @@ class HostData:
self.finish = time.time()
class ElasticSource:
class ElasticSource(object):
def __init__(self, display):
self.ansible_playbook = ""
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
try:
self.ip_address = socket.gethostbyname(socket.gethostname())
except Exception:
except Exception as e:
self.ip_address = None
self.user = getpass.getuser()
self._display = display
def start_task(self, tasks_data, hide_task_arguments, play_name, task):
"""record the start of a task for one or more hosts"""
""" record the start of a task for one or more hosts """
uuid = task._uuid
@@ -164,50 +164,38 @@ class ElasticSource:
args = None
if not task.no_log and not hide_task_arguments:
args = ", ".join((f"{k}={v}" for k, v in task.args.items()))
args = ', '.join((f'{k}={v}' for k, v in task.args.items()))
tasks_data[uuid] = TaskData(uuid, name, path, play_name, action, args)
def finish_task(self, tasks_data, status, result):
"""record the results of a task for a single host"""
""" record the results of a task for a single host """
task_uuid = result._task._uuid
if hasattr(result, "_host") and result._host is not None:
if hasattr(result, '_host') and result._host is not None:
host_uuid = result._host._uuid
host_name = result._host.name
else:
host_uuid = "include"
host_name = "include"
host_uuid = 'include'
host_name = 'include'
task = tasks_data[task_uuid]
task.add_host(HostData(host_uuid, host_name, status, result))
def generate_distributed_traces(
self,
tasks_data,
status,
end_time,
traceparent,
apm_service_name,
apm_server_url,
apm_verify_server_cert,
apm_secret_token,
apm_api_key,
):
"""generate distributed traces from the collected TaskData and HostData"""
def generate_distributed_traces(self, tasks_data, status, end_time, traceparent, apm_service_name,
apm_server_url, apm_verify_server_cert, apm_secret_token, apm_api_key):
""" generate distributed traces from the collected TaskData and HostData """
tasks = []
parent_start_time = None
for task in tasks_data.values():
for task_uuid, task in tasks_data.items():
if parent_start_time is None:
parent_start_time = task.start
tasks.append(task)
apm_cli = self.init_apm_client(
apm_server_url, apm_service_name, apm_verify_server_cert, apm_secret_token, apm_api_key
)
apm_cli = self.init_apm_client(apm_server_url, apm_service_name, apm_verify_server_cert, apm_secret_token, apm_api_key)
if apm_cli:
with closing(apm_cli):
instrument() # Only call this once, as early as possible.
@@ -223,86 +211,78 @@ class ElasticSource:
label(ansible_host_ip=self.ip_address)
for task_data in tasks:
for host_data in task_data.host_data.values():
for host_uuid, host_data in task_data.host_data.items():
self.create_span_data(apm_cli, task_data, host_data)
apm_cli.end_transaction(name=__name__, result=status, duration=end_time - parent_start_time)
def create_span_data(self, apm_cli, task_data, host_data):
"""create the span with the given TaskData and HostData"""
""" create the span with the given TaskData and HostData """
name = f"[{host_data.name}] {task_data.play}: {task_data.name}"
name = f'[{host_data.name}] {task_data.play}: {task_data.name}'
message = "success"
status = "success"
enriched_error_message = None
if host_data.status == "included":
if host_data.status == 'included':
rc = 0
else:
res = host_data.result._result
rc = res.get("rc", 0)
if host_data.status == "failed":
rc = res.get('rc', 0)
if host_data.status == 'failed':
message = self.get_error_message(res)
enriched_error_message = self.enrich_error_message(res)
status = "failure"
elif host_data.status == "skipped":
if "skip_reason" in res:
message = res["skip_reason"]
elif host_data.status == 'skipped':
if 'skip_reason' in res:
message = res['skip_reason']
else:
message = "skipped"
message = 'skipped'
status = "unknown"
with capture_span(
task_data.name,
start=task_data.start,
span_type="ansible.task.run",
duration=host_data.finish - task_data.start,
labels={
"ansible.task.args": task_data.args,
"ansible.task.message": message,
"ansible.task.module": task_data.action,
"ansible.task.name": name,
"ansible.task.result": rc,
"ansible.task.host.name": host_data.name,
"ansible.task.host.status": host_data.status,
},
) as span:
with capture_span(task_data.name,
start=task_data.start,
span_type="ansible.task.run",
duration=host_data.finish - task_data.start,
labels={"ansible.task.args": task_data.args,
"ansible.task.message": message,
"ansible.task.module": task_data.action,
"ansible.task.name": name,
"ansible.task.result": rc,
"ansible.task.host.name": host_data.name,
"ansible.task.host.status": host_data.status}) as span:
span.outcome = status
if "failure" in status:
exception = AnsibleRuntimeError(
message=f"{task_data.action}: {name} failed with error message {enriched_error_message}"
)
if 'failure' in status:
exception = AnsibleRuntimeError(message=f"{task_data.action}: {name} failed with error message {enriched_error_message}")
apm_cli.capture_exception(exc_info=(type(exception), exception, exception.__traceback__), handled=True)
def init_apm_client(self, apm_server_url, apm_service_name, apm_verify_server_cert, apm_secret_token, apm_api_key):
if apm_server_url:
return Client(
service_name=apm_service_name,
server_url=apm_server_url,
verify_server_cert=False,
secret_token=apm_secret_token,
api_key=apm_api_key,
use_elastic_traceparent_header=True,
debug=True,
)
return Client(service_name=apm_service_name,
server_url=apm_server_url,
verify_server_cert=False,
secret_token=apm_secret_token,
api_key=apm_api_key,
use_elastic_traceparent_header=True,
debug=True)
@staticmethod
def get_error_message(result):
if result.get("exception") is not None:
return ElasticSource._last_line(result["exception"])
return result.get("msg", "failed")
if result.get('exception') is not None:
return ElasticSource._last_line(result['exception'])
return result.get('msg', 'failed')
@staticmethod
def _last_line(text):
lines = text.strip().split("\n")
lines = text.strip().split('\n')
return lines[-1]
@staticmethod
def enrich_error_message(result):
message = result.get("msg", "failed")
exception = result.get("exception")
stderr = result.get("stderr")
return f'message: "{message}"\nexception: "{exception}"\nstderr: "{stderr}"'
message = result.get('msg', 'failed')
exception = result.get('exception')
stderr = result.get('stderr')
return f"message: \"{message}\"\nexception: \"{exception}\"\nstderr: \"{stderr}\""
class CallbackModule(CallbackBase):
@@ -311,12 +291,12 @@ class CallbackModule(CallbackBase):
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.elastic"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.elastic'
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
self.hide_task_arguments = None
self.apm_service_name = None
self.ansible_playbook = None
@@ -327,28 +307,28 @@ class CallbackModule(CallbackBase):
self.disabled = False
if ELASTIC_LIBRARY_IMPORT_ERROR:
raise AnsibleError(
"The `elastic-apm` must be installed to use this plugin"
) from ELASTIC_LIBRARY_IMPORT_ERROR
raise AnsibleError('The `elastic-apm` must be installed to use this plugin') from ELASTIC_LIBRARY_IMPORT_ERROR
self.tasks_data = OrderedDict()
self.elastic = ElasticSource(display=self._display)
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys,
var_options=var_options,
direct=direct)
self.hide_task_arguments = self.get_option("hide_task_arguments")
self.hide_task_arguments = self.get_option('hide_task_arguments')
self.apm_service_name = self.get_option("apm_service_name")
self.apm_service_name = self.get_option('apm_service_name')
if not self.apm_service_name:
self.apm_service_name = "ansible"
self.apm_service_name = 'ansible'
self.apm_server_url = self.get_option("apm_server_url")
self.apm_secret_token = self.get_option("apm_secret_token")
self.apm_api_key = self.get_option("apm_api_key")
self.apm_verify_server_cert = self.get_option("apm_verify_server_cert")
self.traceparent = self.get_option("traceparent")
self.apm_server_url = self.get_option('apm_server_url')
self.apm_secret_token = self.get_option('apm_secret_token')
self.apm_api_key = self.get_option('apm_api_key')
self.apm_verify_server_cert = self.get_option('apm_verify_server_cert')
self.traceparent = self.get_option('traceparent')
def v2_playbook_on_start(self, playbook):
self.ansible_playbook = basename(playbook._file_name)
@@ -357,29 +337,65 @@ class CallbackModule(CallbackBase):
self.play_name = play.get_name()
def v2_runner_on_no_hosts(self, task):
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
def v2_playbook_on_task_start(self, task, is_conditional):
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
def v2_playbook_on_cleanup_task_start(self, task):
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
def v2_playbook_on_handler_task_start(self, task):
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
def v2_runner_on_failed(self, result, ignore_errors=False):
self.errors += 1
self.elastic.finish_task(self.tasks_data, "failed", result)
self.elastic.finish_task(
self.tasks_data,
'failed',
result
)
def v2_runner_on_ok(self, result):
self.elastic.finish_task(self.tasks_data, "ok", result)
self.elastic.finish_task(
self.tasks_data,
'ok',
result
)
def v2_runner_on_skipped(self, result):
self.elastic.finish_task(self.tasks_data, "skipped", result)
self.elastic.finish_task(
self.tasks_data,
'skipped',
result
)
def v2_playbook_on_include(self, included_file):
self.elastic.finish_task(self.tasks_data, "included", included_file)
self.elastic.finish_task(
self.tasks_data,
'included',
included_file
)
def v2_playbook_on_stats(self, stats):
if self.errors == 0:
@@ -395,7 +411,7 @@ class CallbackModule(CallbackBase):
self.apm_server_url,
self.apm_verify_server_cert,
self.apm_secret_token,
self.apm_api_key,
self.apm_api_key
)
def v2_runner_on_async_failed(self, result, **kwargs):
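The elastic plugin defers its `elasticapm` import failure via a module-level `ELASTIC_LIBRARY_IMPORT_ERROR` sentinel and only raises once the callback is actually constructed. The same pattern reduced to a self-contained sketch (`some_optional_lib` and `Plugin` are hypothetical names):

```python
from __future__ import annotations

# Record, rather than raise, a missing optional dependency at import
# time; fail loudly only when the feature that needs it is used.
OPTIONAL_IMPORT_ERROR: ImportError | None
try:
    import some_optional_lib  # hypothetical optional dependency
except ImportError as imp_exc:
    OPTIONAL_IMPORT_ERROR = imp_exc
else:
    OPTIONAL_IMPORT_ERROR = None


class Plugin:
    def __init__(self):
        if OPTIONAL_IMPORT_ERROR:
            # Chain the original ImportError so the traceback shows
            # exactly which import failed.
            raise RuntimeError(
                "some_optional_lib must be installed to use this plugin"
            ) from OPTIONAL_IMPORT_ERROR
```

This keeps the module importable (so documentation and sanity tooling can load it) while still surfacing the root-cause ImportError to users.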



@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2016 maxn nikolaev.makc@gmail.com
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -54,31 +55,29 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.jabber"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.jabber'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
if not HAS_XMPP:
self._display.warning(
"The required python xmpp library (xmpppy) is not installed. "
"pip install git+https://github.com/ArchipelProject/xmpppy"
)
self._display.warning("The required python xmpp library (xmpppy) is not installed. "
"pip install git+https://github.com/ArchipelProject/xmpppy")
self.disabled = True
self.serv = os.getenv("JABBER_SERV")
self.j_user = os.getenv("JABBER_USER")
self.j_pass = os.getenv("JABBER_PASS")
self.j_to = os.getenv("JABBER_TO")
self.serv = os.getenv('JABBER_SERV')
self.j_user = os.getenv('JABBER_USER')
self.j_pass = os.getenv('JABBER_PASS')
self.j_to = os.getenv('JABBER_TO')
if (self.j_user or self.j_pass or self.serv or self.j_to) is None:
self.disabled = True
self._display.warning(
"Jabber CallBack wants the JABBER_SERV, JABBER_USER, JABBER_PASS and JABBER_TO environment variables"
)
self._display.warning('Jabber CallBack wants the JABBER_SERV, JABBER_USER, JABBER_PASS and JABBER_TO environment variables')
def send_msg(self, msg):
"""Send message"""
@@ -87,7 +86,7 @@ class CallbackModule(CallbackBase):
client.connect(server=(self.serv, 5222))
client.auth(jid.getNode(), self.j_pass, resource=jid.getResource())
message = xmpp.Message(self.j_to, msg)
message.setAttr("type", "chat")
message.setAttr('type', 'chat')
client.send(message)
client.disconnect()
@@ -111,9 +110,9 @@ class CallbackModule(CallbackBase):
unreachable = False
for h in hosts:
s = stats.summarize(h)
if s["failures"] > 0:
if s['failures'] > 0:
failures = True
if s["unreachable"] > 0:
if s['unreachable'] > 0:
unreachable = True
if failures or unreachable:
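The jabber callback's end-of-run check walks `stats.summarize(h)` per host and flips flags when any host saw failures or became unreachable. The decision can be condensed to the following sketch, which operates on plain summary dicts instead of Ansible's stats object:

```python
def run_failed(summaries):
    # A run is considered failed if any host's summary reports
    # failed tasks or the host was unreachable.
    return any(
        s["failures"] > 0 or s["unreachable"] > 0
        for s in summaries
    )
```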



@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -27,15 +28,16 @@ options:
key: log_folder
"""
import json
import os
import time
from collections.abc import MutableMapping
import json
from ansible.utils.path import makedirs_safe
from ansible.module_utils.common.text.converters import to_bytes
from collections.abc import MutableMapping
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.plugins.callback import CallbackBase
from ansible.utils.path import makedirs_safe
# NOTE: in Ansible 1.2 or later general logging is available without
# this plugin, just set ANSIBLE_LOG_PATH as an environment variable
@@ -48,10 +50,9 @@ class CallbackModule(CallbackBase):
"""
logs playbook results, per host, in /var/log/ansible/hosts
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.log_plays"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.log_plays'
CALLBACK_NEEDS_WHITELIST = True
TIME_FORMAT = "%b %d %Y %H:%M:%S"
@@ -61,10 +62,11 @@ class CallbackModule(CallbackBase):
return f"{now} - {playbook} - {task_name} - {task_action} - {category} - {data}\n\n"
def __init__(self):
super().__init__()
super(CallbackModule, self).__init__()
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.log_folder = self.get_option("log_folder")
@@ -74,12 +76,12 @@ class CallbackModule(CallbackBase):
def log(self, result, category):
data = result._result
if isinstance(data, MutableMapping):
if "_ansible_verbose_override" in data:
if '_ansible_verbose_override' in data:
# avoid logging extraneous data
data = "omitted"
data = 'omitted'
else:
data = data.copy()
invocation = data.pop("invocation", None)
invocation = data.pop('invocation', None)
data = json.dumps(data, cls=AnsibleJSONEncoder)
if invocation is not None:
data = f"{json.dumps(invocation)} => {data} "
@@ -92,25 +94,25 @@ class CallbackModule(CallbackBase):
fd.write(msg)
def v2_runner_on_failed(self, result, ignore_errors=False):
self.log(result, "FAILED")
self.log(result, 'FAILED')
def v2_runner_on_ok(self, result):
self.log(result, "OK")
self.log(result, 'OK')
def v2_runner_on_skipped(self, result):
self.log(result, "SKIPPED")
self.log(result, 'SKIPPED')
def v2_runner_on_unreachable(self, result):
self.log(result, "UNREACHABLE")
self.log(result, 'UNREACHABLE')
def v2_runner_on_async_failed(self, result):
self.log(result, "ASYNC_FAILED")
self.log(result, 'ASYNC_FAILED')
def v2_playbook_on_start(self, playbook):
self.playbook = playbook._file_name
def v2_playbook_on_import_for_host(self, result, imported_file):
self.log(result, "IMPORTED", imported_file)
self.log(result, 'IMPORTED', imported_file)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
self.log(result, "NOTIMPORTED", missing_file)
self.log(result, 'NOTIMPORTED', missing_file)
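The `log()` method in log_plays either replaces verbose-override results with the string `omitted` or pops `invocation` out of the payload and prepends it to the serialized JSON. As a standalone sketch, using plain dicts instead of Ansible result objects and the stdlib encoder instead of `AnsibleJSONEncoder`:

```python
import json


def scrub_result(data):
    # Avoid logging extraneous data for verbose-override results.
    if "_ansible_verbose_override" in data:
        return "omitted"
    data = data.copy()  # do not mutate the caller's result dict
    invocation = data.pop("invocation", None)
    serialized = json.dumps(data)
    if invocation is not None:
        # Keep the module invocation visible ahead of the result body.
        serialized = f"{json.dumps(invocation)} => {serialized} "
    return serialized
```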



@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -51,13 +52,14 @@ examples: |-
shared_key = dZD0kCbKl3ehZG6LHFMuhtE0yHiFCmetzFMc2u+roXIUQuatqU924SsAAAAPemhjbGlAemhjbGktTUJQAQIDBA==
"""
import base64
import getpass
import hashlib
import hmac
import base64
import json
import socket
import uuid
import socket
import getpass
from os.path import basename
from ansible.module_utils.ansible_release import __version__ as ansible_version
@@ -70,7 +72,7 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
)
class AzureLogAnalyticsSource:
class AzureLogAnalyticsSource(object):
def __init__(self):
self.ansible_check_mode = False
self.ansible_playbook = ""
@@ -82,10 +84,11 @@ class AzureLogAnalyticsSource:
def __build_signature(self, date, workspace_id, shared_key, content_length):
# Build authorisation signature for Azure log analytics API call
sigs = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
utf8_sigs = sigs.encode("utf-8")
utf8_sigs = sigs.encode('utf-8')
decoded_shared_key = base64.b64decode(shared_key)
hmac_sha256_sigs = hmac.new(decoded_shared_key, utf8_sigs, digestmod=hashlib.sha256).digest()
encoded_hash = base64.b64encode(hmac_sha256_sigs).decode("utf-8")
hmac_sha256_sigs = hmac.new(
decoded_shared_key, utf8_sigs, digestmod=hashlib.sha256).digest()
encoded_hash = base64.b64encode(hmac_sha256_sigs).decode('utf-8')
signature = f"SharedKey {workspace_id}:{encoded_hash}"
return signature
@@ -93,10 +96,10 @@ class AzureLogAnalyticsSource:
return f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
def __rfc1123date(self):
return now().strftime("%a, %d %b %Y %H:%M:%S GMT")
return now().strftime('%a, %d %b %Y %H:%M:%S GMT')
def send_event(self, workspace_id, shared_key, state, result, runtime):
if result._task_fields["args"].get("_ansible_check_mode") is True:
if result._task_fields['args'].get('_ansible_check_mode') is True:
self.ansible_check_mode = True
if result._task._role:
@@ -105,31 +108,31 @@ class AzureLogAnalyticsSource:
ansible_role = None
data = {}
data["uuid"] = result._task._uuid
data["session"] = self.session
data["status"] = state
data["timestamp"] = self.__rfc1123date()
data["host"] = self.host
data["user"] = self.user
data["runtime"] = runtime
data["ansible_version"] = ansible_version
data["ansible_check_mode"] = self.ansible_check_mode
data["ansible_host"] = result._host.name
data["ansible_playbook"] = self.ansible_playbook
data["ansible_role"] = ansible_role
data["ansible_task"] = result._task_fields
data['uuid'] = result._task._uuid
data['session'] = self.session
data['status'] = state
data['timestamp'] = self.__rfc1123date()
data['host'] = self.host
data['user'] = self.user
data['runtime'] = runtime
data['ansible_version'] = ansible_version
data['ansible_check_mode'] = self.ansible_check_mode
data['ansible_host'] = result._host.name
data['ansible_playbook'] = self.ansible_playbook
data['ansible_role'] = ansible_role
data['ansible_task'] = result._task_fields
# Removing args since it can contain sensitive data
if "args" in data["ansible_task"]:
data["ansible_task"].pop("args")
data["ansible_result"] = result._result
if "content" in data["ansible_result"]:
data["ansible_result"].pop("content")
if 'args' in data['ansible_task']:
data['ansible_task'].pop('args')
data['ansible_result'] = result._result
if 'content' in data['ansible_result']:
data['ansible_result'].pop('content')
# Adding extra vars info
data["extra_vars"] = self.extra_vars
data['extra_vars'] = self.extra_vars
# Preparing the playbook logs as JSON format and send to Azure log analytics
jsondata = json.dumps({"event": data}, cls=AnsibleJSONEncoder, sort_keys=True)
jsondata = json.dumps({'event': data}, cls=AnsibleJSONEncoder, sort_keys=True)
content_length = len(jsondata)
rfc1123date = self.__rfc1123date()
signature = self.__build_signature(rfc1123date, workspace_id, shared_key, content_length)
@@ -139,35 +142,38 @@ class AzureLogAnalyticsSource:
workspace_url,
jsondata,
headers={
"content-type": "application/json",
"Authorization": signature,
"Log-Type": "ansible_playbook",
"x-ms-date": rfc1123date,
'content-type': 'application/json',
'Authorization': signature,
'Log-Type': 'ansible_playbook',
'x-ms-date': rfc1123date
},
method="POST",
method='POST'
)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "loganalytics"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'loganalytics'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
self.start_datetimes = {} # Collect task start times
self.workspace_id = None
self.shared_key = None
self.loganalytics = AzureLogAnalyticsSource()
def _seconds_since_start(self, result):
return (now() - self.start_datetimes[result._task._uuid]).total_seconds()
return (
now() -
self.start_datetimes[result._task._uuid]
).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.workspace_id = self.get_option("workspace_id")
self.shared_key = self.get_option("shared_key")
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.workspace_id = self.get_option('workspace_id')
self.shared_key = self.get_option('shared_key')
def v2_playbook_on_play_start(self, play):
vm = play.get_variable_manager()
@@ -185,25 +191,45 @@ class CallbackModule(CallbackBase):
def v2_runner_on_ok(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id, self.shared_key, "OK", result, self._seconds_since_start(result)
self.workspace_id,
self.shared_key,
'OK',
result,
self._seconds_since_start(result)
)
def v2_runner_on_skipped(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id, self.shared_key, "SKIPPED", result, self._seconds_since_start(result)
self.workspace_id,
self.shared_key,
'SKIPPED',
result,
self._seconds_since_start(result)
)
def v2_runner_on_failed(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id, self.shared_key, "FAILED", result, self._seconds_since_start(result)
self.workspace_id,
self.shared_key,
'FAILED',
result,
self._seconds_since_start(result)
)
def runner_on_async_failed(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id, self.shared_key, "FAILED", result, self._seconds_since_start(result)
self.workspace_id,
self.shared_key,
'FAILED',
result,
self._seconds_since_start(result)
)
def v2_runner_on_unreachable(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id, self.shared_key, "UNREACHABLE", result, self._seconds_since_start(result)
self.workspace_id,
self.shared_key,
'UNREACHABLE',
result,
self._seconds_since_start(result)
)
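The `__build_signature` method above follows the SharedKey scheme of the legacy Log Analytics HTTP Data Collector API: an HMAC-SHA256 over a fixed string-to-sign, keyed with the base64-decoded workspace shared key. A self-contained sketch of the same construction (field layout as shown in the diff; verify against Microsoft's API documentation before relying on it):

```python
import base64
import hashlib
import hmac


def build_signature(date, workspace_id, shared_key, content_length):
    # String-to-sign layout: verb, body length, content type,
    # x-ms-date header, and the /api/logs resource path.
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date}\n/api/logs"
    )
    # The shared key is distributed base64-encoded; decode before signing.
    key = base64.b64decode(shared_key)
    digest = hmac.new(
        key, string_to_sign.encode("utf-8"), digestmod=hashlib.sha256
    ).digest()
    encoded = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {workspace_id}:{encoded}"
```

The resulting value goes into the `Authorization` header alongside the matching `x-ms-date` header; the same date string must be used in both places or the server-side verification fails.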



@@ -1,342 +0,0 @@
#!/usr/bin/env python
# Copyright (c) Ansible project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = """
name: loganalytics_ingestion
type: notification
short_description: Posts task results to an Azure Log Analytics workspace using the new Logs Ingestion API
author:
- Wade Cline (@wtcline-intc) <wade.cline@intel.com>
- Sriramoju Vishal Bharath (@vsh47) <sriramoju.vishal.bharath@intel.com>
- Cyrus Li (@zhcli) <cyrus1006@gmail.com>
description:
- This callback plugin will post task results in JSON format to an Azure Log Analytics workspace using the new Logs Ingestion API.
version_added: "12.4.0"
requirements:
- The callback plugin has been enabled.
- An Azure Log Analytics workspace has been established.
- A Data Collection Rule (DCR) and custom table are created.
options:
dce_url:
description: URL of the Data Collection Endpoint (DCE) for Azure Logs Ingestion API.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_DCE_URL
ini:
- section: callback_loganalytics
key: dce_url
dcr_id:
description: Data Collection Rule (DCR) ID for the Azure Log Ingestion API.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_DCR_ID
ini:
- section: callback_loganalytics
key: dcr_id
disable_attempts:
description:
- When O(disable_on_failure=true), number of plugin failures that must occur before the plugin is disabled.
- This helps prevent outright plugin failure from a single, transient network issue.
type: int
default: 3
env:
- name: ANSIBLE_LOGANALYTICS_DISABLE_ATTEMPTS
ini:
- section: callback_loganalytics
key: disable_attempts
disable_on_failure:
description: Stop trying to send data on plugin failure.
type: bool
default: true
env:
- name: ANSIBLE_LOGANALYTICS_DISABLE_ON_FAILURE
ini:
- section: callback_loganalytics
key: disable_on_failure
client_id:
description: Client ID of the Azure App registration for OAuth2 authentication ("Modern Authentication").
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_CLIENT_ID
ini:
- section: callback_loganalytics
key: client_id
client_secret:
description: Client Secret of the Azure App registration.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_CLIENT_SECRET
ini:
- section: callback_loganalytics
key: client_secret
include_content:
description: Send the content to the Azure Log Analytics workspace.
type: bool
default: false
env:
- name: ANSIBLE_LOGANALYTICS_INCLUDE_CONTENT
ini:
- section: callback_loganalytics
key: include_content
include_task_args:
description: Send the task args to the Azure Log Analytics workspace.
type: bool
default: false
env:
- name: ANSIBLE_LOGANALYTICS_INCLUDE_TASK_ARGS
ini:
- section: callback_loganalytics
key: include_task_args
stream_name:
description: The name of the stream used to send the logs to the Azure Log Analytics workspace.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_STREAM_NAME
ini:
- section: callback_loganalytics
key: stream_name
tenant_id:
description: Tenant ID for the Azure Active Directory.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_TENANT_ID
ini:
- section: callback_loganalytics
key: tenant_id
timeout:
description: Timeout for the HTTP requests to the Azure Log Analytics API.
type: int
default: 2
env:
- name: ANSIBLE_LOGANALYTICS_TIMEOUT
ini:
- section: callback_loganalytics
key: timeout
seealso:
- name: Logs Ingestion API
description: Overview of the Logs Ingestion API in Azure Monitor.
link: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview
notes:
- Triple verbosity logging (C(-vvv)) can be used to generate JSON sample data for creating the table schema in Azure Log Analytics.
Search for the string C(Event Data:) in the output to locate the data sample.
"""
EXAMPLES = """
examples: |
Enable the plugin in ansible.cfg:
[defaults]
callbacks_enabled = community.general.loganalytics_ingestion
Set the environment variables:
export ANSIBLE_LOGANALYTICS_DCE_URL=https://my-dce.ingest.monitor.azure.com
export ANSIBLE_LOGANALYTICS_DCR_ID=dcr-xxxxxx
export ANSIBLE_LOGANALYTICS_CLIENT_ID=xxxxxxxx
export ANSIBLE_LOGANALYTICS_CLIENT_SECRET=xxxxxxxx
export ANSIBLE_LOGANALYTICS_TENANT_ID=xxxxxxxx
export ANSIBLE_LOGANALYTICS_STREAM_NAME=Custom-MyTable
"""
import getpass
import json
import socket
import uuid
from datetime import datetime, timedelta, timezone
from os.path import basename
from urllib.parse import urlencode
from ansible.module_utils.urls import open_url
from ansible.plugins.callback import CallbackBase
from ansible.utils.display import Display
display = Display()
class AzureLogAnalyticsIngestionSource:
def __init__(
self,
dce_url,
dcr_id,
disable_attempts,
disable_on_failure,
client_id,
client_secret,
tenant_id,
stream_name,
include_task_args,
include_content,
timeout,
fqcn,
):
self.dce_url = dce_url
self.dcr_id = dcr_id
self.disabled = False
self.disable_attempts = disable_attempts
self.disable_on_failure = disable_on_failure
self.client_id = client_id
self.client_secret = client_secret
self.failures = 0
self.tenant_id = tenant_id
self.stream_name = stream_name
self.include_task_args = include_task_args
self.include_content = include_content
self.token_expiration_time = None
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
self.user = getpass.getuser()
self.timeout = timeout
self.fqcn = fqcn
self.bearer_token = self.get_bearer_token()
# OAuth2 authentication method to get a Bearer token
# This replaces the shared_key authentication mechanism
def get_bearer_token(self):
url = f"https://login.microsoftonline.com/{self.tenant_id}/oauth2/v2.0/token"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = urlencode(
{
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
# The scope value comes from https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#headers
# and https://learn.microsoft.com/en-us/entra/identity-platform/scopes-oidc#the-default-scope
"scope": "https://monitor.azure.com/.default",
}
)
response = open_url(url, data=data, force=True, headers=headers, method="POST", timeout=self.timeout)
j = json.loads(response.read().decode("utf-8"))
self.token_expiration_time = datetime.now() + timedelta(seconds=j.get("expires_in"))
return j.get("access_token")
def is_token_valid(self):
return datetime.now() + timedelta(seconds=10) < self.token_expiration_time
# Method to send event data to the Azure Logs Ingestion API
# This replaces the legacy API call and now uses the Logs Ingestion API endpoint
def send_event(self, event_data):
if not self.is_token_valid():
self.bearer_token = self.get_bearer_token()
ingestion_url = (
f"{self.dce_url}/dataCollectionRules/{self.dcr_id}/streams/{self.stream_name}?api-version=2023-01-01"
)
headers = {"Authorization": f"Bearer {self.bearer_token}", "Content-Type": "application/json"}
open_url(ingestion_url, data=json.dumps(event_data), headers=headers, method="POST", timeout=self.timeout)
def _rfc1123date(self):
return datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
# This method wraps the private method with the appropriate error handling.
def send_to_loganalytics(self, playbook_name, result, state):
if self.disabled:
return
try:
self._send_to_loganalytics(playbook_name, result, state)
except Exception as e:
display.warning(f"{self.fqcn} callback plugin failure: {e}.")
if self.disable_on_failure:
self.failures += 1
if self.failures >= self.disable_attempts:
display.warning(
f"{self.fqcn} callback plugin failures exceed maximum of '{self.disable_attempts}'! Disabling plugin!"
)
self.disabled = True
else:
display.v(f"{self.fqcn} callback plugin failure {self.failures}/{self.disable_attempts}")
def _send_to_loganalytics(self, playbook_name, result, state):
ansible_role = str(result._task._role) if result._task._role else None
# Include/Exclude task args
if not self.include_task_args:
result._task_fields.pop("args", None)
# Include/Exclude content
if not self.include_content:
result._result.pop("content", None)
# Build the event data
event_data = [
{
"TimeGenerated": self._rfc1123date(),
"Host": result._host.name,
"User": self.user,
"Playbook": playbook_name,
"Role": ansible_role,
"TaskName": result._task.get_name(),
"Task": result._task_fields,
"Action": result._task_fields["action"],
"State": state,
"Result": result._result,
"Session": self.session,
}
]
# The data displayed here can be used as a sample file in order to create the table's schema.
display.vvv(f"Event Data: {json.dumps(event_data)}")
self.send_event(event_data)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "loganalytics_ingestion"
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super().__init__(display=display)
self.start_datetimes = {}
self.playbook_name = None
self.azure_loganalytics = None
self.fqcn = f"community.general.{self.CALLBACK_NAME}"
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# Set options for the new Azure Logs Ingestion API configuration
self.client_id = self.get_option("client_id")
self.client_secret = self.get_option("client_secret")
self.dce_url = self.get_option("dce_url")
self.dcr_id = self.get_option("dcr_id")
self.disable_attempts = self.get_option("disable_attempts")
self.disable_on_failure = self.get_option("disable_on_failure")
self.include_content = self.get_option("include_content")
self.include_task_args = self.get_option("include_task_args")
self.stream_name = self.get_option("stream_name")
self.tenant_id = self.get_option("tenant_id")
self.timeout = self.get_option("timeout")
# Initialize the AzureLogAnalyticsIngestionSource with the new settings
self.azure_loganalytics = AzureLogAnalyticsIngestionSource(
self.dce_url,
self.dcr_id,
self.disable_attempts,
self.disable_on_failure,
self.client_id,
self.client_secret,
self.tenant_id,
self.stream_name,
self.include_task_args,
self.include_content,
self.timeout,
self.fqcn,
)
def v2_playbook_on_start(self, playbook):
self.playbook_name = basename(playbook._file_name)
# Build event data and send it to the Logs Ingestion API
def v2_runner_on_failed(self, result, **kwargs):
self.azure_loganalytics.send_to_loganalytics(self.playbook_name, result, "FAILED")
def v2_runner_on_ok(self, result, **kwargs):
self.azure_loganalytics.send_to_loganalytics(self.playbook_name, result, "OK")
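For reference, the two HTTP requests the plugin issues — the OAuth2 client-credentials token call and the POST to the Logs Ingestion endpoint — can be sketched in isolation. This is a standalone illustration, not part of the plugin; the tenant, DCE URL, DCR ID, and stream name below are placeholders.

```python
# Sketch of the request construction done by get_bearer_token() and
# send_event() above. No network calls are made here; this only builds
# the URLs and the urlencoded token request body.
from urllib.parse import urlencode

TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"


def build_token_request(tenant_id, client_id, client_secret):
    """Return (url, body) for the OAuth2 client-credentials token call."""
    url = TOKEN_URL.format(tenant=tenant_id)
    body = urlencode(
        {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            # Scope required by the Logs Ingestion API.
            "scope": "https://monitor.azure.com/.default",
        }
    )
    return url, body


def build_ingestion_url(dce_url, dcr_id, stream_name):
    """Return the Logs Ingestion endpoint the event batch is POSTed to."""
    return f"{dce_url}/dataCollectionRules/{dcr_id}/streams/{stream_name}?api-version=2023-01-01"


# Placeholder values, mirroring the environment variables from EXAMPLES.
token_url, token_body = build_token_request("my-tenant-id", "my-client-id", "my-secret")
ingestion_url = build_ingestion_url(
    "https://my-dce.ingest.monitor.azure.com", "dcr-xxxxxx", "Custom-MyTable"
)
```

The plugin then sends `token_body` as a POST to `token_url` with a `Content-Type: application/x-www-form-urlencoded` header, and POSTs the JSON event batch to `ingestion_url` with the resulting bearer token.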


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Samir Musali <samir.musali@logdna.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -55,17 +56,15 @@ options:
default: ansible
"""
import json
import logging
import json
import socket
from uuid import getnode
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.plugins.callback import CallbackBase
from ansible.parsing.ajson import AnsibleJSONEncoder
try:
from logdna import LogDNAHandler
HAS_LOGDNA = True
except ImportError:
HAS_LOGDNA = False
@@ -74,12 +73,12 @@ except ImportError:
# Getting MAC Address of system:
def get_mac():
mac = f"{getnode():012x}"
return ":".join(map(lambda index: mac[index : index + 2], range(int(len(mac) / 2))))
return ":".join(map(lambda index: mac[index:index + 2], range(int(len(mac) / 2))))
# Getting hostname of system:
def get_hostname():
return str(socket.gethostname()).split(".local", 1)[0]
return str(socket.gethostname()).split('.local', 1)[0]
# Getting IP of system:
@@ -89,10 +88,10 @@ def get_ip():
except Exception:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
s.connect(("10.255.255.255", 1))
s.connect(('10.255.255.255', 1))
IP = s.getsockname()[0]
except Exception:
IP = "127.0.0.1"
IP = '127.0.0.1'
finally:
s.close()
return IP
@@ -109,13 +108,14 @@ def isJSONable(obj):
# LogDNA Callback Module:
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 0.1
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.logdna"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logdna'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
self.disabled = True
self.playbook_name = None
@@ -126,29 +126,29 @@ class CallbackModule(CallbackBase):
self.conf_tags = None
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.conf_key = self.get_option("conf_key")
self.plugin_ignore_errors = self.get_option("plugin_ignore_errors")
self.conf_hostname = self.get_option("conf_hostname")
self.conf_tags = self.get_option("conf_tags")
self.conf_key = self.get_option('conf_key')
self.plugin_ignore_errors = self.get_option('plugin_ignore_errors')
self.conf_hostname = self.get_option('conf_hostname')
self.conf_tags = self.get_option('conf_tags')
self.mac = get_mac()
self.ip = get_ip()
if self.conf_hostname is None:
self.conf_hostname = get_hostname()
self.conf_tags = self.conf_tags.split(",")
self.conf_tags = self.conf_tags.split(',')
if HAS_LOGDNA:
self.log = logging.getLogger("logdna")
self.log = logging.getLogger('logdna')
self.log.setLevel(logging.INFO)
self.options = {"hostname": self.conf_hostname, "mac": self.mac, "index_meta": True}
self.options = {'hostname': self.conf_hostname, 'mac': self.mac, 'index_meta': True}
self.log.addHandler(LogDNAHandler(self.conf_key, self.options))
self.disabled = False
else:
self.disabled = True
self._display.warning("WARNING:\nPlease, install LogDNA Python Package: `pip install logdna`")
self._display.warning('WARNING:\nPlease, install LogDNA Python Package: `pip install logdna`')
def metaIndexing(self, meta):
invalidKeys = []
@@ -160,25 +160,25 @@ class CallbackModule(CallbackBase):
if ninvalidKeys > 0:
for key in invalidKeys:
del meta[key]
meta["__errors"] = f"These keys have been sanitized: {', '.join(invalidKeys)}"
meta['__errors'] = f"These keys have been sanitized: {', '.join(invalidKeys)}"
return meta
def sanitizeJSON(self, data):
try:
return json.loads(json.dumps(data, sort_keys=True, cls=AnsibleJSONEncoder))
except Exception:
return {"warnings": ["JSON Formatting Issue", json.dumps(data, sort_keys=True, cls=AnsibleJSONEncoder)]}
return {'warnings': ['JSON Formatting Issue', json.dumps(data, sort_keys=True, cls=AnsibleJSONEncoder)]}
def flush(self, log, options):
if HAS_LOGDNA:
self.log.info(json.dumps(log), options)
def sendLog(self, host, category, logdata):
options = {"app": "ansible", "meta": {"playbook": self.playbook_name, "host": host, "category": category}}
logdata["info"].pop("invocation", None)
warnings = logdata["info"].pop("warnings", None)
options = {'app': 'ansible', 'meta': {'playbook': self.playbook_name, 'host': host, 'category': category}}
logdata['info'].pop('invocation', None)
warnings = logdata['info'].pop('warnings', None)
if warnings is not None:
self.flush({"warn": warnings}, options)
self.flush({'warn': warnings}, options)
self.flush(logdata, options)
def v2_playbook_on_start(self, playbook):
@@ -189,21 +189,21 @@ class CallbackModule(CallbackBase):
result = dict()
for host in stats.processed.keys():
result[host] = stats.summarize(host)
self.sendLog(self.conf_hostname, "STATS", {"info": self.sanitizeJSON(result)})
self.sendLog(self.conf_hostname, 'STATS', {'info': self.sanitizeJSON(result)})
def runner_on_failed(self, host, res, ignore_errors=False):
if self.plugin_ignore_errors:
ignore_errors = self.plugin_ignore_errors
self.sendLog(host, "FAILED", {"info": self.sanitizeJSON(res), "ignore_errors": ignore_errors})
self.sendLog(host, 'FAILED', {'info': self.sanitizeJSON(res), 'ignore_errors': ignore_errors})
def runner_on_ok(self, host, res):
self.sendLog(host, "OK", {"info": self.sanitizeJSON(res)})
self.sendLog(host, 'OK', {'info': self.sanitizeJSON(res)})
def runner_on_unreachable(self, host, res):
self.sendLog(host, "UNREACHABLE", {"info": self.sanitizeJSON(res)})
self.sendLog(host, 'UNREACHABLE', {'info': self.sanitizeJSON(res)})
def runner_on_async_failed(self, host, res, jid):
self.sendLog(host, "ASYNC_FAILED", {"info": self.sanitizeJSON(res), "job_id": jid})
self.sendLog(host, 'ASYNC_FAILED', {'info': self.sanitizeJSON(res), 'job_id': jid})
def runner_on_async_ok(self, host, res, jid):
self.sendLog(host, "ASYNC_OK", {"info": self.sanitizeJSON(res), "job_id": jid})
self.sendLog(host, 'ASYNC_OK', {'info': self.sanitizeJSON(res), 'job_id': jid})


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Logentries.com, Jimmy Tang <jimmy.tang@logentries.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -96,21 +97,19 @@ examples: >-
"""
import os
import random
import socket
import random
import time
import uuid
try:
import certifi
HAS_CERTIFI = True
except ImportError:
HAS_CERTIFI = False
try:
import flatdict
HAS_FLATDICT = True
except ImportError:
HAS_FLATDICT = False
@@ -122,8 +121,9 @@ from ansible.plugins.callback import CallbackBase
# * Better formatting of output before sending out to logentries data/api nodes.
class PlainTextSocketAppender:
def __init__(self, display, LE_API="data.logentries.com", LE_PORT=80, LE_TLS_PORT=443):
class PlainTextSocketAppender(object):
def __init__(self, display, LE_API='data.logentries.com', LE_PORT=80, LE_TLS_PORT=443):
self.LE_API = LE_API
self.LE_PORT = LE_PORT
self.LE_TLS_PORT = LE_TLS_PORT
@@ -132,7 +132,7 @@ class PlainTextSocketAppender:
# Error message displayed when an incorrect Token has been detected
self.INVALID_TOKEN = "\n\nIt appears the LOGENTRIES_TOKEN parameter you entered is incorrect!\n\n"
# Unicode Line separator character \u2028
self.LINE_SEP = "\u2028"
self.LINE_SEP = '\u2028'
self._display = display
self._conn = None
@@ -171,14 +171,14 @@ class PlainTextSocketAppender:
def put(self, data):
# Replace newlines with Unicode line separator
# for multi-line events
data = to_text(data, errors="surrogate_or_strict")
multiline = data.replace("\n", self.LINE_SEP)
data = to_text(data, errors='surrogate_or_strict')
multiline = data.replace('\n', self.LINE_SEP)
multiline += "\n"
# Send data, reconnect if needed
while True:
try:
self._conn.send(to_bytes(multiline, errors="surrogate_or_strict"))
except OSError:
self._conn.send(to_bytes(multiline, errors='surrogate_or_strict'))
except socket.error:
self.reopen_connection()
continue
break
@@ -188,7 +188,6 @@ class PlainTextSocketAppender:
try:
import ssl
HAS_SSL = True
except ImportError: # for systems without TLS support.
SocketAppender = PlainTextSocketAppender
@@ -200,28 +199,27 @@ else:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
context = ssl.create_default_context(
purpose=ssl.Purpose.SERVER_AUTH,
cafile=certifi.where(),
)
cafile=certifi.where(), )
sock = context.wrap_socket(
sock=sock,
do_handshake_on_connect=True,
suppress_ragged_eofs=True,
)
suppress_ragged_eofs=True, )
sock.connect((self.LE_API, self.LE_TLS_PORT))
self._conn = sock
SocketAppender = TLSSocketAppender # type: ignore
SocketAppender = TLSSocketAppender
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.logentries"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logentries'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
# TODO: allow for alternate posting methods (REST/UDP/agent/etc)
super().__init__()
super(CallbackModule, self).__init__()
# verify dependencies
if not HAS_SSL:
@@ -229,9 +227,7 @@ class CallbackModule(CallbackBase):
if not HAS_CERTIFI:
self.disabled = True
self._display.warning(
"The `certifi` python module is not installed.\nDisabling the Logentries callback plugin."
)
self._display.warning('The `certifi` python module is not installed.\nDisabling the Logentries callback plugin.')
self.le_jobid = str(uuid.uuid4())
@@ -239,47 +235,41 @@ class CallbackModule(CallbackBase):
self.timeout = 10
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# get options
try:
self.api_url = self.get_option("api")
self.api_port = self.get_option("port")
self.api_tls_port = self.get_option("tls_port")
self.use_tls = self.get_option("use_tls")
self.flatten = self.get_option("flatten")
self.api_url = self.get_option('api')
self.api_port = self.get_option('port')
self.api_tls_port = self.get_option('tls_port')
self.use_tls = self.get_option('use_tls')
self.flatten = self.get_option('flatten')
except KeyError as e:
self._display.warning(f"Missing option for Logentries callback plugin: {e}")
self.disabled = True
try:
self.token = self.get_option("token")
except KeyError:
self._display.warning(
"Logentries token was not provided, this is required for this callback to operate, disabling"
)
self.token = self.get_option('token')
except KeyError as e:
self._display.warning('Logentries token was not provided, this is required for this callback to operate, disabling')
self.disabled = True
if self.flatten and not HAS_FLATDICT:
self.disabled = True
self._display.warning(
"You have chosen to flatten and the `flatdict` python module is not installed.\nDisabling the Logentries callback plugin."
)
self._display.warning('You have chosen to flatten and the `flatdict` python module is not installed.\nDisabling the Logentries callback plugin.')
self._initialize_connections()
def _initialize_connections(self):
if not self.disabled:
if self.use_tls:
self._display.vvvv(f"Connecting to {self.api_url}:{self.api_tls_port} with TLS")
self._appender = TLSSocketAppender(
display=self._display, LE_API=self.api_url, LE_TLS_PORT=self.api_tls_port
)
self._appender = TLSSocketAppender(display=self._display, LE_API=self.api_url, LE_TLS_PORT=self.api_tls_port)
else:
self._display.vvvv(f"Connecting to {self.api_url}:{self.api_port}")
self._appender = PlainTextSocketAppender(
display=self._display, LE_API=self.api_url, LE_PORT=self.api_port
)
self._appender = PlainTextSocketAppender(display=self._display, LE_API=self.api_url, LE_PORT=self.api_port)
self._appender.reopen_connection()
def emit_formatted(self, record):
@@ -290,50 +280,50 @@ class CallbackModule(CallbackBase):
self.emit(self._dump_results(record))
def emit(self, record):
msg = record.rstrip("\n")
msg = record.rstrip('\n')
msg = f"{self.token} {msg}"
self._appender.put(msg)
self._display.vvvv("Sent event to logentries")
def _set_info(self, host, res):
return {"le_jobid": self.le_jobid, "hostname": host, "results": res}
return {'le_jobid': self.le_jobid, 'hostname': host, 'results': res}
def runner_on_ok(self, host, res):
results = self._set_info(host, res)
results["status"] = "OK"
results['status'] = 'OK'
self.emit_formatted(results)
def runner_on_failed(self, host, res, ignore_errors=False):
results = self._set_info(host, res)
results["status"] = "FAILED"
results['status'] = 'FAILED'
self.emit_formatted(results)
def runner_on_skipped(self, host, item=None):
results = self._set_info(host, item)
del results["results"]
results["status"] = "SKIPPED"
del results['results']
results['status'] = 'SKIPPED'
self.emit_formatted(results)
def runner_on_unreachable(self, host, res):
results = self._set_info(host, res)
results["status"] = "UNREACHABLE"
results['status'] = 'UNREACHABLE'
self.emit_formatted(results)
def runner_on_async_failed(self, host, res, jid):
results = self._set_info(host, res)
results["jid"] = jid
results["status"] = "ASYNC_FAILED"
results['jid'] = jid
results['status'] = 'ASYNC_FAILED'
self.emit_formatted(results)
def v2_playbook_on_play_start(self, play):
results = {}
results["le_jobid"] = self.le_jobid
results["started_by"] = os.getlogin()
results['le_jobid'] = self.le_jobid
results['started_by'] = os.getlogin()
if play.name:
results["play"] = play.name
results["hosts"] = play.hosts
results['play'] = play.name
results['hosts'] = play.hosts
self.emit_formatted(results)
def playbook_on_stats(self, stats):
"""close connection"""
""" close connection """
self._appender.close_connection()


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Yevhen Khmelenko <ujenmr@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -94,17 +95,15 @@ ansible.cfg: |
}
"""
import json
import logging
import os
import json
from ansible import context
import socket
import uuid
from ansible import context
import logging
try:
import logstash
HAS_LOGSTASH = True
except ImportError:
HAS_LOGSTASH = False
@@ -117,13 +116,14 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.logstash"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logstash'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super().__init__()
super(CallbackModule, self).__init__()
if not HAS_LOGSTASH:
self.disabled = True
@@ -133,11 +133,14 @@ class CallbackModule(CallbackBase):
def _init_plugin(self):
if not self.disabled:
self.logger = logging.getLogger("python-logstash-logger")
self.logger = logging.getLogger('python-logstash-logger')
self.logger.setLevel(logging.DEBUG)
self.handler = logstash.TCPLogstashHandler(
self.ls_server, self.ls_port, version=1, message_type=self.ls_type
self.ls_server,
self.ls_port,
version=1,
message_type=self.ls_type
)
self.logger.addHandler(self.handler)
@@ -145,36 +148,42 @@ class CallbackModule(CallbackBase):
self.session = str(uuid.uuid4())
self.errors = 0
self.base_data = {"session": self.session, "host": self.hostname}
self.base_data = {
'session': self.session,
'host': self.hostname
}
if self.ls_pre_command is not None:
self.base_data["ansible_pre_command_output"] = os.popen(self.ls_pre_command).read()
self.base_data['ansible_pre_command_output'] = os.popen(
self.ls_pre_command).read()
if context.CLIARGS is not None:
self.base_data["ansible_checkmode"] = context.CLIARGS.get("check")
self.base_data["ansible_tags"] = context.CLIARGS.get("tags")
self.base_data["ansible_skip_tags"] = context.CLIARGS.get("skip_tags")
self.base_data["inventory"] = context.CLIARGS.get("inventory")
self.base_data['ansible_checkmode'] = context.CLIARGS.get('check')
self.base_data['ansible_tags'] = context.CLIARGS.get('tags')
self.base_data['ansible_skip_tags'] = context.CLIARGS.get('skip_tags')
self.base_data['inventory'] = context.CLIARGS.get('inventory')
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.ls_server = self.get_option("server")
self.ls_port = int(self.get_option("port"))
self.ls_type = self.get_option("type")
self.ls_pre_command = self.get_option("pre_command")
self.ls_format_version = self.get_option("format_version")
self.ls_server = self.get_option('server')
self.ls_port = int(self.get_option('port'))
self.ls_type = self.get_option('type')
self.ls_pre_command = self.get_option('pre_command')
self.ls_format_version = self.get_option('format_version')
self._init_plugin()
def v2_playbook_on_start(self, playbook):
data = self.base_data.copy()
data["ansible_type"] = "start"
data["status"] = "OK"
data["ansible_playbook"] = playbook._file_name
data['ansible_type'] = "start"
data['status'] = "OK"
data['ansible_playbook'] = playbook._file_name
if self.ls_format_version == "v2":
self.logger.info("START PLAYBOOK | %s", data["ansible_playbook"], extra=data)
self.logger.info(
"START PLAYBOOK | %s", data['ansible_playbook'], extra=data
)
else:
self.logger.info("ansible start", extra=data)
@@ -191,13 +200,15 @@ class CallbackModule(CallbackBase):
status = "FAILED"
data = self.base_data.copy()
data["ansible_type"] = "finish"
data["status"] = status
data["ansible_playbook_duration"] = runtime.total_seconds()
data["ansible_result"] = json.dumps(summarize_stat) # deprecated field
data['ansible_type'] = "finish"
data['status'] = status
data['ansible_playbook_duration'] = runtime.total_seconds()
data['ansible_result'] = json.dumps(summarize_stat) # deprecated field
if self.ls_format_version == "v2":
self.logger.info("FINISH PLAYBOOK | %s", json.dumps(summarize_stat), extra=data)
self.logger.info(
"FINISH PLAYBOOK | %s", json.dumps(summarize_stat), extra=data
)
else:
self.logger.info("ansible stats", extra=data)
@@ -208,10 +219,10 @@ class CallbackModule(CallbackBase):
self.play_name = play.name
data = self.base_data.copy()
data["ansible_type"] = "start"
data["status"] = "OK"
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data['ansible_type'] = "start"
data['status'] = "OK"
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
if self.ls_format_version == "v2":
self.logger.info("START PLAY | %s", self.play_name, extra=data)
@@ -221,61 +232,64 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_task_start(self, task, is_conditional):
self.task_id = str(task._uuid)
"""
'''
Tasks and handler tasks are dealt with here
"""
'''
def v2_runner_on_ok(self, result, **kwargs):
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
data = self.base_data.copy()
if task_name == "setup":
data["ansible_type"] = "setup"
data["status"] = "OK"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_facts"] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info("SETUP FACTS | %s", self._dump_results(result._result), extra=data)
else:
self.logger.info("ansible facts", extra=data)
else:
if "changed" in result._result.keys():
data["ansible_changed"] = result._result["changed"]
else:
data["ansible_changed"] = False
data["ansible_type"] = "task"
data["status"] = "OK"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
if task_name == 'setup':
data['ansible_type'] = "setup"
data['status'] = "OK"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_facts'] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info(
"TASK OK | %s | RESULT | %s", task_name, self._dump_results(result._result), extra=data
"SETUP FACTS | %s", self._dump_results(result._result), extra=data
)
else:
self.logger.info("ansible facts", extra=data)
else:
if 'changed' in result._result.keys():
data['ansible_changed'] = result._result['changed']
else:
data['ansible_changed'] = False
data['ansible_type'] = "task"
data['status'] = "OK"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info(
"TASK OK | %s | RESULT | %s",
task_name, self._dump_results(result._result), extra=data
)
else:
self.logger.info("ansible ok", extra=data)
def v2_runner_on_skipped(self, result, **kwargs):
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
data = self.base_data.copy()
data["ansible_type"] = "task"
data["status"] = "SKIPPED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
data['ansible_type'] = "task"
data['status'] = "SKIPPED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info("TASK SKIPPED | %s", task_name, extra=data)
@@ -284,12 +298,12 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_import_for_host(self, result, imported_file):
data = self.base_data.copy()
data["ansible_type"] = "import"
data["status"] = "IMPORTED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["imported_file"] = imported_file
data['ansible_type'] = "import"
data['status'] = "IMPORTED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['imported_file'] = imported_file
if self.ls_format_version == "v2":
self.logger.info("IMPORT | %s", imported_file, extra=data)
@@ -298,12 +312,12 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_not_import_for_host(self, result, missing_file):
data = self.base_data.copy()
data["ansible_type"] = "import"
data["status"] = "NOT IMPORTED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["imported_file"] = missing_file
data['ansible_type'] = "import"
data['status'] = "NOT IMPORTED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['imported_file'] = missing_file
if self.ls_format_version == "v2":
self.logger.info("NOT IMPORTED | %s", missing_file, extra=data)
@@ -311,81 +325,75 @@ class CallbackModule(CallbackBase):
self.logger.info("ansible import", extra=data)
def v2_runner_on_failed(self, result, **kwargs):
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
data = self.base_data.copy()
if "changed" in result._result.keys():
data["ansible_changed"] = result._result["changed"]
if 'changed' in result._result.keys():
data['ansible_changed'] = result._result['changed']
else:
data["ansible_changed"] = False
data['ansible_changed'] = False
data["ansible_type"] = "task"
data["status"] = "FAILED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
self.errors += 1
if self.ls_format_version == "v2":
self.logger.error(
"TASK FAILED | %s | HOST | %s | RESULT | %s",
task_name,
self.hostname,
self._dump_results(result._result),
extra=data,
)
else:
self.logger.error("ansible failed", extra=data)
def v2_runner_on_unreachable(self, result, **kwargs):
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
data["ansible_type"] = "task"
data["status"] = "UNREACHABLE"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
self.errors += 1
if self.ls_format_version == "v2":
self.logger.error(
"UNREACHABLE | %s | HOST | %s | RESULT | %s",
task_name,
self.hostname,
self._dump_results(result._result),
extra=data,
)
else:
self.logger.error("ansible unreachable", extra=data)
def v2_runner_on_async_failed(self, result, **kwargs):
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
data["ansible_type"] = "task"
data["status"] = "FAILED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
self.errors += 1
if self.ls_format_version == "v2":
self.logger.error(
"ASYNC FAILED | %s | HOST | %s | RESULT | %s",
task_name,
self.hostname,
self._dump_results(result._result),
extra=data,
)
else:
self.logger.error("ansible async", extra=data)

View File

@@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2012, Dag Wieers <dag@wieers.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -79,10 +81,10 @@ options:
version_added: 8.2.0
"""
import email.utils
import json
import os
import re
import email.utils
import smtplib
from ansible.module_utils.common.text.converters import to_bytes
@@ -91,33 +93,33 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""This Ansible callback plugin mails errors to interested parties."""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.mail"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
self.sender = None
self.to = "root"
self.smtphost = os.getenv("SMTPHOST", "localhost")
self.smtpport = 25
self.cc = None
self.bcc = None
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.sender = self.get_option("sender")
self.to = self.get_option("to")
self.smtphost = self.get_option("mta")
self.smtpport = self.get_option("mtaport")
self.cc = self.get_option("cc")
self.bcc = self.get_option("bcc")
def mail(self, subject="Ansible error mail", body=None):
if body is None:
body = subject
@@ -131,14 +133,14 @@ class CallbackModule(CallbackBase):
if self.bcc:
bcc_addresses = email.utils.getaddresses(self.bcc)
content = f"Date: {email.utils.formatdate()}\n"
content += f"From: {email.utils.formataddr(sender_address)}\n"
if self.to:
content += f"To: {', '.join([email.utils.formataddr(pair) for pair in to_addresses])}\n"
if self.cc:
content += f"Cc: {', '.join([email.utils.formataddr(pair) for pair in cc_addresses])}\n"
content += f"Message-ID: {email.utils.make_msgid(domain=self.get_option('message_id_domain'))}\n"
content += f"Subject: {subject.strip()}\n\n"
content += body
addresses = to_addresses
@@ -148,23 +150,23 @@ class CallbackModule(CallbackBase):
addresses += bcc_addresses
if not addresses:
self._display.warning("No receiver has been specified for the mail callback plugin.")
smtp.sendmail(self.sender, [address for name, address in addresses], to_bytes(content))
smtp.quit()
def subject_msg(self, multiline, failtype, linenr):
msg = multiline.strip("\r\n").splitlines()[linenr]
return f"{failtype}: {msg}"
def indent(self, multiline, indent=8):
return re.sub("^", " " * indent, multiline, flags=re.MULTILINE)
def body_blob(self, multiline, texttype):
"""Turn some text output in a well-indented block for sending in a mail body"""
intro = f"with the following {texttype}:\n\n"
blob = "\n".join(multiline.strip("\r\n").splitlines())
return f"{intro}{self.indent(blob)}\n"
def mail_result(self, result, failtype):
@@ -175,87 +177,83 @@ class CallbackModule(CallbackBase):
# Add subject
if self.itembody:
subject = self.itemsubject
elif result._result.get("failed_when_result") is True:
subject = "Failed due to 'failed_when' condition"
elif result._result.get("msg"):
subject = self.subject_msg(result._result["msg"], failtype, 0)
elif result._result.get("stderr"):
subject = self.subject_msg(result._result["stderr"], failtype, -1)
elif result._result.get("stdout"):
subject = self.subject_msg(result._result["stdout"], failtype, -1)
elif result._result.get("exception"): # Unrelated exceptions are added to output :-/
subject = self.subject_msg(result._result["exception"], failtype, -1)
else:
subject = f"{failtype}: {result._task.name or result._task.action}"
# Make playbook name visible (e.g. in Outlook/Gmail condensed view)
body = f"Playbook: {os.path.basename(self.playbook._file_name)}\n"
if result._task.name:
body += f"Task: {result._task.name}\n"
body += f"Module: {result._task.action}\n"
body += f"Host: {host}\n"
body += "\n"
# Add task information (as much as possible)
body += "The following task failed:\n\n"
if "invocation" in result._result:
body += self.indent(
f"{result._task.action}: {json.dumps(result._result['invocation']['module_args'], indent=4)}\n"
)
elif result._task.name:
body += self.indent(f"{result._task.name} ({result._task.action})\n")
else:
body += self.indent(f"{result._task.action}\n")
body += "\n"
# Add item / message
if self.itembody:
body += self.itembody
elif result._result.get("failed_when_result") is True:
fail_cond_list = "\n- ".join(result._task.failed_when)
fail_cond = self.indent(f"failed_when:\n- {fail_cond_list}")
body += f"due to the following condition:\n\n{fail_cond}\n\n"
elif result._result.get("msg"):
body += self.body_blob(result._result["msg"], "message")
# Add stdout / stderr / exception / warnings / deprecations
if result._result.get("stdout"):
body += self.body_blob(result._result["stdout"], "standard output")
if result._result.get("stderr"):
body += self.body_blob(result._result["stderr"], "error output")
if result._result.get("exception"): # Unrelated exceptions are added to output :-/
body += self.body_blob(result._result["exception"], "exception")
if result._result.get("warnings"):
for i in range(len(result._result.get("warnings"))):
body += self.body_blob(result._result["warnings"][i], f"exception {i + 1}")
if result._result.get("deprecations"):
for i in range(len(result._result.get("deprecations"))):
body += self.body_blob(result._result["deprecations"][i], f"exception {i + 1}")
body += "and a complete dump of the error:\n\n"
body += self.indent(f"{failtype}: {json.dumps(result._result, cls=AnsibleJSONEncoder, indent=4)}")
self.mail(subject=subject, body=body)
def v2_playbook_on_start(self, playbook):
self.playbook = playbook
self.itembody = ""
def v2_runner_on_failed(self, result, ignore_errors=False):
if ignore_errors:
return
self.mail_result(result, "Failed")
def v2_runner_on_unreachable(self, result):
self.mail_result(result, "Unreachable")
def v2_runner_on_async_failed(self, result):
self.mail_result(result, "Async failure")
def v2_runner_item_on_failed(self, result):
# Pass item information to task failure
self.itemsubject = result._result["msg"]
self.itembody += self.body_blob(
json.dumps(result._result, cls=AnsibleJSONEncoder, indent=4), f"failed item dump '{result._result['item']}'"
)

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018 Remi Verchere <remi@verchere.fr>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -73,13 +74,13 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
send ansible-playbook to Nagios server using nrdp protocol
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.nrdp"
CALLBACK_NEEDS_WHITELIST = True
# Nagios states
@@ -89,35 +90,34 @@ class CallbackModule(CallbackBase):
UNKNOWN = 3
def __init__(self):
super().__init__()
self.printed_playbook = False
self.playbook_name = None
self.play = None
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.url = self.get_option("url")
if not self.url.endswith("/"):
self.url += "/"
self.token = self.get_option("token")
self.hostname = self.get_option("hostname")
self.servicename = self.get_option("servicename")
self.validate_nrdp_certs = self.get_option("validate_certs")
if (self.url or self.token or self.hostname or self.servicename) is None:
self._display.warning(
"NRDP callback wants the NRDP_URL,"
" NRDP_TOKEN, NRDP_HOSTNAME,"
" NRDP_SERVICENAME"
" environment variables'."
" The NRDP callback plugin is disabled."
)
self.disabled = True
def _send_nrdp(self, state, msg):
"""
nrpd service check send XMLDATA like this:
<?xml version='1.0'?>
<checkresults>
@@ -128,7 +128,7 @@ class CallbackModule(CallbackBase):
<output>WARNING: Danger Will Robinson!|perfdata</output>
</checkresult>
</checkresults>
"""
xmldata = "<?xml version='1.0'?>\n"
xmldata += "<checkresults>\n"
xmldata += "<checkresult type='service'>\n"
@@ -139,24 +139,31 @@ class CallbackModule(CallbackBase):
xmldata += "</checkresult>\n"
xmldata += "</checkresults>\n"
body = {"cmd": "submitcheck", "token": self.token, "XMLDATA": to_bytes(xmldata)}
try:
response = open_url(self.url, data=urlencode(body), method="POST", validate_certs=self.validate_nrdp_certs)
return response.read()
except Exception as ex:
self._display.warning(f"NRDP callback cannot send result {ex}")
def v2_playbook_on_play_start(self, play):
"""
Display Playbook and play start messages
"""
self.play = play
def v2_playbook_on_stats(self, stats):
"""
Display info about playbook statistics
"""
name = self.play
gstats = ""
hosts = sorted(stats.processed.keys())
@@ -164,14 +171,13 @@ class CallbackModule(CallbackBase):
for host in hosts:
stat = stats.summarize(host)
gstats += (
f"'{host}_ok'={stat['ok']} '{host}_changed'={stat['changed']}"
f" '{host}_unreachable'={stat['unreachable']} '{host}_failed'={stat['failures']} "
)
# Critical when failed tasks or unreachable host
critical += stat["failures"]
critical += stat["unreachable"]
# Warning when changed tasks
warning += stat["changed"]
msg = f"{name} | {gstats}"
if critical:

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -20,10 +21,11 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
This callback won't print messages to stdout when new callback events are received.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.null"

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2021, Victor Martinez <VictorMartinezRubio@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -145,18 +146,22 @@ from ansible.errors import AnsibleError
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.plugins.callback import CallbackBase
OTEL_LIBRARY_IMPORT_ERROR: ImportError | None
try:
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter as GRPCOTLPSpanExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter as HTTPOTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from opentelemetry.trace import SpanKind
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
from opentelemetry.trace.status import Status, StatusCode
except ImportError as imp_exc:
OTEL_LIBRARY_IMPORT_ERROR = imp_exc
else:
@@ -181,9 +186,9 @@ class TaskData:
def add_host(self, host):
if host.uuid in self.host_data:
if host.status == "included":
# concatenate task include output from multiple items
host.result = f"{self.host_data[host.uuid].result}\n{host.result}"
else:
return
@@ -203,14 +208,14 @@ class HostData:
self.finish = time_ns()
class OpenTelemetrySource:
def __init__(self, display):
self.ansible_playbook = ""
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
try:
self.ip_address = socket.gethostbyname(socket.gethostname())
except Exception:
self.ip_address = None
self.user = getpass.getuser()
@@ -218,11 +223,11 @@ class OpenTelemetrySource:
def traceparent_context(self, traceparent):
carrier = dict()
carrier["traceparent"] = traceparent
return TraceContextTextMapPropagator().extract(carrier=carrier)
def start_task(self, tasks_data, hide_task_arguments, play_name, task):
"""record the start of a task for one or more hosts"""
uuid = task._uuid
@@ -240,51 +245,53 @@ class OpenTelemetrySource:
tasks_data[uuid] = TaskData(uuid, name, path, play_name, action, args)
def finish_task(self, tasks_data, status, result, dump):
"""record the results of a task for a single host"""
task_uuid = result._task._uuid
if hasattr(result, "_host") and result._host is not None:
host_uuid = result._host._uuid
host_name = result._host.name
else:
host_uuid = "include"
host_name = "include"
task = tasks_data[task_uuid]
task.dump = dump
task.add_host(HostData(host_uuid, host_name, status, result))
def generate_distributed_traces(
self,
otel_service_name,
ansible_playbook,
tasks_data,
status,
traceparent,
disable_logs,
disable_attributes_in_logs,
otel_exporter_otlp_traces_protocol,
store_spans_in_file,
):
"""generate distributed traces from the collected TaskData and HostData"""
tasks = []
parent_start_time = None
for task in tasks_data.values():
if parent_start_time is None:
parent_start_time = task.start
tasks.append(task)
trace.set_tracer_provider(TracerProvider(resource=Resource.create({SERVICE_NAME: otel_service_name})))
otel_exporter = None
if store_spans_in_file:
otel_exporter = InMemorySpanExporter()
processor = SimpleSpanProcessor(otel_exporter)
else:
if otel_exporter_otlp_traces_protocol == "grpc":
otel_exporter = GRPCOTLPSpanExporter()
else:
otel_exporter = HTTPOTLPSpanExporter()
@@ -294,12 +301,8 @@ class OpenTelemetrySource:
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span(
ansible_playbook,
context=self.traceparent_context(traceparent),
start_time=parent_start_time,
kind=SpanKind.SERVER,
) as parent:
parent.set_status(status)
# Populate trace metadata attributes
parent.set_attribute("ansible.version", ansible_version)
@@ -309,45 +312,43 @@ class OpenTelemetrySource:
parent.set_attribute("ansible.host.ip", self.ip_address)
parent.set_attribute("ansible.host.user", self.user)
for task in tasks:
for host_data in task.host_data.values():
with tracer.start_as_current_span(task.name, start_time=task.start, end_on_exit=False) as span:
self.update_span_data(task, host_data, span, disable_logs, disable_attributes_in_logs)
return otel_exporter
def update_span_data(self, task_data, host_data, span, disable_logs, disable_attributes_in_logs):
"""update the span with the given TaskData and HostData"""
name = f"[{host_data.name}] {task_data.play}: {task_data.name}"
message = "success"
res = {}
rc = 0
status = Status(status_code=StatusCode.OK)
if host_data.status != "included":
# Support loops
enriched_error_message = None
if "results" in host_data.result._result:
if host_data.status == "failed":
message = self.get_error_message_from_results(host_data.result._result["results"], task_data.action)
enriched_error_message = self.enrich_error_message_from_results(
host_data.result._result["results"], task_data.action
)
else:
res = host_data.result._result
rc = res.get("rc", 0)
if host_data.status == "failed":
message = self.get_error_message(res)
enriched_error_message = self.enrich_error_message(res)
if host_data.status == "failed":
status = Status(status_code=StatusCode.ERROR, description=message)
# Record an exception with the task message
span.record_exception(BaseException(enriched_error_message))
elif host_data.status == "skipped":
message = res["skip_reason"] if "skip_reason" in res else "skipped"
status = Status(status_code=StatusCode.UNSET)
elif host_data.status == "ignored":
status = Status(status_code=StatusCode.UNSET)
span.set_status(status)
@@ -359,7 +360,7 @@ class OpenTelemetrySource:
"ansible.task.name": name,
"ansible.task.result": rc,
"ansible.task.host.name": host_data.name,
"ansible.task.host.status": host_data.status,
}
if isinstance(task_data.args, dict) and "gather_facts" not in task_data.action:
names = tuple(self.transform_ansible_unicode_to_str(k) for k in task_data.args.keys())
@@ -379,10 +380,10 @@ class OpenTelemetrySource:
span.end(end_time=host_data.finish)
def set_span_attributes(self, span, attributes):
"""update the span attributes with the given attributes if not None"""
if span is None and self._display is not None:
self._display.warning("span object is None. Please double check if that is expected.")
else:
if attributes is not None:
span.set_attributes(attributes)
@@ -410,18 +411,7 @@ class OpenTelemetrySource:
@staticmethod
def url_from_args(args):
# the order matters
url_args = (
"url",
"api_url",
"baseurl",
"repo",
"server_url",
"chart_repo_url",
"registry_url",
"endpoint",
"uri",
"updates_url",
)
for arg in url_args:
if args is not None and args.get(arg):
return args.get(arg)
@@ -446,33 +436,33 @@ class OpenTelemetrySource:
@staticmethod
def get_error_message(result):
if result.get("exception") is not None:
return OpenTelemetrySource._last_line(result["exception"])
return result.get("msg", "failed")
@staticmethod
def get_error_message_from_results(results, action):
for result in results:
if result.get("failed", False):
return f"{action}({result.get('item', 'none')}) - {OpenTelemetrySource.get_error_message(result)}"
@staticmethod
def _last_line(text):
lines = text.strip().split("\n")
return lines[-1]
@staticmethod
def enrich_error_message(result):
message = result.get("msg", "failed")
exception = result.get("exception")
stderr = result.get("stderr")
return f'message: "{message}"\nexception: "{exception}"\nstderr: "{stderr}"'
@staticmethod
def enrich_error_message_from_results(results, action):
message = ""
for result in results:
if result.get("failed", False):
message = f"{action}({result.get('item', 'none')}) - {OpenTelemetrySource.enrich_error_message(result)}\n{message}"
return message
@@ -483,12 +473,12 @@ class CallbackModule(CallbackBase):
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.opentelemetry"
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super().__init__(display=display)
self.hide_task_arguments = None
self.disable_attributes_in_logs = None
self.disable_logs = None
@@ -504,7 +494,7 @@ class CallbackModule(CallbackBase):
if OTEL_LIBRARY_IMPORT_ERROR:
raise AnsibleError(
"The `opentelemetry-api`, `opentelemetry-exporter-otlp` or `opentelemetry-sdk` must be installed to use this plugin"
) from OTEL_LIBRARY_IMPORT_ERROR
self.tasks_data = OrderedDict()
@@ -512,35 +502,37 @@ class CallbackModule(CallbackBase):
self.opentelemetry = OpenTelemetrySource(display=self._display)
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
environment_variable = self.get_option("enable_from_environment")
if environment_variable is not None and os.environ.get(environment_variable, "false").lower() != "true":
self.disabled = True
self._display.warning(
f"The `enable_from_environment` option has been set and {environment_variable} is not enabled. Disabling the `opentelemetry` callback plugin."
)
self.hide_task_arguments = self.get_option("hide_task_arguments")
self.disable_attributes_in_logs = self.get_option("disable_attributes_in_logs")
self.disable_logs = self.get_option("disable_logs")
self.store_spans_in_file = self.get_option("store_spans_in_file")
self.otel_service_name = self.get_option("otel_service_name")
if not self.otel_service_name:
self.otel_service_name = "ansible"
# See https://github.com/open-telemetry/opentelemetry-specification/issues/740
self.traceparent = self.get_option("traceparent")
self.otel_exporter_otlp_traces_protocol = self.get_option("otel_exporter_otlp_traces_protocol")
def dump_results(self, task, result):
"""dump the results if disable_logs is not enabled"""
if self.disable_logs:
return ""
# ansible.builtin.uri contains the response in the json field
@@ -560,40 +552,74 @@ class CallbackModule(CallbackBase):
self.play_name = play.get_name()
def v2_runner_on_no_hosts(self, task):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_task_start(self, task, is_conditional):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_cleanup_task_start(self, task):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
self.opentelemetry.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
def v2_playbook_on_handler_task_start(self, task):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
self.opentelemetry.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
def v2_runner_on_failed(self, result, ignore_errors=False):
if ignore_errors:
status = "ignored"
status = 'ignored'
else:
status = "failed"
status = 'failed'
self.errors += 1
self.opentelemetry.finish_task(
self.tasks_data, status, result, self.dump_results(self.tasks_data[result._task._uuid], result)
self.tasks_data,
status,
result,
self.dump_results(self.tasks_data[result._task._uuid], result)
)
def v2_runner_on_ok(self, result):
self.opentelemetry.finish_task(
self.tasks_data, "ok", result, self.dump_results(self.tasks_data[result._task._uuid], result)
self.tasks_data,
'ok',
result,
self.dump_results(self.tasks_data[result._task._uuid], result)
)
def v2_runner_on_skipped(self, result):
self.opentelemetry.finish_task(
self.tasks_data, "skipped", result, self.dump_results(self.tasks_data[result._task._uuid], result)
self.tasks_data,
'skipped',
result,
self.dump_results(self.tasks_data[result._task._uuid], result)
)
def v2_playbook_on_include(self, included_file):
self.opentelemetry.finish_task(self.tasks_data, "included", included_file, "")
self.opentelemetry.finish_task(
self.tasks_data,
'included',
included_file,
""
)
def v2_playbook_on_stats(self, stats):
if self.errors == 0:
@@ -609,7 +635,7 @@ class CallbackModule(CallbackBase):
self.disable_logs,
self.disable_attributes_in_logs,
self.otel_exporter_otlp_traces_protocol,
self.store_spans_in_file,
self.store_spans_in_file
)
if self.store_spans_in_file:


@@ -1,8 +1,10 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2025, Max Mitschke <maxmitschke@fastmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r"""
name: print_task
@@ -22,13 +24,13 @@ ansible.cfg: |-
callbacks_enabled=community.general.print_task
"""
from yaml import dump, load
from yaml import load, dump
try:
from yaml import CSafeDumper as SafeDumper
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeDumper, SafeLoader # type: ignore
from yaml import SafeDumper, SafeLoader
from ansible.plugins.callback import CallbackBase
@@ -37,19 +39,18 @@ class CallbackModule(CallbackBase):
"""
This callback module tells you how long your plays ran for.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "aggregate"
CALLBACK_NAME = "community.general.print_task"
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'community.general.print_task'
CALLBACK_NEEDS_ENABLED = True
def __init__(self):
super().__init__()
super(CallbackModule, self).__init__()
self._printed_message = False
def _print_task(self, task):
if hasattr(task, "_ds"):
if hasattr(task, '_ds'):
task_snippet = load(str([task._ds.copy()]), Loader=SafeLoader)
task_yaml = dump(task_snippet, sort_keys=False, Dumper=SafeDumper)
self._display.display(f"\n{task_yaml}\n")
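The print_task hunk above only reorders the yaml imports and drops a `# type: ignore`; the underlying pattern is PyYAML's usual fallback from the C-accelerated classes to the pure-Python ones, followed by a load/dump round trip of the raw task source. A standalone sketch of that pattern (assumes PyYAML is installed, as it is wherever Ansible runs; the task snippet is invented):

```python
from yaml import dump, load

# Prefer libyaml's C-accelerated classes; fall back to pure Python when the
# bindings are not compiled in (same import pattern as the plugin).
try:
    from yaml import CSafeDumper as SafeDumper
    from yaml import CSafeLoader as SafeLoader
except ImportError:
    from yaml import SafeDumper, SafeLoader

snippet = load("- name: demo task\n  debug:\n    msg: hello\n", Loader=SafeLoader)
text = dump(snippet, sort_keys=False, Dumper=SafeDumper)
```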


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -18,9 +19,9 @@ description:
- This plugin uses C(say) or C(espeak) to "speak" about play events.
"""
import os
import platform
import subprocess
import os
from ansible.module_utils.common.process import get_bin_path
from ansible.plugins.callback import CallbackBase
@@ -30,14 +31,14 @@ class CallbackModule(CallbackBase):
"""
makes Ansible much more exciting.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.say"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.say'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super().__init__()
super(CallbackModule, self).__init__()
self.FAILED_VOICE = None
self.REGULAR_VOICE = None
@@ -45,23 +46,21 @@ class CallbackModule(CallbackBase):
self.LASER_VOICE = None
try:
self.synthesizer = get_bin_path("say")
if platform.system() != "Darwin":
self.synthesizer = get_bin_path('say')
if platform.system() != 'Darwin':
# 'say' binary available, it might be GNUstep tool which doesn't support 'voice' parameter
self._display.warning(
f"'say' executable found but system is '{platform.system()}': ignoring voice parameter"
)
self._display.warning(f"'say' executable found but system is '{platform.system()}': ignoring voice parameter")
else:
self.FAILED_VOICE = "Zarvox"
self.REGULAR_VOICE = "Trinoids"
self.HAPPY_VOICE = "Cellos"
self.LASER_VOICE = "Princess"
self.FAILED_VOICE = 'Zarvox'
self.REGULAR_VOICE = 'Trinoids'
self.HAPPY_VOICE = 'Cellos'
self.LASER_VOICE = 'Princess'
except ValueError:
try:
self.synthesizer = get_bin_path("espeak")
self.FAILED_VOICE = "klatt"
self.HAPPY_VOICE = "f5"
self.LASER_VOICE = "whisper"
self.synthesizer = get_bin_path('espeak')
self.FAILED_VOICE = 'klatt'
self.HAPPY_VOICE = 'f5'
self.LASER_VOICE = 'whisper'
except ValueError:
self.synthesizer = None
@@ -69,14 +68,12 @@ class CallbackModule(CallbackBase):
# ansible will not call any callback if disabled is set to True
if not self.synthesizer:
self.disabled = True
self._display.warning(
f"Unable to find either 'say' or 'espeak' executable, plugin {os.path.basename(__file__)} disabled"
)
self._display.warning(f"Unable to find either 'say' or 'espeak' executable, plugin {os.path.basename(__file__)} disabled")
def say(self, msg, voice):
cmd = [self.synthesizer, msg]
if voice:
cmd.extend(("-v", voice))
cmd.extend(('-v', voice))
subprocess.call(cmd)
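The `say()` method above shells out to whichever synthesizer binary was found, appending `-v <voice>` only when a voice is set. A sketch of just the command construction (no process is spawned; the binary paths are placeholders):

```python
def build_say_command(synthesizer, msg, voice=None):
    # Mirrors the plugin's say(): base command plus an optional -v voice flag.
    cmd = [synthesizer, msg]
    if voice:
        cmd.extend(("-v", voice))
    return cmd

cmd = build_say_command("/usr/bin/say", "Task failed", voice="Zarvox")
```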
def runner_on_failed(self, host, res, ignore_errors=False):


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) Fastly, inc 2016
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -39,19 +40,20 @@ EXAMPLES = r"""
import difflib
from ansible import constants as C
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback import CallbackBase
from ansible.module_utils.common.text.converters import to_text
DONT_COLORIZE = False
COLORS = {
"normal": "\033[0m",
"ok": f"\x1b[{C.COLOR_CODES[C.COLOR_OK]}m", # type: ignore
"bold": "\033[1m",
"not_so_bold": "\033[1m\033[34m",
"changed": f"\x1b[{C.COLOR_CODES[C.COLOR_CHANGED]}m", # type: ignore
"failed": f"\x1b[{C.COLOR_CODES[C.COLOR_ERROR]}m", # type: ignore
"endc": "\033[0m",
"skipped": f"\x1b[{C.COLOR_CODES[C.COLOR_SKIP]}m", # type: ignore
'normal': '\033[0m',
'ok': f'\x1b[{C.COLOR_CODES[C.COLOR_OK]}m',
'bold': '\033[1m',
'not_so_bold': '\033[1m\033[34m',
'changed': f'\x1b[{C.COLOR_CODES[C.COLOR_CHANGED]}m',
'failed': f'\x1b[{C.COLOR_CODES[C.COLOR_ERROR]}m',
'endc': '\033[0m',
'skipped': f'\x1b[{C.COLOR_CODES[C.COLOR_SKIP]}m',
}
@@ -77,21 +79,22 @@ class CallbackModule(CallbackBase):
"""selective.py callback plugin."""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.selective"
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.selective'
def __init__(self, display=None):
"""selective.py callback plugin."""
super().__init__(display)
super(CallbackModule, self).__init__(display)
self.last_skipped = False
self.last_task_name = None
self.printed_last_task = False
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
global DONT_COLORIZE
DONT_COLORIZE = self.get_option("nocolor")
DONT_COLORIZE = self.get_option('nocolor')
def _print_task(self, task_name=None):
if task_name is None:
@@ -103,7 +106,7 @@ class CallbackModule(CallbackBase):
if self.last_skipped:
print()
line = f"# {task_name} "
msg = colorize(f"{line}{'*' * (line_length - len(line))}", "bold")
msg = colorize(f"{line}{'*' * (line_length - len(line))}", 'bold')
print(msg)
def _indent_text(self, text, indent_level):
@@ -111,51 +114,48 @@ class CallbackModule(CallbackBase):
result_lines = []
for l in lines:
result_lines.append(f"{' ' * indent_level}{l}")
return "\n".join(result_lines)
return '\n'.join(result_lines)
def _print_diff(self, diff, indent_level):
if isinstance(diff, dict):
try:
diff = "\n".join(
difflib.unified_diff(
diff["before"].splitlines(),
diff["after"].splitlines(),
fromfile=diff.get("before_header", "new_file"),
tofile=diff["after_header"],
)
)
diff = '\n'.join(difflib.unified_diff(diff['before'].splitlines(),
diff['after'].splitlines(),
fromfile=diff.get('before_header',
'new_file'),
tofile=diff['after_header']))
except AttributeError:
diff = dict_diff(diff["before"], diff["after"])
diff = dict_diff(diff['before'], diff['after'])
if diff:
diff = colorize(str(diff), "changed")
diff = colorize(str(diff), 'changed')
print(self._indent_text(diff, indent_level + 4))
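The `_print_diff` change above is a reflow of the same `difflib.unified_diff` call: a dict with `before`/`after` text is turned into a unified diff string. A standalone sketch of what that call produces (the before/after values are made up here):

```python
import difflib

diff = {
    "before": "state: absent\n",
    "after": "state: present\n",
    "after_header": "after",
}
# Same shape as the plugin's call: splitlines in, joined diff text out.
text = "\n".join(
    difflib.unified_diff(
        diff["before"].splitlines(),
        diff["after"].splitlines(),
        fromfile=diff.get("before_header", "new_file"),
        tofile=diff["after_header"],
    )
)
```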
def _print_host_or_item(self, host_or_item, changed, msg, diff, is_host, error, stdout, stderr):
if is_host:
indent_level = 0
name = colorize(host_or_item.name, "not_so_bold")
name = colorize(host_or_item.name, 'not_so_bold')
else:
indent_level = 4
if isinstance(host_or_item, dict):
if "key" in host_or_item.keys():
host_or_item = host_or_item["key"]
name = colorize(to_text(host_or_item), "bold")
if 'key' in host_or_item.keys():
host_or_item = host_or_item['key']
name = colorize(to_text(host_or_item), 'bold')
if error:
color = "failed"
change_string = colorize("FAILED!!!", color)
color = 'failed'
change_string = colorize('FAILED!!!', color)
else:
color = "changed" if changed else "ok"
color = 'changed' if changed else 'ok'
change_string = colorize(f"changed={changed}", color)
msg = colorize(msg, color)
line_length = 120
spaces = " " * (40 - len(name) - indent_level)
spaces = ' ' * (40 - len(name) - indent_level)
line = f"{' ' * indent_level} * {name}{spaces}- {change_string}"
if len(msg) < 50:
line += f" -- {msg}"
line += f' -- {msg}'
print(f"{line} {'-' * (line_length - len(line))}---------")
else:
print(f"{line} {'-' * (line_length - len(line))}")
@@ -164,10 +164,10 @@ class CallbackModule(CallbackBase):
if diff:
self._print_diff(diff, indent_level)
if stdout:
stdout = colorize(stdout, "failed")
stdout = colorize(stdout, 'failed')
print(self._indent_text(stdout, indent_level + 4))
if stderr:
stderr = colorize(stderr, "failed")
stderr = colorize(stderr, 'failed')
print(self._indent_text(stderr, indent_level + 4))
def v2_playbook_on_play_start(self, play):
@@ -182,61 +182,61 @@ class CallbackModule(CallbackBase):
def _print_task_result(self, result, error=False, **kwargs):
"""Run when a task finishes correctly."""
if "print_action" in result._task.tags or error or self._display.verbosity > 1:
if 'print_action' in result._task.tags or error or self._display.verbosity > 1:
self._print_task()
self.last_skipped = False
msg = to_text(result._result.get("msg", "")) or to_text(result._result.get("reason", ""))
msg = to_text(result._result.get('msg', '')) or\
to_text(result._result.get('reason', ''))
stderr = [result._result.get("exception", None), result._result.get("module_stderr", None)]
stderr = [result._result.get('exception', None),
result._result.get('module_stderr', None)]
stderr = "\n".join([e for e in stderr if e]).strip()
self._print_host_or_item(
result._host,
result._result.get("changed", False),
msg,
result._result.get("diff", None),
is_host=True,
error=error,
stdout=result._result.get("module_stdout", None),
stderr=stderr.strip(),
)
if "results" in result._result:
for r in result._result["results"]:
failed = "failed" in r and r["failed"]
self._print_host_or_item(result._host,
result._result.get('changed', False),
msg,
result._result.get('diff', None),
is_host=True,
error=error,
stdout=result._result.get('module_stdout', None),
stderr=stderr.strip(),
)
if 'results' in result._result:
for r in result._result['results']:
failed = 'failed' in r and r['failed']
stderr = [r.get("exception", None), r.get("module_stderr", None)]
stderr = [r.get('exception', None), r.get('module_stderr', None)]
stderr = "\n".join([e for e in stderr if e]).strip()
self._print_host_or_item(
r[r["ansible_loop_var"]],
r.get("changed", False),
to_text(r.get("msg", "")),
r.get("diff", None),
is_host=False,
error=failed,
stdout=r.get("module_stdout", None),
stderr=stderr.strip(),
)
self._print_host_or_item(r[r['ansible_loop_var']],
r.get('changed', False),
to_text(r.get('msg', '')),
r.get('diff', None),
is_host=False,
error=failed,
stdout=r.get('module_stdout', None),
stderr=stderr.strip(),
)
else:
self.last_skipped = True
print(".", end="")
print('.', end="")
def v2_playbook_on_stats(self, stats):
"""Display info about playbook statistics."""
print()
self.printed_last_task = False
self._print_task("STATS")
self._print_task('STATS')
hosts = sorted(stats.processed.keys())
for host in hosts:
s = stats.summarize(host)
if s["failures"] or s["unreachable"]:
color = "failed"
elif s["changed"]:
color = "changed"
if s['failures'] or s['unreachable']:
color = 'failed'
elif s['changed']:
color = 'changed'
else:
color = "ok"
color = 'ok'
msg = (
f"{host} : ok={s['ok']}\tchanged={s['changed']}\tfailed={s['failures']}\tunreachable="
@@ -251,13 +251,14 @@ class CallbackModule(CallbackBase):
self.last_skipped = False
line_length = 120
spaces = " " * (31 - len(result._host.name) - 4)
spaces = ' ' * (31 - len(result._host.name) - 4)
line = f" * {colorize(result._host.name, 'not_so_bold')}{spaces}- {colorize('skipped', 'skipped')}"
reason = result._result.get("skipped_reason", "") or result._result.get("skip_reason", "")
reason = result._result.get('skipped_reason', '') or \
result._result.get('skip_reason', '')
if len(reason) < 50:
line += f" -- {reason}"
line += f' -- {reason}'
print(f"{line} {'-' * (line_length - len(line))}---------")
else:
print(f"{line} {'-' * (line_length - len(line))}")


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2015, Matt Martz <matt@sivel.net>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -70,7 +71,6 @@ from ansible.plugins.callback import CallbackBase
try:
import prettytable
HAS_PRETTYTABLE = True
except ImportError:
HAS_PRETTYTABLE = False
@@ -80,20 +80,20 @@ class CallbackModule(CallbackBase):
"""This is an ansible callback plugin that sends status
updates to a Slack channel during playbook execution.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.slack"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.slack'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
if not HAS_PRETTYTABLE:
self.disabled = True
self._display.warning(
"The `prettytable` python module is not installed. Disabling the Slack callback plugin."
)
self._display.warning('The `prettytable` python module is not '
'installed. Disabling the Slack callback '
'plugin.')
self.playbook_name = None
@@ -103,34 +103,34 @@ class CallbackModule(CallbackBase):
self.guid = uuid.uuid4().hex[:6]
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.webhook_url = self.get_option("webhook_url")
self.channel = self.get_option("channel")
self.username = self.get_option("username")
self.show_invocation = self._display.verbosity > 1
self.validate_certs = self.get_option("validate_certs")
self.http_agent = self.get_option("http_agent")
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.webhook_url = self.get_option('webhook_url')
self.channel = self.get_option('channel')
self.username = self.get_option('username')
self.show_invocation = (self._display.verbosity > 1)
self.validate_certs = self.get_option('validate_certs')
self.http_agent = self.get_option('http_agent')
if self.webhook_url is None:
self.disabled = True
self._display.warning(
"Slack Webhook URL was not provided. The "
"Slack Webhook URL can be provided using "
"the `SLACK_WEBHOOK_URL` environment "
"variable."
)
self._display.warning('Slack Webhook URL was not provided. The '
'Slack Webhook URL can be provided using '
'the `SLACK_WEBHOOK_URL` environment '
'variable.')
def send_msg(self, attachments):
headers = {
"Content-type": "application/json",
'Content-type': 'application/json',
}
payload = {
"channel": self.channel,
"username": self.username,
"attachments": attachments,
"parse": "none",
"icon_url": ("https://cdn2.hubspot.net/hub/330046/file-449187601-png/ansible_badge.png"),
'channel': self.channel,
'username': self.username,
'attachments': attachments,
'parse': 'none',
'icon_url': ('https://cdn2.hubspot.net/hub/330046/'
'file-449187601-png/ansible_badge.png'),
}
data = json.dumps(payload)
@@ -146,63 +146,67 @@ class CallbackModule(CallbackBase):
)
return response.read()
except Exception as e:
self._display.warning(f"Could not submit message to Slack: {e}")
self._display.warning(f'Could not submit message to Slack: {e}')
def v2_playbook_on_start(self, playbook):
self.playbook_name = os.path.basename(playbook._file_name)
title = [f"*Playbook initiated* (_{self.guid}_)"]
title = [
f'*Playbook initiated* (_{self.guid}_)'
]
invocation_items = []
if context.CLIARGS and self.show_invocation:
tags = context.CLIARGS["tags"]
skip_tags = context.CLIARGS["skip_tags"]
extra_vars = context.CLIARGS["extra_vars"]
subset = context.CLIARGS["subset"]
inventory = [os.path.abspath(i) for i in context.CLIARGS["inventory"]]
tags = context.CLIARGS['tags']
skip_tags = context.CLIARGS['skip_tags']
extra_vars = context.CLIARGS['extra_vars']
subset = context.CLIARGS['subset']
inventory = [os.path.abspath(i) for i in context.CLIARGS['inventory']]
invocation_items.append(f"Inventory: {', '.join(inventory)}")
if tags and tags != ["all"]:
if tags and tags != ['all']:
invocation_items.append(f"Tags: {', '.join(tags)}")
if skip_tags:
invocation_items.append(f"Skip Tags: {', '.join(skip_tags)}")
if subset:
invocation_items.append(f"Limit: {subset}")
invocation_items.append(f'Limit: {subset}')
if extra_vars:
invocation_items.append(f"Extra Vars: {' '.join(extra_vars)}")
title.append(f"by *{context.CLIARGS['remote_user']}*")
title.append(f"\n\n*{self.playbook_name}*")
msg_items = [" ".join(title)]
title.append(f'\n\n*{self.playbook_name}*')
msg_items = [' '.join(title)]
if invocation_items:
_inv_item = "\n".join(invocation_items)
msg_items.append(f"```\n{_inv_item}\n```")
_inv_item = '\n'.join(invocation_items)
msg_items.append(f'```\n{_inv_item}\n```')
msg = "\n".join(msg_items)
msg = '\n'.join(msg_items)
attachments = [
{
"fallback": msg,
"fields": [{"value": msg}],
"color": "warning",
"mrkdwn_in": ["text", "fallback", "fields"],
}
]
attachments = [{
'fallback': msg,
'fields': [
{
'value': msg
}
],
'color': 'warning',
'mrkdwn_in': ['text', 'fallback', 'fields'],
}]
self.send_msg(attachments=attachments)
def v2_playbook_on_play_start(self, play):
"""Display Play start messages"""
name = play.name or f"Play name not specified ({play._uuid})"
msg = f"*Starting play* (_{self.guid}_)\n\n*{name}*"
name = play.name or f'Play name not specified ({play._uuid})'
msg = f'*Starting play* (_{self.guid}_)\n\n*{name}*'
attachments = [
{
"fallback": msg,
"text": msg,
"color": "warning",
"mrkdwn_in": ["text", "fallback", "fields"],
'fallback': msg,
'text': msg,
'color': 'warning',
'mrkdwn_in': ['text', 'fallback', 'fields'],
}
]
self.send_msg(attachments=attachments)
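Both Slack hunks above only reflow the attachment payload; the structure handed to `send_msg()` and serialized with `json.dumps` is unchanged. A minimal sketch of building that payload (channel, username, and message text are placeholders, and the real plugin posts `data` to the webhook URL via `open_url`):

```python
import json

msg = "*Starting play* (_abc123_)\n\n*demo play*"
payload = {
    "channel": "#ansible",
    "username": "ansible",
    "attachments": [
        {
            "fallback": msg,
            "text": msg,
            "color": "warning",
            "mrkdwn_in": ["text", "fallback", "fields"],
        }
    ],
    "parse": "none",
}
data = json.dumps(payload)
```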
@@ -212,7 +216,8 @@ class CallbackModule(CallbackBase):
hosts = sorted(stats.processed.keys())
t = prettytable.PrettyTable(["Host", "Ok", "Changed", "Unreachable", "Failures", "Rescued", "Ignored"])
t = prettytable.PrettyTable(['Host', 'Ok', 'Changed', 'Unreachable',
'Failures', 'Rescued', 'Ignored'])
failures = False
unreachable = False
@@ -220,28 +225,38 @@ class CallbackModule(CallbackBase):
for h in hosts:
s = stats.summarize(h)
if s["failures"] > 0:
if s['failures'] > 0:
failures = True
if s["unreachable"] > 0:
if s['unreachable'] > 0:
unreachable = True
t.add_row([h] + [s[k] for k in ["ok", "changed", "unreachable", "failures", "rescued", "ignored"]])
t.add_row([h] + [s[k] for k in ['ok', 'changed', 'unreachable',
'failures', 'rescued', 'ignored']])
attachments = []
msg_items = [f"*Playbook Complete* (_{self.guid}_)"]
msg_items = [
f'*Playbook Complete* (_{self.guid}_)'
]
if failures or unreachable:
color = "danger"
msg_items.append("\n*Failed!*")
color = 'danger'
msg_items.append('\n*Failed!*')
else:
color = "good"
msg_items.append("\n*Success!*")
color = 'good'
msg_items.append('\n*Success!*')
msg_items.append(f"```\n{t}\n```")
msg_items.append(f'```\n{t}\n```')
msg = "\n".join(msg_items)
msg = '\n'.join(msg_items)
attachments.append(
{"fallback": msg, "fields": [{"value": msg}], "color": color, "mrkdwn_in": ["text", "fallback", "fields"]}
)
attachments.append({
'fallback': msg,
'fields': [
{
'value': msg
}
],
'color': color,
'mrkdwn_in': ['text', 'fallback', 'fields']
})
self.send_msg(attachments=attachments)


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -83,10 +84,11 @@ examples: >-
authtoken = f23blad6-5965-4537-bf69-5b5a545blabla88
"""
import getpass
import json
import socket
import uuid
import socket
import getpass
from os.path import basename
from ansible.module_utils.ansible_release import __version__ as ansible_version
@@ -99,7 +101,7 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
)
class SplunkHTTPCollectorSource:
class SplunkHTTPCollectorSource(object):
def __init__(self):
self.ansible_check_mode = False
self.ansible_playbook = ""
@@ -109,7 +111,7 @@ class SplunkHTTPCollectorSource:
self.user = getpass.getuser()
def send_event(self, url, authtoken, validate_certs, include_milliseconds, batch, state, result, runtime):
if result._task_fields["args"].get("_ansible_check_mode") is True:
if result._task_fields['args'].get('_ansible_check_mode') is True:
self.ansible_check_mode = True
if result._task._role:
@@ -117,33 +119,33 @@ class SplunkHTTPCollectorSource:
else:
ansible_role = None
if "args" in result._task_fields:
del result._task_fields["args"]
if 'args' in result._task_fields:
del result._task_fields['args']
data = {}
data["uuid"] = result._task._uuid
data["session"] = self.session
data['uuid'] = result._task._uuid
data['session'] = self.session
if batch is not None:
data["batch"] = batch
data["status"] = state
data['batch'] = batch
data['status'] = state
if include_milliseconds:
time_format = "%Y-%m-%d %H:%M:%S.%f +0000"
time_format = '%Y-%m-%d %H:%M:%S.%f +0000'
else:
time_format = "%Y-%m-%d %H:%M:%S +0000"
time_format = '%Y-%m-%d %H:%M:%S +0000'
data["timestamp"] = now().strftime(time_format)
data["host"] = self.host
data["ip_address"] = self.ip_address
data["user"] = self.user
data["runtime"] = runtime
data["ansible_version"] = ansible_version
data["ansible_check_mode"] = self.ansible_check_mode
data["ansible_host"] = result._host.name
data["ansible_playbook"] = self.ansible_playbook
data["ansible_role"] = ansible_role
data["ansible_task"] = result._task_fields
data["ansible_result"] = result._result
data['timestamp'] = now().strftime(time_format)
data['host'] = self.host
data['ip_address'] = self.ip_address
data['user'] = self.user
data['runtime'] = runtime
data['ansible_version'] = ansible_version
data['ansible_check_mode'] = self.ansible_check_mode
data['ansible_host'] = result._host.name
data['ansible_playbook'] = self.ansible_playbook
data['ansible_role'] = ansible_role
data['ansible_task'] = result._task_fields
data['ansible_result'] = result._result
# This wraps the json payload in and outer json event needed by Splunk
jsondata = json.dumps({"event": data}, cls=AnsibleJSONEncoder, sort_keys=True)
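The `send_event` body above collects flat run metadata and wraps it in an outer `{"event": ...}` object, the envelope Splunk's HTTP Event Collector expects. A trimmed-down sketch with stdlib `json` in place of `AnsibleJSONEncoder` (the field values are invented):

```python
import json
import socket
import uuid
from datetime import datetime, timezone

data = {
    "uuid": uuid.uuid4().hex,
    "session": uuid.uuid4().hex,
    "status": "OK",
    # Matches the plugin's non-millisecond time_format.
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S +0000"),
    "host": socket.gethostname(),
    "runtime": 0.42,
}
# Splunk HEC expects the payload wrapped in an outer "event" object.
jsondata = json.dumps({"event": data}, sort_keys=True)
```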
@@ -151,20 +153,23 @@ class SplunkHTTPCollectorSource:
open_url(
url,
jsondata,
headers={"Content-type": "application/json", "Authorization": f"Splunk {authtoken}"},
method="POST",
validate_certs=validate_certs,
headers={
'Content-type': 'application/json',
'Authorization': f"Splunk {authtoken}"
},
method='POST',
validate_certs=validate_certs
)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.splunk"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.splunk'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
self.start_datetimes = {} # Collect task start times
self.url = None
self.authtoken = None
@@ -174,40 +179,41 @@ class CallbackModule(CallbackBase):
self.splunk = SplunkHTTPCollectorSource()
def _runtime(self, result):
return (now() - self.start_datetimes[result._task._uuid]).total_seconds()
return (
now() -
self.start_datetimes[result._task._uuid]
).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys,
var_options=var_options,
direct=direct)
self.url = self.get_option("url")
self.url = self.get_option('url')
if self.url is None:
self.disabled = True
self._display.warning(
"Splunk HTTP collector source URL was "
"not provided. The Splunk HTTP collector "
"source URL can be provided using the "
"`SPLUNK_URL` environment variable or "
"in the ansible.cfg file."
)
self._display.warning('Splunk HTTP collector source URL was '
'not provided. The Splunk HTTP collector '
'source URL can be provided using the '
'`SPLUNK_URL` environment variable or '
'in the ansible.cfg file.')
self.authtoken = self.get_option("authtoken")
self.authtoken = self.get_option('authtoken')
if self.authtoken is None:
self.disabled = True
self._display.warning(
"Splunk HTTP collector requires an authentication "
"token. The Splunk HTTP collector "
"authentication token can be provided using the "
"`SPLUNK_AUTHTOKEN` environment variable or "
"in the ansible.cfg file."
)
self._display.warning('Splunk HTTP collector requires an authentication '
'token. The Splunk HTTP collector '
'authentication token can be provided using the '
'`SPLUNK_AUTHTOKEN` environment variable or '
'in the ansible.cfg file.')
self.validate_certs = self.get_option("validate_certs")
self.validate_certs = self.get_option('validate_certs')
self.include_milliseconds = self.get_option("include_milliseconds")
self.include_milliseconds = self.get_option('include_milliseconds')
self.batch = self.get_option("batch")
self.batch = self.get_option('batch')
def v2_playbook_on_start(self, playbook):
self.splunk.ansible_playbook = basename(playbook._file_name)
@@ -225,9 +231,9 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
"OK",
'OK',
result,
self._runtime(result),
self._runtime(result)
)
def v2_runner_on_skipped(self, result, **kwargs):
@@ -237,9 +243,9 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
"SKIPPED",
'SKIPPED',
result,
self._runtime(result),
self._runtime(result)
)
def v2_runner_on_failed(self, result, **kwargs):
@@ -249,21 +255,21 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
"FAILED",
'FAILED',
result,
self._runtime(result),
self._runtime(result)
)
def v2_runner_on_async_failed(self, result, **kwargs):
def runner_on_async_failed(self, result, **kwargs):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
self.include_milliseconds,
self.batch,
"FAILED",
'FAILED',
result,
self._runtime(result),
self._runtime(result)
)
def v2_runner_on_unreachable(self, result, **kwargs):
@@ -273,7 +279,7 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
"UNREACHABLE",
'UNREACHABLE',
result,
self._runtime(result),
self._runtime(result)
)


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -40,10 +41,11 @@ examples: |-
url = https://endpoint1.collection.us2.sumologic.com/receiver/v1/http/R8moSv1d8EW9LAUFZJ6dbxCFxwLH6kfCdcBfddlfxCbLuL-BN5twcTpMk__pYy_cDmp==
"""
import getpass
import json
import socket
import uuid
import socket
import getpass
from os.path import basename
from ansible.module_utils.ansible_release import __version__ as ansible_version
@@ -56,7 +58,7 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
)
class SumologicHTTPCollectorSource:
class SumologicHTTPCollectorSource(object):
def __init__(self):
self.ansible_check_mode = False
self.ansible_playbook = ""
@@ -66,7 +68,7 @@ class SumologicHTTPCollectorSource:
self.user = getpass.getuser()
def send_event(self, url, state, result, runtime):
if result._task_fields["args"].get("_ansible_check_mode") is True:
if result._task_fields['args'].get('_ansible_check_mode') is True:
self.ansible_check_mode = True
if result._task._role:
@@ -74,63 +76,67 @@ class SumologicHTTPCollectorSource:
else:
ansible_role = None
if "args" in result._task_fields:
del result._task_fields["args"]
if 'args' in result._task_fields:
del result._task_fields['args']
data = {}
data["uuid"] = result._task._uuid
data["session"] = self.session
data["status"] = state
data["timestamp"] = now().strftime("%Y-%m-%d %H:%M:%S +0000")
data["host"] = self.host
data["ip_address"] = self.ip_address
data["user"] = self.user
data["runtime"] = runtime
data["ansible_version"] = ansible_version
data["ansible_check_mode"] = self.ansible_check_mode
data["ansible_host"] = result._host.name
data["ansible_playbook"] = self.ansible_playbook
data["ansible_role"] = ansible_role
data["ansible_task"] = result._task_fields
data["ansible_result"] = result._result
data['uuid'] = result._task._uuid
data['session'] = self.session
data['status'] = state
data['timestamp'] = now().strftime('%Y-%m-%d %H:%M:%S +0000')
data['host'] = self.host
data['ip_address'] = self.ip_address
data['user'] = self.user
data['runtime'] = runtime
data['ansible_version'] = ansible_version
data['ansible_check_mode'] = self.ansible_check_mode
data['ansible_host'] = result._host.name
data['ansible_playbook'] = self.ansible_playbook
data['ansible_role'] = ansible_role
data['ansible_task'] = result._task_fields
data['ansible_result'] = result._result
open_url(
url,
data=json.dumps(data, cls=AnsibleJSONEncoder, sort_keys=True),
headers={"Content-type": "application/json", "X-Sumo-Host": data["ansible_host"]},
method="POST",
headers={
'Content-type': 'application/json',
'X-Sumo-Host': data['ansible_host']
},
method='POST'
)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.sumologic"
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.sumologic'
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super().__init__(display=display)
super(CallbackModule, self).__init__(display=display)
self.start_datetimes = {} # Collect task start times
self.url = None
self.sumologic = SumologicHTTPCollectorSource()
def _runtime(self, result):
return (now() - self.start_datetimes[result._task._uuid]).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.url = self.get_option("url")
if self.url is None:
self.disabled = True
self._display.warning(
"Sumologic HTTP collector source URL was "
"not provided. The Sumologic HTTP collector "
"source URL can be provided using the "
"`SUMOLOGIC_URL` environment variable or "
"in the ansible.cfg file."
)
def v2_playbook_on_start(self, playbook):
self.sumologic.ansible_playbook = basename(playbook._file_name)
@@ -142,16 +148,41 @@ class CallbackModule(CallbackBase):
self.start_datetimes[task._uuid] = now()
def v2_runner_on_ok(self, result, **kwargs):
self.sumologic.send_event(self.url, "OK", result, self._runtime(result))
def v2_runner_on_skipped(self, result, **kwargs):
self.sumologic.send_event(self.url, "SKIPPED", result, self._runtime(result))
def v2_runner_on_failed(self, result, **kwargs):
self.sumologic.send_event(self.url, "FAILED", result, self._runtime(result))
def runner_on_async_failed(self, result, **kwargs):
self.sumologic.send_event(self.url, "FAILED", result, self._runtime(result))
def v2_runner_on_unreachable(self, result, **kwargs):
self.sumologic.send_event(self.url, "UNREACHABLE", result, self._runtime(result))


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -56,6 +57,7 @@ options:
import logging
import logging.handlers
import socket
from ansible.plugins.callback import CallbackBase
@@ -67,89 +69,62 @@ class CallbackModule(CallbackBase):
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.syslog_json"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super().__init__()
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
syslog_host = self.get_option("server")
syslog_port = int(self.get_option("port"))
syslog_facility = self.get_option("facility")
self.logger = logging.getLogger("ansible logger")
self.logger.setLevel(logging.DEBUG)
self.handler = logging.handlers.SysLogHandler(address=(syslog_host, syslog_port), facility=syslog_facility)
self.logger.addHandler(self.handler)
self.hostname = socket.gethostname()
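A self-contained sketch of the handler wiring performed in `set_options()` above, assuming a syslog daemon reachable over UDP (514 is the syslog default port; the logger name and message host are placeholders):

```python
import logging
import logging.handlers
import socket

# Dedicated logger that ships records to syslog over UDP. Creating the
# handler does not require a daemon to be listening, since UDP is
# connectionless.
logger = logging.getLogger("ansible logger demo")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(
    address=("localhost", 514),
    facility=logging.handlers.SysLogHandler.LOG_USER,
)
logger.addHandler(handler)
hostname = socket.gethostname()
logger.info("%s ansible-command: task execution OK; host: %s", hostname, "web01")
```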
def v2_runner_on_failed(self, result, ignore_errors=False):
res = result._result
host = result._host.get_name()
self.logger.error(
"%s ansible-command: task execution FAILED; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_runner_on_ok(self, result):
res = result._result
host = result._host.get_name()
if result._task.action != "gather_facts" or self.get_option("setup"):
self.logger.info(
"%s ansible-command: task execution OK; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_runner_on_skipped(self, result):
host = result._host.get_name()
self.logger.info(
"%s ansible-command: task execution SKIPPED; host: %s; message: %s", self.hostname, host, "skipped"
)
def v2_runner_on_unreachable(self, result):
res = result._result
host = result._host.get_name()
self.logger.error(
"%s ansible-command: task execution UNREACHABLE; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_runner_on_async_failed(self, result):
res = result._result
host = result._host.get_name()
# jid = result._result.get("ansible_job_id")
self.logger.error(
"%s ansible-command: task execution FAILED; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_playbook_on_import_for_host(self, result, imported_file):
host = result._host.get_name()
self.logger.info(
"%s ansible-command: playbook IMPORTED; host: %s; message: imported file %s",
self.hostname,
host,
imported_file,
)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
host = result._host.get_name()
self.logger.info(
"%s ansible-command: playbook NOT IMPORTED; host: %s; message: missing file %s",
self.hostname,
host,
missing_file,
)


@@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2025, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -50,8 +52,8 @@ from ansible.plugins.callback.default import CallbackModule as Default
class CallbackModule(Default):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.tasks_only"
def v2_playbook_on_play_start(self, play):
pass
@@ -60,7 +62,7 @@ class CallbackModule(Default):
pass
def set_options(self, *args, **kwargs):
result = super().set_options(*args, **kwargs)
self.number_of_columns = self.get_option("number_of_columns")
if self.number_of_columns is not None:
self._display.columns = self.number_of_columns


@@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, kurokobo <kurokobo@protonmail.com>
# Copyright (c) 2014, Michael DeHaan <michael.dehaan@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -5,6 +7,7 @@
from __future__ import annotations
DOCUMENTATION = r"""
name: timestamp
type: stdout
@@ -48,13 +51,12 @@ extends_documentation_fragment:
"""
import sys
import types
from datetime import datetime
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback.default import CallbackModule as Default
from ansible.utils.display import get_text_width
# Store whether the zoneinfo module is available
_ZONEINFO_AVAILABLE = sys.version_info >= (3, 9)
@@ -89,7 +91,7 @@ def banner(self, msg, color=None, cows=True):
msg = msg.strip()
try:
star_len = self.columns - get_text_width(msg) - timestamp_len
except OSError:
star_len = self.columns - len(msg) - timestamp_len
if star_len <= 3:
star_len = 3
@@ -103,13 +105,13 @@ class CallbackModule(Default):
CALLBACK_NAME = "community.general.timestamp"
def __init__(self):
super().__init__()
# Replace the banner method of the display object with the custom one
self._display.banner = types.MethodType(banner, self._display)
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# Store zoneinfo for specified timezone if available
tzinfo = None
@@ -119,5 +121,5 @@ class CallbackModule(Default):
tzinfo = ZoneInfo(self.get_option("timezone"))
# Inject options into the display object
self._display.timestamp_tzinfo = tzinfo
self._display.timestamp_format_string = self.get_option("format_string")


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2023, Al Bowles <@akatch>
# Copyright (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -20,16 +21,16 @@ requirements:
"""
from os.path import basename
from ansible import constants as C
from ansible import context
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
from ansible.utils.color import colorize, hostcolor
class CallbackModule(CallbackModule_default):
"""
Design goals:
- Print consolidated output that looks like a *NIX startup log
- Defaults should avoid displaying unnecessary information wherever possible
@@ -39,16 +40,14 @@ class CallbackModule(CallbackModule_default):
- Add option to display all hostnames on a single line in the appropriate result color (failures may have a separate line)
- Consolidate stats display
- Don't show play name if no hosts found
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.unixy"
def _run_is_verbose(self, result):
return (
self._display.verbosity > 0 or "_ansible_verbose_always" in result._result
) and "_ansible_verbose_override" not in result._result
def _get_task_display_name(self, task):
self.task_display_name = None
@@ -61,8 +60,8 @@ class CallbackModule(CallbackModule_default):
self.task_display_name = task_display_name
def _preprocess_result(self, result):
self.delegated_vars = result._result.get("_ansible_delegated_vars", None)
self._handle_exception(result._result, use_stderr=self.get_option("display_failed_stderr"))
self._handle_warnings(result._result)
def _process_result_output(self, result, msg):
@@ -74,16 +73,16 @@ class CallbackModule(CallbackModule_default):
return task_result
if self.delegated_vars:
task_delegate_host = self.delegated_vars["ansible_host"]
task_result = f"{task_host} -> {task_delegate_host} {msg}"
if result._result.get("msg") and result._result.get("msg") != "All items completed":
task_result += f" | msg: {to_text(result._result.get('msg'))}"
if result._result.get("stdout"):
task_result += f" | stdout: {result._result.get('stdout')}"
if result._result.get("stderr"):
task_result += f" | stderr: {result._result.get('stderr')}"
return task_result
@@ -91,7 +90,7 @@ class CallbackModule(CallbackModule_default):
def v2_playbook_on_task_start(self, task, is_conditional):
self._get_task_display_name(task)
if self.task_display_name is not None:
if task.check_mode and self.get_option("check_mode_markers"):
self._display.display(f"{self.task_display_name} (check mode)...")
else:
self._display.display(f"{self.task_display_name}...")
@@ -99,14 +98,14 @@ class CallbackModule(CallbackModule_default):
def v2_playbook_on_handler_task_start(self, task):
self._get_task_display_name(task)
if self.task_display_name is not None:
if task.check_mode and self.get_option("check_mode_markers"):
self._display.display(f"{self.task_display_name} (via handler in check mode)... ")
else:
self._display.display(f"{self.task_display_name} (via handler)... ")
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if play.check_mode and self.get_option("check_mode_markers"):
if name and play.hosts:
msg = f"\n- {name} (in check mode) on hosts: {','.join(play.hosts)} -"
else:
@@ -120,7 +119,7 @@ class CallbackModule(CallbackModule_default):
self._display.display(msg)
def v2_runner_on_skipped(self, result, ignore_errors=False):
if self.get_option("display_skipped_hosts"):
self._preprocess_result(result)
display_color = C.COLOR_SKIP
msg = "skipped"
@@ -139,12 +138,12 @@ class CallbackModule(CallbackModule_default):
msg += f" | item: {item_value}"
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color, stderr=self.get_option("display_failed_stderr"))
def v2_runner_on_ok(self, result, msg="ok", display_color=C.COLOR_OK):
self._preprocess_result(result)
result_was_changed = "changed" in result._result and result._result["changed"]
if result_was_changed:
msg = "done"
item_value = self._get_item_label(result._result)
@@ -153,7 +152,7 @@ class CallbackModule(CallbackModule_default):
display_color = C.COLOR_CHANGED
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color)
elif self.get_option("display_ok_hosts"):
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color)
@@ -173,17 +172,17 @@ class CallbackModule(CallbackModule_default):
display_color = C.COLOR_UNREACHABLE
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color, stderr=self.get_option("display_failed_stderr"))
def v2_on_file_diff(self, result):
if result._task.loop and "results" in result._result:
for res in result._result["results"]:
if "diff" in res and res["diff"] and res.get("changed", False):
diff = self._get_diff(res["diff"])
if diff:
self._display.display(diff)
elif "diff" in result._result and result._result["diff"] and result._result.get("changed", False):
diff = self._get_diff(result._result["diff"])
if diff:
self._display.display(diff)
@@ -199,30 +198,30 @@ class CallbackModule(CallbackModule_default):
f" {hostcolor(h, t)} : {colorize('ok', t['ok'], C.COLOR_OK)} {colorize('changed', t['changed'], C.COLOR_CHANGED)} "
f"{colorize('unreachable', t['unreachable'], C.COLOR_UNREACHABLE)} {colorize('failed', t['failures'], C.COLOR_ERROR)} "
f"{colorize('rescued', t['rescued'], C.COLOR_OK)} {colorize('ignored', t['ignored'], C.COLOR_WARN)}",
screen_only=True,
)
self._display.display(
f" {hostcolor(h, t, False)} : {colorize('ok', t['ok'], None)} {colorize('changed', t['changed'], None)} "
f"{colorize('unreachable', t['unreachable'], None)} {colorize('failed', t['failures'], None)} {colorize('rescued', t['rescued'], None)} "
f"{colorize('ignored', t['ignored'], None)}",
log_only=True,
)
if stats.custom and self.get_option("show_custom_stats"):
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == "_run":
continue
stat_val = self._dump_results(stats.custom[k], indent=1).replace("\n", "")
self._display.display(f"\t{k}: {stat_val}")
# print per run custom stats
if "_run" in stats.custom:
self._display.display("", screen_only=True)
stat_val_run = self._dump_results(stats.custom["_run"], indent=1).replace("\n", "")
self._display.display(f"\tRUN: {stat_val_run}")
self._display.display("", screen_only=True)
def v2_playbook_on_no_hosts_matched(self):
@@ -232,24 +231,21 @@ class CallbackModule(CallbackModule_default):
self._display.display(" Ran out of hosts!", color=C.COLOR_ERROR)
def v2_playbook_on_start(self, playbook):
if context.CLIARGS["check"] and self.get_option("check_mode_markers"):
self._display.display(f"Executing playbook {basename(playbook._file_name)} in check mode")
else:
self._display.display(f"Executing playbook {basename(playbook._file_name)}")
# show CLI arguments
if self._display.verbosity > 3:
if context.CLIARGS.get("args"):
self._display.display(
f"Positional arguments: {' '.join(context.CLIARGS['args'])}",
color=C.COLOR_VERBOSE,
screen_only=True,
)
for argument in (a for a in context.CLIARGS if a != "args"):
val = context.CLIARGS[argument]
if val:
self._display.vvvv(f"{argument}: {val}")
def v2_runner_retry(self, result):
msg = f" Retrying... ({result._result['attempts']} of {result._result['retries']})"

plugins/callback/yaml.py Normal file

@@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Make coding more python3-ish
from __future__ import annotations
DOCUMENTATION = r"""
author: Unknown (!UNKNOWN)
name: yaml
type: stdout
short_description: YAML-ized Ansible screen output
deprecated:
removed_in: 12.0.0
why: Starting in ansible-core 2.13, the P(ansible.builtin.default#callback) callback has support for printing output in
YAML format.
alternative: Use O(ansible.builtin.default#callback:result_format=yaml).
description:
- Ansible output that can be quite a bit easier to read than the default JSON formatting.
extends_documentation_fragment:
- default_callback
requirements:
- set as stdout in configuration
seealso:
- plugin: ansible.builtin.default
plugin_type: callback
description: >-
There is a parameter O(ansible.builtin.default#callback:result_format) in P(ansible.builtin.default#callback) that allows
you to change the output format to YAML.
notes:
- With ansible-core 2.13 or newer, you can instead specify V(yaml) for the parameter O(ansible.builtin.default#callback:result_format)
in P(ansible.builtin.default#callback).
"""
import yaml
import json
import re
import string
from collections.abc import Mapping, Sequence
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback import strip_internal_keys, module_response_deepcopy
from ansible.plugins.callback.default import CallbackModule as Default
# from http://stackoverflow.com/a/15423007/115478
def should_use_block(value):
"""Returns true if string should be in block format"""
for c in "\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029":
if c in value:
return True
return False
def adjust_str_value_for_block(value):
# we care more about readable than accuracy, so...
# ...no trailing space
value = value.rstrip()
# ...and non-printable characters
value = ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
# ...tabs prevent blocks from expanding
value = value.expandtabs()
# ...and odd bits of whitespace
value = re.sub(r'[\x0b\x0c\r]', '', value)
# ...as does trailing space
value = re.sub(r' +\n', '\n', value)
return value
def create_string_node(tag, value, style, default_style):
if style is None:
if should_use_block(value):
style = '|'
value = adjust_str_value_for_block(value)
else:
style = default_style
return yaml.representer.ScalarNode(tag, value, style=style)
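A runnable sketch of the block-style trick the helpers above implement, assuming PyYAML is installed (it is a hard dependency of Ansible itself). The stock `SafeDumper` renders multi-line strings with escaped `\n`, while forcing literal block style (`|`) keeps command output readable:

```python
import yaml  # PyYAML; third-party, but required by Ansible anyway

def should_use_block(value):
    """True if the string contains any YAML line-break character."""
    return any(c in value for c in "\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029")

class BlockDumper(yaml.SafeDumper):
    # Same idea as create_string_node() above: use literal block style
    # for multi-line strings, default behavior otherwise.
    def represent_scalar(self, tag, value, style=None):
        if style is None and isinstance(value, str) and should_use_block(value):
            style = "|"
        return super().represent_scalar(tag, value, style=style)

out = yaml.dump({"stdout": "line one\nline two"}, Dumper=BlockDumper, default_flow_style=False)
print(out)
```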
try:
from ansible.module_utils.common.yaml import HAS_LIBYAML
# import below was added in https://github.com/ansible/ansible/pull/85039,
# first contained in ansible-core 2.19.0b2:
from ansible.utils.vars import transform_to_native_types
if HAS_LIBYAML:
from yaml.cyaml import CSafeDumper as SafeDumper
else:
from yaml import SafeDumper
class MyDumper(SafeDumper):
def represent_scalar(self, tag, value, style=None):
"""Uses block style for multi-line strings"""
node = create_string_node(tag, value, style, self.default_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
except ImportError:
# In case transform_to_native_types cannot be imported, we either have ansible-core 2.19.0b1
# (or some random commit from the devel or stable-2.19 branch after merging the DT changes
# and before transform_to_native_types was added), or we have a version without the DT changes.
# Here we simply assume we have a version without the DT changes, and thus can continue as
# with ansible-core 2.18 and before.
transform_to_native_types = None
from ansible.parsing.yaml.dumper import AnsibleDumper
class MyDumper(AnsibleDumper): # pylint: disable=inherit-non-class
def represent_scalar(self, tag, value, style=None):
"""Uses block style for multi-line strings"""
node = create_string_node(tag, value, style, self.default_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
def transform_recursively(value, transform):
# Since 2.19.0b7, this should no longer be needed:
# https://github.com/ansible/ansible/issues/85325
# https://github.com/ansible/ansible/pull/85389
if isinstance(value, Mapping):
return {transform(k): transform(v) for k, v in value.items()}
if isinstance(value, Sequence) and not isinstance(value, (str, bytes)):
return [transform(e) for e in value]
return transform(value)
class CallbackModule(Default):
"""
Variation of the Default output which uses nicely readable YAML instead
of JSON for printing results.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.yaml'
def __init__(self):
super(CallbackModule, self).__init__()
def _dump_results(self, result, indent=None, sort_keys=True, keep_invocation=False):
if result.get('_ansible_no_log', False):
return json.dumps(dict(censored="The output has been hidden due to the fact that 'no_log: true' was specified for this result"))
# All result keys stating with _ansible_ are internal, so remove them from the result before we output anything.
abridged_result = strip_internal_keys(module_response_deepcopy(result))
# remove invocation unless specifically wanting it
if not keep_invocation and self._display.verbosity < 3 and 'invocation' in result:
del abridged_result['invocation']
# remove diff information from screen output
if self._display.verbosity < 3 and 'diff' in result:
del abridged_result['diff']
# remove exception from screen output
if 'exception' in abridged_result:
del abridged_result['exception']
dumped = ''
# put changed and skipped into a header line
if 'changed' in abridged_result:
dumped += f"changed={str(abridged_result['changed']).lower()} "
del abridged_result['changed']
if 'skipped' in abridged_result:
dumped += f"skipped={str(abridged_result['skipped']).lower()} "
del abridged_result['skipped']
# if we already have stdout, we don't need stdout_lines
if 'stdout' in abridged_result and 'stdout_lines' in abridged_result:
abridged_result['stdout_lines'] = '<omitted>'
# if we already have stderr, we don't need stderr_lines
if 'stderr' in abridged_result and 'stderr_lines' in abridged_result:
abridged_result['stderr_lines'] = '<omitted>'
if abridged_result:
dumped += '\n'
if transform_to_native_types is not None:
abridged_result = transform_recursively(abridged_result, lambda v: transform_to_native_types(v, redact=False))
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=MyDumper, default_flow_style=False))
# indent by a couple of spaces
dumped = '\n '.join(dumped.split('\n')).rstrip()
return dumped
def _serialize_diff(self, diff):
return to_text(yaml.dump(diff, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# (c) 2013, Maykel Moya <mmoya@speedyrails.com>
@@ -80,26 +81,26 @@ from ansible.errors import AnsibleError
from ansible.module_utils.basic import is_executable
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.connection import BUFSIZE, ConnectionBase
from ansible.utils.display import Display
display = Display()
class Connection(ConnectionBase):
"""Local chroot based connections"""
transport = "community.general.chroot"
has_pipelining = True
# su currently has an undiagnosed issue with calculating the file
# checksums (so copy, for instance, doesn't work right)
# Have to look into that before re-enabling this
has_tty = False
default_user = "root"
def __init__(self, play_context, new_stdin, *args, **kwargs):
super().__init__(play_context, new_stdin, *args, **kwargs)
self.chroot = self._play_context.remote_addr
@@ -107,7 +108,7 @@ class Connection(ConnectionBase):
if not os.path.isdir(self.chroot):
raise AnsibleError(f"{self.chroot} is not a directory")
chrootsh = os.path.join(self.chroot, "bin/sh")
# Want to check for a usable bourne shell inside the chroot.
# is_executable() == True is sufficient. For symlinks it
# gets really complicated really fast. So we punt on finding that
@@ -116,46 +117,46 @@ class Connection(ConnectionBase):
raise AnsibleError(f"{self.chroot} does not look like a chrootable dir (/bin/sh missing)")
def _connect(self):
"""connect to the chroot"""
if not self.get_option("disable_root_check") and os.geteuid() != 0:
raise AnsibleError(
"chroot connection requires running as root. "
"You can override this check with the `disable_root_check` option."
)
if os.path.isabs(self.get_option("chroot_exe")):
self.chroot_cmd = self.get_option("chroot_exe")
else:
try:
self.chroot_cmd = get_bin_path(self.get_option("chroot_exe"))
except ValueError as e:
raise AnsibleError(str(e)) from e
super()._connect()
if not self._connected:
display.vvv("THIS IS A LOCAL CHROOT DIR", host=self.chroot)
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
"""run a command on the chroot. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
"""
executable = self.get_option("executable")
local_cmd = [self.chroot_cmd, self.chroot, executable, "-c", cmd]
display.vvv(f"EXEC {local_cmd}", host=self.chroot)
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
def exec_command(self, cmd, in_data=None, sudoable=False):
"""run a command on the chroot"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
@@ -164,70 +165,70 @@ class Connection(ConnectionBase):
@staticmethod
def _prefix_login_path(remote_path):
"""Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
Can revisit using $HOME instead if it is a problem
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
"""transfer a file from local to chroot"""
super().put_file(in_path, out_path)
display.vvv(f"PUT {in_path} TO {out_path}", host=self.chroot)
out_path = shlex_quote(self._prefix_login_path(out_path))
try:
with open(to_bytes(in_path, errors="surrogate_or_strict"), "rb") as in_file:
if not os.fstat(in_file.fileno()).st_size:
count = " count=0"
else:
count = ""
try:
p = self._buffered_exec_command(f"dd of={out_path} bs={BUFSIZE}{count}", stdin=in_file)
except OSError as e:
raise AnsibleError("chroot connection requires dd command in the chroot") from e
try:
stdout, stderr = p.communicate()
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
if p.returncode != 0:
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{stdout}\n{stderr}")
except OSError as e:
raise AnsibleError(f"file or module does not exist at: {in_path}") from e
def fetch_file(self, in_path, out_path):
"""fetch a file from chroot to local"""
super().fetch_file(in_path, out_path)
""" fetch a file from chroot to local """
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(f"FETCH {in_path} TO {out_path}", host=self.chroot)
in_path = shlex_quote(self._prefix_login_path(in_path))
try:
p = self._buffered_exec_command(f"dd if={in_path} bs={BUFSIZE}")
except OSError as e:
raise AnsibleError("chroot connection requires dd command in the chroot") from e
with open(to_bytes(out_path, errors="surrogate_or_strict"), "wb+") as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{stdout}\n{stderr}")
def close(self):
"""terminate the connection; nothing to do here"""
super().close()
""" terminate the connection; nothing to do here """
super(Connection, self).close()
self._connected = False

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# Copyright (c) 2013, Michael Scherer <misc@zarb.org>
@@ -29,14 +30,13 @@ options:
HAVE_FUNC = False
try:
import func.overlord.client as fc
HAVE_FUNC = True
except ImportError:
pass
import os
import shutil
import tempfile
from ansible.errors import AnsibleError
from ansible.plugins.connection import ConnectionBase
@@ -46,7 +46,7 @@ display = Display()
class Connection(ConnectionBase):
"""Func-based connections"""
""" Func-based connections """
has_pipelining = False
@@ -65,7 +65,7 @@ class Connection(ConnectionBase):
return self
def exec_command(self, cmd, in_data=None, sudoable=True):
"""run a command on the remote minion"""
""" run a command on the remote minion """
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
@@ -83,16 +83,16 @@ class Connection(ConnectionBase):
return os.path.join(prefix, normpath[1:])
def put_file(self, in_path, out_path):
"""transfer a file from local to remote"""
""" transfer a file from local to remote """
out_path = self._normalize_path(out_path, "/")
display.vvv(f"PUT {in_path} TO {out_path}", host=self.host)
self.client.local.copyfile.send(in_path, out_path)
def fetch_file(self, in_path, out_path):
"""fetch a file from remote to local"""
""" fetch a file from remote to local """
in_path = self._normalize_path(in_path, "/")
display.vvv(f"FETCH {in_path} TO {out_path}", host=self.host)
# need to use a tmp dir due to the different semantics of getfile
# (which takes a directory as destination) and fetch_file, which
@@ -103,5 +103,5 @@ class Connection(ConnectionBase):
shutil.rmtree(tmpdir)
def close(self):
"""terminate the connection; nothing to do here"""
""" terminate the connection; nothing to do here """
pass

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Based on lxd.py (c) 2016, Matt Clay <matt@mystile.com>
# (c) 2023, Stephane Graber <stgraber@stgraber.org>
# Copyright (c) 2023 Ansible Project
@@ -13,9 +14,6 @@ short_description: Run tasks in Incus instances using the Incus CLI
description:
- Run commands or put/fetch files to an existing Incus instance using Incus CLI.
version_added: "8.2.0"
notes:
- When using this collection for Windows virtual machines, set C(ansible_shell_type) to C(powershell) or C(cmd) as a variable to the host in
the inventory.
options:
remote_addr:
description:
@@ -78,127 +76,78 @@ options:
"""
import os
import re
from subprocess import PIPE, Popen, call
from subprocess import call, Popen, PIPE
from ansible.errors import AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
"""Incus based connections"""
""" Incus based connections """
transport = "incus"
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super().__init__(play_context, new_stdin, *args, **kwargs)
self._incus_cmd = get_bin_path("incus")
if not self._incus_cmd:
raise AnsibleError("incus command not found in PATH")
if getattr(self._shell, "_IS_WINDOWS", False):
# Initializing regular expression patterns to match on a PowerShell or cmd command line.
self.powershell_regex_pattern = re.compile(
r'^"?(?P<executable>(?:[a-z]:\\)?[a-z0-9 ()\\.]*powershell(?:\.exe)?)"?\s+(?P<args>.*)(?P<command>-c(?:ommand)?)\s+(?P<post_args>.*(\n.*)*)',
re.IGNORECASE,
)
self.cmd_regex_pattern = re.compile(
r'^"?(?P<executable>(?:[a-z]:\\)?[a-z0-9 ()\\.]*cmd(?:\.exe)?)"?\s+(?P<args>.*)(?P<command>/c)\s+(?P<post_args>.*)',
re.IGNORECASE,
)
# Basic setup for a Windows host.
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = (".ps1", ".exe", "")
self.allow_executable = False
def _connect(self):
"""connect to Incus (nothing to do here)"""
super()._connect()
"""connect to Incus (nothing to do here) """
super(Connection, self)._connect()
if not self._connected:
self._display.vvv(
f"ESTABLISH Incus CONNECTION FOR USER: {self.get_option('remote_user')}", host=self._instance()
)
self._display.vvv(f"ESTABLISH Incus CONNECTION FOR USER: {self.get_option('remote_user')}",
host=self._instance())
self._connected = True
def _build_command(self, cmd) -> list[str]:
"""build the command to execute on the incus host"""
# Force pseudo-terminal allocation if the active become plugin
# requires one (e.g. community.general.machinectl), otherwise the
# become helper runs without a controlling tty and silently fails.
require_tty = self.become is not None and getattr(self.become, "require_tty", False)
exec_cmd: list[str] = [
self._incus_cmd,
"--project",
self.get_option("project"),
"--project", self.get_option("project"),
"exec",
*(["-T"] if getattr(self._shell, "_IS_WINDOWS", False) else []),
*(["-t"] if require_tty and not getattr(self._shell, "_IS_WINDOWS", False) else []),
f"{self.get_option('remote')}:{self._instance()}",
"--",
]
"--"]
if getattr(self._shell, "_IS_WINDOWS", False):
if regex_match := self.powershell_regex_pattern.match(cmd):
regex_pattern = self.powershell_regex_pattern
elif regex_match := self.cmd_regex_pattern.match(cmd):
regex_pattern = self.cmd_regex_pattern
if self.get_option("remote_user") != "root":
self._display.vvv(
f"INFO: Running as non-root user: {self.get_option('remote_user')}, \
trying to run 'incus exec' with become method: {self.get_option('incus_become_method')}",
host=self._instance(),
)
exec_cmd.extend(
[self.get_option("incus_become_method"), self.get_option("remote_user"), "-c"]
)
if regex_match:
self._display.vvvvvv(
f'Found keyword: "{regex_match.group("command")}" based on regex: {regex_pattern.pattern}',
host=self._instance(),
)
# To avoid splitting on a space contained in the path, set the executable as the first argument.
exec_cmd.append(regex_match.group("executable"))
if args := regex_match.group("args"):
exec_cmd.extend(args.strip().split(" "))
# Set the command argument depending on cmd or powershell and the rest of it
exec_cmd.append(regex_match.group("command"))
if post_args := regex_match.group("post_args"):
exec_cmd.append(post_args.strip())
else:
# For anything else using -EncodedCommand or else, just split on space.
exec_cmd.extend(cmd.split(" "))
else:
if self.get_option("remote_user") != "root":
self._display.vvv(
f"INFO: Running as non-root user: {self.get_option('remote_user')}, \
trying to run 'incus exec' with become method: {self.get_option('incus_become_method')}",
host=self._instance(),
)
exec_cmd.extend([self.get_option("incus_become_method"), self.get_option("remote_user"), "-c"])
exec_cmd.extend([self.get_option("executable"), "-c", cmd])
exec_cmd.extend([self.get_option("executable"), "-c", cmd])
return exec_cmd
def _instance(self):
# Return only the leading part of the FQDN as the instance name
# as Incus instance names cannot be a FQDN.
return self.get_option("remote_addr").split(".")[0]
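Since Incus instance names cannot be FQDNs, `_instance()` keeps only the leading DNS label; a minimal sketch of that derivation (function name is illustrative):

```python
def instance_name(remote_addr: str) -> str:
    # Incus instance names cannot be FQDNs, so keep only the first label.
    return remote_addr.split(".")[0]

print(instance_name("web01.example.com"))  # -> web01
print(instance_name("web01"))              # -> web01
```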
def exec_command(self, cmd, in_data=None, sudoable=True):
"""execute a command on the Incus host"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
""" execute a command on the Incus host """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
self._display.vvv(f"EXEC {cmd}", host=self._instance())
self._display.vvv(f"EXEC {cmd}",
host=self._instance())
local_cmd = self._build_command(cmd)
self._display.vvvvv(f"EXEC {local_cmd}", host=self._instance())
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
in_data = to_bytes(in_data, errors="surrogate_or_strict", nonstring="passthru")
process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate(in_data)
@@ -206,22 +155,32 @@ class Connection(ConnectionBase):
stdout = to_text(stdout)
stderr = to_text(stderr)
if stderr.startswith("Error: ") and stderr.rstrip().endswith(": Instance is not running"):
if stderr.startswith("Error: ") and stderr.rstrip().endswith(
": Instance is not running"
):
raise AnsibleConnectionFailure(
f"instance not running: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
if stderr.startswith("Error: ") and stderr.rstrip().endswith(": Instance not found"):
if stderr.startswith("Error: ") and stderr.rstrip().endswith(
": Instance not found"
):
raise AnsibleConnectionFailure(
f"instance not found: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
if stderr.startswith("Error: ") and ": User does not have permission " in stderr:
raise AnsibleConnectionFailure(
f"instance access denied: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
if stderr.startswith("Error: ") and ": User does not have entitlement " in stderr:
raise AnsibleConnectionFailure(
f"instance access denied: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
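The four stderr checks above follow one pattern: Incus prefixes failures with `Error: ` and the message tail identifies the condition. A minimal standalone sketch of that classification (the function name is illustrative, not part of the plugin):

```python
def classify_incus_error(stderr: str) -> "str | None":
    # Mirrors the checks above; returns None for unrecognized output.
    if not stderr.startswith("Error: "):
        return None
    tail = stderr.rstrip()
    if tail.endswith(": Instance is not running"):
        return "instance not running"
    if tail.endswith(": Instance not found"):
        return "instance not found"
    if ": User does not have permission " in stderr:
        return "instance access denied"
    if ": User does not have entitlement " in stderr:
        return "instance access denied"
    return None

print(classify_incus_error("Error: foo: Instance not found\n"))  # -> instance not found
```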
@@ -233,26 +192,31 @@ class Connection(ConnectionBase):
rc, uid_out, err = self.exec_command("/bin/id -u")
if rc != 0:
raise AnsibleError(f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}")
uid = uid_out.strip()
rc, gid_out, err = self.exec_command("/bin/id -g")
if rc != 0:
raise AnsibleError(f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}")
gid = gid_out.strip()
return int(uid), int(gid)
def put_file(self, in_path, out_path):
"""put a file from local to Incus"""
super().put_file(in_path, out_path)
""" put a file from local to Incus """
super(Connection, self).put_file(in_path, out_path)
self._display.vvv(f"PUT {in_path} TO {out_path}", host=self._instance())
self._display.vvv(f"PUT {in_path} TO {out_path}",
host=self._instance())
if not os.path.isfile(to_bytes(in_path, errors="surrogate_or_strict")):
raise AnsibleFileNotFound(f"input path is not a file: {in_path}")
if not getattr(self._shell, "_IS_WINDOWS", False) and self.get_option("remote_user") != "root":
if self.get_option("remote_user") != "root":
uid, gid = self._get_remote_uid_gid()
local_cmd = [
self._incus_cmd,
@@ -282,33 +246,30 @@ class Connection(ConnectionBase):
self._display.vvvvv(f"PUT {local_cmd}", host=self._instance())
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
call(local_cmd)
def fetch_file(self, in_path, out_path):
"""fetch a file from Incus to local"""
super().fetch_file(in_path, out_path)
""" fetch a file from Incus to local """
super(Connection, self).fetch_file(in_path, out_path)
self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self._instance())
self._display.vvv(f"FETCH {in_path} TO {out_path}",
host=self._instance())
local_cmd = [
self._incus_cmd,
"--project",
self.get_option("project"),
"file",
"pull",
"--quiet",
"--project", self.get_option("project"),
"file", "pull", "--quiet",
f"{self.get_option('remote')}:{self._instance()}/{in_path}",
out_path,
]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
call(local_cmd)
def close(self):
"""close the connection (nothing to do here)"""
super().close()
""" close the connection (nothing to do here) """
super(Connection, self).close()
self._connected = False

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Based on jail.py
# (c) 2013, Michael Scherer <misc@zarb.org>
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
@@ -33,43 +34,40 @@ options:
import subprocess
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_native
from ansible.utils.display import Display
from ansible_collections.community.general.plugins.connection.jail import Connection as Jail
display = Display()
class Connection(Jail):
"""Local iocage based connections"""
""" Local iocage based connections """
transport = "community.general.iocage"
def __init__(self, play_context, new_stdin, *args, **kwargs):
self.ioc_jail = play_context.remote_addr
self.iocage_cmd = Jail._search_executable("iocage")
jail_uuid = self.get_jail_uuid()
kwargs[Jail.modified_jailname_key] = f"ioc-{jail_uuid}"
display.vvv(
f"Jail {self.ioc_jail} has been translated to {kwargs[Jail.modified_jailname_key]}",
host=kwargs[Jail.modified_jailname_key],
)
super().__init__(play_context, new_stdin, *args, **kwargs)
def get_jail_uuid(self):
p = subprocess.Popen(
[self.iocage_cmd, "get", "host_hostuuid", self.ioc_jail],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
stdout, stderr = p.communicate()
@@ -85,4 +83,4 @@ class Connection(Jail):
if p.returncode != 0:
raise AnsibleError(f"iocage returned an error: {stdout}")
return stdout.strip("\n")

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Based on local.py by Michael DeHaan <michael.dehaan@gmail.com>
# and chroot.py by Maykel Moya <mmoya@speedyrails.com>
# Copyright (c) 2013, Michael Scherer <misc@zarb.org>
@@ -42,25 +43,25 @@ from shlex import quote as shlex_quote
from ansible.errors import AnsibleError
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.plugins.connection import BUFSIZE, ConnectionBase
from ansible.utils.display import Display
display = Display()
class Connection(ConnectionBase):
"""Local BSD Jail based connections"""
""" Local BSD Jail based connections """
modified_jailname_key = "conn_jail_name"
transport = "community.general.jail"
# Pipelining may work. Someone needs to test by setting this to True and
# having pipelining=True in their ansible.cfg
has_pipelining = True
has_tty = False
def __init__(self, play_context, new_stdin, *args, **kwargs):
super().__init__(play_context, new_stdin, *args, **kwargs)
self.jail = self._play_context.remote_addr
if self.modified_jailname_key in kwargs:
@@ -69,8 +70,8 @@ class Connection(ConnectionBase):
if os.geteuid() != 0:
raise AnsibleError("jail connection requires running as root")
self.jls_cmd = self._search_executable("jls")
self.jexec_cmd = self._search_executable("jexec")
if self.jail not in self.list_jails():
raise AnsibleError(f"incorrect jail name {self.jail}")
@@ -79,27 +80,27 @@ class Connection(ConnectionBase):
def _search_executable(executable):
try:
return get_bin_path(executable)
except ValueError as e:
raise AnsibleError(f"{executable} command not found in PATH") from e
def list_jails(self):
p = subprocess.Popen(
[self.jls_cmd, "-q", "name"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
stdout, stderr = p.communicate()
return to_text(stdout, errors="surrogate_or_strict").split()
def _connect(self):
"""connect to the jail; nothing to do here"""
super()._connect()
""" connect to the jail; nothing to do here """
super(Connection, self)._connect()
if not self._connected:
display.vvv(f"ESTABLISH JAIL CONNECTION FOR USER: {self._play_context.remote_user}", host=self.jail)
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
"""run a command on the jail. This is only needed for implementing
""" run a command on the jail. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
@@ -108,24 +109,25 @@ class Connection(ConnectionBase):
"""
local_cmd = [self.jexec_cmd]
set_env = ""
if self._play_context.remote_user is not None:
local_cmd += ["-U", self._play_context.remote_user]
# update HOME since -U does not update the jail environment
set_env = f"HOME=~{self._play_context.remote_user} "
local_cmd += [self.jail, self._play_context.executable, "-c", set_env + cmd]
display.vvv(f"EXEC {local_cmd}", host=self.jail)
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
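Command construction in `_buffered_exec_command` can be sketched standalone; note that `jexec -U` switches the user but not the environment, hence the explicit `HOME=` prefix (argument values here are illustrative):

```python
def build_jexec_cmd(jexec_cmd, jail, executable, cmd, remote_user=None):
    # Mirror of the argv assembly above:
    #   jexec [-U user] jail shell -c "HOME=~user cmd"
    local_cmd = [jexec_cmd]
    set_env = ""
    if remote_user is not None:
        local_cmd += ["-U", remote_user]
        # update HOME since -U does not update the jail environment
        set_env = f"HOME=~{remote_user} "
    local_cmd += [jail, executable, "-c", set_env + cmd]
    return local_cmd

print(build_jexec_cmd("/usr/sbin/jexec", "www1", "/bin/sh", "id", remote_user="deploy"))
```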
def exec_command(self, cmd, in_data=None, sudoable=False):
"""run a command on the jail"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
""" run a command on the jail """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
@@ -134,74 +136,70 @@ class Connection(ConnectionBase):
@staticmethod
def _prefix_login_path(remote_path):
"""Make sure that we put files into a standard path
""" Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
Can revisit using $HOME instead if it is a problem
Can revisit using $HOME instead if it is a problem
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
"""transfer a file from local to jail"""
super().put_file(in_path, out_path)
""" transfer a file from local to jail """
super(Connection, self).put_file(in_path, out_path)
display.vvv(f"PUT {in_path} TO {out_path}", host=self.jail)
out_path = shlex_quote(self._prefix_login_path(out_path))
try:
with open(to_bytes(in_path, errors="surrogate_or_strict"), "rb") as in_file:
if not os.fstat(in_file.fileno()).st_size:
count = " count=0"
else:
count = ""
try:
p = self._buffered_exec_command(f"dd of={out_path} bs={BUFSIZE}{count}", stdin=in_file)
except OSError as e:
raise AnsibleError("jail connection requires dd command in the jail") from e
try:
stdout, stderr = p.communicate()
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
if p.returncode != 0:
raise AnsibleError(
f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}"
)
except OSError as e:
raise AnsibleError(f"file or module does not exist at: {in_path}") from e
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}")
except IOError:
raise AnsibleError(f"file or module does not exist at: {in_path}")
def fetch_file(self, in_path, out_path):
"""fetch a file from jail to local"""
super().fetch_file(in_path, out_path)
""" fetch a file from jail to local """
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(f"FETCH {in_path} TO {out_path}", host=self.jail)
in_path = shlex_quote(self._prefix_login_path(in_path))
try:
p = self._buffered_exec_command(f"dd if={in_path} bs={BUFSIZE}")
except OSError as e:
raise AnsibleError("jail connection requires dd command in the jail") from e
with open(to_bytes(out_path, errors="surrogate_or_strict"), "wb+") as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError(
f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}"
)
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}")
def close(self):
"""terminate the connection; nothing to do here"""
super().close()
""" terminate the connection; nothing to do here """
super(Connection, self).close()
self._connected = False

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# (c) 2015, Joerg Thalheim <joerg@higgsboson.tk>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -31,17 +32,16 @@ options:
- name: ansible_lxc_executable
"""
import errno
import fcntl
import os
import select
import shutil
import traceback
HAS_LIBLXC = False
try:
import lxc as _lxc
HAS_LIBLXC = True
except ImportError:
pass
@@ -52,27 +52,27 @@ from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
"""Local lxc based connections"""
""" Local lxc based connections """
transport = "community.general.lxc"
has_pipelining = True
default_user = "root"
def __init__(self, play_context, new_stdin, *args, **kwargs):
super().__init__(play_context, new_stdin, *args, **kwargs)
self.container_name = None
self.container = None
def _connect(self):
"""connect to the lxc; nothing to do here"""
super()._connect()
""" connect to the lxc; nothing to do here """
super(Connection, self)._connect()
if not HAS_LIBLXC:
msg = "lxc python bindings are not installed"
raise errors.AnsibleError(msg)
container_name = self.get_option("remote_addr")
if self.container and self.container_name == container_name:
return
@@ -94,12 +94,12 @@ class Connection(ConnectionBase):
while len(read_fds) > 0 or len(write_fds) > 0:
try:
ready_reads, ready_writes, dummy = select.select(read_fds, write_fds, [])
except OSError as e:
if e.args[0] == errno.EINTR:
continue
raise
for fd in ready_writes:
in_data = in_data[os.write(fd, in_data) :]
if len(in_data) == 0:
write_fds.remove(fd)
for fd in ready_reads:
@@ -118,12 +118,12 @@ class Connection(ConnectionBase):
return fd
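`_set_nonblocking` above relies on standard fcntl flag manipulation so the select loop never stalls on a pipe read; a POSIX-only sketch (module-level copy for illustration):

```python
import fcntl
import os

def set_nonblocking(fd):
    # OR O_NONBLOCK into the descriptor's status flags, as the plugin does,
    # so reads return immediately instead of blocking.
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    return fd

r, w = os.pipe()
set_nonblocking(r)
print(bool(fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK))  # -> True
os.close(r)
os.close(w)
```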
def exec_command(self, cmd, in_data=None, sudoable=False):
"""run a command on the chroot"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
""" run a command on the chroot """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
# python2-lxc needs bytes. python3-lxc needs text.
executable = to_native(self.get_option("executable"), errors="surrogate_or_strict")
local_cmd = [executable, "-c", to_native(cmd, errors="surrogate_or_strict")]
read_stdout, write_stdout = None, None
read_stderr, write_stderr = None, None
@@ -134,14 +134,14 @@ class Connection(ConnectionBase):
read_stderr, write_stderr = os.pipe()
kwargs = {
"stdout": self._set_nonblocking(write_stdout),
"stderr": self._set_nonblocking(write_stderr),
"env_policy": _lxc.LXC_ATTACH_CLEAR_ENV,
}
if in_data:
read_stdin, write_stdin = os.pipe()
kwargs["stdin"] = self._set_nonblocking(read_stdin)
self._display.vvv(f"EXEC {local_cmd}", host=self.container_name)
pid = self.container.attach(_lxc.attach_run_command, local_cmd, **kwargs)
@@ -154,77 +154,82 @@ class Connection(ConnectionBase):
if read_stdin:
read_stdin = os.close(read_stdin)
return self._communicate(pid, in_data, write_stdin, read_stdout, read_stderr)
finally:
fds = [read_stdout, write_stdout, read_stderr, write_stderr, read_stdin, write_stdin]
for fd in fds:
if fd:
os.close(fd)
def put_file(self, in_path, out_path):
"""transfer a file from local to lxc"""
super().put_file(in_path, out_path)
self._display.vvv(f"PUT {in_path} TO {out_path}", host=self.container_name)
in_path = to_bytes(in_path, errors="surrogate_or_strict")
out_path = to_bytes(out_path, errors="surrogate_or_strict")
if not os.path.exists(in_path):
msg = f"file or module does not exist: {in_path}"
raise errors.AnsibleFileNotFound(msg)
try:
src_file = open(in_path, "rb")
except OSError as e:
traceback.print_exc()
raise errors.AnsibleError(f"failed to open input file to {in_path}") from e
raise errors.AnsibleError(f"failed to open input file to {in_path}")
try:
def write_file(args):
with open(out_path, "wb+") as dst_file:
shutil.copyfileobj(src_file, dst_file)
try:
self.container.attach_wait(write_file, None)
except OSError as e:
traceback.print_exc()
msg = f"failed to transfer file to {out_path}"
raise errors.AnsibleError(msg) from e
finally:
src_file.close()
def fetch_file(self, in_path, out_path):
"""fetch a file from lxc to local"""
super().fetch_file(in_path, out_path)
self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self.container_name)
in_path = to_bytes(in_path, errors="surrogate_or_strict")
out_path = to_bytes(out_path, errors="surrogate_or_strict")
try:
dst_file = open(out_path, "wb")
except OSError as e:
traceback.print_exc()
msg = f"failed to open output file {out_path}"
raise errors.AnsibleError(msg) from e
try:
def write_file(args):
try:
with open(in_path, "rb") as src_file:
shutil.copyfileobj(src_file, dst_file)
finally:
# this is needed in the lxc child process
# to flush internal python buffers
dst_file.close()
try:
self.container.attach_wait(write_file, None)
except OSError as e:
traceback.print_exc()
msg = f"failed to transfer file from {in_path} to {out_path}"
raise errors.AnsibleError(msg) from e
finally:
dst_file.close()
def close(self):
"""terminate the connection; nothing to do here"""
super().close()
self._connected = False

View File

@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 # Copyright (c) 2016 Matt Clay <matt@mystile.com>
 # Copyright (c) 2017 Ansible Project
 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -74,44 +75,44 @@ options:
 """

 import os
-from subprocess import PIPE, Popen
+from subprocess import Popen, PIPE

-from ansible.errors import AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound
+from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
 from ansible.module_utils.common.process import get_bin_path
 from ansible.module_utils.common.text.converters import to_bytes, to_text
 from ansible.plugins.connection import ConnectionBase


 class Connection(ConnectionBase):
-    """lxd based connections"""
+    """ lxd based connections """

-    transport = "community.general.lxd"
+    transport = 'community.general.lxd'
     has_pipelining = True

     def __init__(self, play_context, new_stdin, *args, **kwargs):
-        super().__init__(play_context, new_stdin, *args, **kwargs)
+        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)

         try:
             self._lxc_cmd = get_bin_path("lxc")
-        except ValueError as e:
-            raise AnsibleError("lxc command not found in PATH") from e
+        except ValueError:
+            raise AnsibleError("lxc command not found in PATH")

     def _host(self):
-        """translate remote_addr to lxd (short) hostname"""
+        """ translate remote_addr to lxd (short) hostname """
         return self.get_option("remote_addr").split(".", 1)[0]

     def _connect(self):
-        """connect to lxd (nothing to do here)"""
-        super()._connect()
+        """connect to lxd (nothing to do here) """
+        super(Connection, self)._connect()

         if not self._connected:
             self._display.vvv(f"ESTABLISH LXD CONNECTION FOR USER: {self.get_option('remote_user')}", host=self._host())
             self._connected = True

-    def _build_command(self, cmd) -> list[str]:
+    def _build_command(self, cmd) -> str:
         """build the command to execute on the lxd host"""
-        exec_cmd: list[str] = [self._lxc_cmd]
+        exec_cmd = [self._lxc_cmd]

         if self.get_option("project"):
             exec_cmd.extend(["--project", self.get_option("project")])
@@ -124,23 +125,25 @@ class Connection(ConnectionBase):
                 trying to run 'lxc exec' with become method: {self.get_option('lxd_become_method')}",
                 host=self._host(),
             )
-            exec_cmd.extend([self.get_option("lxd_become_method"), self.get_option("remote_user"), "-c"])
+            exec_cmd.extend(
+                [self.get_option("lxd_become_method"), self.get_option("remote_user"), "-c"]
+            )

         exec_cmd.extend([self.get_option("executable"), "-c", cmd])
         return exec_cmd

     def exec_command(self, cmd, in_data=None, sudoable=True):
-        """execute a command on the lxd host"""
-        super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
+        """ execute a command on the lxd host """
+        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

         self._display.vvv(f"EXEC {cmd}", host=self._host())

         local_cmd = self._build_command(cmd)
         self._display.vvvvv(f"EXEC {local_cmd}", host=self._host())

-        local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
-        in_data = to_bytes(in_data, errors="surrogate_or_strict", nonstring="passthru")
+        local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
+        in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')

         process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
         stdout, stderr = process.communicate(in_data)
@@ -163,23 +166,27 @@ class Connection(ConnectionBase):
         rc, uid_out, err = self.exec_command("/bin/id -u")
         if rc != 0:
-            raise AnsibleError(f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}")
+            raise AnsibleError(
+                f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}"
+            )
         uid = uid_out.strip()

         rc, gid_out, err = self.exec_command("/bin/id -g")
         if rc != 0:
-            raise AnsibleError(f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}")
+            raise AnsibleError(
+                f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}"
+            )
         gid = gid_out.strip()

         return int(uid), int(gid)

     def put_file(self, in_path, out_path):
-        """put a file from local to lxd"""
-        super().put_file(in_path, out_path)
+        """ put a file from local to lxd """
+        super(Connection, self).put_file(in_path, out_path)

         self._display.vvv(f"PUT {in_path} TO {out_path}", host=self._host())

-        if not os.path.isfile(to_bytes(in_path, errors="surrogate_or_strict")):
+        if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
             raise AnsibleFileNotFound(f"input path is not a file: {in_path}")

         local_cmd = [self._lxc_cmd]
@@ -212,29 +219,33 @@ class Connection(ConnectionBase):
         self._display.vvvvv(f"PUT {local_cmd}", host=self._host())

-        local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
+        local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]

         process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
         process.communicate()

     def fetch_file(self, in_path, out_path):
-        """fetch a file from lxd to local"""
-        super().fetch_file(in_path, out_path)
+        """ fetch a file from lxd to local """
+        super(Connection, self).fetch_file(in_path, out_path)

         self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self._host())

         local_cmd = [self._lxc_cmd]
         if self.get_option("project"):
             local_cmd.extend(["--project", self.get_option("project")])
-        local_cmd.extend(["file", "pull", f"{self.get_option('remote')}:{self._host()}/{in_path}", out_path])
+        local_cmd.extend([
+            "file", "pull",
+            f"{self.get_option('remote')}:{self._host()}/{in_path}",
+            out_path
+        ])

-        local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
+        local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]

         process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
         process.communicate()

     def close(self):
-        """close the connection (nothing to do here)"""
-        super().close()
+        """ close the connection (nothing to do here) """
+        super(Connection, self).close()
         self._connected = False
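The `_build_command` hunks above assemble the command as an argv list (so no shell quoting is involved) and later byte-encode every element before handing it to `Popen`. A rough standalone sketch of that list-building idea — the function name and defaults here are invented for illustration, and stdlib `surrogateescape` encoding stands in for Ansible's `to_bytes(..., errors='surrogate_or_strict')`:

```python
def build_exec_command(lxc_cmd, cmd, project=None, executable="/bin/sh"):
    # Build argv as a list, mirroring how the plugin assembles exec_cmd;
    # each option and its value are separate elements.
    exec_cmd = [lxc_cmd]
    if project:
        exec_cmd.extend(["--project", project])
    exec_cmd.extend([executable, "-c", cmd])
    # Encode every element to bytes, tolerating undecodable characters.
    return [part.encode("utf-8", "surrogateescape") for part in exec_cmd]


argv = build_exec_command("lxc", "id -u", project="dev")
# argv is now a list of bytes, e.g. starting with b"lxc", suitable for Popen.
```

Because `Popen` receives a list rather than a string, the user-supplied `cmd` reaches the executable as a single argument and cannot break out of the command line.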


@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 # Based on the buildah connection plugin
 # Copyright (c) 2017 Ansible Project
 # 2018 Kushal Das
@@ -9,6 +10,7 @@
 from __future__ import annotations
+
 DOCUMENTATION = r"""
 name: qubes
 short_description: Interact with an existing QubesOS AppVM
@@ -39,9 +41,9 @@ options:
 import subprocess

-from ansible.errors import AnsibleConnectionFailure
 from ansible.module_utils.common.text.converters import to_bytes
 from ansible.plugins.connection import ConnectionBase, ensure_connect
+from ansible.errors import AnsibleConnectionFailure
 from ansible.utils.display import Display

 display = Display()
@@ -52,11 +54,11 @@ class Connection(ConnectionBase):
     """This is a connection plugin for qubes: it uses qubes-run-vm binary to interact with the containers."""

     # String used to identify this Connection class from other classes
-    transport = "community.general.qubes"
+    transport = 'community.general.qubes'
     has_pipelining = True

     def __init__(self, play_context, new_stdin, *args, **kwargs):
-        super().__init__(play_context, new_stdin, *args, **kwargs)
+        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)

         self._remote_vmname = self._play_context.remote_addr
         self._connected = False
@@ -87,29 +89,28 @@ class Connection(ConnectionBase):
             local_cmd.append(shell)

-        local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
+        local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]

         display.vvvv("Local cmd: ", local_cmd)

         display.vvv(f"RUN {local_cmd}", host=self._remote_vmname)
-        p = subprocess.Popen(
-            local_cmd, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
-        )
+        p = subprocess.Popen(local_cmd, shell=False, stdin=subprocess.PIPE,
+                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)

         # Here we are writing the actual command to the remote bash
-        p.stdin.write(to_bytes(cmd, errors="surrogate_or_strict"))
+        p.stdin.write(to_bytes(cmd, errors='surrogate_or_strict'))
         stdout, stderr = p.communicate(input=in_data)
         return p.returncode, stdout, stderr

     def _connect(self):
         """No persistent connection is being maintained."""
-        super()._connect()
+        super(Connection, self)._connect()
         self._connected = True

-    @ensure_connect  # type: ignore  # TODO: for some reason, the type infos for ensure_connect suck...
+    @ensure_connect
     def exec_command(self, cmd, in_data=None, sudoable=False):
-        """Run specified command in a running QubesVM"""
-        super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
+        """Run specified command in a running QubesVM """
+        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

         display.vvvv(f"CMD IS: {cmd}")
@@ -119,25 +120,25 @@ class Connection(ConnectionBase):
         return rc, stdout, stderr

     def put_file(self, in_path, out_path):
-        """Place a local file located in 'in_path' inside VM at 'out_path'"""
-        super().put_file(in_path, out_path)
+        """ Place a local file located in 'in_path' inside VM at 'out_path' """
+        super(Connection, self).put_file(in_path, out_path)
         display.vvv(f"PUT {in_path} TO {out_path}", host=self._remote_vmname)

         with open(in_path, "rb") as fobj:
             source_data = fobj.read()

-        retcode, dummy, dummy = self._qubes(f'cat > "{out_path}"\n', source_data, "qubes.VMRootShell")
+        retcode, dummy, dummy = self._qubes(f'cat > "{out_path}\"\n', source_data, "qubes.VMRootShell")
         # if qubes.VMRootShell service not supported, fallback to qubes.VMShell and
         # hope it will have appropriate permissions
         if retcode == 127:
-            retcode, dummy, dummy = self._qubes(f'cat > "{out_path}"\n', source_data)
+            retcode, dummy, dummy = self._qubes(f'cat > "{out_path}\"\n', source_data)

         if retcode != 0:
-            raise AnsibleConnectionFailure(f"Failed to put_file to {out_path}")
+            raise AnsibleConnectionFailure(f'Failed to put_file to {out_path}')

     def fetch_file(self, in_path, out_path):
-        """Obtain file specified via 'in_path' from the container and place it at 'out_path'"""
-        super().fetch_file(in_path, out_path)
+        """Obtain file specified via 'in_path' from the container and place it at 'out_path' """
+        super(Connection, self).fetch_file(in_path, out_path)
         display.vvv(f"FETCH {in_path} TO {out_path}", host=self._remote_vmname)

         # We are running in dom0
@@ -146,9 +147,9 @@ class Connection(ConnectionBase):
         p = subprocess.Popen(cmd_args_list, shell=False, stdout=fobj)
         p.communicate()
         if p.returncode != 0:
-            raise AnsibleConnectionFailure(f"Failed to fetch file to {out_path}")
+            raise AnsibleConnectionFailure(f'Failed to fetch file to {out_path}')

     def close(self):
-        """Closing the connection"""
-        super().close()
+        """ Closing the connection """
+        super(Connection, self).close()
         self._connected = False
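The qubes `_qubes` helper above starts its subprocess with all three streams piped, writes the real command to the child's stdin, and then lets `communicate()` finish the exchange. The same stdin-feeding pattern, shown with a plain `/bin/sh` standing in for `qubes-run-vm` (assumes a POSIX shell at `/bin/sh`):

```python
import subprocess

# Start the child with stdin/stdout/stderr piped, as the plugin does.
p = subprocess.Popen(["/bin/sh"], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Write the actual command to the child's stdin...
p.stdin.write(b"echo hello\n")
# ...then communicate() flushes and closes stdin and collects all output.
stdout, stderr = p.communicate()
```

Note that `communicate()` must follow the manual `write()`: it closes stdin so the shell sees EOF and exits, avoiding the deadlock that `p.stdout.read()` alone could cause on full pipes.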


@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 # Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
 # Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
 # Based on func.py
@@ -16,8 +17,8 @@ description:
 - This allows you to use existing Saltstack infrastructure to connect to targets.
 """

-import base64
 import os
+import base64

 from ansible import errors
 from ansible.plugins.connection import ConnectionBase
@@ -25,22 +26,21 @@ from ansible.plugins.connection import ConnectionBase
 HAVE_SALTSTACK = False
 try:
     import salt.client as sc
-
     HAVE_SALTSTACK = True
 except ImportError:
     pass


 class Connection(ConnectionBase):
-    """Salt-based connections"""
+    """ Salt-based connections """

     has_pipelining = False

     # while the name of the product is salt, naming that module salt cause
     # trouble with module import
-    transport = "community.general.saltstack"
+    transport = 'community.general.saltstack'

     def __init__(self, play_context, new_stdin, *args, **kwargs):
-        super().__init__(play_context, new_stdin, *args, **kwargs)
+        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
         self.host = self._play_context.remote_addr

     def _connect(self):
@@ -52,22 +52,20 @@ class Connection(ConnectionBase):
         return self

     def exec_command(self, cmd, in_data=None, sudoable=False):
-        """run a command on the remote minion"""
-        super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
+        """ run a command on the remote minion """
+        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

         if in_data:
             raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")

         self._display.vvv(f"EXEC {cmd}", host=self.host)
         # need to add 'true;' to work around https://github.com/saltstack/salt/issues/28077
-        res = self.client.cmd(self.host, "cmd.exec_code_all", ["bash", f"true;{cmd}"])
+        res = self.client.cmd(self.host, 'cmd.exec_code_all', ['bash', f"true;{cmd}"])
         if self.host not in res:
-            raise errors.AnsibleError(
-                f"Minion {self.host} didn't answer, check if salt-minion is running and the name is correct"
-            )
+            raise errors.AnsibleError(f"Minion {self.host} didn't answer, check if salt-minion is running and the name is correct")

         p = res[self.host]
-        return p["retcode"], p["stdout"], p["stderr"]
+        return p['retcode'], p['stdout'], p['stderr']

     @staticmethod
     def _normalize_path(path, prefix):
@@ -77,27 +75,27 @@ class Connection(ConnectionBase):
         return os.path.join(prefix, normpath[1:])

     def put_file(self, in_path, out_path):
-        """transfer a file from local to remote"""
-        super().put_file(in_path, out_path)
+        """ transfer a file from local to remote """
+        super(Connection, self).put_file(in_path, out_path)

-        out_path = self._normalize_path(out_path, "/")
+        out_path = self._normalize_path(out_path, '/')
         self._display.vvv(f"PUT {in_path} TO {out_path}", host=self.host)
-        with open(in_path, "rb") as in_fh:
+        with open(in_path, 'rb') as in_fh:
             content = in_fh.read()
-        self.client.cmd(self.host, "hashutil.base64_decodefile", [base64.b64encode(content), out_path])
+        self.client.cmd(self.host, 'hashutil.base64_decodefile', [base64.b64encode(content), out_path])

     # TODO test it
     def fetch_file(self, in_path, out_path):
-        """fetch a file from remote to local"""
-        super().fetch_file(in_path, out_path)
+        """ fetch a file from remote to local """
+        super(Connection, self).fetch_file(in_path, out_path)

-        in_path = self._normalize_path(in_path, "/")
+        in_path = self._normalize_path(in_path, '/')
         self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self.host)
-        content = self.client.cmd(self.host, "cp.get_file_str", [in_path])[self.host]
-        open(out_path, "wb").write(content)
+        content = self.client.cmd(self.host, 'cp.get_file_str', [in_path])[self.host]
+        open(out_path, 'wb').write(content)

     def close(self):
-        """terminate the connection; nothing to do here"""
+        """ terminate the connection; nothing to do here """
         pass
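The saltstack `put_file` above pushes a file by base64-encoding its bytes locally and letting salt's `hashutil.base64_decodefile` write them back out on the minion. The encode half of that round trip, shown standalone to illustrate why arbitrary binary content survives the text-based salt transport:

```python
import base64

# Arbitrary binary payload, including bytes that are not valid text.
content = b"\x00\x01 arbitrary binary payload \xff\n"
# Encoding yields pure ASCII, safe to ship through a text channel.
encoded = base64.b64encode(content)
assert encoded.isascii()
# The minion-side decode (hashutil.base64_decodefile) restores the bytes.
assert base64.b64decode(encoded) == content
```

The cost is roughly a 4/3 size inflation plus holding the whole file in memory, which is acceptable for the small files a connection plugin typically moves.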

Some files were not shown because too many files have changed in this diff.